This, 1000 times. Extrapolating Deep Blue's 11 GFLOPS supercomputer to today with Moore's law would give roughly a 70 TFLOPS cluster. AlphaGo is using over 1 PFLOPS of compute (280 GPUs referenced for competition play in [0]). That's an insane amount of compute.
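The ~70 TFLOPS figure is just a back-of-the-envelope Moore's law calculation; a quick sketch (assuming an 18-month doubling period and the 1997 Deep Blue match vs. the 2016 AlphaGo match as endpoints):

```python
# Rough Moore's-law extrapolation of Deep Blue's compute.
# Assumptions: 18-month doubling period, 1997 (Deep Blue vs. Kasparov)
# to 2016 (AlphaGo vs. Lee Sedol).
deep_blue_flops = 11e9           # ~11 GFLOPS
years = 2016 - 1997              # 19 years
doublings = years / 1.5          # one doubling every 18 months
extrapolated = deep_blue_flops * 2 ** doublings
print(f"~{extrapolated / 1e12:.0f} TFLOPS")  # ballpark, ~70 TFLOPS
```

Even under that generous extrapolation, AlphaGo's ~1 PFLOPS is more than an order of magnitude beyond Deep Blue's hardware budget.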
While it's fun to hate on IBM, it's not really fair to say Deep Blue was throwing hardware at the problem but AlphaGo isn't. Based on the paper, AlphaGo performs much worse in Elo terms on a smaller cluster.
[0] http://www.economist.com/news/science-and-technology/2169454...