Grandmaster Michael Adams managed only two draws against the program in six games, losing the other four.
Running millions of game simulations against itself, it took 40 days to learn, from scratch, how to beat the world-champion version of itself.
That is truly game-changing, not only for Go, but also for how new knowledge is discovered.
As a new Nature paper points out, there are an astonishing 10^170 possible board configurations in Go, more than the number of atoms in the known universe. AlphaGo Zero achieves even better performance without using any expert human knowledge. In addition to devising completely new strategies, the new system is also considerably leaner and meaner than the original AlphaGo: the old AlphaGo needed 48 TPUs, while AlphaGo Zero runs on a single machine with just four. The rise of AlphaGo Zero, it would now appear, has made these previous versions obsolete.

The AI has gone through several iterations, each smarter and more capable than the one before. The real key here is the technique, says Hynes. The nice part, he says, is that there are several other lines of AI research that address both of these issues.

If we can design rules flexible enough to work from broader experience, yet directional enough to always build stronger skill, as AlphaGo Zero's do, then it's possible to achieve an artificial intelligence that masterminds entire systems. It's like an alien civilization inventing its own mathematics. Though we're still far from the Singularity, we're definitely heading in that direction. This alone is a game-changer in how we approach extending the known world, and that's the exciting achievement of AlphaGo Zero. Moreover, the claim that AlphaZero plays better than humans had already been made some time ago.
AlphaZero plays somewhat better than the strongest chess program, Stockfish, and Stockfish in turn plays vastly stronger than 99 percent of the players in the world.
This approach allowed the system to improve and refine its digital brain, known as a neural network, as it continually learned from experience.
"This technique is more powerful than previous versions of AlphaGo because it is no longer constrained by the limits of human knowledge," notes the DeepMind team in a release.
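To get a feel for what "learning from experience alone" means, here is a minimal sketch of tabula-rasa self-play, far simpler than DeepMind's actual algorithm: a program learns the toy game of Nim (players alternate removing 1-3 stones; whoever takes the last stone wins) purely by playing games against itself and updating a value table, with no human strategy built in. All names and parameters here are illustrative assumptions, not anything from the AlphaGo Zero paper.

```python
import random

random.seed(0)
TAKE = (1, 2, 3)   # legal moves: remove 1-3 stones
START = 10         # stones on the board at the start of each self-play game
ALPHA, EPS = 0.1, 0.3  # learning rate and exploration probability

# value[s]: learned estimate of how good it is to be the player to move
# with s stones left (+1 = winning position, -1 = losing position).
value = {s: 0.0 for s in range(START + 1)}
value[0] = -1.0    # facing an empty board means the other player just won

def legal(s):
    return [m for m in TAKE if m <= s]

# Self-play training loop: the same value table plays both sides.
for episode in range(5000):
    s = START
    while s > 0:
        moves = legal(s)
        if random.random() < EPS:        # explore a random move
            m = random.choice(moves)
        else:                            # exploit: leave the opponent the worst state
            m = max(moves, key=lambda mv: -value[s - mv])
        nxt = s - m
        # Temporal-difference update: my position is as good as the
        # opponent's resulting position is bad (negamax-style target).
        target = 1.0 if nxt == 0 else -value[nxt]
        value[s] += ALPHA * (target - value[s])
        s = nxt

def best_move(s):
    """Greedy move from the learned table."""
    return max(legal(s), key=lambda mv: -value[s - mv])
```

With enough episodes, the table should rediscover the classic Nim rule on its own: positions that are a multiple of four stones are losing for the player to move, so the learned policy tries to hand the opponent such a position. The self-play principle is the same in spirit as AlphaGo Zero's, which pairs it with a deep network and tree search instead of a lookup table.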
And indeed, now that human players are no longer dominant in games like chess and Go, it can be said that we've already entered the era of superintelligence.
Eventually, self-teaching systems will be used to solve more pressing problems, such as folding proteins to conjure up new medicines and biotechnologies, reducing energy consumption, or designing new materials.