A team of Google researchers recently unveiled the first computer program capable of learning a wide range of tasks independently, a landmark accomplishment in the field of artificial intelligence.

The artificial intelligence system spent its time playing dozens of Atari video games from the 1980s, like a teenager on a binge.


The research project was carried out by DeepMind Technologies, the British startup that Google acquired last year for $400 million. Its aim was to build computers that can learn from scratch through trial and error, one of the main methods humans use to learn.

Machines running the Deep Q-network computer program were exposed to retro Atari games without any instruction in the rules. The only inputs the machines received were the raw screen pixels, the game score and the set of available actions. The computers successfully taught themselves the rules, eventually figuring out winning strategies, and outperformed humans in 29 of 49 games on the Atari 2600. The games ranged from well-known favorites like Space Invaders, Q*bert, Pong and Breakout to side-scrolling shooters like River Raid and sports sims like Tennis and Boxing. By learning the rules on its own, the program surprised its creators with novel winning strategies and played at least three-quarters as well as a professional human games tester.
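The trial-and-error learning described above is a form of reinforcement learning. The Deep Q-network approximates action values with a deep neural network reading raw pixels, but the core update it relies on can be illustrated with a minimal tabular Q-learning sketch; the corridor environment, states and parameters below are invented for illustration and are not from the DeepMind paper:

```python
import random

# Minimal tabular Q-learning sketch: a 5-state corridor in which the
# agent starts at state 0 and earns a reward of 1 for reaching state 4.
# DQN replaces this lookup table with a deep network over screen pixels.
N_STATES = 5
ACTIONS = (-1, +1)                 # step left, step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def train(episodes=200, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]   # q[state][action_index]
    for _ in range(episodes):
        state = 0
        while state != N_STATES - 1:
            # epsilon-greedy: explore occasionally, otherwise exploit
            if rng.random() < EPSILON:
                a = rng.randrange(2)
            else:
                a = 0 if q[state][0] > q[state][1] else 1
            nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
            reward = 1.0 if nxt == N_STATES - 1 else 0.0
            # Q-learning update: nudge the estimate toward
            # reward + discounted best value of the next state
            q[state][a] += ALPHA * (reward + GAMMA * max(q[nxt]) - q[state][a])
            state = nxt
    return q

q = train()
policy = ["left" if q[s][0] > q[s][1] else "right" for s in range(N_STATES - 1)]
print(policy)  # after training, the greedy policy heads right toward the goal
```

The agent is never told the rules; it only observes states, rewards and available actions, which is the same interface the Atari-playing system was given.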

Google DeepMind’s Game-Playing AI

The general rules-learning technology could eventually help Google with things like search, language translation and voice recognition. Furthermore, researchers believe that it could equally be useful for robots and driverless cars.

Demis Hassabis of Google DeepMind in London described the AI program as a single general learning system that bridges sensory knowledge and learning. Games are Hassabis’s passion. A teenage chess and programming prodigy, he wrote clever game logic for popular titles like Theme Park before completing the computer science tripos at Cambridge University and earning a Ph.D. in cognitive neuroscience from University College London.

“Games are to AI researchers what fruit flies are to biology: a stripped-back system for testing theories,” said Richard Sutton, a fellow of the Association for the Advancement of Artificial Intelligence.

Named the Deep Q-network, or D.Q.N., the program excels at mastering and understanding structure. In some ways, however, D.Q.N. is not even as smart as a toddler: it is incapable of developing conceptual or abstract knowledge, such as transferring what it learned in one game to excel at another.

These limitations underline how far AI research remains from human-like intelligence. Even so, the findings are a cornerstone of AI research: for the first time, someone has built a single learning system that can independently learn from experience and manage a range of challenging tasks.

Google’s system combined AI technology with memory and reward systems like those found in human and animal brains. The research team fused these together to develop a system that can learn from its surrounding environment, refer back to previous actions and adjust its behavior accordingly. This marks a major advance over previous AI achievements: popular image-recognition systems developed by IBM, Clarifai, Microsoft and MetaMind require both supervision and annotated pictures in order to learn how to recognize objects. Google is developing a much more sophisticated technology and hopes to draw inspiration from cognitive science to add systems for long-term memory as well as strategic planning.
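The ability to “refer back to previous actions” corresponds to what the DeepMind researchers call experience replay: past transitions are stored and revisited in random order during training. A minimal sketch of such a buffer might look like the following; the class name, capacity and usage are illustrative assumptions, not details from the paper:

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores past (state, action, reward, next_state) transitions so the
    learner can revisit earlier experience in random order, breaking the
    correlation between consecutive game frames."""

    def __init__(self, capacity=10_000, seed=0):
        self.buffer = deque(maxlen=capacity)   # oldest transitions fall off
        self.rng = random.Random(seed)

    def add(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # draw a random minibatch of stored transitions for one update
        return self.rng.sample(list(self.buffer), batch_size)

# toy usage: record a few transitions, then sample a training batch
buf = ReplayBuffer(capacity=100)
for t in range(50):
    buf.add(t, t % 4, 0.0, t + 1)
batch = buf.sample(8)
print(len(batch))  # 8 randomly drawn transitions
```

Sampling at random rather than replaying experience in order is the design choice that stabilizes learning when the value function is a neural network.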


Google DeepMind is now working on bringing its machines up to 1990s-era capability, teaching them to navigate complicated three-dimensional spaces in games such as Grand Theft Auto. By exposing the machines to ever more complicated games, the research team hopes eventually to carry the technology into real life.

At some point, once the system is scaled up, it could work like a personal assistant, planning an entire trip to Asia and booking the hotels and flights on its own. Google’s self-driving cars could also learn how to drive from experience and mistakes, rather than needing to be taught.

Hassabis asserts this could happen within the next five years. He strongly believes that someday machines will be capable of some form of creativity, such as designing computer games. But we’re not there yet.

A computer beating a game is far from the humanoid genius we see in sci-fi movies, but it is a first step toward a more capable AI system. The development recalls IBM’s Watson, built specifically to win the quiz show Jeopardy. In one respect, though, D.Q.N. is more remarkable: unlike Watson or Deep Blue, it does not need to be taught strategies or rules to succeed at games.

In a letter published in the journal Nature, the Google DeepMind researchers wrote that their machine learns efficient representations of the environment from high-dimensional sensory inputs and uses them to generalize past experience to new situations. Because the algorithm has no real memory, it is unable to devise long-term strategies that require planning. The researchers are now trying to add a memory component to the AI system and apply it to more realistic 3D computer games.

Billionaire entrepreneur Elon Musk has described artificial intelligence as one of humanity’s greatest existential threats. Hassabis soft-pedaled those concerns, saying we’re decades away from that kind of technology.