Google is in the race to win the crown of artificial intelligence, and with DeepMind it has built some of the most advanced AI systems in the world. No wonder Elon Musk, speaking at Code Conference, indirectly expressed his concern about Google's burgeoning power in AI. Google's AI progress worries Elon Musk, even if he won't say so openly.

Image: Google DeepMind official website


We, as human beings, don't like the idea of not being at the top of the food chain; it's basic human psychology. We don't want our creations to hold power over us, and we certainly don't want artificial intelligence to dominate us in the future. That's why influential figures like Tesla's Elon Musk, world-renowned physicist Stephen Hawking, and Microsoft co-founder Bill Gates are so determined to warn humanity about the dangers AI could pose.

As much as Google is trying to bring about the almighty AI revolution, the company is also keen to keep disaster at bay in case anything goes wrong. DeepMind, together with Oxford's Future of Humanity Institute, has published a paper detailing its work on ensuring there's a "STOP BUTTON" to prevent AI from turning dangerous. Or, let's just say, to keep the chances of a 'robocalypse' low.


More importantly, the researchers have developed a framework, known as safe interruptibility, meant to ensure an AI agent can always be brought back under human control. This is the same lab responsible for AlphaGo, the program that defeated world champion Lee Sedol at Go, a game with more possible positions than there are atoms in the observable universe and widely considered the most complex board game in existence.
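
As a quick, back-of-the-envelope illustration of that claim (my own arithmetic, not a figure from Google): each of Go's 19 × 19 = 361 intersections can be empty, black, or white, which caps the number of board configurations at 3^361, roughly 10^172.

```python
import math

# Upper bound on Go board configurations: each of the 361 intersections
# is empty, black, or white. Only a fraction of these are legal positions,
# so the true count of legal positions (~2.1e170) is somewhat smaller.
points = 19 * 19                  # 361 intersections
upper_bound = 3 ** points
print(math.log10(upper_bound))    # ~172.2 -> about 10**172 configurations,
                                  # vs. an estimated ~10**80 atoms in the universe
```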

In the paper, the researchers start from the assumption that "it's unlikely for AI agents to behave in an optimal manner all the time," especially when interacting with the real world. As such, it's crucial that human operators be able to stop an AI agent partway through a harmful sequence of actions and steer the situation in a safer direction.

Further, the researchers write,

"Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences, or to take it out of a delicate situation, or even to temporarily use it to achieve a task it did not learn to perform or would not necessarily receive rewards for this."
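
To make the idea concrete, here is a minimal, hypothetical sketch of safe interruptibility in code. It is not DeepMind's implementation: the toy environment, the reward scheme, and helper names like operator_interrupts and SAFE_ACTION are illustrative assumptions of mine. What it shows is the property the paper argues for formally: an off-policy learner such as Q-learning keeps estimating the same values even when a human operator repeatedly overrides its actions, so it never learns to resist the stop button.

```python
import random
from collections import defaultdict

ACTIONS = ["left", "right", "stay"]
SAFE_ACTION = "stay"            # what the operator forces during an interruption
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q = defaultdict(float)          # Q[(state, action)] -> learned value estimate

def choose_action(state):
    """Epsilon-greedy pick over the agent's current estimates."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def step(state, action):
    """Toy stand-in environment: moving right earns reward 1."""
    next_state = (state + 1) % 10 if action == "right" else state
    reward = 1.0 if action == "right" else 0.0
    return next_state, reward

def operator_interrupts():
    """Stand-in for the human 'stop button' (fires on 5% of steps)."""
    return random.random() < 0.05

state = 0
for _ in range(10_000):
    intended = choose_action(state)
    # An interruption overrides what the agent DOES, not what it LEARNS.
    executed = SAFE_ACTION if operator_interrupts() else intended
    next_state, reward = step(state, executed)
    # Off-policy (Q-learning) update: the target bootstraps from the best
    # action in next_state regardless of which action was actually executed,
    # so forced safe actions don't bias the learned policy.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, executed)] += ALPHA * (reward + GAMMA * best_next - Q[(state, executed)])
    state = next_state

# After training, the greedy policy still prefers "right" in every state,
# even though the operator repeatedly forced "stay".
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(10)})
```

An on-policy learner such as SARSA, by contrast, would fold the forced safe actions into its value estimates, which is why the paper modifies it to make it safely interruptible.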

In just a year, DeepMind has taken a huge leap in artificial intelligence, and Google has kept a keen eye on security and safety as part of that advancement. The "STOP BUTTON" is one of many efforts the company is working on to keep the chances of a robocalypse down and prevent AI from turning dangerous.