Let’s just keep our thoughts at the bay regarding what we saw in Terminator or iMom. AI technology is likely to be beneficial and useful in an overwhelming manner for humanity. Despite this assurance, being a steward of latest technology the innovators need to think about potential challenges and the best ways to address the ones associated with risks. Google CEO Sundar Pichai is not only concerned about his company’s advancements in AI technology, but for Google AI safety as well.

Google AI Safety

Recently, during the Code Conference, Tesla CEO expressed his concerns regarding artificial intelligence employed by Google. Sure, the high-profile executive of Tesla didn't point out his worries directly, but in a very simple gesture, he gave out a strong indication of what he feels about Google and its proceedings in the world of artificial intelligence. It seems Google wants to wipe off Tesla CEO Elon Musk’s worries regarding AI safety. Just a few days ago, Google made an announcement regarding the "STOP BUTTON" which it has been using in case if AI goes wrong in future.

Of course, this is a concern, which badly needed to be addressed by the search engine giant considering the pace with which it’s advancing in the field. On Tuesday, Google addressed concrete issues for Google AI safety in a technical paper in association with Berkeley, Stanford, OpenAI, and Google. Basically, it’s an approach to move beyond hypothetical and abstract concerns surrounding development and use of artificial intelligence by offering researchers with certain questions to use in real-world testing.

It’s one of the most dangerous scenarios potentially. For unknown reasons, the researchers opted to illustrate these hazards surrounding Google AI safety using the example of a cleaning bot instead of Superintelligence, which wants to enslave us. Strange!

Below listed are a few points, which Google CEO Sundar Pichai and Eric Schmidt think are crucial to focus on:

Avoiding Negative Side Effects: How can we ensure that an AI system will not disturb its environment in negative ways while pursuing its goals, e.g. a cleaning robot knocking over a vase because it can clean faster by doing so?

Avoiding Reward Hacking: How can we avoid gaming of the reward function? For example, we don’t want this cleaning robot simply covering over messes with materials it can’t see through.

Scalable Oversight: How can we efficiently ensure that a given AI system respects aspects of the objective that are too expensive to be frequently evaluated during training? For example, if an AI system gets human feedback as it performs a task, it needs to use that feedback efficiently because asking too often would be annoying.

Safe Exploration: How do we ensure that an AI system doesn’t make exploratory moves with very negative repercussions? For example, maybe a cleaning robot should experiment with mopping strategies, but clearly it shouldn’t try putting a wet mop in an electrical outlet.

Robustness to Distributional Shift: How do we ensure that an AI system recognizes, and behaves robustly, when it’s in an environment very different from its training environment? For example, heuristics learned for a factory workfloor may not be safe enough for an office.