Perhaps observation is the best teacher, and imitation the easiest form of learning. NVIDIA researchers have applied this learning theory to robotics, training robots to learn human behavior by observing, none other than, humans.
NVIDIA GPUs and AI are fueling major advancements in autonomous machines. The company has developed a system for seamless robot-human interaction based on learning by observation. The research team presented the work at the International Conference on Robotics and Automation (ICRA) in Brisbane, Australia, on May 20. Claire Delaunay, Vice President of Engineering, and Dieter Fox, Head of the Robotics Research Lab in Seattle, represented NVIDIA at ICRA.
NVIDIA’s Teaching Robot System
A team of NVIDIA researchers led by Stan Birchfield and Jonathan Tremblay at the company's new robotics lab in Seattle has developed a method to teach robots human behavior through observation. The system infers and executes a human-readable program from a single real-world demonstration. It comprises a series of neural networks that handle perception, program generation, and program execution. These networks were trained in simulation during the development and pilot-testing phases, and the researchers are now looking forward to testing the system in the real world.
“The team wants to enable this next generation of robots that can safely work in close proximity to humans,” Fox said in an interview with TechCrunch. “But to do that, those robots need to be able to detect people, track their activities and learn how they can help people. That may be in a small-scale industrial setting or in somebody’s home.”
Robot Training Demonstration
The team demonstrated the concept using a Baxter robot operating in the real world. To begin with, they gave the robot a pick-and-place task: stacking colored cubes. Though stacking is a simple problem for a human, learning it by observing a human is complex for a robot.
As presented in the demo video, the researchers trained the robot by giving the neural networks a step-by-step pipeline to follow. First, the robot detects the locations of the objects; then it infers the relationships among them; finally, it generates a program to imitate the actions it saw the human perform. The robot learned the stacking task from a single demonstration, and in the real world at that.
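The three stages above can be sketched in code. The following is a minimal, illustrative Python sketch, not NVIDIA's actual system: the function names, the scene representation, and the cube-stacking logic are all assumptions made for illustration, showing how detected object positions could be turned into relationships and then into a human-readable program.

```python
def perceive(scene):
    """Stand-in for the perception network: detect objects and their positions.

    scene: list of (name, (x, y, z)) tuples, as if from a camera pipeline.
    """
    return dict(scene)

def infer_relations(objects):
    """Infer 'A on B' relations: A sits on B if they share (x, y)
    and A is exactly one unit higher (hypothetical grid world)."""
    relations = []
    for a, (ax, ay, az) in objects.items():
        for b, (bx, by, bz) in objects.items():
            if a != b and (ax, ay) == (bx, by) and az == bz + 1:
                relations.append((a, "on", b))
    return relations

def generate_program(objects, relations):
    """Emit a human-readable pick-and-place program reproducing the stack.

    Placements are ordered bottom-up (by the base cube's height) so each
    cube is placed onto one that is already in position."""
    ordered = sorted(relations, key=lambda r: objects[r[2]][2])
    return [f"pick up {a} and place it on {b}" for a, _, b in ordered]

# Example: a demonstration of three stacked cubes.
demo = [("red", (0, 0, 0)), ("green", (0, 0, 1)), ("blue", (0, 0, 2))]
objs = perceive(demo)
rels = infer_relations(objs)
program = generate_program(objs, rels)
# program == ["pick up green and place it on red",
#             "pick up blue and place it on green"]
```

The key property mirrored here is that the intermediate output is a readable list of steps rather than an opaque policy, which is what lets a researcher inspect what the robot intends to do.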
The most important element of the system is that, during training, it generates a human-readable description of the actions it is taking. This way, if things go wrong while the robot learns by observing humans, the researchers can identify the root cause.
For the robot's basic training, NVIDIA researchers generally used synthetic data from a simulated environment, which lets the robot learn at a much faster rate. More advanced versions of these next-generation robots will need to recognize humans, identify what a human wants them to do, and then learn by observation. Hence, these robots still have a long way to go before they land in the real world.
What a relief! Otherwise, with this development, we would soon come across companies posting ads for ‘apprentice required (only robots may apply)’.