MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed a system that can sense human body postures and movements through walls.

The project, called “RF-Pose”, uses a neural network to analyze radio signals that bounce off people’s bodies. The system then creates a dynamic stick figure that mirrors the actions performed by the person, such as walking, standing, sitting, and moving their limbs.

The team working on this ‘seeing through walls’ project is led by Andrew and Erna Viterbi Professor Dina Katabi and Ph.D. student and lead author Mingmin Zhao. The other members of the team are MIT Professor Antonio Torralba, postdoc Mohammad Abu Alsheikh, graduate student Tianhong Li, and Ph.D. students Yonglong Tian and Hang Zhao.

Image: RF-Pose. The MIT team is primarily working with doctors to experiment with RF-Pose applications in healthcare. Source: MIT CSAIL

Applications of RF-Pose

The MIT team is primarily working with doctors to experiment with this artificial intelligence application in healthcare. According to the team, the radio-signal-based system could be used to monitor diseases like Parkinson’s, multiple sclerosis (MS), and muscular dystrophy. It would allow medical practitioners to passively monitor a patient’s activities and determine further treatment. Patients could move about independently, without having to stay within a camera’s view for constant medical supervision.

“We’ve seen that monitoring patients’ walking speed and ability to do basic activities on their own gives healthcare providers a window into their lives that they didn’t have before, which could be meaningful for a whole range of diseases,” adds Katabi.

Beyond healthcare, the MIT researchers are also considering applications in gaming: a new category of video games could be built on RF-Pose in which players move around the house. The technology could also help locate survivors in search-and-rescue missions.

How Does the Neural Network System Work?

Typically, a neural network is trained on examples labeled by hand. For instance, to teach a network to identify a cat, a researcher takes a huge set of images and labels each one ‘cat’ or ‘not cat’. Radio signals, however, cannot be labeled by humans, which posed a major challenge for the MIT researchers.
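To illustrate the hand-labeled training the article describes, here is a minimal sketch of a supervised ‘cat’ vs. ‘not cat’ classifier. The data and features are made up for illustration, and it uses plain logistic regression rather than the deep network MIT actually used:

```python
import math

# Hypothetical hand-labeled dataset: each "image" is reduced to two
# made-up features, with a human-assigned label 1 = cat, 0 = not cat.
data = [
    ((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.95, 0.7), 1),
    ((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.15, 0.25), 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Logistic regression fitted by gradient descent on the hand labels.
w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(2000):
    for (x1, x2), y in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - y  # gradient of the cross-entropy loss w.r.t. the logit
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

def is_cat(x1, x2):
    return sigmoid(w[0] * x1 + w[1] * x2 + b) > 0.5

print(is_cat(0.85, 0.85))  # cat-like features -> True
print(is_cat(0.10, 0.15))  # -> False
```

The key point is that every training example needs a human-provided label — exactly what is impossible for raw radio signals.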

To overcome this issue, the MIT team gathered thousands of examples using both their wireless device and a camera. The captured images showed people performing physical activities such as walking, sitting, talking, opening doors, and waiting for elevators.

They then extracted stick figures from the captured images and showed them to the neural network along with the corresponding radio signals. This pairing enabled the artificial intelligence system to learn the connection between the radio signals and the stick figures of the people in the scene.
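The cross-modal training described above can be sketched as a teacher-student setup: pose targets derived from the camera supervise a model whose input is the synchronized radio signal. The sketch below fakes both modalities with synthetic linear data and fits the radio-to-pose map by least squares — a toy stand-in for RF-Pose’s actual deep network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: each row of `radio` is one RF measurement, and
# `camera_pose` plays the role of stick-figure keypoints extracted from
# the synchronized camera frame (the "teacher" labels).
n_samples, n_rf, n_keypoints = 200, 16, 10
true_map = rng.normal(size=(n_rf, n_keypoints))
radio = rng.normal(size=(n_samples, n_rf))
camera_pose = radio @ true_map + 0.01 * rng.normal(size=(n_samples, n_keypoints))

# "Training": fit the student (radio -> pose) against the teacher's
# camera-derived targets via least squares.
student_map, *_ = np.linalg.lstsq(radio, camera_pose, rcond=None)

# At test time the camera is gone: pose is estimated from radio alone.
new_radio = rng.normal(size=(1, n_rf))
estimated_pose = new_radio @ student_map
true_pose = new_radio @ true_map
print(np.max(np.abs(estimated_pose - true_pose)) < 0.1)  # close to the teacher
```

Once trained, the student no longer needs the camera at all — which is what lets RF-Pose operate through walls, where no camera could see.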

After training, RF-Pose generalized this knowledge to estimate the posture and movements of a person standing behind a wall. This time it did so without cameras, using only the radio signals reflecting off people’s bodies.

“If you think of the computer vision system as the teacher, this is a truly fascinating example of the student outperforming the teacher,” commented Torralba.

Privacy Concerns

For testing and documentation, the team collected data with the subjects’ consent and encrypted it for anonymity. Even in the future, when the project reaches the real world, the team plans to include a “consent mechanism”: the person who installs the system will be prompted to perform a specific set of movements, which will activate the system and start its monitoring of the environment.

The MIT team will present the paper, titled ‘Through-Wall Human Pose Estimation Using Radio Signals’, at the Conference on Computer Vision and Pattern Recognition (CVPR) in Salt Lake City, Utah.