Researchers at McGill University have developed a remote-controlled buggy that can negotiate rough terrain using artificial intelligence. The AI uses visual inputs, both aerial shots and sensor-based imagery, to navigate off-road terrain. The breakthrough could help autonomous vehicle companies such as Tesla and Wayve improve their off-road performance.
The research team used a hybrid model that combines model-based and model-free elements to find objects in the path. They feed the system both first-person and overhead aerial image inputs. Fusing the two, onboard sensors and aerial views, improves driving performance and makes the model more sensitive to environmental obstructions.
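The researchers' code is not reproduced in the report, but the fusion idea can be sketched as a two-stream network: one encoder for the first-person camera frame, one for the local aerial patch, with the two feature vectors concatenated before prediction. Below is a minimal PyTorch sketch; the layer sizes, class name, and output head are all chosen here for illustration, not taken from the paper.

```python
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    """Illustrative two-stream encoder: fuses a first-person camera
    frame with an overhead aerial patch before predicting per-action costs."""

    def __init__(self, num_actions: int = 5):
        super().__init__()
        # Separate convolutional encoders for each viewpoint
        # (architecture invented for this sketch).
        def encoder():
            return nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.fpv_encoder = encoder()     # first-person view
        self.aerial_encoder = encoder()  # overhead aerial view
        # Fused features predict a cost for each candidate steering action.
        self.head = nn.Linear(32 + 32, num_actions)

    def forward(self, fpv: torch.Tensor, aerial: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.fpv_encoder(fpv), self.aerial_encoder(aerial)], dim=1)
        return self.head(fused)

# Example: one 96x96 RGB frame from each viewpoint.
model = TwoStreamFusion()
costs = model(torch.randn(1, 3, 96, 96), torch.randn(1, 3, 96, 96))
print(costs.shape)  # torch.Size([1, 5])
```

Keeping the two encoders separate lets each stream specialize in its own viewpoint before the fused features are used for prediction.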
The results showed that the vehicle could generalize to environments with vegetation, rocks, and sandy trails.
Remote-Controlled Buggy Navigates Rough Terrain
The purpose of the research is not merely to show that a vehicle prefers to drive on smooth roads, but to show how a vehicle learns to negotiate rough off-road terrain by combining aerial imagery with first-person sensing from onboard ground sensors. The vehicle also labels its own training data through self-supervised learning.
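Self-supervised labeling here means the vehicle generates its own training targets from what it later experiences: a camera frame captured at one moment can be labeled with the roughness the IMU measures when the vehicle actually drives over that spot. A hypothetical Python sketch of such a pipeline follows; the roughness proxy, horizon, and function names are assumptions, not the paper's exact method.

```python
import numpy as np

def roughness(imu_z_accel: np.ndarray) -> float:
    """Proxy roughness score: variance of vertical acceleration over a
    short window (one plausible choice, not necessarily the paper's)."""
    return float(np.var(imu_z_accel))

def self_label(frames: list, imu_windows: list, horizon: int = 10) -> list:
    """Pair each camera frame with the roughness measured `horizon` steps
    later, when the vehicle traverses the terrain that frame imaged."""
    dataset = []
    for t in range(len(frames) - horizon):
        label = roughness(imu_windows[t + horizon])
        dataset.append((frames[t], label))
    return dataset
```

Because the labels come from the vehicle's own sensors rather than human annotators, the training set grows automatically as the buggy drives.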
The off-road vehicle has an electric motor with a mechanical brake and is wirelessly connected to an Intel i7 NUC computer running the open-source Robot Operating System (ROS), says the report published by the researchers. The buggy is equipped with both a short-range lidar sensor and a forward-facing camera coupled with an inertial measurement unit; a microcontroller relays all sensor information to the NUC computer.
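On the software side, ROS makes this kind of sensor relay straightforward: each sensor publishes messages on a topic and the NUC subscribes to them. A minimal rospy sketch is shown below; the topic names are common defaults, not taken from the paper, and the actual names would depend on the drivers used.

```python
import rospy
from sensor_msgs.msg import Image, Imu, LaserScan

def on_scan(msg: LaserScan):
    rospy.loginfo("lidar: %d ranges", len(msg.ranges))

def on_image(msg: Image):
    rospy.loginfo("camera frame: %dx%d", msg.width, msg.height)

def on_imu(msg: Imu):
    rospy.loginfo("imu z-accel: %.3f", msg.linear_acceleration.z)

if __name__ == "__main__":
    rospy.init_node("buggy_sensor_listener")
    # Topic names are illustrative placeholders.
    rospy.Subscriber("/scan", LaserScan, on_scan)
    rospy.Subscriber("/camera/image_raw", Image, on_image)
    rospy.Subscriber("/imu/data", Imu, on_imu)
    rospy.spin()
```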
During the experiment, terrain roughness was estimated with the onboard inertial measurement unit (IMU), while obstacles were detected with the short-range lidar. The team fused input images from the onboard first-person camera with a local aerial view. The addition of aerial imagery improved predictive performance, especially around visual obstructions and sharp turns. The researchers found that the AI-driven vehicle using the hybrid model chose smoother terrain 90 percent of the time and reduced the proportion of rough terrain traversed by a factor of 6.1 compared to a model using only first-person imagery.
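The 6.1x figure is a ratio of rough-terrain exposure between the two models. A purely illustrative sketch of how such a comparison could be computed, assuming per-timestep roughness logs for each run (the logs and threshold here are made up):

```python
import numpy as np

def rough_fraction(roughness_log: np.ndarray, threshold: float) -> float:
    """Fraction of timesteps spent on terrain rougher than `threshold`."""
    return float(np.mean(roughness_log > threshold))

# Hypothetical logs from the hybrid and first-person-only models.
hybrid = np.random.rand(1000) * 0.4   # stays mostly on smooth ground
fpv_only = np.random.rand(1000)       # wanders onto rough ground more often
threshold = 0.3
reduction = rough_fraction(fpv_only, threshold) / rough_fraction(hybrid, threshold)
print(f"rough-terrain exposure reduced {reduction:.1f}x")
```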
Small robotic vehicles, the researchers say, are easy to guide over smooth terrain, but compared with larger-scale robots they are difficult to guide remotely through rougher environments.
The researchers say that the small size of the vehicles makes them easier to deploy and operate in remote environments, and they are less likely to disturb the natural environs.
Modern micro air vehicles (MAVs) offer the added advantages of low cost, ease of use, and the ability to capture high-quality aerial pictures with an unrestricted field of view.
Bigger robotic vehicles typically rely on terrain classification based on human intervention and geographically labeled data. Sensor modeling generally relies on complex geometric models combined with terrain classification learned from labeled data. Model-based approaches are more sample-efficient but may perform poorly due to cascading model errors, while model-free methods that learn directly from vision are rarely used in real-world scenarios because of their sample inefficiency.
The researchers combined aspects of model-free and model-based methods into a single computation graph, drawing on the strengths of both while offsetting their individual weaknesses.
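One common way to realize such a combination is to let a learned, model-free network predict terrain costs from fused imagery, while a model-based planner rolls out candidate steering commands through a simple kinematic model and scores them under those costs. The sketch below illustrates that pattern; the kinematic model, grid lookup, and candidate set are assumptions for illustration, not the paper's exact architecture.

```python
import torch

def rollout(steering: torch.Tensor, steps: int = 10, dt: float = 0.1,
            speed: float = 1.0) -> torch.Tensor:
    """Model-based part: unroll a simple kinematic model to get the
    (x, y) positions a constant steering command would visit."""
    theta = torch.zeros(())
    x = torch.zeros(())
    y = torch.zeros(())
    points = []
    for _ in range(steps):
        theta = theta + steering * dt
        x = x + speed * torch.cos(theta) * dt
        y = y + speed * torch.sin(theta) * dt
        points.append(torch.stack([x, y]))
    return torch.stack(points)  # shape (steps, 2)

def trajectory_cost(points: torch.Tensor, cost_map: torch.Tensor) -> torch.Tensor:
    """Score a trajectory under a learned cost map (the model-free part
    would produce `cost_map` from fused camera/aerial images)."""
    # Nearest-cell lookup into a local grid centered on the vehicle.
    h, w = cost_map.shape
    ix = ((points[:, 0] + 1.0) / 2.0 * (w - 1)).long().clamp(0, w - 1)
    iy = ((points[:, 1] + 1.0) / 2.0 * (h - 1)).long().clamp(0, h - 1)
    return cost_map[iy, ix].sum()

# Pick the lowest-cost steering command among a few candidates.
cost_map = torch.rand(32, 32)  # stand-in for a network's output
candidates = torch.tensor([-0.5, -0.25, 0.0, 0.25, 0.5])
costs = torch.stack([trajectory_cost(rollout(s), cost_map) for s in candidates])
best = candidates[costs.argmin()]
print(f"chosen steering: {best.item():+.2f}")
```

Because the rollout and the cost lookup live in one differentiable graph, the perception network can in principle be trained end-to-end against the quality of the resulting trajectories.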