Machine learning is finding a new application in the unmanned aerial vehicle sphere. It is becoming an increasingly important artificial intelligence approach to building autonomous and robotic systems. One of its key challenges is the need for many samples: the amount of data required to learn useful behaviors is high. In addition, the robotic system is often non-operational during the training phase, which forces debugging to occur in real-world experiments with an unpredictable robot.
Microsoft’s Aerial Informatics and Robotics platform offers a solution to both problems: it provides realistic simulation tools for designers and developers to generate the training data they need, and it leverages recent innovations in physics to create accurate, real-world simulations.
Given the speed of innovation in hardware, software, and algorithms, such a platform must be flexible enough to be easily extended in multiple dimensions. The Aerial Informatics and Robotics framework follows a modular design to address these challenges.
Its cross-platform (Linux and Windows), open-source architecture is easily extensible to accommodate new types of autonomous vehicles, hardware platforms, and software protocols, allowing users to quickly add custom robot models and new sensors to the simulator.
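To illustrate how such a plug-in design can work, here is a minimal sketch of an extensible sensor registry in Python. All names here (`Sensor`, `SensorRegistry`, `Altimeter`) are hypothetical and for illustration only; this is not the platform's actual API.

```python
from abc import ABC, abstractmethod

# Hypothetical plug-in sensor interface: new sensor types subclass
# Sensor and are registered by name, with no changes to the core loop.
class Sensor(ABC):
    @abstractmethod
    def read(self, world_state: dict) -> dict:
        """Return a sensor observation for the current world state."""

class Altimeter(Sensor):
    def read(self, world_state: dict) -> dict:
        # Report the z-coordinate of the simulated vehicle as altitude.
        return {"altitude_m": world_state["position"][2]}

class SensorRegistry:
    def __init__(self):
        self._sensors = {}

    def register(self, name: str, sensor: Sensor):
        self._sensors[name] = sensor

    def read_all(self, world_state: dict) -> dict:
        # Poll every registered sensor against the same world state.
        return {name: s.read(world_state) for name, s in self._sensors.items()}

registry = SensorRegistry()
registry.register("altimeter", Altimeter())
obs = registry.read_all({"position": (0.0, 0.0, 12.5)})
print(obs)  # {'altimeter': {'altitude_m': 12.5}}
```

The design choice this pattern captures is that a simulator core only depends on the abstract interface, so custom vehicles and sensors can be added without touching the simulator itself.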
According to Microsoft’s site, the platform is also designed to integrate with existing machine learning frameworks to generate new algorithms for perception and control tasks. Methods such as reinforcement and imitation learning, learning-by-demonstration, and transfer learning can leverage simulations and synthetically generated experiences to build realistic models.
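As a concrete (and deliberately tiny) sketch of how simulation supports reinforcement learning, the following generic Q-learning example trains a 1-D "drone" to hold a target altitude in a simulated environment. This is standard Q-learning on a toy model, not the platform's training API; all constants and dynamics are invented for illustration.

```python
import random

# Toy simulated environment: altitudes 0..5, actions descend (-1) or
# climb (+1), reward for reaching TARGET. The agent learns entirely
# from simulated transitions, never risking a real vehicle.
random.seed(0)
TARGET, STATES, ACTIONS = 3, 6, (-1, +1)
Q = {(s, a): 0.0 for s in range(STATES) for a in ACTIONS}

for episode in range(500):
    s = random.randrange(STATES)
    for _ in range(20):
        # Epsilon-greedy action selection over the learned Q-values.
        if random.random() < 0.1:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), STATES - 1)              # simulated dynamics
        r = 1.0 if s2 == TARGET else -0.1 * abs(s2 - TARGET)
        # Standard Q-learning update (learning rate 0.5, discount 0.9).
        Q[(s, a)] += 0.5 * (r + 0.9 * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# Extract the greedy policy: climb below the target, descend above it.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(STATES)}
print(policy)
```

Because every transition is synthetic, thousands of training episodes cost nothing in hardware wear or crash risk, which is the core appeal of simulation-driven learning described above.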
Quadrotors are the first vehicles to have been implemented in the platform. These aerial robots have applications in precision agriculture, pathogen surveillance, weather monitoring, and more. A camera is an integral part of these systems and often the only way for a quadrotor to perceive the world in order to plan and execute its mission safely.
The platform enables seamless training and testing of camera-based perception systems via realistic renderings of the environment. These synthetically generated images can yield orders of magnitude more perception and control data than is possible with real-world robot data alone.
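The key advantage of synthetic rendering is that every image comes with a perfect, free ground-truth label. The sketch below illustrates that idea with a trivial stand-in "renderer" that draws a bright square at a known position in a small grayscale image; it is generic example code, not the platform's rendering pipeline.

```python
import random

# Each synthetic sample is an image plus the exact object position
# used to render it, so labeling costs nothing. SIZE and OBJ are
# illustrative constants for a 16x16 image with a 4x4 object.
random.seed(1)
SIZE, OBJ = 16, 4

def render_sample():
    x = random.randrange(SIZE - OBJ)       # ground-truth object position
    y = random.randrange(SIZE - OBJ)
    image = [[0.0] * SIZE for _ in range(SIZE)]
    for row in range(y, y + OBJ):
        for col in range(x, x + OBJ):
            image[row][col] = 1.0          # "rendered" object pixels
    return image, (x, y)                   # image paired with perfect label

# A simulator can emit effectively unlimited labeled samples on demand,
# orders of magnitude more than real-world data collection allows.
dataset = [render_sample() for _ in range(1000)]
image, (x, y) = dataset[0]
print(len(dataset), "labeled samples")
```

A perception model trained on such data would then be validated against a smaller real-world set, the workflow the platform is designed to support.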