Microsoft introduces technology to train autonomous drones like pets
Microsoft has released a beta version of an advanced virtual world for training autonomous drones and other gadgets that move on their own.
The software, which is available on GitHub, recreates shadows, reflections and other potentially confusing real-world conditions in a highly detailed, highly realistic virtual environment, without the risk of the real thing.
The research project, called the Aerial Informatics and Robotics Platform, includes software that lets researchers quickly write code to control aerial robots and other gadgets, along with a highly realistic simulator for collecting data to train an AI system and testing it in the virtual world before deploying it in the real world.
Ashish Kapoor, a Microsoft researcher who is leading the project, said they hope the tools will spawn major progress in creating artificial intelligence gadgets to drive cars, deliver packages and maybe even do laundry.
“The aspirational goal is really to build systems that can operate in the real world,” he said.
The project differs from other artificial intelligence research efforts, which focus on teaching AI systems to succeed in more artificial environments with well-defined rules, such as playing board games.
Kapoor said this work aims to help researchers develop more practical tools that can safely augment what people are doing in their everyday lives. “That’s the next leap in AI, really thinking about real-world systems,” Kapoor said.
Due to advances in graphics hardware, computing power and algorithms, Microsoft researchers can create simulators that offer a much more realistic view of the environment.
The simulator is built on the latest photorealistic technologies, which can accurately render subtle details such as shadows and reflections that make a significant difference to computer vision algorithms.
Researchers can also turn to the simulator as a safe, reliable and cheap testing ground for autonomous systems.
That has two advantages. First, researchers can “crash” a costly drone, robot or other gadget an infinite number of times without burning through tens of thousands of dollars in equipment, damaging actual buildings or hurting someone.
Second, it allows researchers to do better AI research faster, including gathering the training data used to build algorithms that teach systems to react safely, and conducting the kind of AI research that requires lots of trial and error.
The researchers say the simulator should help them get to the point more quickly where they can test, or even use, their systems in real-world settings where there is very little room for error.
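The crash-and-retry workflow described above can be sketched as a simple data-collection loop. The `SimDrone` class and `collect_episodes` function below are hypothetical illustrations of the idea, assuming a toy simulator with random crashes; they are not part of Microsoft's actual platform:

```python
import random

class SimDrone:
    """Hypothetical stand-in for a simulated drone; not the platform's real API."""
    def __init__(self):
        self.crashed = False
        self.position = 0.0

    def reset(self):
        self.crashed = False
        self.position = 0.0

    def step(self, action):
        # Move by the chosen amount, and crash with some probability
        # to mimic the risk a real-world trial would carry.
        self.position += action
        self.crashed = random.random() < 0.1
        return self.position, self.crashed

def collect_episodes(n_episodes, max_steps=50):
    """Run many trial-and-error episodes; a crash costs nothing in simulation."""
    drone = SimDrone()
    data = []  # (position, action, crashed) tuples for later training
    for _ in range(n_episodes):
        drone.reset()
        for _ in range(max_steps):
            action = random.uniform(-1.0, 1.0)
            pos, crashed = drone.step(action)
            data.append((pos, action, crashed))
            if crashed:
                break  # in the real world this would be an expensive failure
    return data

training_data = collect_episodes(100)
```

In simulation the crash is just a reset, so the loop can run thousands of episodes and keep every outcome, good or bad, as training data.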
The project includes a library of software that allows developers to quickly write code to control drones.
Furthermore, the tools could help researchers develop better perception abilities, helping a robot learn to recognize elements and grasp complex distinctions, such as differentiating between a real obstacle, like a door, and a false one, like a shadow.
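One simple way to frame the obstacle-versus-shadow distinction is to combine sensors: a shadow darkens a camera image but has no physical depth, while a door shows up in both. The function below is a hypothetical sketch of that reasoning, with an illustrative distance threshold; it is not how the platform's perception actually works:

```python
def is_real_obstacle(appears_in_camera, depth_m):
    # A shadow appears in the camera image but yields no depth reading;
    # a real obstacle like a door appears in both. The 5 m threshold
    # is an illustrative assumption, not a platform parameter.
    return appears_in_camera and depth_m is not None and depth_m < 5.0
```

In practice, a learned perception model would replace a hand-written rule like this, which is exactly the kind of model the simulator's training data is meant to support.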
The entire platform is designed to work on any type of autonomous system that needs to navigate its environment.
“I can actually use the same code base to fly a glider or drive a car,” Kapoor said.
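The idea of one code base driving different vehicles can be sketched as a shared interface that both a drone and a car implement. The class and function names below are hypothetical illustrations of the design, not the platform's actual API:

```python
from abc import ABC, abstractmethod

class Vehicle(ABC):
    """Hypothetical common interface; the real platform's API may differ."""
    @abstractmethod
    def move(self, heading: float, speed: float) -> None: ...
    @abstractmethod
    def state(self) -> dict: ...

class Drone(Vehicle):
    def __init__(self):
        self.log = []
    def move(self, heading, speed):
        self.log.append(("fly", heading, speed))
    def state(self):
        return {"kind": "drone", "steps": len(self.log)}

class Car(Vehicle):
    def __init__(self):
        self.log = []
    def move(self, heading, speed):
        self.log.append(("drive", heading, speed))
    def state(self):
        return {"kind": "car", "steps": len(self.log)}

def navigate(vehicle: Vehicle, waypoints):
    """The same navigation code runs regardless of what the vehicle is."""
    for heading in waypoints:
        vehicle.move(heading, speed=1.0)
    return vehicle.state()

print(navigate(Drone(), [0, 90, 180]))  # {'kind': 'drone', 'steps': 3}
print(navigate(Car(), [0, 90, 180]))    # {'kind': 'car', 'steps': 3}
```

Because `navigate` depends only on the abstract interface, swapping a glider or a car for a drone requires no change to the navigation logic itself.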
The researchers have worked on the platform for less than a year, but draw on decades of experience in fields including computer vision, robotics, machine learning and planning.
Kapoor said the quick progress was due to the unique structure of Microsoft's research labs, which allows researchers with vastly different backgrounds to collaborate easily.
The researchers note that robotics and artificial intelligence researchers cannot do this kind of development and testing in the real world.
“We want a democratization of robotics,” said Debadeepta Dey, a researcher working on the project.
Kapoor noted that there is no standard set of protocols for artificial intelligence agents, but this system could help develop best practices that can be applied across the board to improve safety as autonomous systems become more mainstream.