Innovative developments in human-machine teaming could enhance the abilities of the US Armed Forces. Project Maven focuses on helping U.S. Special Operations Command intelligence analysts identify objects in video from small ScanEagle drones. Earlier this month at an undisclosed location in the Middle East, computers running special algorithms helped intelligence analysts identify objects in a video feed from a ScanEagle flying over the battlefield.
A few days into the trials, the computer identified objects — people, cars, types of buildings — correctly about 60 percent of the time. Within a week, the machine's accuracy improved to around 80 percent.
According to defenseone.com, over the next year, the team members plan to expand the project to help automate the analysis of video feeds coming from large drones — and that’s just the beginning.
“What we’re setting the stage for is a future of human-machine teaming,” said Air Force Lt. Gen. John N.T. “Jack” Shanahan, director for defense intelligence for warfighter support, the Pentagon general who is overseeing the effort. Shanahan believes the concept will revolutionize the way the military fights. “This is not machines taking over,” he said. “This is not a technological solution to a technological problem. It’s an operational solution to an operational problem.”
In coming months, the team plans to put the algorithms in the hands of more units with smaller tactical drones, before expanding the project to larger, medium-altitude Predator and Reaper drones.
Before deploying the technology, the team trained the algorithms on thousands of hours of archived battlefield video captured by drones in the Middle East. As it turned out, that training data differed from conditions in the region where the Project Maven team ultimately deployed.
The team has paired the Maven algorithm with a system called Minotaur, a Navy and Marine Corps “correlation and georegistration application.” As Shanahan describes it, Maven runs the algorithm, which draws boxes on the video screen, classifying an object and then tracking it. Minotaur then georegisters the coordinates, essentially displaying the object's location on a map. “Having those things together is really increasing situational awareness and starts the process of giving analysts a little bit of time back — which we hope will become a lot of time back over time — rather than just having to stay glued to the video screen,” Shanahan said.
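The internals of Maven and Minotaur are not public, but the pipeline Shanahan describes — a detector that boxes, classifies, and tracks an object in pixel space, handed off to a georegistration step that places it on a map — can be sketched in miniature. The sketch below is purely illustrative: the `Detection` structure, the flat-earth degrees-per-pixel conversion, and all coordinate values are assumptions for the example, not details of either system (real georegistration uses full platform telemetry and camera models).

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One boxed, classified, tracked object from a video frame (illustrative)."""
    x: float        # box center, pixels from frame's left edge
    y: float        # box center, pixels from frame's top edge
    label: str      # classifier output, e.g. "vehicle"
    track_id: int   # identity maintained across frames by the tracker

def georegister(det, frame_origin, deg_per_px):
    """Map a pixel-space detection to approximate map coordinates.

    frame_origin: (lat, lon) of the frame's top-left corner, assumed known
    from platform telemetry; deg_per_px: assumed ground sample distance in
    degrees per pixel (a crude flat-earth simplification).
    """
    lat0, lon0 = frame_origin
    lat = lat0 - det.y * deg_per_px  # image y grows downward; latitude decreases
    lon = lon0 + det.x * deg_per_px
    return (lat, lon)

# Example with made-up numbers: a tracked vehicle at pixel (640, 360) in a
# frame whose top-left corner sits at a hypothetical 33.30 N, 44.40 E.
det = Detection(x=640, y=360, label="vehicle", track_id=7)
lat, lon = georegister(det, frame_origin=(33.30, 44.40), deg_per_px=1e-5)
print(det.label, det.track_id, round(lat, 5), round(lon, 5))
```

The point of the pairing, as the quote suggests, is that the analyst no longer has to translate screen positions into map positions by eye: the detector's output is carried straight through to a georeferenced location.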
Once the algorithms are running on Predator and Reaper video feeds, the plan is to put them to work on Gorgon Stare, a sophisticated, high-tech array of cameras carried by a Reaper drone that can view entire towns.