A screen capture from the AlphaDogfight challenge produced by DARPA on Thursday, August 20, 2020. DARPA / Patrick Tucker

An AI just beat a human F-16 pilot in a dogfight — again

In five rounds, an artificially intelligent agent showed that it could outshoot other AIs, and a human. So what happens next with AI in air combat?


The never-ending saga of machines outperforming humans has a new chapter. An AI algorithm has again beaten a human fighter pilot in a virtual dogfight. The contest was the finale of the U.S. military’s AlphaDogfight challenge, an effort to “demonstrate the feasibility of developing effective, intelligent autonomous agents capable of defeating adversary aircraft in a dogfight.”

Last August, the Defense Advanced Research Projects Agency, or DARPA, selected eight teams, ranging from large, traditional defense contractors like Lockheed Martin to small groups like Heron Systems, to compete in a series of trials in November and January. In the final, on Thursday, Heron Systems emerged as the victor against the seven other teams after two days of old-school dogfights, going after each other using nose-aimed guns only. Heron then faced off against a human fighter pilot sitting in a simulator and wearing a virtual-reality helmet, and won five rounds to zero.

The other winner in Thursday’s event was deep reinforcement learning, in which artificial-intelligence algorithms get to try out a task in a virtual environment over and over again, sometimes very quickly, until they develop something like understanding. Deep reinforcement learning played a key role in Heron Systems’ agent, as well as in that of Lockheed Martin, the runner-up.

Matt Tarascio, vice president of artificial intelligence, and Lee Ritholtz, director and chief architect of artificial intelligence, at Lockheed Martin, told Defense One that getting an algorithm to perform well in air combat is very different from teaching software simply “to fly,” or to maintain a particular direction, altitude, and speed. Software begins with a complete lack of understanding of even very basic flight tasks, explained Ritholtz, putting it at a disadvantage against any human, at first. “You don’t have to teach a human [that] it shouldn’t crash into the ground… They have basic instincts that the algorithm doesn’t have,” in terms of training. “That means dying a lot. Hitting the ground, a lot,” said Ritholtz.

Tarascio likened it to “putting a baby in a cockpit.”
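The training loop Ritholtz describes can be sketched in a few lines. The toy “altitude” environment, rewards, and hyperparameters below are all hypothetical illustrations, not anything from the AlphaDogfight trials, and the simple Q-table is a tabular stand-in for the deep neural network a real deep-reinforcement-learning agent would use; a minimal sketch, assuming a discrete one-dimensional flying task:

import random

# Toy illustration of the reinforcement-learning loop described above.
# This is NOT the AlphaDogfight code; it is a minimal, hypothetical sketch
# of how an agent starts with no "instincts," crashes constantly, and only
# learns to avoid the ground from the reward signal.

ALTITUDES = list(range(11))   # discrete altitude bands: 0 = the ground
ACTIONS = [-1, 0, +1]         # descend, hold, climb
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

# Q-table: expected long-run reward for each (altitude, action) pair,
# initialized to zero -- the agent knows nothing about flying yet.
Q = {(alt, a): 0.0 for alt in ALTITUDES for a in ACTIONS}

def step(alt, action):
    """Advance the toy simulator one tick; return (next_alt, reward, done)."""
    nxt = max(0, min(10, alt + action))
    if nxt == 0:
        return nxt, -100.0, True   # hit the ground: big penalty, episode over
    return nxt, 1.0, False         # still airborne: small reward per tick

crashes = 0
for episode in range(5000):        # "dying a lot": thousands of cheap virtual runs
    alt = random.choice(ALTITUDES[1:])
    for _ in range(50):
        # Epsilon-greedy: mostly exploit the Q-table, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(alt, a)])
        nxt, reward, done = step(alt, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # One-step temporal-difference update -- the core of the learning loop.
        Q[(alt, action)] += ALPHA * (reward + GAMMA * best_next - Q[(alt, action)])
        alt = nxt
        if done:
            crashes += 1
            break

print(f"crashed {crashes} times while learning; "
      f"best action at altitude 1 is now: {max(ACTIONS, key=lambda a: Q[(1, a)])}")

Run long enough, the sketch stops flying into the floor for exactly the reason Ritholtz gives: the agent has no instincts, only thousands of crashes baked into its value estimates.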

For the rest of this article, please go to the source link below.


By Patrick Tucker / Technology Editor

Patrick Tucker is technology editor for Defense One. He’s also the author of The Naked Future: What Happens in a World That Anticipates Your Every Move? (Current, 2014). Previously, Tucker was deputy editor for The Futurist for nine years. Tucker has written about emerging technology in Slate, The Sun, MIT Technology Review, Wilson Quarterly, The American Legion Magazine, BBC News Magazine, Utne Reader, and elsewhere.

(Source: defenseone.com; August 20, 2020; https://is.gd/IEsrUg)
