
After the Merge – Marija Jovanovich

2020’s time dilation makes Elon Musk’s comments at February’s Air Warfare Symposium about the end of the fighter jet era feel far more than six months old, despite the visceral responses they generated at the time. The recent third and final round of DARPA’s AlphaDogfight Trials not only recalled Musk’s remarks but also appeared to vindicate them, with the AI achieving a clean sweep over its human competitor. In this article, regular contributor Marija Jovanovich looks beyond the heat and noise that ensued to reflect on what this result really means for air forces and air combat, and, more importantly, what it does not.

On 20 August 2020, DARPA’s AlphaDogfight Trials reached a fascinating conclusion. The trials sought to ‘demonstrate the feasibility of developing effective, intelligent autonomous agents capable of defeating adversary aircraft in a dogfight.’ After successive rounds in which several Artificial Intelligence (AI) competitors fought each other, the winning AI, produced by Heron Systems, took on a human pilot, a graduate of the USAF’s elite Weapons School. The finale comprised five rounds of virtual dogfighting. The AI won 5-0.

The response to the news was predictable. Those who see AI as a panacea rejoiced that the age of crewed fighter jets was over. Those who worship at the altar of air combat called the experiment unfair and its result inadmissible. Had the result been reversed, the same arguments would have been made, just with the sides swapped. This is reason harnessed in pursuit of post-hoc justification of prior belief, a phenomenon Jonathan Haidt discussed at length in The Righteous Mind.

As always, the truth lay somewhere in between. The fight was indeed ‘unfair’ – but it was unfair to both sides in different ways. The AI was prevented from ‘learning’ during the trials, while the human pilot was able to learn from each evolution and adapt his tactics accordingly. The AI was also restricted to the performance model of General Dynamics’ F-16, designed with human physiological limitations in mind. On the other hand, the AI possessed total situational awareness in this fight, something current sensor capabilities only allow in a simulated environment, whereas the human pilot had to search for and maintain visual contact with his opponent.

We can debate, and some will for some time yet, whether the ‘unfairness’ inherent in the experiment’s design helped one side more than the other (possible), and even whether it alone was responsible for the result (unlikely). That debate is, and will remain, unproductive. Instead, we should focus on what this event means, in both the specific and the general sense.

An operational F-16 pilot, call sign ‘Banger’, flies in a virtual reality simulator against the champion F-16 AI agent developed by Heron Systems. The Heron AI agent defeated the human pilot in five straight dogfights to conclude the AlphaDogfight Trials. (Source: DARPA)

What does this event tell us about dogfighting specifically?[1] In the interest of full disclosure, I am not a fighter pilot. However, I am an aviator whose education, training and experience as a test pilot lend themselves to a detailed understanding of both aircraft and human performance, and of the physics and mechanics of dogfights. This perspective allows me to observe the simple reason why a sufficiently advanced machine should beat a human in a dogfight: what unfolds after the merge – that point in time and space when two aircraft in an air-to-air engagement first meet and commence dogfighting – is largely a physics problem. A dogfight is an ‘all-in’ event, so there is no nuanced risk management. It is a mano a mano affair, so there is no intricate human coordination. There is just the physics problem. As such, it can be fully described and defined in terms of Newtonian mechanics, although the resulting system of equations would be complicated and highly dynamic. A human pilot, unable to run those calculations explicitly, solves the problem through intuition built on years of training and experience. This intuition can even appear as creativity. A sufficiently advanced machine, however, can actually run the calculations, without intuition’s margin for error. All things being equal, such a machine should always win.
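To make the ‘physics problem’ framing concrete, consider the classic level-turn relations from aircraft performance, the same quantities at the heart of John Boyd’s energy-manoeuvrability theory. This is a deliberately simplified sketch (a steady, level, coordinated turn), not a model of a full engagement. Here V is true airspeed, n the load factor, g gravitational acceleration, and h altitude:

```latex
\begin{align}
  \text{turn radius:} \quad r &= \frac{V^{2}}{g\sqrt{n^{2}-1}} \\
  \text{turn rate:} \quad \omega &= \frac{g\sqrt{n^{2}-1}}{V} \\
  \text{specific energy:} \quad E_{s} &= h + \frac{V^{2}}{2g}
\end{align}
```

Every post-merge decision – trading speed for turn rate, or altitude for energy – is an exercise in managing these quantities against an opponent doing the same. A machine can evaluate the trade-offs continuously and exactly; a human approximates them by feel.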

Reducing ACM to a physics problem does not diminish its difficulty or the skill and determination required to execute it well. It is hard and unrelenting, one of the most intense and demanding experiences in aviation. At its core, however, ACM remains a physics problem, as do all the factors that complicate it. Increase the number of players on either side, for example, and the physics problem gets more complicated, but it remains a physics problem, as the sketch below illustrates.
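For readers who like to see the bones of such a problem, here is a deliberately minimal point-mass sketch in Python. To be clear about assumptions: this bears no relation to how Heron Systems’ agent actually works; the speeds, g limits, and the crude pure-pursuit ‘tactic’ are illustrative placeholders only. What it shows is that the core loop is just state propagation plus a control choice, and that each additional aircraft adds more state, not a new kind of problem:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

class Aircraft:
    """Point-mass aircraft: position, heading, constant speed, and a g limit."""

    def __init__(self, x, y, heading, speed, max_g):
        self.x, self.y = x, y
        self.heading = heading  # radians, measured from the x-axis
        self.speed = speed      # true airspeed, m/s (held constant here)
        self.max_g = max_g      # load-factor limit (structural or physiological)

    def max_turn_rate(self):
        # Level-turn rate limit: omega = g * sqrt(n^2 - 1) / V
        return G * math.sqrt(self.max_g ** 2 - 1) / self.speed

    def step(self, opponent, dt):
        # Crude pure pursuit: turn toward the opponent, clipped to the g limit.
        bearing = math.atan2(opponent.y - self.y, opponent.x - self.x)
        error = (bearing - self.heading + math.pi) % (2 * math.pi) - math.pi
        limit = self.max_turn_rate()
        rate = max(-limit, min(limit, error / dt))
        self.heading += rate * dt
        self.x += self.speed * math.cos(self.heading) * dt
        self.y += self.speed * math.sin(self.heading) * dt

# A head-on merge: two aircraft two kilometres apart, pointed at each other.
blue = Aircraft(0.0, 0.0, 0.0, 250.0, max_g=9.0)
red = Aircraft(2000.0, 0.0, math.pi, 220.0, max_g=7.0)

for _ in range(600):  # 60 seconds of flight at 0.1 s steps
    blue.step(red, 0.1)
    red.step(blue, 0.1)

print(f"Range after 60 s: {math.hypot(blue.x - red.x, blue.y - red.y):.0f} m")
```

A real agent would optimise over thrust, drag and three-dimensional energy state rather than follow a fixed pursuit rule, but the structure is the same: propagate states, evaluate the geometry, choose a control input. That is precisely the kind of loop a machine runs tirelessly and without error.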

In this context, the result of AlphaDogfight – a post-merge, one-versus-one ACM engagement – was entirely predictable. But what does this experimental validation of a long-held hunch mean more generally? Perhaps it is easier to begin with what it does not mean.

  • Can AI replace fighter pilots? Not so fast. Even if we take the somewhat premature leap of faith that AI makes a better dogfighting pilot based on this one event, fighter pilots do more than just dogfight. These other roles are not pure physics problems, particularly when they involve target identification, nuanced risk management, coordination with other humans or creative solutions to previously unseen operational problems.

  • Can AI replace pilots and aircrews in general? Not for the foreseeable future. For roles that involve nuanced risk management, coordination with other humans, or creative solutions to previously unseen operational problems, humans in the loop remain a requirement. These roles are many and varied, such as multi-INT ISR and anti-submarine warfare. Moreover, acknowledging human nature, do we really see a time in the near future when people will happily pile onto an aircraft flown only by a computer?

So, what does it mean, and where should it take us? I venture that this event should be a catalyst for asking, and more widely discussing, some critical questions about the future. The shortlist below is by no means exhaustive, and readers should take the opportunity to pose their own questions in the comments for discussion.

  • What jobs better suit humans, and what jobs is AI likely to crush? At the risk of grossly oversimplifying a complex matter, machines excel at tasks that rely on rapid data access and number-crunching, whereas humans have the edge when it comes to multi-input fusion, nuance, creativity, and relationships. This leads to an obvious question: would apportioning tasks on that basis make us a better fighting force? Or is the crucial factor integrated, rather than delineated, human-machine teaming?

  • What sacred cows should we slay? ACM has long been the jewel in the crown of fighter pilot culture, but its very nature as a physics problem may make it ideally suited for early transfer to the ‘machine to do’ list – hit the merge and literally go on autopilot. If this does prove to be the case, we cannot afford to allow its status as a sacred cow to impede this development, because our potential adversaries certainly will not. Meanwhile, we should ask ourselves – what other sacred cows are out there?

  • What about the way air forces manage personnel and groom talent? We currently select fighter pilots largely (albeit not exclusively) on their potential to successfully execute ACM. If a currently available weak AI can execute ACM better than a human expert, is this really the skill set we should be using to identify our ‘best and brightest’ for the future, or should we be shifting our focus?[2]

Ultimately, we should not be distracted by what cyber-specialist Jacquelyn Schneider referred to as ‘AI theatre’ in an incisive Twitter thread following this event. AlphaDogfight was satisfying in the same way sporting spectacles are – and was maybe even more so given most sporting spectacles remain on pandemic hiatus. However, there are immediate and profound gains to be made by investing in AI innovation in more mundane and data-heavy areas, such as logistics and medical records management. While perhaps lacking the theatre of AlphaDogfight, these areas offer greater returns on investment and should be where our attention is focused in the near term.

The intent of this short article is not to stake out a position, but to begin a conversation. To overstate the significance of the AlphaDogfight result would be a mistake – it is far from ‘end of an era’ material. However, neither should we understate it, nor ignore the opportunity it presents to look beyond the first-order implications for air combat. As the fighter pilot’s fighter pilot, John Boyd, once said, ‘You gotta challenge all assumptions. If you don’t, what is doctrine on day one becomes dogma forever after.’ Nothing about the year 2020 suggests that dogma is something we can afford as we move further into the 21st century.

Wing Commander Marija ‘Maz’ Jovanovich is a Royal Australian Air Force aviator. She is a distinguished graduate of both the USAF Test Pilot School and USAF Air War College who is about to assume command of No. 10 Squadron. The views expressed are hers alone and do not reflect the opinion of the Royal Australian Air Force, the Department of Defence, or the Australian Government.

[1] Hereafter, I will use the term ‘dogfight’ to refer to post-merge air combat manoeuvring (ACM).

[2] Weak AI, also known as narrow AI, refers to AI that focuses on doing one task well, rather than replicating human intelligence.
