November 7, 2024
Andrew Hill and Steve Gerras conclude their three-part series on the evolving relationship between AI and national security decision-making with a focus on the inevitable rise of empowered military AI (EMAI). They stress the urgent need for the United States to proactively prepare for this future. This preparation includes major shifts in acquisition practices, the way technology is integrated, and, most importantly, the implementation of robust and intelligent arms control measures. These steps, they argue, are just the beginning of navigating a complex and potentially turbulent landscape ahead.

Editor’s Note: This is the final installment of a three-part series delving into the role of artificial intelligence (AI) within the United States’ comprehensive national defense and security strategy. The authors assess the advantages and limitations of AI as it is employed to enhance, integrate, and potentially supplant human decision-making processes. You can find the first article here and the second article here.

An empowered military AI (EMAI) that independently makes lethal decisions is scary. Ancient mythology is full of stories of creators being destroyed by their creations, as when the Olympian Gods overthrew the Titans. Killer machines are a mainstay of science fiction. Long before Michael Crichton’s Westworld and James Cameron’s Terminator, Samuel Butler’s 1872 novel Erewhon described an isolated civilization that had banned complex machines out of a fear that technology would someday supplant humankind. Butler quotes one Erewhonian philosopher, “I fear none of the existing machines; what I fear is the extraordinary rapidity with which they are becoming something very different to what they are at present.”

Advanced AI seems to tap into some primal human fears. Risk expert David Ropeik highlights thirteen “fear factors” that lead humans to be more afraid of something, and advanced AI has eight of them: lack of control, trust, and choice; the fact that it is man-made; its uncertainty; its potential for catastrophe; its novelty; and the personal risk it poses to us in potentially taking our jobs (or our lives). These factors make empowered AI particularly frightening, encouraging a denial of its possible implications. We want humans to perform better than machines, and we do not want machines to make life-or-death choices; but these are normative arguments, and wishful thinking should not masquerade as technological reality.

We must not confuse the normative argument against empowered military AI systems, compelling as it may be, with an assessment of the inherent technological potential of AI. There is a vast gulf between “we should not do it” and “we cannot do it,” and the history of war repeatedly demonstrates that we often do things that we should not do. In any case, moral and ethical concerns are unlikely to limit the development of advanced, empowered military AI. Such normative objections have a dismal record of limiting the development of horrible ways for humans to slaughter each other. We fare better in creating structures for governing the use of military technology after it has already proliferated, especially in situations in which both sides of an adversarial relationship have the technology.

Perhaps very advanced military AI systems will face inherent performance limitations, and humans will remain at the center of military decision-making. If this is true, different nations will struggle with similar challenges as they develop empowered military AI systems. Yet we doubt that any such limitations exist, for three reasons.

First, as we argued in a previous essay, human decision-making capabilities are limited by human biology. Yes, we are biological marvels, and our ability to adapt and create has led to extraordinary achievements in art, science, and technology. We have explored the universe and the atom. Yet we experience reality indirectly, mediated through our imperfect senses. Our biology anchors our intelligence. We tire, make mistakes, get emotional, and forget. We judge ourselves as intelligent only because we lack a higher comparison point. When we evaluate intelligence, looking down is easy. We can recognize lower levels of intelligence in other animals, and (perhaps more controversially) sometimes recognize a lower level of intelligence in another person. But looking up is much more difficult. We have no experience with intelligence surpassing that of, say, top theoretical physicists, and most of us cannot comprehend that level of brilliance. Even the greatest scientific leaps, such as the development of quantum mechanics, were made by brilliant humans constrained by human limitations. The entire spectrum of human intelligence remains remarkably narrow. To an advanced AI, Einstein and the village idiot will be indistinguishable. Our technological advancements may amplify our capabilities, but they do not make us superhuman, and they do not transcend our limitations.

Second, even if humans enjoy some persistent advantages in decision-making, current plans to keep a human “in the loop” in advanced military AI systems are unrealistic. As we wrote in a prior article, the complex, high-tempo operations of future wars will overwhelm human decision-makers, creating constant problems with decision bottlenecks. To cope, human decision-makers will resort to risky shortcuts that will ultimately undermine the AI systems. As Secretary of the Air Force Frank Kendall observed, “If the human is in the loop, you will lose. You can have human supervision and watch over what the AI is doing, but if you try to intervene you are going to lose.”

Finally, we believe that the United States’ potential adversaries are likely to be very motivated to push the boundaries of empowered military AI, for three reasons: demographic transitions, control of the military, and fear of the United States. The path of technological development is deeply influenced by non-technological forces. As scholar of technology and former defense official Michael Horowitz observes, “The relative impact of technological changes often depends as much or more on how people, organizations, and societies adopt and utilize technologies as it does on the raw characteristics of the technology.” We have ample reason to believe that our potential adversaries may not share the fears and prejudices that constrain our development of EMAI.

Regimes such as those in Russia and China are grappling with significant demographic pressures, including shrinking working-age populations and declining birth rates. These trends threaten to weaken their military force structures over time. AI-driven systems offer a compelling solution to this problem by offsetting the diminishing human resources available for recruitment. In the face of increasingly automated warfare, these regimes can augment their military capabilities with AI systems that process vast data streams, adapt swiftly to battlefield changes, and execute missions without the need for human intervention. In this sense, demographic constraints make the pursuit of military AI not only desirable but essential for sustaining their power projection and tactical flexibility.

Moreover, totalitarian regimes face a deeper internal challenge that encourages the development of EMAI: the inherent threat posed by their own militaries. Autonomous systems offer the dual advantage of reducing dependence on human soldiers, who may one day challenge the regime’s authority, and of increasing central control over military operations. In authoritarian settings, minimizing the risk of military-led dissent or coups is a strategic priority.

In light of these incentives, arms control will probably fail to stop the development of these systems, but it can still help prevent a costly and destabilizing arms race. Arms control must be a close companion to the rise of empowered military AIs. As advanced EMAI systems are developed and tested, the U.S. government needs a new generation of arms control experts to develop effective frameworks for the control of these systems. Given the radical nature of advanced AI, these frameworks must be unlike any prior arms control approaches. The U.S. government should work with both allies and potential adversaries to build institutional structures, processes, and technological countermeasures to protect all humanity from the worst possible futures of advanced military AI, namely, those in which EMAI systems with no safety controls have widely proliferated. But to lead such a process, we need to operate from a position of strength. There is no seat at the arms control table without arms.

From a geopolitical perspective, simple game theory further suggests that Russia and China will feel compelled to develop empowered military AI, fearing a strategic disadvantage if the United States gains a technological lead in this domain. While these regimes may share some Western concerns about delegating lethal authority to AI, the practical necessity of maintaining a competitive edge in an evolving security environment will probably override these reservations, pushing them to aggressively pursue these capabilities. The stylized sketch below illustrates this logic.
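
To make that incentive structure concrete, here is a minimal, purely illustrative sketch of the security-dilemma logic described above. The payoff numbers and the “restrain”/“develop” labels are invented assumptions for illustration, not estimates of real strategic payoffs.

```python
# Hypothetical, illustrative payoff matrix for a two-player "develop EMAI" game.
# The numbers are invented for illustration only; they are not estimates of real payoffs.
# Row player: United States; column player: rival power. Higher numbers are better outcomes.
payoffs = {
    # (us_choice, rival_choice): (us_payoff, rival_payoff)
    ("restrain", "restrain"): (3, 3),   # mutual restraint: stable, no arms race
    ("restrain", "develop"):  (0, 4),   # unilateral restraint: strategic disadvantage
    ("develop",  "restrain"): (4, 0),   # unilateral lead
    ("develop",  "develop"):  (1, 1),   # arms race: costly, but no relative loss
}

def best_response(opponent_choice, player_index):
    """Return the choice that maximizes a player's payoff, given the opponent's choice."""
    options = ["restrain", "develop"]
    if player_index == 0:  # United States (row player)
        return max(options, key=lambda c: payoffs[(c, opponent_choice)][0])
    return max(options, key=lambda c: payoffs[(opponent_choice, c)][1])

# Whatever the other side does, "develop" is the better reply for both players,
# so mutual development is the predicted equilibrium even though mutual restraint
# would leave both sides better off -- the classic security-dilemma structure.
for opp in ["restrain", "develop"]:
    print("US best response if rival chooses", opp, "->", best_response(opp, 0))
    print("Rival best response if US chooses", opp, "->", best_response(opp, 1))
```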

We underestimate AI at our own peril. It is natural to forget what life was like before technology arrived to make it easier. Thus, we readily overlook the capabilities of current AI systems and focus on their limitations, forgetting how wondrous these tools are compared to what we had just a decade ago. If technology simply continues advancing at its current rate of acceleration, as futurist Ray Kurzweil suggests, AI capabilities will soon reach extraordinary levels. The staggering pace of OpenAI’s ChatGPT rollout illustrates this acceleration, and it has led OpenAI CEO Sam Altman to acknowledge that we may soon coexist with a fundamentally different type of intelligence. This presents profound, even frightening, possibilities. Buckle your seatbelts.

Downplaying the potential of AI is not simply a distraction. It actively hampers research and development that could ensure a safer future. A proper approach should assume that AI surpassing human decision-making capabilities in warfare is not only possible but inevitable, and coming sooner than expected. We must reject unproven assumptions of human superiority and base our military AI development on an inductive, test-centric approach focused on meeting and beating performance benchmarks. In short, EMAI systems should be given the chance to demonstrate that they can outperform human experts in testing environments simulating high-uncertainty, complex battle scenarios. Rigorous, fair testing, not speculation, is the right way to assess AI’s true warfare potential, even where performance standards initially seem unattainable.
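
To make the test-centric standard concrete, here is a minimal sketch of what a “better than the best” evaluation harness might look like. The scenario simulator, skill values, and scoring scale are invented assumptions for illustration; a real evaluation would rest on high-fidelity simulations and far more careful statistics.

```python
import random

# Hypothetical sketch of a "better than the best" evaluation harness, assuming we can
# score both an EMAI candidate and human expert teams on the same simulated scenarios.
# run_scenario, the skill values, and the scoring scale are invented for illustration.

def run_scenario(decision_maker, seed):
    """Stand-in for a high-uncertainty simulated engagement; returns a mission score in [0, 1]."""
    rng = random.Random(seed)
    return min(1.0, max(0.0, decision_maker["skill"] + rng.uniform(-0.2, 0.2)))

def better_than_the_best(emai, human_experts, n_trials=1000):
    """EMAI passes only if its mean score beats the single best-performing expert's mean."""
    emai_mean = sum(run_scenario(emai, s) for s in range(n_trials)) / n_trials
    best_expert_mean = max(
        sum(run_scenario(expert, s) for s in range(n_trials)) / n_trials
        for expert in human_experts
    )
    return emai_mean > best_expert_mean, emai_mean, best_expert_mean

# Illustrative use with made-up skill levels; the same seeds pair each trial
# so the candidate and the experts face identical scenario conditions.
emai_candidate = {"name": "emai_v1", "skill": 0.78}
experts = [{"name": "expert_a", "skill": 0.72}, {"name": "expert_b", "skill": 0.75}]
print(better_than_the_best(emai_candidate, experts))
```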

What if the skeptics are right? What if EMAI cannot consistently beat human experts? To reach that conclusion, we must rely on repeated failures in good-faith research and development. The performance standard for military AI must be clear and challenging: be better than the best. Yet we need a developmental approach that does not predestine failure. For AI to be better than the best, a lot of things need to go well. Each stage of an AI’s process (sense, process, act) and each part of an AI’s “constellation of technologies” represents a potential failure point. More precisely, we would reach the “AI can’t be better than the best” conclusion only after spending a great deal of time (decades, probably) and money (hundreds of billions, possibly trillions, of dollars). At that point, having failed to produce systems that work, we would conclude that additional research and development expenditures are not worth the risk of continued failure. But we have not yet made this effort. We have, in fact, barely begun. The current trajectory of technological change gives us little confidence in predictions that empowered military AI systems will inevitably fail.

Unappealing as it may be, the United States needs to be a leader in developing empowered military AI. Paradoxically, we may need to enable the robot apocalypse if we want to avert it.

Andrew Hill is Professor of Strategic Management in the Department of Command, Leadership, and Management (DCLM) at the U.S. Army War College. Prior to rejoining the War College in 2023, Dr. Hill was the inaugural director of Lehigh Ventures Lab, a startup incubator and accelerator at Lehigh University. From 2011-2019, Dr. Hill was a member of the faculty at the U.S. Army War College. In 2017, he was appointed as the inaugural U.S. Army War College Foundation Chair of Strategic Leadership. Dr. Hill is also the founder and former Director of the Carlisle Scholars Program, and the founder and former Editor-in-Chief of WAR ROOM.

Stephen Gerras is Professor of Behavioral Science at the U.S. Army War College. Colonel (Retired) Gerras served in the Army for over 25 years, including commanding a light infantry company and a transportation battalion, teaching leadership at West Point, and serving as the Chief of Operations and Agreements for the Office of Defense Cooperation in Ankara, Turkey during Operations Enduring and Iraqi Freedom. He holds a B.S. from the U.S. Military Academy and an M.S. and Ph.D. in Industrial and Organizational Psychology from Penn State University.

The views expressed in this article are those of the authors and do not necessarily reflect those of the U.S. Army War College, the U.S. Army, or the Department of Defense.

Photo Credit: Gemini AI

3 thoughts on “BEYOND BELIEF: THE IMPERATIVE TO DEVELOP EMPOWERED MILITARY AI”

  1. Great series, immensely informative. I particularly liked the Horowitz quote; I think both he and I benefited from Andrew Marshall’s and Andrew Krepinevich’s comments on the MTR/RMA.

  2. Andrew and Steve: Excellent series of articles on enhanced military AI. You are correct that we embrace at our peril the normative arguments against the development of EMAI. Whether we like it or not, this genie is not going back into the bottle, and our adversaries are sure to exploit whatever limitations we impose on ourselves. It will be interesting to see how artificial intelligence–and EMAI in particular–will play out at the combat training centers and in the Mission Command Training Program. The US military has a big advantage over its adversaries in the quality of its training infrastructure, so one hopes that we will leverage it to test the boundaries of EMAI. As scary as EMAI might be, it’s a form of warfare that can be managed, just as world powers have done with nuclear weapons. I appreciate your emphasis on this point in your third essay.
