November 7, 2024
The human mind, with its intricate interplay of brain, emotions, and unconscious thought, has long been revered for its intuitive abilities. However, Steve Gerras and Andrew Hill posit that this era of human intuition is drawing to a close. They argue that military AI, empowered by advanced technology, is poised to surpass the decision-making capabilities of even the most experienced human commanders. This technological shift represents a seismic change in the nature of warfare, one that the United States must proactively address through a reorientation of its research and development efforts.

Editor’s Note: This is the first installment of a three-part series delving into the role of artificial intelligence (AI) within the United States’ comprehensive national defense and security strategy. The authors will assess the advantages and limitations of AI as it is employed to enhance, integrate, and potentially supplant human decision-making processes. You can find the second article here.

“You’re not understanding, are you? The brain does the thinking. The meat.”

  “Thinking meat! You’re asking me to believe in thinking meat!”

“Yes, thinking meat! Conscious meat! Loving meat. Dreaming meat. The meat is the whole deal! Are you getting the picture?”

  “Omigod. You’re serious then. They’re made out of meat.”

Terry Bisson, They’re Made Out of Meat, 1991

From the seasoned expert who just knows the right move to survival instincts that have evolved over millennia, human intuition is a powerful guide to decision-making. As we grapple with the rise of advanced artificial intelligence (AI), such intuition is presented as an inherent human advantage over AI, and a reason for humans to maintain direct control over lethal and other high-stakes decisions. A belief in the power of human intuition is therefore foundational to current U.S. defense policies constraining the use of artificially intelligent systems and “keeping a human in the loop,” on the premise that such decision-making is something only humans can do.

We believe this is a mistake. The indispensable superiority of human intuition is a dangerous assumption that is unjustified by the facts. In the development of AI, we need to treat the advantage of human intuition not as a credo to be mindlessly repeated, but as a mark on the wall to relentlessly strive to surpass. As the biologist Stephen Jay Gould observed, “‘Impossible’ is usually defined by our theories, not given by nature.”

Our investments in technology development do not simply reveal the world to us. They are also representations of the future worlds that we believe to be possible. In this sense, the limits of our imagination can be self-fulfilling: If we believe that human intuition is something AI can never surpass, we will invest in paths of AI development that preclude the creation of truly advanced AIs. Perhaps this would not be such a bad thing, except that actors other than the United States—some with very different views of the world—may have less constrained imaginations.

The human mind, while an extraordinary biological creation, is “made of meat,” not magic. We have no reason to believe that the human brain possesses inherent advantages that sufficiently advanced technology cannot challenge or surpass. As a basis for research and development (R&D), we must instead assume that human intuition has no inherent advantage over AI, and we should invest accordingly. The future of decision-making—even in war—may belong to the machines we create.

The Mysterious Power of Human Intuition

According to Nobel Prize-winning decision theorist (and a founder of artificial intelligence) Herbert Simon, “we use the word intuition to describe a problem-solving or question-answering performance that is speedy and for which the expert is unable to describe in detail the reasoning or other process that produced the answer.” This mysterious aspect of human intuition, combined with its apparent speed and accuracy, makes intuition a convenient foundation for the argument that military AI should be developed and deployed in a “humans with AI” framework (HWAI) that keeps humans in control of key choices. After all, how can AI outperform something that is so effective, yet we ourselves do not fully understand? By extension, Empowered Military AI (EMAI), that is, AI that makes lethal or other high-stakes decisions amidst high uncertainty, should not be developed.

But how good is human intuition, really? How secure is this supposed human advantage over emerging artificial intelligence technologies? As we begin an era of proliferating AI technologies and uses, we need to be exceptionally clear-eyed in our assessment of the true capabilities and limitations of human intuition. 

When people speak of “intuition,” they do not always mean the same thing. We have seen three different versions of intuition invoked in arguments for the inherent superiority of human decision-making. Metaphysical intuition is access to truth by and through metaphysical or spiritual sources. Evolutionary intuition is access to the superior decision-making algorithms developed through human evolution. A third form, expert intuition, is access to truth through unique human experience, feedback, and creative insight. All three concepts are interesting potential sources of human advantage, and all have different problems.

Each form of intuition serves as a lens through which we understand how the mind processes information beyond conscious reasoning. Expert and evolutionary intuitions are grounded in experience and biology, and often extraordinarily effective. Yet they are also fallible, rife with the natural constraints of human experience and existence. Metaphysical intuition—knowledge rooted in the supposed special status of humans in the universe—is most problematic, so we will explore it first.

Metaphysical Intuition

Many religious or spiritual beliefs suggest that humans have a unique access to truth through metaphysical intuition. It seems obvious that religious or spiritual beliefs affect normative attitudes about Empowered Military AI, that is, the “should we?” questions. But do these beliefs also affect what we believe about the inherent technological limitations of EMAI? We think they may.

Despite having varying religious affiliations, a large majority of Americans believe in the human soul and attribute human existence to divine intervention. Even those who do not profess an affiliation with a religious faith may believe in the special status of humans in the universe. If humans are special in a spiritual or metaphysical sense, then it seems likely that humans have access to the truth through spiritual or metaphysical channels, especially with respect to morally significant questions such as whether to kill or risk the lives of others.

A universe in which humans are created by and communicate with God is unlikely to be a universe in which human-made machines enjoy the same communication privileges. Thus, for the large segment of believers in our society, the special status of humans has practical implications for the performance of advanced artificial intelligence.

The problem with metaphysical intuition is obvious: it is outside of the realm of reason and evidence. It relies on inherently subjective and unverifiable experiences, making it an unsound foundation for decision-making. What test of human versus AI decision-making would cause a believer to reject his faith in humans’ special status? This untestability (and non-falsifiability) means that metaphysical intuition cannot stand as a legitimate argument against aggressive investment in AI. Its premises are based on personal belief systems outside the scope of empirical scrutiny. Relying on metaphysical intuition for critical decisions in the development of advanced AI introduces an untenable foundation for society-wide decisions.

Evolutionary Intuition

“I’ve got a bad feeling about this.” Thus spoke Han Solo, Luke Skywalker, and numerous other heroes (and victims). Evolutionary intuition is a second form of potential advantage of humans over AI. Because AI did not evolve as humans did, it cannot have the full range of evolutionary decision-making tools possessed by humans. Evolutionary intuition was shaped by humanity’s struggle for survival over eons. Humans have gut feelings, and machines do not. These gut feelings signal potential dangers, guide social interactions, and motivate us to act. These “visceral notions,” to borrow Eric Oliver and Thomas Wood’s term, make us extremely efficient decision-makers. Rather than use the complex, algorithmic processing of the idealized homo economicus, human subjects in innumerable decision-making experiments have demonstrated a reliance on what pioneering psychologists Amos Tversky and Daniel Kahneman called “heuristics”—simplifying procedures and rules that we use when making judgments under uncertainty. While some heuristics may be learned, others appear to be the product of evolutionary psychology. Evolutionary heuristics are a manifestation of what Kahneman calls “System 1” processing, referring to fast, automatic, typically nonconscious decision-making similar to animal behavior. Evolution optimized our brains for pattern detection in a resource-scarce environment.

Alas, all of this grossly exaggerates the quality of human evolutionary intuition. E.O. Wilson’s observation that humanity grapples with “Paleolithic emotions, medieval institutions, and god-like technology” highlights the mismatch between our intuitive capabilities and the complexities of the modern world. Our heuristics may be shortcuts, but many of them are riddled with biases and other errors. Intuitive thinking, driven by emotion and bias, often skews our perception of reality, leading to flawed decisions. Oliver and Wood observe that our intuitions “often serve us well, but they don’t always give us an accurate picture of the world.  Our intuitions generally tell us that the earth is flat, that vaccines are dangerous, and that attractive people are smarter than the rest of us.”

In fairness to proponents of evolutionary intuition, it need not be infallible; it need only be useful enough to remain genuinely complementary to AI. On this same theme, Daniel Kahneman writes:

Leaders say they are especially likely to resort to intuitive decision making in situations that they perceive as highly uncertain. When the facts deny them the sense of understanding and confidence they crave, they turn to their intuition to provide it. The denial of ignorance is all the more tempting when ignorance is vast.

From ancient times to modern, history provides numerous catastrophic examples of military leaders who denied ignorance and trusted their own intuition, disregarding reasonable doubt, contrary evidence, and the advice of others as they descended to disaster. Our experience suggests that far from being encouraged to rely on evolutionary intuition, leaders need to exercise more deliberative approaches to decision-making, such as critical thinking.

A second argument for the human advantage of evolutionary intuition focuses not on its quality but on how it came about. Because AI cannot evolve in the same way that humans have, it cannot compete with the human cognitive capabilities that resulted from that evolutionary process. Throughout evolution, humans have faced unpredictable situations, navigated complex and competitive social dynamics, and made snap judgments crucial for survival and reproduction. No AI can be developed from this evolutionary environment.

This supposed human evolutionary advantage is rooted in the dubious idea that AI’s main task is to be like us—or, as one expert defines AI, “the use of computers to simulate the behavior of humans that requires intelligence.” From this perspective, the fundamental inhumanity of AI—what it is (hardware and software) and how it operates (the technological menagerie of algorithms, machine learning, neural networks, and so on)—becomes an inherent, intractable limitation. It follows that AI cannot challenge human supremacy because AI will never do what we do the way we do it. This is an error. It mistakes ways for ends, a dangerous and unfortunately persistent blunder in the history of innovation. As John McCarthy, one of the founders of AI, argues, AI’s success should be measured by how efficiently and effectively it helps us achieve human goals, not by whether it employs human-like processes to do so.

Expert Intuition

Expert intuition allows human decision-makers to make quick, informed choices in complex situations. M.L. Cummings describes human and machine intelligence as a loop of perception, cognition, and action. While machines excel in predictable, repetitive tasks, humans have an advantage in complex, uncertain scenarios due to their adaptability, judgment, and ability to draw on a wide range of experiences. Human experts, we are told, can access a broader variety of variables and draw on experience to outperform AI in unpredictable environments; they can swiftly adapt and consider alternatives, setting them apart from AI, which relies on large datasets for learning and prediction.

Some research supports the notion that expert intuition yields fast and effective decisions. In field research on experts such as nurses, emergency responders, pilots, military personnel, and firefighters, Gary Klein found that experts leverage their extensive experience to match the current situation with similar past scenarios, thus identifying a promising course of action. For example, expert firefighters arriving at a fire will rapidly survey the scene and scan their memory for similar experiences. If a successful approach in their memory is an approximate match to the current fire, they will then more closely evaluate the suitability of the potential solution, examining the current fire in greater detail. If the initially chosen option proves unsuitable upon closer inspection, experts then consider alternative solutions, one at a time. Crucially, this process is rapid, automatic, and, in the case of the best experts, extremely effective.
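For readers who want to see the shape of this process, a minimal sketch in Python is given below. The case structure, the similarity score, and the `acceptable` check are all invented for illustration; Klein’s findings describe human cognition, not any particular implementation.

```python
from dataclasses import dataclass

@dataclass
class PastCase:
    features: dict   # cues observed in a previous incident
    action: str      # the course of action that worked then

def similarity(current, past):
    """Crude overlap score between the current cues and a remembered case."""
    shared = set(current) & set(past)
    matches = sum(1 for k in shared if current[k] == past[k])
    return matches / max(len(current), 1)

def recognition_primed_decision(current, memory, acceptable):
    """Klein-style loop: rank remembered cases by how strongly the situation
    resembles them, then evaluate the suggested actions one at a time and
    take the first one that survives a closer look (satisficing, not optimizing)."""
    ranked = sorted(memory, key=lambda case: similarity(current, case.features), reverse=True)
    for case in ranked:
        if acceptable(case.action, current):  # the expert's "mental simulation" of the option
            return case.action
    return None  # nothing in memory fits: no recognized pattern to act on
```

The important feature is the order of operations: recognition first, serial evaluation second. That ordering is what makes the process fast, and also why it degrades when the situation resembles nothing in memory.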

Yet human expert intuition, powerful as it is, has problems. Influential research by Paul Meehl scrutinized the accuracy of clinical predictions made by psychologists and other professionals, comparing them with statistical methods. The (often very crude) statistical methods consistently matched or exceeded the accuracy of experts’ judgments. Meehl argued that this discrepancy arises because even expert human judgment is susceptible to a range of biases and errors, such as overconfidence, confirmation bias, and the influence of irrelevant factors.

So how do we know when to trust an expert’s intuition? What’s the difference between Klein’s experts and Meehl’s experts? Kahneman posits that good expert judgment relies on three fundamental conditions: (1) the predictability of the environment, which allows for regularities to be learned; (2) the opportunity for repeated practice within this stable setting; and (3) prompt, clear feedback regarding the quality of a decision. Without these stable conditions, expert intuition should not be trusted. As Meehl found, statistical algorithms (including simple models) surpass human judgment in more complex, noisy settings. These algorithms are much better at recognizing weakly valid cues amidst noise and uncertainty.
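A toy simulation, with made-up numbers, illustrates the statistical logic: when each cue is only weakly related to the outcome and the judge injects noise of their own, even a crude unit-weighted sum of the cues tracks the outcome better than the inconsistent “expert” does. Nothing here reproduces Meehl’s data; it is only a sketch of the argument.

```python
import random

random.seed(0)
N_CUES, N_CASES = 6, 2000

cases = []
for _ in range(N_CASES):
    cues = [random.gauss(0, 1) for _ in range(N_CUES)]
    outcome = sum(0.3 * c for c in cues) + random.gauss(0, 1)  # weakly valid cues plus irreducible noise
    cases.append((cues, outcome))

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

truth = [y for _, y in cases]

# "Improper" linear model: ignore weighting entirely and just add the cues up.
model_pred = [sum(cues) for cues, _ in cases]

# Inconsistent expert: uses the same cues but weights them erratically, case by case.
expert_pred = [sum(c * random.gauss(0.3, 0.5) for c in cues) for cues, _ in cases]

print("unit-weighted model vs. outcome:", round(corr(model_pred, truth), 2))   # roughly 0.6
print("inconsistent expert vs. outcome:", round(corr(expert_pred, truth), 2))  # roughly 0.3
```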

Together, these perspectives paint a nuanced picture of expert intuition as a powerful but fallible tool, whose effectiveness is deeply influenced by the environment’s regularity and the individual’s experiential learning. Given these limits, we should feel much less confident that human experts enjoy an inherent advantage over artificial intelligence. Indeed, humans may suffer from some significant disadvantages relative to an advanced AI.

First, human expert intuition becomes less reliable precisely in the environments where humans are said to hold their advantage: highly complex and uncertain ones.

A key insight from Meehl’s work is that in “high-validity” environments, where there are consistent and predictable relationships between cues and outcomes, expert judgment can be quite accurate. However, in “low-validity” environments, characterized by complexity and unpredictability, statistical models have a clear advantage because they are immune to the biases that affect human judgment. Human experts are also terribly inconsistent when evaluating complex information. Studies have shown that when experts reassess the same data, their conclusions often vary. Judgments that lack reliability cannot serve as valid predictors, underscoring a fundamental flaw in expert intuition in complex environments.
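The reliability point can be made concrete with a small, invented simulation: if an expert re-scoring the same cases agrees with themselves only moderately, their agreement with reality is capped as well (in psychometric terms, validity cannot exceed the square root of reliability). The numbers below are illustrative only.

```python
import random

random.seed(1)
truth = [random.gauss(0, 1) for _ in range(5000)]

def rate(signal, noise_sd=1.0):
    """One pass of an expert over the same cases: true signal plus fresh, independent noise."""
    return [t + random.gauss(0, noise_sd) for t in signal]

first_pass, second_pass = rate(truth), rate(truth)

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print("reliability (expert vs. themselves):", round(corr(first_pass, second_pass), 2))  # about 0.5
print("validity (expert vs. reality):", round(corr(first_pass, truth), 2))              # about 0.7, i.e. sqrt(0.5)
```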

Second, although we are told that human experts have a decisive advantage over AI in data-poor environments, this argument is rooted in an already-outdated model of how AI learns. Recent advances have significantly improved AI systems’ ability to make accurate decisions with minimal data, overcoming the traditional need for large datasets, as innovations like few-shot and zero-shot learning demonstrate.
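As a hedged illustration of what “few-shot” means in practice, the sketch below builds a prompt containing a handful of labeled examples rather than retraining anything; the `complete` call is left as a placeholder for whatever text-generation interface is available, not a specific product’s API, and the reports are invented.

```python
def few_shot_prompt(examples, new_report):
    """Build a prompt that teaches the task from a handful of labeled examples,
    instead of retraining the model on a large dataset."""
    lines = ["Label each report as HOSTILE or BENIGN."]
    for report, label in examples:
        lines.append(f"Report: {report}\nLabel: {label}")
    lines.append(f"Report: {new_report}\nLabel:")
    return "\n\n".join(lines)

examples = [
    ("Convoy of marked civilian trucks on the main supply route.", "BENIGN"),
    ("Unidentified fast movers closing on the carrier group.", "HOSTILE"),
    ("Fishing boats loitering near the strait after dark.", "BENIGN"),
]

prompt = few_shot_prompt(examples, "Small drones orbiting the forward arming and refueling point.")
# label = complete(prompt)  # placeholder for a text-generation model call; not a real API
print(prompt)
```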

Finally, there is a special problem in creating human expert intuition for warfare. The cultivation of reliable expert judgment, as argued above, hinges on three critical conditions: a predictable environment to learn from regularities, the chance for repeated practice in a stable setting, and immediate, unambiguous feedback on decisions made. Wars, by their nature, seldom offer these conditions. Realistic training helps, of course, but training is not war itself, as many a veteran will attest. The formation of genuine expert intuition for warfare is almost impossible outside of long-duration conflicts, and difficult even inside them, as conditions change and feedback is delayed or obscured in the fog of war. Crucially, war is very hard on people who make bad calls: lethal consequences limit opportunities to learn. An advanced military AI can survive its mistakes more easily than a human, presenting opportunities for learning that are not available to human experts. It is also less likely to ignore the urgent lessons of its decisions.

Notwithstanding these points, expert intuition may yet prove itself to be an inherent human advantage. But we should not accept this conclusion before subjecting it to intense challenges in the research and development of empowered military AI systems.

What Next?

The advantage of human intuition must be treated as a theory to be tested, not an enchanted, sacred realm beyond the scrutiny of science. As the paleontologist Richard Fortey wrote, “the theories that cause much more trouble are those that can twist and turn in a breeze of new facts without ever fracturing completely.” In this regard, we should be especially wary of arguments that invoke metaphysical intuition as a human advantage. We must assume that we are not special.

Furthermore, even if human judgment maintains a consistent edge over AI in some contexts, the HWAI approach to military AI systems may not work because the integration of humans with AI-enabled kill chains will almost certainly negate those narrow advantages. (We will explore this idea in a future essay.)

Our argument is not for accepting the inevitable inferiority of human judgment, but for rejecting the premise of the inevitable superiority of human judgment. We need an R&D strategy that treats empowered military AI as an inevitable, epochal change in the technological character of war, and we need to prepare ourselves for it.

Andrew Hill is Professor of Strategic Management in the Department of Command, Leadership, and Management (DCLM) at the U.S. Army War College. Prior to rejoining the War College in 2023, Dr. Hill was the inaugural director of Lehigh Ventures Lab, a startup incubator and accelerator at Lehigh University. From 2011-2019, Dr. Hill was a member of the faculty at the U.S. Army War College. In 2017, he was appointed as the inaugural U.S. Army War College Foundation Chair of Strategic Leadership. Dr. Hill is also the founder and former Director of the Carlisle Scholars Program, and the founder and former Editor-in-Chief of WAR ROOM.

Stephen Gerras is Professor of Behavioral Science at the U.S. Army War College. Colonel (Retired) Gerras served in the Army for over 25 years, including commanding a light infantry company and a transportation battalion, teaching leadership at West Point, and serving as the Chief of Operations and Agreements for the Office of Defense Cooperation in Ankara, Turkey during Operations Enduring and Iraqi Freedom. He holds a B.S. from the U.S. Military Academy and an M.S. and Ph.D. in Industrial and Organizational Psychology from Penn State University.

The views expressed in this article are those of the author and do not necessarily reflect those of the U.S. Army War College, the U.S. Army, or the Department of Defense.

Photo Credit: Gemini image generator

3 thoughts on “MEAT VERSUS MACHINES: HUMAN INTUITION AND ARTIFICIAL INTELLIGENCE”

  1. If conflict today — whether we are talking about the “war on terror” conflict with the U.S./the West today, the “great power competition” conflict with the U.S./the West today, the lesser state and societal conflicts with the U.S./the West today and even the internal conflict occurring within the U.S./the West itself today — if all of these such conflicts have a common aspect; this being, that these are conflicts between the U.S./the West and entities who (a) do not wish to be transformed more along ultra-modern/ultra-contemporary U.S./Western political, economic, social and/or value lines and who (b) are willing to use whatever means are available to them (for example, AI?); this, to prevent these such transformations,

    Then, from that exact such “political objective” perspective, how can/will AI be developed, deployed and employed by the U.S./the West (to achieve “transformations”) and by our both here at home and there abroad opponents (to prevent such “transformations)?

    Thus, from the perspective of the “political objective” of both sides of these such conflicts, to properly consider such things as “Meat Versus Machines,” “Meat and Machines,” “Meat Using Machines,” “Machines Using/Manipulating Meat,” etc. matters today?

    (In this regard, while we know that the “Meat” certainly understands [via “intuition,” etc.,?] such things as the need for “transformation” — and also the threat that transformation poses to certain individuals and groups’ hard-won degree of power, influence, control, status, prestige, privilege, safety, security, etc. — can the “Machines” be made to understand, feel, empathize with, react to, etc., these self-same such — opposed — understandings, emotions, motivations, etc.? Or, being just “Machines,” will they only be able to be used as “tools” in these such endeavors?)

    1. With regard to the conflict environment, and the opposed political objectives therein that I describe in my initial comment above; with regard to these such matters, let us pose an (irregular warfare?) problem for the “Machines” to help us with — this, via their “as good as and/or better than human intuition, etc.” capabilities.

      In this regard, let us start by considering the following from retired LTG Charles Cleveland and retired GEN Joseph Votel:

      “In the same way that the conventionally focused American way of war is defined by America’s technical and industrial capacity and technological edge, the American way of irregular war is tied to our notions of religious pluralism, democracy, and, above all, human rights. And although the American way of war protects us against near-peer powers and guarantees the lanes of global commerce, the American way of irregular war protects our way of life by both promoting our worldview and giving people the tools to realize the same opportunities that we have had. … ” (See last paragraph of Page 5 of the Introduction chapter to Rand paper by LTG [ret.] Charles Cleveland entitled: “The American Way of Irregular War: An Analytical Memoir.”)

      “The Achilles’ heel of our authoritarian adversaries is their inherent fear of their own people; the United States must be ready to capitalize on this fear. … An American way of irregular war will reflect who we are as a people, our diversity, our moral code, and our undying belief in freedom.” (See the “Conclusion” of the Rand paper “The American Way of Irregular War: An Analytical Memoir” by Charles T. Cleveland and Daniel Egel.)

      “Advocates of UW first recognize that, among a population of self-determination seekers, human interest in liberty trumps loyalty to a self-serving dictatorship, that those who aspire to freedom can succeed in deposing corrupt or authoritarian rulers, and that unfortunate population groups can and often do seek alternatives to a life of fear, oppression, and injustice. Second, advocates believe that there is a valid role for the U.S. Government in encouraging and empowering these freedom seekers when doing so helps to secure U.S. national security interests.” (See the National Defense University Press paper “Unconventional Warfare in the Gray Zone” by Joseph L. Votel, Charles T. Cleveland, Charles T. Connett, and Will Irwin)

      Problem — for the Machines to help us with:

      Today, unlike just yesterday in LTG Cleveland and GEN Votel’s time, many people in the U.S./the West, and many people throughout the rest of the world also it would seem, no longer seem to believe in — and no longer seem to subscribe to — such things as LTG Cleveland and GEN Votel’s suggested — primary/principle/important — “weapons of irregular warfare,” to wit: our belief in such things as “religious pluralism,” “democracy,” “human rights,” “diversity,” our “moral code,” our “belief in freedom,” our belief in “helping unfortunate population groups who seek alternatives to a life of fear, oppression, and injustice,” etc.

      Thus:

      a. With (Right-leaning?) Americans and others largely eliminating these such “classic” — and highly successful over time — “weapons of irregular warfare” from the U.S./the West’s arsenal today,

      b. How then would the Machines suggest that we should compensate for — and/or reverse/eliminate this such (irreplaceable?) loss — and/or achieve our noted political objectives in spite of same?

      (I now await the Machines advice, suggestions, strategy, etc. Note: Tick Tock!!)

  2. “Humans have gut feelings, and machines do not.”
    And how is that an advantage exactly? Machines have sensors; they do not need feelings.
