November 21, 2024
As if watching the news or social media wasn't already complicated enough, now you have to contend with deep fakes. The stuff of information nightmares, a deep fake maps the face of a celebrity or a political or military leader onto another person's head. The result is a video of the leader or celebrity enacting the facial expressions, behaviors, and mouth and eye movements of the target individual and, even worse, saying whatever the faker desires. And deep fakes are getting harder to spot. WAR ROOM welcomes Matthew Fecteau as he considers the implications for national security, information operations, and propaganda as deep fakes become ever more sophisticated. He examines the near- and long-term actions the United States must take to harness the technology as well as guard against it.

“OMG! Did you see Kim Jong Un’s speech about the fragility of democracy? You didn’t? Oh, I’ll send you the link.”

Except, of course, the video isn’t real. But that does not mean it won’t spread like wildfire and be believed by those who want to believe it. Incorporating artificial-intelligence-enhanced deep-fake audiovisuals will be a game changer for information operations, but getting there before our enemies requires a sense of urgency.

Per military doctrine, information operations are focused on targeting the adversary’s decision-making ability, and nothing would do this better than a deep fake that gives orders, tells troops to stand down, or shows a leader collaborating with the enemy. Even the accusation of being a deep fake can cause political unrest, as was seen in Gabon.

Within the information environment, the human-centric cognitive dimension remains most important. The U.S. joint force can destroy enemy combatants, but a war does not end while the enemy’s will to fight endures. The U.S. joint force saw this in Iraq, where insurgents went so far as to publish a magazine in PDF format to promote their alleged victories.

In Iraq, the U.S. joint force never lost a direct battle to the terrorists and insurgents, but that did not stop these groups from fighting, largely because terrorist and insurgent propaganda inspired combatants to keep going. Indeed, ISIS’s former leader, Abu Bakr al-Baghdadi, occasionally released videos calling on his supporters to fight on.

In the age of hyper-realistic deep fakes, this could become a common theme even after the target is neutralized. After all, nothing impacts the cognitive dimension more than a manipulated, hyper-realistic audiovisual designed to motivate, whether it features a charismatic leader or propaganda that enrages viewers. If digital artists can bring Salvador Dali or Tupac Shakur back to “life,” it is only a matter of time before Usama Bin Laden or another deceased terrorist is resurrected to motivate followers from the great beyond.

Visuals are a fantastic marketing tool that can influence consumer behavior, but relatively few studies have examined visuals as they relate to a combat zone. In the private sector, one research study concluded that, after three days, a user retained almost 65% of visual information versus a fraction of that for written or spoken communication. If the same holds true on the battlefield, a hyper-realistic deep-fake audiovisual would be a superior means of disrupting an adversary’s decision-making ability.

At the moment, the technology behind deep fakes is rudimentary and still in development; for now, its main application is pornography. That will change in the coming years with advances in the artificial intelligence capabilities known as generative adversarial networks (GANs), which can produce a realistic visual from an image in seconds (though the accompanying audio is typically deficient). These fabricated visuals can be used either in support of U.S. objectives or against them.
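To make the adversarial idea concrete, here is a minimal sketch of a GAN training loop: a generator learns to fabricate images while a discriminator learns to flag them, and each round of competition improves both. This is an illustration only; the choice of PyTorch, the tiny fully connected networks, and the random tensors standing in for real images are all assumptions for the sketch, not a description of any fielded capability.

```python
# Minimal GAN sketch: two networks trained against each other.
import torch
import torch.nn as nn

LATENT_DIM = 64      # size of the random noise vector fed to the generator
IMG_DIM = 28 * 28    # flattened image size (e.g., a 28x28 grayscale face crop)

# Generator: maps random noise to a synthetic image.
G = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: scores how "real" an image looks (1 = real, 0 = fake).
D = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    # A real pipeline would draw this batch from genuine imagery; random
    # tensors stand in here to keep the sketch self-contained.
    real = torch.randn(32, IMG_DIM)
    noise = torch.randn(32, LATENT_DIM)
    fake = G(noise)

    # Train the discriminator to separate real from fake.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(32, 1)) + \
             bce(D(fake.detach()), torch.zeros(32, 1))
    loss_d.backward()
    opt_d.step()

    # Train the generator to fool the just-updated discriminator.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(32, 1))
    loss_g.backward()
    opt_g.step()
```

Scaled up to large convolutional networks trained on hours of footage of a single subject, this same loop is what closes the gap between a crude face swap and a hyper-realistic fake.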

Near-peer competitors already use GAN models. Russia’s Internet Research Agency (the same agency that meddled in the 2016 U.S. election) created Facebook pages fronted by GAN-generated personas to convey the supposed authenticity of a fake news site known as Peace Data, which even went so far as to hire U.S. journalists to write stories. These efforts were an apparent, if marginal, attempt to influence the 2020 U.S. election. The People’s Republic of China (PRC) uses GAN models to support around-the-clock news reporting.

This is just the beginning of a deep-fake arms race. Deep-fake technology is in the nascent stages of development, but the U.S. joint force can expect to see further weaponization of hard-to-detect deep fakes, up to the notorious “perfect” or hyper-realistic deep fake. Digital fingerprinting (techniques for detecting deep fakes) may become more pervasive in the future, but only if further resources are dedicated to detection.
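It helps to see how simple the baseline version of fingerprinting can be. One basic form (a hedged illustration, not a description of any DARPA or service program) is cryptographic hashing: the originator of authentic footage records a hash at release time, and any circulating copy can be re-hashed and compared. The file names below are hypothetical.

```python
# Toy provenance check: hash authentic footage so later copies can be verified.
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a media file's bytes."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# At publication, the originator records the fingerprint of the real video:
#   official = fingerprint("commanders_address.mp4")   # hypothetical file
# Later, any circulating copy can be re-hashed and compared:
#   assert fingerprint("suspect_copy.mp4") == official, "file altered"
```

The limitation is worth noting: a byte-level hash catches tampering with a known original but says nothing about a wholly fabricated clip, which is why the statistical detection research described next still matters.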

The U.S. government is attempting to rein in deep-fake technology, and the U.S. joint force should ensure that it understands how to detect deep fakes. On that front, the Defense Advanced Research Projects Agency (DARPA) is developing a number of programs that will make detection far easier for military members by looking at basic physics and mismatched accessories.
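Academic work on detection offers a sense of what such programs look for: the upsampling layers in current GANs tend to leave statistical artifacts in an image’s frequency spectrum. The toy check below illustrates that idea only; it is not one of DARPA’s methods, and the band cutoff, threshold, and random stand-in image are assumptions for the sketch.

```python
# Toy spectral check: GAN-generated images often carry unnatural amounts of
# energy in the high-frequency band of their Fourier spectrum.
import numpy as np

def high_freq_energy_ratio(img: np.ndarray) -> float:
    """Fraction of an image's spectral energy in the outermost frequency band."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    y, x = np.ogrid[:h, :w]
    radius = np.sqrt((y - cy) ** 2 + (x - cx) ** 2)
    outer = spectrum[radius > 0.4 * min(h, w)].sum()  # outer ring only
    return float(outer / spectrum.sum())

frame = np.random.rand(256, 256)        # stand-in for a grayscale video frame
score = high_freq_energy_ratio(frame)
if score > 0.05:                        # threshold chosen for illustration
    print(f"Spectral profile unlike typical camera output: {score:.3f}")
```

A deployable detector would calibrate such statistics against large corpora of known-genuine and known-fake imagery, but the underlying intuition, that fabricated frames are statistically unlike camera output, is the same.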

Don’t think it matters? Video teleconferences now routinely stand in for face-to-face communication. If a call were hacked and a deep-fake representation appeared in the commander’s place, issuing orders, it could seriously jeopardize any mission.

Potential scenarios are almost limitless. Hackers could penetrate the Democratic People’s Republic of Korea’s communications and use a deep fake of its leadership to order a nuclear strike. Likewise, a leadership decapitation strike could be rendered ineffective if the leader is continuously resurrected as a hyper-realistic deep fake that inspires followers to attack coalition forces.

These scenarios may seem far-fetched, but they are the types of challenges that deep-fake technology may soon pose to the U.S. joint force. Since this technology is already shaping the operating environment, it is imperative that U.S. service members learn to identify deep fakes at the lowest level in order to mitigate their influence.

Deep fakes could be a game-changer for U.S. military operations as well. Psychological operators currently focus primarily on leaflet drops or the dissemination of products online through sock-puppet accounts. While this is an important function, the information environment has evolved to the point where, thanks to Photoshop and similar tools, edited images are so pervasive (and so cheap to make) that almost anyone can create them. To truly influence an enemy within the information environment, hyper-realistic deep fakes with near-perfect audiovisual fidelity could stop enemy operations in their tracks.

To ensure a continued competitive advantage, the U.S. joint force must closely examine deep-fake technology in the near future and incorporate it into military operations to support its own objectives. Photoshopped images of malign actors in compromising positions may be useful, but delivering a hyper-realistic video portrayal of a high-value target takes psychological warfare to an entirely new level. Released through the right avenue, hyper-realistic audiovisuals can sow self-defeating insecurity within an enemy unit, and they could be a game-changer at every level of war: strategic, operational, and tactical.

The U.S. joint force needs to do everything in its power to harness this technology. Deep-fake technology should be taught within professional military education and incorporated into the military planning process.

Moreover, the U.S. joint force cannot afford to have critical detection technology such as digital fingerprinting bottlenecked at places like DARPA. The joint force needs this technology now, pushed down to the lowest echelons, so that deep fakes can be identified and flagged before they impact the information environment.

Clearly, GAN models are the ultimate military deception tool, and they need to be incorporated into all military plans. Otherwise, adversaries will begin to use them against U.S. assets at an alarming rate, and the examples shown here are just the tip of the iceberg. 

Maj. Matthew Fecteau is a graduate of the Harvard Kennedy School of Government and an information operations officer with the U.S. Army. Follow him on Twitter @matthewfecteau. He can be reached at matthew.fecteau@gmail.com. The views expressed in this article are those of the author and do not necessarily reflect those of the U.S. Army War College, the U.S. Army, or the Department of Defense.

Photo Description: Screen capture from the DeepTomCruise series. Left: Chris Ume, VFX and AI Artist and creator of the series. Right: Digitally created deep fake of actor Tom Cruise.

Photo Credit: Chris Ume is a Freelance VFX and AI Artist. He created the DeepTomCruise series to demonstrate deepfake technology. His work can be found on his website. Used with permission.

1 thought on “THE DEEP FAKES ARE COMING”

  1. As relates to the topic above (and many other topics as well?), let us ask the seemingly important question: “deep fakes” (etc., etc., etc.) to what strategic end?

    In this regard, let us consider that:

    a. The strategic end on which our opponents/our competitors are focusing their attention versus the U.S./the West today is:

    b. Much the same strategic end toward which we, the U.S./the West, directed our efforts versus the Soviets/the communists during the Old Cold War.

    This strategic end being (in both cases above) to contain, to roll back, and to ultimately eliminate the power, influence, and control of one’s opponent throughout the world.

    As we (and, indeed, our opponents!!!) learned during the Old Cold War:

    a. One of the best ways to contain, to roll back, and to ultimately eliminate the power, influence, and control of one’s “progressive” opponent is:

    b. To successfully “paint” this opponent as a “godless” deviant and, thus, as a pariah state that is part of an “evil empire.”

    This is exactly what the U.S./the West did versus the Soviets/the communists during the Old Cold War of yesterday, and this is exactly what our opponents/our competitors are now trying to do versus the U.S./the West in the New/Reverse Cold War of today, as the following information seems to confirm:

    “Russia and the United States are once again becoming screens for each other, on which corresponding actors project some of their own images arising from the internal logic of corresponding societies. The transformation, the inversion of each other’s images that has taken place since the Cold War, is remarkable. During the Cold War, the USSR was perceived by American conservatives as an ‘evil empire,’ as a source of destructive cultural influences, while the United States was perceived as a force that was preventing the world from the triumph of godless communism and anarchy. The USSR, by contrast, positioned itself as a vanguard of emancipation, as a fighter for the progressive transformation of humanity (away from religion and toward atheism), and against the reactionary forces of the West. Today positions have changed dramatically; it is the United States or the ruling liberal establishment that in the conservative narrative has become the new or neo-USSR, spreading subversive ideas about family or the nature of authority around the world, while Russia has become almost a beacon of hope, ‘the last bastion of Christian values’ that helps keep the world from sliding into a liberal dystopia. Russia’s self-identity has changed accordingly; now it is Russia who actively resists destructive, revolutionary experiments with fundamental human institutions, experiments inspired by new revolutionary neo-communists from the United States. Hence the cautious hopes that the U.S. Christian right have for contemporary Russia: They are projecting on Russia their fantasies of another West that has not been infected by the virus of cultural liberalism.”

    (See the December 18, 2019, Georgetown University, Berkley Center for Religion, Peace and World Affairs article “Global Culture Wars from the Perspective of Russian and American Actors: Some Preliminary Conclusions,” by Dmitry Uzlaner — look for the paragraph beginning with “Russia and the United States as screens for each other’s projections.”)

    Bottom Line Thought — Based on the Above:

    Such things as “deep fakes” (as well as the many other approaches, techniques, and methods used by ourselves and our opponents, today as yesterday) may, it would seem, benefit from being considered more from the standpoint of the “strategic ends” that both “we” and “they” are attempting to achieve. Yes?

    (This being the case, one of the first things that I would begin looking for, based on my own use of the Georgetown University article above, is “fake” articles from prominent universities. Just look at how well [?] I was able to “weave” this article into the point that I am trying to make above.)
