Harmful lies are nothing new. But the ability to distort reality has taken an exponential leap forward with “deep fake” technology. This capability makes it possible to create audio and video of real people saying and doing things they never said or did. Machine learning techniques are escalating the technology’s sophistication, making deep fakes ever more realistic and increasingly resistant to detection.
- from “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security” by Danielle Citron and Robert Chesney
For an example of what “deep fake” technology can do, take a look at this video of Barack Obama: it shows the former U.S. president seated, with the American flag in the background, speaking directly to the viewer and using an obscenity to refer to his successor, Donald Trump. In actuality, the video is a fabrication: the words are spoken by actor-director Jordan Peele, and the footage has been manipulated so that President Obama’s lips appear to move in sync with them.
This type of “deep fake” video is accomplished using computer programs that employ a form of artificial intelligence. “An algorithm is trained to recognize patterns in actual audio or visual recordings of a particular person, a process known as deep learning. As with doctored images, a piece of content can be altered by swapping in a new element -- such as someone else’s face or voice -- and seamlessly joining the two.” (“How Faking Videos Became Easy and Why That's So Scary,” Bloomberg, September 9, 2018)
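To make that description a bit more concrete, the sketch below illustrates, in broad strokes, the shared-encoder, per-person-decoder design commonly associated with face-swap deep fakes: a single encoder learns facial structure from footage of two people, each person gets a separate decoder, and a “swap” is produced by decoding one person’s encoded face with the other person’s decoder. This is a minimal illustration rather than the method of any particular tool; the layer sizes, training loop, and random stand-in images are placeholder assumptions, and a real system would train on many thousands of aligned face crops.

```python
# Minimal, illustrative face-swap sketch (assumes PyTorch is installed).
# Not the authors' method or any specific tool's implementation.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent code shared across identities."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 1024), nn.ReLU(),
            nn.Linear(1024, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face crop from a latent code; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 3 * 64 * 64), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()
decoder_a = Decoder()   # learns to reconstruct person A
decoder_b = Decoder()   # learns to reconstruct person B

# Training (sketched): each person's faces are reconstructed through the shared
# encoder and that person's own decoder, minimizing pixel-wise error.
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-3,
)
faces_a = torch.rand(8, 3, 64, 64)   # stand-in for real face crops of person A
faces_b = torch.rand(8, 3, 64, 64)   # stand-in for real face crops of person B
for _ in range(10):                  # a real system trains for far more steps
    optimizer.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    optimizer.step()

# The "swap": encode person A's face, then decode with person B's decoder,
# yielding person B's likeness in person A's pose and expression.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

Because the encoder is shared, it learns pose and expression in a way that either decoder can render, which is what lets the new face or voice be “swapped in” and joined seamlessly, as the Bloomberg piece describes.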
U.S. lawmakers are concerned that deep fake videos could be used to harm national security. U.S. Representatives Adam Schiff (D-CA), Stephanie Murphy (D-FL), and Carlos Curbelo (R-FL) sent a letter to Dan Coats, the Director of National Intelligence, requesting that the “Intelligence Community report to Congress and the public about the implications of new technologies that allow malicious actors to fabricate audio, video and still images.” The letter expresses these concerns:
Forged videos, images or audio could be used to target individuals for blackmail or for other nefarious purposes. Of greater concern for national security, they could also be used by foreign or domestic actors to spread misinformation. As deep fake technology becomes more advanced and more accessible, it could pose a threat to United States public discourse and national security, with broad and concerning implications for offensive active measures campaigns targeting the United States.
A new paper by University of Maryland privacy law professor Danielle Citron and University of Texas School of Law professor Robert Chesney provides “the first in-depth assessment of the causes and consequences of this disruptive technological change, and … explore[s] the existing and potential tools for responding to it.”
“Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security” outlines how this manipulative technology will introduce many harms. “The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases. Deep fakes will exacerbate this problem significantly. Individuals and businesses will face novel forms of exploitation, intimidation, and personal sabotage. The risks to our democracy and to national security are profound as well.”
In “Deep Fakes,” Professors Citron and Chesney offer a broad examination of deep fakes and their far-reaching consequences. The first part of the article describes how deep-fake technology creates such realistic-looking and realistic-sounding videos and discusses how social media amplifies the impact of falsified videos. The second part surveys the benefits and costs of deep fakes. In the final section, the authors examine existing and potential remedies.
Below are a few excerpts from “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security.”
Beneficial Uses of Deep-Fake Technology
Deep-fake technology creates an array of opportunities for educators, including the ability to provide students with information in compelling ways relative to traditional means like readings and lectures. This is similar to an earlier wave of educational innovation made possible by increasing access to ordinary video. With deep fakes, it will be possible to manufacture videos of historical figures speaking directly to students, giving an otherwise unappealing lecture a new lease on life.
Indeed, the benefits to creativity generally are already familiar to mass audiences thanks to the use of existing technologies to resurrect dead performers for fresh roles. The startling appearance of what appeared to be the long-dead Peter Cushing as the venerable Grand Moff Tarkin in 2016’s Rogue One was made possible by a deft combination of live acting and technical wizardry. This was a prominent illustration that delighted some and upset others. The Star Wars contribution to this theme continued in The Last Jedi, when the death of Carrie Fisher led the filmmakers to fake additional dialogue, using snippets from real recordings.
Not all artistic uses of deep-fake technologies will have commercial potential. The possibilities are rich and varied. Artists may find it appealing to express ideas through deep fakes, including but not limited to productions showing incongruities between apparent speakers and their apparent speech. Video artists might use deep-fake technology to satirize, parody, and critique public figures and public officials. Activists could use deep fakes to demonstrate their point in a way that words alone could not.
Harmful Uses of Deep-Fake Technology
Lies about what other people have said or done are as old as human society, and come in many shapes and sizes. Some merely irritate or embarrass, while others humiliate and destroy; some spur violence. All of this will be true with deep fakes as well, only more so due to their inherent credibility and the manner in which they hide the liar’s creative role. Deep fakes will emerge as powerful mechanisms for some to exploit and sabotage others.
Harm to Individuals or Organizations
Like sexualized deep fakes, imagery depicting non-sexual abuse or violence might also be used to threaten, intimidate, and inflict psychological harm on the depicted victim (or those who care for that person). Deep fakes also might be used to portray someone, falsely, as endorsing a product, service, idea, or politician. Other forms of exploitation will abound.
In addition to inflicting direct psychological harm on victims, deep-fake technology can be used to harm victims along various other dimensions due to their utility for reputational sabotage. Across every field of competition—workplace, romance, sports, marketplace, and politics—people will have the capacity to deal significant blows to the prospects of their rivals.
The nature of today’s communications environment enhances the capacity of deep fakes to cause reputational harm. The combination of cognitive biases and algorithmic boosting described above increases the chances for salacious fakes to circulate. The ease of copying and storing data online—including storage in remote jurisdictions—makes it much harder to eliminate fakes once they are posted and shared. Ever-improving search capacities combine with these considerations to increase the chances that potential employers, business partners, or romantic interests will encounter the fake.
Harm to Society
Deep fakes are not just a threat to specific individuals or entities. They have the capacity to harm society in a variety of ways. Consider the following possibilities:
- Fake videos could feature public officials taking bribes, displaying racism, or engaging in adultery.
- Politicians and other government officials could appear in locations where they were not, saying or doing horrific things that they did not.
- Soldiers could be shown murdering innocent civilians in a war zone, precipitating waves of violence and even strategic harms to a war effort.
- A deep fake might falsely depict a white police officer shooting an unarmed black man while shouting racial epithets.
- Falsified video appearing to show a Muslim man at a local mosque celebrating the Islamic State could stoke distrust of, or even violence against, that community.
- A fake video might portray an Israeli official doing or saying something so inflammatory as to cause riots in neighboring countries, potentially disrupting diplomatic ties or sparking a wave of violence.
- False audio might convincingly depict U.S. officials privately “admitting” a plan to commit an outrage overseas, exquisitely timed to disrupt an important diplomatic initiative.
- A fake video might depict emergency officials “announcing” an impending missile strike on Los Angeles or an emergent pandemic in New York City, provoking panic and worse.
As these hypotheticals suggest, the threat posed by deep fakes has systemic dimensions. The damage may extend to, among other things, distortion of democratic discourse on important policy questions; manipulation of elections; erosion of trust in significant public and private institutions; enhancement and exploitation of social divisions; harm to specific military or intelligence operations or capabilities; threats to the economy; and damage to international relations.
Undermining Public Safety
A century ago, Justice Oliver Wendell Holmes warned of the danger of falsely shouting fire in a crowded theater. Now, false cries in the form of deep fakes go viral, fueled by the persuasive power of hyper-realistic evidence in conjunction with the distribution powers of social media. The panic and damage Holmes imagined may be modest in comparison to the potential unrest and destruction created by a well-timed deep fake. In the best-case scenario, real public panic might simply entail economic harms and hassles. In the worst-case scenario, it might involve property destruction, personal injuries, and/or death. Deep fakes increase the chances that someone can induce a public panic. And they need not capitalize on social divisions to do so.
Undermining Journalism
As the capacity to produce deep fakes spreads, journalists increasingly will encounter a dilemma: when someone provides video or audio evidence of a newsworthy event, can its authenticity be trusted? That is not a novel question, but it will be harder to answer as deep fakes proliferate. News organizations may be chilled from rapidly reporting real, disturbing events for fear that the evidence of them will turn out to be fake.
It is not just a matter of honest mistakes becoming more frequent. One can expect instances in which someone tries to trap a news organization in exactly this way. We already have seen many examples of “stings” pursued without the benefit of deep-fake technology. Convincing deep fakes will make such stings more likely to succeed. Media entities may grow less willing to take risks in that environment, or at least less willing to do so in timely fashion. Without a quick and reliable way to authenticate video and audio, the press may find it difficult to fulfill its ethical and moral obligation to spread truth.
Beware the Cry of Deep-Fake News
But not all lies involve affirmative claims that something occurred (that never did): some of the most dangerous lies take the form of denials.
Deep fakes will make it easier for liars to deny the truth in distinct ways. First, a person accused of having said or done something might create doubt about the accusation by using altered video or audio evidence that appears to contradict the claim. This would be a high-risk strategy, plainly, though less so in situations where the media is not involved and where no one else seems likely to have the technical capacity to expose the fraud. In situations of resource-inequality, we may see deep fakes used to escape accountability for the truth.
Read the full article: “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security.”
Danielle Citron is the Morton & Sophia Macht Professor of Law at the University of Maryland Francis King Carey School of Law. Her scholarship focuses on information privacy, civil rights, and administrative law. Professor Citron’s publications include Hate Crimes in Cyberspace (2014) and the book chapter “Civil Rights in the Information Age” in The Offensive Internet: Speech, Privacy, and Reputation.
Robert Chesney holds the James Baker Chair and also serves as the Associate Dean for Academic Affairs at the University of Texas School of Law. In addition, he is the Director of the Robert S. Strauss Center for International Security and Law, a University-wide research unit bridging across disciplines to improve understanding of international security issues.