Imagine a technology that is potently, uniquely dangerous — something so inherently toxic that it deserves to be completely rejected, banned, and stigmatized. Something so pernicious that regulation cannot adequately protect citizens from its effects.
That technology is already here. It is facial recognition technology, and its dangers are so great that it must be rejected entirely.
“Amazon Needs to Stop Providing Facial Recognition Tech for the Government” by Professors Evan Selinger and Woodrow Hartzog
According to Professors Evan Selinger, Rochester Institute of Technology, and Woodrow Hartzog, Northeastern University, “we’ve been led to believe that advances in facial recognition technology will improve everything from law enforcement to the economy, education, cybersecurity, health care, and our personal lives.” Unfortunately, they say, “we’ve been led astray.”
In their article, “Amazon Needs to Stop Providing Facial Recognition Tech for the Government,” Professors Selinger and Hartzog recognize that “a litany of technologies, from the automobile to the database to the internet itself, has contributed immensely to human welfare. Such technologies are worth preserving with rules that mitigate harm but accept reasonable levels of risk.” However, the professors stress that facial recognition innovations are “not among these technologies.”
Facial recognition technologies, they argue, can’t exist with benefits flowing and harms adequately curbed. That’s because the most-touted benefits of facial recognition would require implementing oppressive, ubiquitous surveillance systems and the kind of loose and dangerous data practices that civil rights and privacy rules aim to prevent. Consent rules, procedural requirements, and boilerplate contracts are no match for that kind of formidable infrastructure and irresistible incentives for exploitation.
Below are some additional excerpts from “Amazon Needs to Stop Providing Facial Recognition Tech for the Government”:
Can Amazon Be an Agent of Social Change?
The American Civil Liberties Union (ACLU) and other coalition partners this week wrote to Jeff Bezos demanding that Amazon stop providing government agencies with facial recognition technology and services associated with Rekognition, the company’s facial recognition system. The system poses a major threat to civil liberties because it “can identify, track, and analyze people in real time and recognize up to 100 people in a single image,” scanning data against a set of tens of millions of faces.
In sum, there is a general problem (no anonymity in public) with unevenly distributed consequences (more threatening to minority and other vulnerable groups). Facial recognition enables surveillance that is oppressive in its own right; it’s also the key to perpetuating other harms, civil rights violations, and dubious practices. These include rampant, nontransparent, targeted drone strikes; overreaching social credit systems that exercise power through blacklisting; and relentless enforcement of even the most trivial of laws, like jaywalking and failing to properly sort your garbage cans.
Amazon rejected the appeal. Matt Wood, Amazon’s general manager of artificial intelligence, compared facial recognition technology to the internet. He noted that there are both positive and negative uses of facial recognition technology, and he argued that the threat of bad actors doesn’t outweigh all the good that responsible use of facial recognition technology can yield, like “preventing human trafficking, inhibiting child exploitation, reuniting missing children with their families.”
In short, Amazon’s response suggests that if problems ever do arise, appropriate policy correctives can and should be followed. From this perspective, Amazon is acting as if the law is up to the tasks ahead and the company has an ethical obligation to stay the course.
Facial Recognition Technology Creep
As we see it, our procedural pessimism is rooted in a defensible notion of facial recognition technology creep. Facial recognition creep is the idea that once the infrastructure for facial recognition technology grows to a certain point, with advances in machine learning and A.I. leading the way, its use will become so normalized that a new common sense will be formed. People will expect facial recognition technology to be the go-to tool for solving more and more problems, and they’ll have a hard time seeing alternatives as anything but old-fashioned and outdated. This is how “techno-social engineering creep” works — an idea that one of us discussed in detail in Re-Engineering Humanity with Brett Frischmann.
To appreciate why facial recognition technology creep is a legitimate way to identify slopes that are genuinely slippery, you have to be a realist and accept the fact that some, though not all, technological trajectories can be exceedingly difficult to change. This is especially so in the case of trajectories formed by infrastructure that grows significantly, where the growth is propelled by strong interest across sectors; heavy financial investments; heightened expectations from consumers, citizens, and politicians; increased social, personal, regulatory, and economic dependency; and limited legal speed bumps that stand in the way.
Our face is one of our most important markers of identity, and losing control of it is perhaps the greatest threat to our obscurity. We often recognize others by their faces, even as people age. Faces are also the easiest biometric for law enforcement to obtain, because they can be unobtrusively, inexpensively, and instantly scanned and tend to be hard to hide without taking drastic or conspicuous steps.
The main difference between what we see as being a realist about the power of infrastructure and being a technological determinist comes down to different takes on alternative pathways. It might sound like a contradiction in terms, but the realist can believe in the transformative potential of ideals — ideals, like civil rights, that should matter more than worshipping at the altar of efficiency. These ideals are damned hard to champion, but they’re not preordained to fail. Such ideals place moral progress ahead of the technological variety, and they take courage — not mere will — to create and preserve.
Read the full article on Medium: “Amazon Needs to Stop Providing Facial Recognition Tech for the Government.”
Evan Selinger is a Professor of Philosophy at Rochester Institute of Technology, where he is also Head of Research Communications, Community & Ethics at the Center for Media, Arts, Games, Interaction, and Creativity (MAGIC). His research primarily addresses ethical issues concerning technology, science, the law, and expertise.
Woodrow Hartzog is Professor of Law and Computer Science at Northeastern University School of Law and holds a joint appointment in the College of Computer and Information Science. Professor Hartzog teaches privacy and data protection issues, and his research focuses on the complex problems that arise when personal information is collected by powerful new technologies, stored, and disclosed online. He is an internationally recognized expert in the areas of privacy, media, and robotics law.