Design Is a Poor Guide to Authorization

By Edward Felten

Posted on May 27, 2013


James Grimmelmann has a great post on the ambiguity of the concept of “circumvention” in the law. He writes about the Computer Fraud and Abuse Act (CFAA) language banning “exceeding authorized access” to a system.

There are, broadly speaking, two ways that a computer user could “exceed[] authorized access.” The computer’s owner could use words to define the limits of authorization, using terms of service or a cease-and-desist letter to say, “You may do this, but not that.” Or she could use code, by programming the computer to allow certain uses and prohibit others.

The conventional wisdom is that word-based restrictions are more problematic.

He goes on to explain the conventional wisdom that basing CFAA liability on word-based restrictions such as website Terms of Use is indeed problematic. But the alternative, as James points out, is perhaps even worse: defining authorization in terms of the technical functioning of the system. The problem is that everything that the attacker gets the system to do will be something that the system as actually constructed could do.

What this means, in other words, is that the “authorization” conferred by a computer program—and the limits to that “authorization”—cannot be defined solely by looking at what the program actually does. In every interesting case, the defendant will have been able to make the program do something objectionable. If a program conveys authorization whenever it lets a user do something, there would be no such thing as “exceeding authorized access.” Every use of a computer would be authorized.

The only way out of this trap—short of giving up altogether the notion of “authorization” by technology—is to say that it is the designer’s intent that matters.

[This approach] requires us to ask what a person in the defendant’s position would have understood the computer’s programmers as intending to authorize. What the program does matters, not because of what it consents to, but because of what it communicates about the programmer’s consent.

But even this underestimates the difficulty of relying on behavior. To see why, consider one of James’s examples: an ATM that was programmed so that when it did not have a network connection, it would dispense $200 cash to anyone, whether or not they even had an account at the bank. An Australian court convicted a Mr. Kennison who withdrew money without having a valid account. Notice that everything about the system’s behavior conveys the message that cash should be dispensed to anyone when there is not a network connection. This behavior of the system was pretty clearly not an error but a deliberate choice by the designers. If the system’s behavior conveyed anything to Kennison, it was that cash was supposed to be dispensed, and that the designers had chosen to make it behave that way. If you conclude Kennison’s use was unauthorized, then you have to get there by arguing that there was an understanding, not expressed in any words or behavior, that spoke more loudly than the system’s behavior. The lack of authorization did not stem from code, and it did not stem from words. Kennison was just supposed to know that the act was unauthorized. This seems plausible for ATM withdrawals, but it can’t extend very far into less settled technical areas.

Why did the ATM’s designers choose to make it dispense money? Presumably they figured that almost all of the users who asked for $200 would in fact have valid accounts holding at least $200, and they wanted to serve those customers even at the risk of dispensing some cash that they wouldn’t have dispensed under normal circumstances. But this design decision seems to assume that people won’t do what Kennison did—that people will not take advantage of the behavior. It’s tempting to argue, then, that it is precisely the lack of technical barriers to Kennison’s act that conveys the designers’ belief that acts of that type were not authorized. But this argument would prove too much—if the existence of a fence conveys lack of authorization, then the non-existence of a fence cannot also prove lack of authorization. The conclusion must be that a system’s behavior is not a very reliable signpost for authorization.

Is there any case where a system’s behavior is a reliable guide to authorization? One possibility is where the system was clearly designed with a particular behavior in mind, but an obvious engineering error created a loophole: for example, a system that requires passwords for account access, but whose implementation treats a zero-length password as valid for every account. Contentious CFAA cases are rarely like this. Text-based definitions of authorization may be problematic; but behavior-based restrictions are often worse.
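The zero-length-password loophole can be made concrete with a small sketch. This is hypothetical code, not from any real system; the account data and function name are illustrative. The point is that the designers’ intent (require the real password) is legible from the structure of the code, even though the code as written permits more:

```python
# Hypothetical password check illustrating the loophole described above.
# All names and data here are illustrative assumptions.

ACCOUNTS = {"alice": "s3cret", "bob": "hunter2"}

def check_password(user, supplied):
    """Intended rule: the supplied password must match the stored one."""
    stored = ACCOUNTS.get(user)
    if stored is None:
        return False  # unknown account: clearly unauthorized
    # Bug: the guard was meant to reject empty input, but instead
    # short-circuits to success when the supplied password is empty.
    if supplied == "":
        return True   # loophole: zero-length password opens every account
    return supplied == stored
```

A user who logs in with an empty password is doing something the program, as actually constructed, permits—yet every other line of the function communicates that a matching password was supposed to be required.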

The preceding is re-published on TAP with permission by its author, Professor Ed Felten, Director of the Center for Information Technology Policy at Princeton University. “Design Is a Poor Guide to Authorization” was originally published May 13, 2013 on Freedom to Tinker.

Note: The James Grimmelmann post that Professor Felten refers to at the top of this piece was re-published on TAP last week. “Computer Crime Law Goes to the Casino” can be read on Professor Grimmelmann’s blog, The Laboratorium, or here on TAP’s blog.


