The Internet’s Fort Knox Problem

By Jonathan Zittrain

Posted on June 3, 2010



The following post is re-published on TAP with the permission of its author, Professor Jonathan Zittrain.

A few weeks ago Internet security firm McAfee released an update to its Windows PC customers designed to protect them against a newly detected virus threat.  Instead, for some, the update destroyed a legitimate, and crucial, system file.  Uncountable numbers of PCs – likely hundreds of thousands, even millions – were rendered unusable.  The University of Michigan medical school lost the use of 8,000 of 25,000 PCs.  State troopers in Kentucky abandoned their cruisers’ mobile PCs and resorted to writing reports by hand.  Some hospitals in Rhode Island turned away non-trauma patients from their ERs.


The issue is larger than one firm’s unfortunate misstep.  It echoes across the entire Internet.  Call it the Fort Knox problem.


Fort Knox represents the ideal of security through centralization: gunships, tanks, and 30,000 soldiers surround a vault containing over $700 billion in American government gold.  It’s not a crazy idea for a nation’s bullion; after all, the sole goal is to convincingly hoard it.  But Fort Knox is an awful model for Internet security.


Our IT environment has traditionally been immune from many Fort Knox issues, because its architecture has encouraged decentralization.  One PC might be compromised, or one Web site might fall, but others stand.  Bad guys on one side of the spectrum, and well-intentioned regulators on the other, each had to sweat to have an impact on Internet activities.


But the bad guys were clever and industrious.  Their digital robots came to costlessly crawl the Web looking for computers and sites to compromise, leveraging their reach.  Operators of well-financed Web sites have dealt with rising anxieties about security by spending enormous amounts of money on digital bunkers and backups for their data, while littler ones have hunkered down and simply hoped they wouldn’t be hit.


The public sector has been confused about how to help.  Governments know how to maintain and defend their roads and waterways, but have been stymied in cyberspace: so much of it is rightly privatized that there’s no obvious place to station a guard and no way to fill a digital pothole.  Worse, since identifying those behind intentional attacks online is exquisitely difficult, the traditional state tools of deterrence and punishment are ineffective.


That’s why we now see centralization under a few major corporate umbrellas under which disparate activities can be gathered.  The lures of security, interoperability and economies of scale have propelled much of the Web from a vibrant ecosystem of different, and differently managed, PCs and sites to one where a handful of private Fort Knoxes take responsibility for security.


But we can’t simply put our precious data into a single well-protected vault and peek in every few years.  We need to guard our PCs and data, but we also need them to be part of a worldwide network.  When we’re not masking our digital trail, we’re eagerly sharing it.  If we try to centralize its protection, it’s not a one-time transaction: rather, we need a constant gatekeeper who signs our data in and out every time we want to make use of it.  That’s a thread that runs from the McAfee debacle, where millions of people and firms turned the keys to their computers over to a third party to handle, through to cloud-based platforms like Facebook, where the company’s assent is increasingly needed to run unrelated applications on its platform or to log in to unaffiliated Web sites that no longer care to maintain their own digital borders.


If McAfee makes a mistake, many people pay at once.  If Facebook’s computers go down or are compromised, thousands of otherwise-independent applications and sites suddenly go down with it.  It’s not just our own data and transactions at risk, but our collective memory: the flip side of a centralized defense against bad guys is vulnerability to well-meaning good guys.  For example, if the generally laudable Google Books project is a spectacular success, we’ll see libraries give up their moldering, isolated archives of regular books in exchange for PC terminals where patrons can peer at an ephemeral digital copy drawn from Google’s central archive.  It makes sense – and no doubt Google has near-impregnable backups – but it’s also an opportunity for a government to intervene in worrisome ways.


For example, if one book in the system contains copyright-infringing, or defamatory, or obscene material, those aggrieved can get a court order requiring the infringing pages of the book to be deleted from the central server.  This vulnerability affects every book that is distributed and maintained through a centralized platform.  Anyone who does not own a physical copy of the book – and a means to search it to verify its integrity – will now lack access to that material.  By centralizing (and to be sure, making more efficient) the storage of content, we are building a world in which, as a practical matter, all copies of once-censored books like Candide, The Call of the Wild, and Ulysses could have been permanently destroyed at the time of the censoring, and could not be studied or enjoyed even after subsequent decision-makers lifted the ban.


So what do we do?  We have two things going for us that the real Fort Knox doesn’t: we can make copies of our digital gold, and there are lots of us, each with our own stake in security and autonomy.

First, so long as there aren’t undue barriers to extracting our own data from cloud platforms or our own PCs, backups can become more seamless and can be made in a variety of ways, making a McAfee misstep or anything like it less costly.  Then we have our cake and eat it too.  The same principle applies to projects like Google Books, where participating libraries can arrange to securely maintain their own gold copies of Google’s precious trove – kept to compare against others’ copies, so omissions and changes can be detected and appropriately challenged, not leaving Google with the sole burden of holding off government speech regulation.
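The comparison a library would run against its "gold copy" amounts to a cryptographic integrity check.  A minimal sketch of the idea, using standard SHA-256 digests (the file contents here are invented placeholders, not real archive data):

```python
import hashlib


def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()


def integrity_matches(gold_copy: bytes, served_copy: bytes) -> bool:
    """Detect omissions or alterations by comparing digests of the
    locally held gold copy against the centrally served copy."""
    return sha256_of(gold_copy) == sha256_of(served_copy)


# Hypothetical example: a library's archived page vs. the served version.
gold = b"Candide, Chapter 1: How Candide was brought up in a magnificent castle"
served = b"Candide, Chapter 1: How Candide was brought up in a magnificent castle"

print(integrity_matches(gold, served))  # True: the served copy is unaltered
```

Because many independent libraries can each hold their own digests, a silent deletion or edit at the center would disagree with every distributed copy at once – which is precisely the detection-and-challenge property the paragraph above describes.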


Second, we need to reinvigorate the Internet’s principle of open, distributed architecture that has sparked so much growth and innovation.  Our choices for security aren’t simply among government soldiers, corporate mercenaries, or our own personal barricades – though each has a valuable role to play.  Rather, we can reinforce open, shared early warning systems to enumerate and deal with security threats, whether against PCs, Web sites, or Internet connectivity.  With a few technical tweaks, we can all further help relay data from Web sites that are under attack, stabilizing their presence.  Security shouldn’t have to be purchased like a personal bodyguard.  Far more flexible than Fort Knox are people, each with their own pocketed gold and machinery, empowered to look out for one another.

Original post from The Future of the Internet Blog.

