SECURING PRIVACY IN THE INTERNET AGE, by Anupam Chander, Lauren Gelman, and Margaret Jane Radin (eds). Stanford, CA: Stanford University Press, 2008. 384pp. Paper. $39.95. ISBN: 9780804759182.
Reviewed by Gloria C. Cox, Honors College and Department of Political Science, University of North Texas. Email: Gloria.Cox [at] unt.edu.
SECURING PRIVACY IN THE INTERNET AGE, a collection of articles edited by Anupam Chander, Lauren Gelman, and Margaret Jane Radin, is the result of a major symposium at Stanford University in 2007 that brought together advocates of privacy who wanted to “protect individuals from snooping corporations” and advocates of security who wanted to “protect corporations from snooping individuals” (p.2). The editors have assembled in this four-part book a range of articles presenting viewpoints and perspectives on various aspects of the related but distinguishable issues of privacy and security. Part I covers existing security and privacy laws, while the remaining parts explore using the common law, statutory reforms, and market options to promote privacy and security.
Much of SECURING PRIVACY IN THE INTERNET AGE deals with the inadequacy of current laws in the United States, a point on which there seems to have been considerable agreement among symposium participants. While the authors look at privacy and security from a number of perspectives, their general concern, according to the editors, was “the corporations and individuals who seek to profit from gathering personal information about others” (p.2). As the editors note, “The law securing privacy in the United States is cobbled together from a disparate array of federal statutes, a few state laws, and common law” (p.3), a patchwork that lacks any overarching framework and therefore protects privacy only “for limited domains and in certain circumstances” (p.3).
Throughout SECURING PRIVACY IN THE INTERNET AGE, authors are careful to note the important distinctions between privacy and security. This amounts to a very interesting discussion, best appreciated by a careful reading of pertinent sections of the introduction and first five articles. In general, privacy may best be understood as a value, while security is a process. For example, while people may value being able to keep their financial information to themselves, the security of that information is jeopardized by data gathering, aggregation, and distribution, whether of the legal or illegal sort. The security of personally identifiable data rests on the effectiveness of the process in place for protecting it from those with no right to see it.
Many of the contributors recognize that people voluntarily disclose personal information all the time in order to receive a benefit of some kind, such as a mortgage, credit card, or automobile loan. Having no other choice, we entrust our information to a financial institution, hoping there will be no security breach. But such breaches occur frequently, [*717] because, generally speaking, data are poorly protected. The institution may even sell or give our information to other companies or organizations, in accord with – or in defiance of – the guarantees made to us when we provided it. Either way, the information is out there, subject to use for or against our interests. Indeed, it is quite likely that personally identifiable data will be used for purposes beyond and outside those for which they were collected. Take, for example, credit card companies that routinely compile data on the personal preferences of cardholders, creating in the process a fact-filled profile of each consumer.
Several authors also ponder what constitutes an “appropriate” or “reasonable” standard, which is typically the legal requirement governing how data are handled and what procedures will be followed. Contributor Ian Ballon makes the case that, in the absence of comprehensive laws for handling data, litigation will increase to fill the gaps. He also points to action taken by federal agencies under their rulemaking power, including the Federal Trade Commission, which has investigated and ruled against several companies, including Petco, Eli Lilly, Microsoft, Guess, Inc., and Tower Records (p.48). The FTC’s tactic is to review the privacy and security guidelines companies post for consumers and then examine what those companies are actually doing. When companies do not live up to the terms they post and require consumers to accept, the agency demands compliance.
Related to this discussion is that of authors Sobel, Petrulakis, and Dixon-Thayer, who note that consumers are usually trapped when it comes to legal options in dealing with Internet activities. In order to claim a benefit, get information, or even make a purchase, the consumer must click through a portal at which he or she agrees to the contract provisions set forth by the company. Because people are generally unable or unwilling to read the entire contract – and they want the benefit available on the other side – they simply click the “I Agree” button, often to their detriment later on, when they discover they have agreed to all sorts of disclosures or even to mediation should they have a complaint. Author Andrea Matwyshyn studied the terms of such agreements at two points in time, years apart, and reports major changes in what the agreements contain, including a substantial shifting of burdens away from the companies and onto consumers.
Finally, Tim Wu explores the international scene, with considerable attention to the sophisticated and overarching privacy protection that characterizes the European Union (EU). For decades, Europeans have exhibited much more concern about these issues than people elsewhere, and laws in EU countries tend to drive legislation in the rest of the world, because doing business within the countries of the European Union requires compliance with them. European data controllers must adhere to the principles of notice, fidelity, and proportionality (p.98). While the EU represents the high point worldwide in privacy protection and security, its values and protections have not readily spread to other places. In fact, privacy is simply not a meaningful concept in many parts of the world. By way of comparison to the European Union, the author notes the existence of cameras inside all Internet cafes in China (p.91). [*718]
In Part Two of SECURING PRIVACY IN THE INTERNET AGE, authors consider ways that privacy and security may be promoted through the Common Law, beginning with Daniel Solove’s assessment of just how vulnerable individuals are to identity theft and computer hacking. He notes that improper access to huge amounts of data is a common problem, and his view, like that of others, is that a substantial amount of blame must be placed on the use of poorly designed systems that offer little protection to privacy. Although identity theft has been a federal crime since 1998, many law enforcement officials treat it as a minor crime not worthy of much time and effort on their part. Solove cites an estimate that only one in every 700 identity theft cases is ever solved, although the crime is costly to individuals and organizations (p.115).
While it seems late in the book for the issue to come up, it is in Attorney Marcy Peek’s article that we first see a serious philosophical case for protecting information privacy. The author notes that “the push for personal data control is a demand for informational autonomy, human dignity, and individual freedom” (p.138). She views third-party companies that traffic in personal data as one of the most important aspects of the problem because they have no relationship whatsoever to the person on whom the data are collected and therefore have no incentive to protect them. Members of the public find their personally identifiable information in the hands of organizations with which they have no relationship, in an environment that provides very little legal protection. Peek notes that ChoicePoint, whose data security lapses are well documented, owns 19 billion records, or “about 65 times as many pieces of information as there are people in the United States” (p.139). The author recommends legislation that would charge such companies with “unjust enrichment” when they “mishandle proprietary data” (p.140).
Several authors express concern about the practice of software companies releasing products to the public prematurely, even though they know the products contain security flaws. They argue that these flaws represent a major but avoidable problem. Several factors contribute, including the financial advantage of getting software to market as quickly as possible and the lack of any discernible public demand for better software security. This is not surprising, of course, as the average consumer is incapable of assessing the security-related qualities of software (p.158). Additionally, some software systems dominate the market, virtually eliminating competition as an incentive toward better quality. The end result of this situation is the current system of patching software flaws that are “detected” after release. This is hardly an optimal solution, as the cost of patching is “sixty times as much as the cost of fixing the error at the design stage” (p.159). Moreover, patches often create new problems in the effort to solve old ones, and the fix itself may be quite insecure. Some well-known flaws appear repeatedly in software (p.170), which should be taken as a sign that they are correctible before the software reaches the market.
A fairly novel legal argument is offered by Shubha Ghosh and Vikram Mangalmurti, who suggest that computer hazards are really not much different from other problems such as workplace [*719] safety hazards and defective consumer products. In their view, mandating a certain level of computer security is just as important to society as specifying that workers should not be exposed to asbestos in the workplace. The authors use the term social assurance to convey the idea that society has something at stake when data are handled inappropriately. As they note, “Although individuals are the immediate victims of data losses and leakages of private information that arise from the hacking of security systems, social institutions and systems are harmed as well” (p.188). They argue that people lose faith in the system, which harms society. This is an interesting and innovative argument that may well emerge as an important justification for more comprehensive legislation.
Consideration of such comprehensive legislation is the subject of Part Three of the book. Chris Jay Hoofnagle does a fine job of exploring identity theft, especially those cases involving credit reports. He comes down hard on the credit industry for its lax practices in granting credit – a laxity confirmed by recent revelations in the industry – and supports the position that the default status of credit reports should be changed from liquid (available) to frozen (unavailable). Data about each of us, as consumers of credit, would continue to pile up in credit reports, but imposters would have trouble opening new accounts, because approvals rely so heavily on those reports. In fact, the author says that about 5 billion credit offers went out in 2003, and that the typical approval process took just two seconds. Under a frozen system, we could, as consumers, allow our mortgage broker or other lender to have access, but the default state would be closed. Hoofnagle argues that under such a plan, fewer people would be victimized by identity thieves, and, of course, fewer dogs and babies would get credit card offers in the mail.
One of the most interesting issues raised in Part Three is Radio Frequency Identification (RFID) tags and reader systems. Effective use is made of these tags by large businesses such as WalMart that need to track inventory, and also by those who need to track livestock and automobiles. The heartwarming stories of pets returned to owners because of implanted chips suggest another use for RFID tags. Interestingly, although RFID tags are unique identifiers, they can be read by anyone with the proper equipment, not just those who paid for their installation. The United States is currently using RFID tags in all I-94 visas for immigrants, giving the U.S. government a complete travel history for every visitor to this country. Opponents characterize this practice as having the effect of fitting every visitor with a remote-control tracking device. If you recently acquired or renewed a passport, as I did, you probably have your own personal tracking device in the form of a passport-implanted RFID. A wireless reader can detect the information included with your passport, letting others know, whether they need to or not, where you are at all times. Other countries go much further: China has issued ID cards with RFID tags for all Chinese citizens. Welcome to the twenty-first-century version of BRAVE NEW WORLD.
Finally, A. Michael Froomkin takes on the issue of national identity cards. Few [*720] proposals evoke such a negative response as national ID cards, but the truth is that they are much less of an issue now than they were a couple of decades ago simply because they already exist in the virtual sense. One important difference is the amount and types of information that could be loaded on cards if the United States were to decide to require them. Currently, advocates suggest loading the cards with a tremendous amount of information: biometric information (eye color, for example); marital status; educational background; voter eligibility; salary and place of employment; residence address and telephone numbers; banking data; citizenship status; visas; religion; legal cases (bankruptcies, lawsuits, for example); and medical history, including DNA information. Such cards could be tied to all sorts of privileges and responsibilities, including the right to vote and whether we had already participated; eligibility for jury pools; the right to work; and whether child support or alimony is owed. A person might even have to show his card to pay a toll or cross a bridge, for example. Such coding would allow the enforcement of all kinds of social policies in a comprehensive manner heretofore unknown in the United States.
Froomkin notes the tremendous – and inevitable – risks tied to such a system of ID cards, beginning with the chance or even likelihood of inaccurate data. Were inaccurate data to be incorporated into a person’s record, it would undoubtedly be difficult to remove. While birth certificates are heavily relied upon for citizenship information, we should not forget that birth certificates are the result of the work of numerous hospitals, each with its own system of recordkeeping and standard of accuracy.
Inaccurate information is only the beginning, however, as accurate information may be even more harmful if medical conditions, financial problems, or past criminal convictions were revealed. The author speaks of such cards creating what he calls social lepers by the information they reveal. And revealed it would be – to many people, including the police, school authorities, government officials of various types, and many others with a reason and/or right to know. The author cites numerous costs people would incur, but few resonated with me as profoundly as what he calls the moral and psychological costs of losing the right to move through life without having to produce an ID card. As he notes, “the prospect of J. Edgar Hoover with a computer and a national ID database is not an attractive one” (p.306).
Clearly, this is a book that moves among many interesting and even compelling topics on the important issues of privacy and security. The richness of this book lies in its incorporation of a variety of perspectives, including copious footnotes for every chapter. It is thought-provoking and interesting and would introduce the novice to many aspects of privacy and security. While the extraordinarily high level of legal detail will not appeal to every reader, the book can be read at a level that makes it, and the important subject it covers, accessible to any college-level reader with an interest.
© Copyright 2009 by the author, Gloria C. Cox.