NO MAGIC WAND: THE IDEALIZATION OF SCIENCE IN LAW

by David S. Caudill and Lewis H. LaRue. Lanham, Maryland: Rowman & Littlefield Publishers, Inc. Co-published with The Center for Public Justice, 2006. 170pp. Cloth $65.00. ISBN: 0742550222. Paper $24.95. ISBN: 0742550230.

Reviewed by Timothy M. Hagle, Department of Political Science, The University of Iowa. Email: timothy-hagle [at] uiowa.edu.

pp.300-303

“Political Scientists (and other social scientists) are no doubt well aware of both the requirements and limitations of what might be termed ‘scientific knowledge.’ We routinely consider matters of reliability and validity, but two questions persist: what do we know and when did we know it. Even the most rigorous study with highly significant results must be couched in probabilistic terms that reduce the certitude with which we can claim to know something.”

“There are certainly disagreements about what constitutes ‘good science,’ and various disciplines likely have variations concerning their practices, but the overall model is relatively stable within the scientific community. Like most models, however, it is only valid within a certain context. Problems ensue when results derived from scientific studies are used in other contexts, such as legal proceedings.”

The above two paragraphs were the beginning of my 1999 review for the Law and Politics Book Review of Foster and Huber’s JUDGING SCIENCE: SCIENTIFIC KNOWLEDGE AND THE FEDERAL COURTS. Eight years on, they still provide a good introduction to the topic addressed by the authors of NO MAGIC WAND.

Unlike Foster and Huber, David Caudill and Lewis LaRue take a more philosophical approach in NO MAGIC WAND. They begin by briefly describing the two sides in the “science wars.” As they describe them, one side consists of the “believers in science as an enterprise that reports on natural reality, or at least successfully represents nature with models that correspond to reality.” The other side consists of those “who view science as a social, rhetorical, and institutional enterprise that only manages to convince us that it deals in natural reality.” I suspect that we would be hard pressed to find strict adherents to either position – science as revealed truth versus science as social construct – and the authors reject the need to choose between them as a false dichotomy. To do so is to fall prey to the “idealization of science” of the book’s subtitle.

Caudill and LaRue describe three basic mistakes that result in idealizing science. The first is to view science as merely a social construct, and therefore virtually meaningless. Those who take this view [*301] place undue emphasis on the human side of science: the mistakes, biases, and failings of scientists. The result is that all science is minimized. The remaining two mistakes, according to the authors, both stem from unduly glorifying science: either blithely accepting anything the scientist says without considering the human side, or punishing scientists for not living up to lofty expectations of what science should be. Not surprisingly, Caudill and LaRue recommend a middle ground.

Before proceeding to a fuller explanation of these three mistakes, the authors review the legal context in which judges, as the gatekeepers of scientific evidence used in court, operate. The authors briefly review DAUBERT v. MERRELL DOW PHARMACEUTICALS (1993), as well as two post-DAUBERT cases that clarified the Supreme Court’s position on the use of scientific evidence. DAUBERT, of course, is the Supreme Court’s decision that clarified how Rule 702 of the Federal Rules of Evidence should be interpreted in considering the admission of expert evidence. Although DAUBERT is well known at this point, the review is helpful in that it highlights the lower court decisions and how the Supreme Court corrected them. In particular, the authors note that most lower court decisions focused on four critical paragraphs in Justice Blackmun’s opinion for the DAUBERT Court. These paragraphs contain what are known as “the DAUBERT criteria,” which Caudill and LaRue characterize as testability, low error rate, peer-reviewed publication, and general acceptance. Although these criteria are important, the authors argue that commentators placed more emphasis on them than the Court intended. They note that the Court qualified the use of the criteria in the paragraphs immediately preceding and following the four in question. In particular, Justice Blackmun emphasized in those two paragraphs that the standard the Court laid out was “a flexible one.” Although I generally agree with the authors that commentators and others may have placed too much emphasis on the four criteria the Court specified, one can hardly blame them for doing so. When the Court indicates that it approves of a specified set of criteria, the smart bet is to follow that set of criteria in arguing a case – at least until the Court provides additional guidance.

Having provided the legal context, Caudill and LaRue flesh out the mistakes identified in their initial chapter over the next three chapters. They do so by drawing heavily on passages from federal Court of Appeals opinions that explain errors the district courts had made. The material in these chapters is interesting, as the authors examine a variety of aspects of each mistake, providing concrete case examples along the way. Although informative, the quoted passages are often too specific to the individual cases, and the analysis would be stronger had the authors provided additional explanation of each mistake and its correction. Perhaps they believe that simply recognizing that the mistakes exist is enough to avoid them. Either way, at [*302] the end of these chapters readers are left expecting more guidance. The fifth chapter attempts to explain how science works as a practical (i.e., not idealized) activity. This information is interesting and useful but, again, falls short of offering direction as to how to avoid or correct for the three basic mistakes. The final chapter moves in this direction, but only slightly. The authors suggest the need for science studies for law but do not delve into specifics.

I must give credit to Caudill and LaRue for highlighting problematic aspects of using science in the courtroom, but the fact that science is a messy business is not new – even within the legal community. Thus, it is fair to ask how this recognition advances the discussion of how judges are to serve in their role as gatekeepers of scientific evidence post-DAUBERT.

Even to the extent that one considers this an introduction to the practical aspects of science, it would have helped if the authors had paid greater attention to the different ways that science enters the courtroom. At a very fundamental level, a judge may need to inquire whether a particular activity can even be considered “science.” Phrenology and astrology are certainly not considered sciences today, though there was a time when they were very much in fashion. Few would reject psychology outright, but it can often be on shaky ground when any number of experts can be lined up on either side of an issue, new syndromes seem to be discovered endlessly, and some approaches have been widely rejected. On the other hand, how can judges evaluate whether a new technique should be admitted? John Douglas, who developed criminal profiling techniques for the FBI, writes in one of his books of the initial difficulty of getting courts to admit his profiling testimony. Thus, judges must have some criteria for determining whether an activity can properly be considered scientific. Of course, because these criteria depend on outside evaluations of the activity’s merit – i.e., acceptance in the “scientific community” – they are subject to change as relevant scientific communities accept or reject various theories and approaches.

Of course, even if a particular theory is considered scientific, the question remains whether appropriate procedures were followed to arrive at the result. For example, DNA matching is firmly established as a valid and reliable technique for identification – and I use those terms in both their scientific and legal senses. Even so, a particular identification made through DNA matching may still be challenged based on faulty or improper handling by the testing facility or by the police. One could fairly say that such issues are beyond the essential question of DAUBERT and its progeny, as the question addressed there was simply whether the judge should let the evidence in, not what conclusions the fact finder (judge or jury) was to draw from it. True enough, but the authors do not draw this distinction. This may be unfair to the authors, however, as it was clear from their examples that many judges drew conclusions about the worth of the proposed scientific evidence at the gatekeeping stage. [*303]

On the whole, NO MAGIC WAND is readable and interesting. It provides specific case examples showing how various judges have interpreted the DAUBERT criteria, and it will be most useful to readers less familiar with the problems associated with using scientific evidence in the courtroom.

REFERENCE:
Foster, Kenneth R., and Peter W. Huber. 1999. JUDGING SCIENCE: SCIENTIFIC KNOWLEDGE AND THE FEDERAL COURTS. Cambridge, Massachusetts: MIT Press.

CASE REFERENCE:
DAUBERT v. MERRELL DOW PHARMACEUTICALS, 509 U.S. 579 (1993).


© Copyright 2007 by the author, Timothy M. Hagle.