by Rebecca C. Harris. New Brunswick, New Jersey: Rutgers University Press. 208pp. Cloth. $65.00. ISBN: 9780813543680. Paper. $24.95. ISBN: 9780813543697.

Reviewed by Timothy M. Hagle, Department of Political Science, The University of Iowa. Email: timothy-hagle [at]


Questions of the reliability and validity of scientific evidence have long been of interest to legal practitioners and those who study the judicial process. The US Supreme Court’s decision in DAUBERT v. MERRELL DOW PHARMACEUTICALS (1993) produced a fundamental change in the way federal courts evaluate evidence produced by new and emerging science. Prior book-length treatments have examined how DAUBERT changed the evidentiary standard (Foster and Huber 1999) or, more generally, the relationship between law and science (Caudill and LaRue 2006). Missing from these analyses is an empirical examination of how state high courts have handled the evidentiary questions surrounding new science. In BLACK ROBES, WHITE COATS, Rebecca C. Harris aims to provide such an analysis.

The first chapter is an introduction to the notion that judges are gatekeepers. This comes as no surprise, of course, but it is still worthwhile to recognize how state court judges function in this gatekeeping role regarding the admissibility of new types of scientific evidence.

The second chapter provides a combination of literature review and explanation of the factors that likely influence whether state judges grant or deny the admission of new scientific evidence. The factors that appear relevant to the decision to admit or reject the evidence are similar to factors commonly examined in other models of judicial decision making. Some factors are clearly attitudinal (ideological orientation), while others derive from institutional characteristics (repeat players and litigant status). A third set of extralegal factors includes professionalization of the science and third-party (non-legal) approval.

Another important factor for any state court decision is the legal standard used. Unlike the federal courts, which must follow the standard mandated by the US Supreme Court, state courts may follow any of the three primary standards (or something else). As Harris observes, the easiest is the “reliability” standard characterized by state versions of Federal Rule of Evidence (FRE) 702. The most difficult is the “general acceptance” or FRYE standard, after the federal court of appeals decision in FRYE v. UNITED STATES (1923). The DAUBERT standard forms a middle ground.

With these various factors in hand, Harris turns to an examination of how state courts have dealt with three different types of scientific evidence: forensic DNA, lie detectors (polygraphs), and syndrome evidence [*308] (both rape trauma syndrome and battered woman syndrome). These three types of evidence provide a good contrast in that they differ in terms of being “hard” or “soft” science, by which party tends to support admission, and by the amount of interpretation required by experts.

Harris begins with forensic DNA and shows how, although DNA matching is not as straightforward as is often assumed, most courts approved its admission even in the earliest cases, which appeared in the state high courts in 1989. Several tables are included to demonstrate the relationship of the identified factors with the admissibility decision. For example, one table shows the relationship between the scientific standard used and the acceptance rate. The results here clearly support the author’s characterization of the FRYE standard as the most difficult (77% acceptance rate), the DAUBERT standard as of medium difficulty (86%), and the FRE 702 standard as the easiest (93%). Of the chronological factors, the effects of two reports by the independent National Research Council (NRC) were examined. The first NRC report, in 1992, cast some doubt on the statistical calculations used to determine the chance that the sample came from someone other than the defendant. Several courts cited this report in rejecting DNA evidence in the next few years. A second NRC report, in 1996, was much more approving of DNA matching, and many courts cited this report in deciding to admit DNA evidence from 1996 on. These and other factors seemed to have a direct link to state court decisions.

Polygraph evidence is examined next. Although the timeline for polygraph decisions starts much earlier (1933 for the first polygraph case as opposed to 1989 for the first DNA cases), the number of cases decided is about the same. A striking difference is that although state courts admitted DNA evidence 82% of the time (126 of 153 cases), they rejected polygraph evidence at nearly the same rate (83%; 136 of 163 cases). As with forensic DNA, Harris examines the polygraph cases using many factors in a bivariate fashion. Some factors help to explain the different treatment of polygraph evidence by state courts, but two primary explanations seem to lie outside the main factors. One of these is the level of interpretation needed for the data. DNA evidence essentially indicates that some bit of evidence (hair, blood, and the like) found at some location belongs to the suspect with a specified degree of certainty. In contrast, the theory behind polygraph evidence is that when a person is being deceptive it causes measurable physiological changes (blood pressure, heart rate, and so on). After taking baseline measurements, a trained expert could indicate when the subject was being deceptive based on changes from the baseline. The greater level of required interpretation of the measurements and the greater possibility of error (i.e., “beating” the machine or a false positive) seem to have made more courts dubious as to its reliability.

An additional complicating factor is that the decision on polygraph evidence is not a simple accept or reject. A third possibility is that the parties may stipulate in advance to admission of the results. Even if the parties so stipulate, state court judges still may reject the [*309] results as being insufficiently reliable. As Harris notes, this results in six possible outcomes. Aside from generally complicating the analysis, the number of cases in any cross section was often quite small.

The syndrome evidence cases assessed in the next chapter further complicate the analysis. Here, two different types of syndrome evidence are considered: rape trauma syndrome (RTS) and battered woman syndrome (BWS). Both are generally a form of Posttraumatic Stress Disorder (PTSD). As such, they were developed more for diagnostic than evidentiary purposes. Between the two, RTS tends to be presented by the prosecution in sexual assault cases, whereas BWS tends to be presented by the defense as part of a self-defense argument. The science behind these two types of syndrome evidence is even softer than that behind polygraph evidence and is heavily interpretative. Even so, state courts were much more likely to accept syndrome evidence than polygraph evidence (55% of 31 RTS cases and 68% of 40 BWS cases).

Although Harris again provides several tables to highlight how the various factors are related to the outcome decision, it is difficult to determine why syndrome evidence is approved at a much higher rate than polygraph evidence, because the former is also not a simple accept/reject proposition for the courts. For RTS in particular, there may be more than one purpose for the evidence. It could be used to help prove that a rape occurred, or it could be used as evidence of a lack of consent to an acknowledged sexual encounter. In addition to the multiple outcome categories, the far smaller number of cases makes rigorous analysis more difficult.

Some comparison between the approaches to each of the types of evidence is presented in the individual chapters, but Harris follows this with a short chapter that provides further comparisons. The concluding chapter is mainly a “where do we go from here” discussion.

The main strength of the book lies in the empirical analysis of each of the three types of evidence. Even without a comparison across the types, each analysis is informative and interesting. The bivariate examination of the identified factors is quite useful. Also interesting is the reaction of the state courts to science that is more complicated. The science behind forensic DNA may be highly technical, but it is straightforward. Such evidence is also used for a fairly simple purpose. As such, courts have a relatively easy accept or reject decision. In contrast, the more interpretative nature of polygraph evidence, as well as the possibility of stipulations, makes the decision more complex. Similarly, the multiple purposes for which RTS evidence might be admitted add further difficulty to any attempt to apply a scientific standard.

There are some weaknesses to the book. One could quibble about minor things, but three items are worth noting.

First, there seem to be some important aspects of evidence that are not mentioned. For forensic DNA, Harris notes that for every case in the analysis the evidence was a tool for the prosecution. Of course, forensic DNA [*310] has been used by defense advocates to free those convicted of crimes, many of whom were on death row. One could understandably exclude these cases from the analysis, but some discussion of this would have been appropriate. Similarly, more explanation would be welcome regarding two aspects of polygraph evidence. On the question of reliability, Harris notes that studies find it to be reliable between 70% and 90% of the time. Even at the high end of that range, is it enough to carry the burden of proof under the “beyond a reasonable doubt” standard? Judges may reject polygraph evidence because it does not meet the legal standard, as opposed to a scientific standard. A second issue with polygraph evidence concerns the Fifth Amendment right against self-incrimination. Even to the extent that a defendant may stipulate that the results may be used at trial, the Fifth Amendment is implicated, and judges may look ahead to what that may mean in a variety of judicial contexts.

Second, Harris indicates how the cases were identified using a LEXIS search. She then made a judgment whether to include a case based on the legal issues. I performed a search on one aspect of RTS and identified two cases not included in her analysis. One could understandably be rejected based on the actual grounds for decision, but the other seemed appropriate for inclusion. Thus, more information on the inclusion criteria would have been helpful. Along similar lines, intermediate appellate court decisions for which the state high court denied review probably warrant assessment. If there were no other cases from a state in the database, perhaps such lower court decisions served as the law for that state. Either way, some discussion of this issue would have been helpful.

Third, some external factors directly affecting the legal analysis should have been noted early on. In the concluding chapters, Harris indicates that some legislatures passed laws specifically related to one or more types of evidence. Even if this did not result in any state courts changing their positions, it should have been mentioned as part of the courts’ decisional landscape. Similarly, did the US Supreme Court hear appeals on any of the included cases? Evidentiary questions may be primarily a matter for state courts, but it is possible some federal constitutional right could be implicated.

On the whole I found much of value in the book. It would have been nice had the analysis for each of the three types of evidence been longer and more thorough, but if one views it as a preliminary empirical examination, the book functions quite well as an introduction both to these types of evidence and to how state courts deal with emerging science. As an initial cut at these topics, BLACK ROBES, WHITE COATS does a good job of laying the groundwork for more rigorous analysis to follow.

Caudill, David S., and Lewis H. LaRue. 2006. NO MAGIC WAND: THE IDEALIZATION OF SCIENCE IN LAW. Lanham, Maryland: Rowman & Littlefield Publishers, Inc. Co-published with The Center for Public Justice. [*311]

Foster, Kenneth R., and Peter W. Huber. 1999. JUDGING SCIENCE: SCIENTIFIC KNOWLEDGE AND THE FEDERAL COURTS. Cambridge, Massachusetts: MIT Press.


DAUBERT v. MERRELL DOW PHARMACEUTICALS, INC., 509 U.S. 579 (1993).

FRYE v. UNITED STATES, 293 F. 1013 (D.C. Cir. 1923).

© Copyright 2009 by the author, Timothy M. Hagle.