by Sean G. Overland. El Paso, TX: LFB Scholarly Publishing, 2008. 178pp. Cloth. $62.00. ISBN: 9781593323288.

Reviewed by Randolph N. Jonakait, Professor of Law, New York Law School. Rjonakait [at]


A civilized society needs a civilized means of resolving disputes. Juries are at the heart of our method. Hundreds of times every day juries make decisions that affect not only the lives, fortunes, and futures of individuals and organizations but also those of society generally. Even so, while anecdotes about juries abound, good research about how juries, our most direct form of participatory democracy, reach decisions is scarce. By presenting new data about juror decision-making in civil cases, Sean G. Overland’s new book, THE JUROR FACTOR: RACE AND GENDER IN AMERICA’S CIVIL COURTS, provides a valuable, but limited, addition to the research literature.

Dr. Overland, a trial consultant who assists clients with case strategy and jury selection, precedes that data with a solid summary of jury research. He reports that most studies conclude that “jurors do a very good job of understanding case evidence and using that evidence to reach decisions” (p.4), and that judges agree with the jury verdicts in a large majority of cases. Furthermore, while it may seem intuitive that the racial and gender composition of juries will affect the verdicts, “most of the academic research on juror decision-making has reached the rather surprising conclusion that jurors’ personal characteristics, including their race, gender, socioeconomic status and so on, have relatively little, if anything, to do with their verdicts in most trials” (p.11). Instead, studies consistently show that “the most powerful determinant of a juror’s verdict in both civil and criminal cases is the strength of the competing evidence. . . . Several studies have confirmed that the litigant that presents the strongest case is likely to prevail, regardless of any other factors that might affect jurors’ decision making, including the demographic composition of the jury” (p.12).

Overland notes that researchers do not have access to jurors during trials. Instead, data are usually derived from one of three alternative sources. All have their limitations.

Interviews following the completion of juror service are time-consuming and expensive, depend on jurors’ memories and self-reported perceptions that may be faulty, and usually yield sample sizes too small for statistically significant results.

Trial simulations or mock trials, the most common source of data on jury decision-making, can control variables, such as the evidence presented, in order to study factors of interest, such as the demographic characteristics of the jurors. Many such studies, however, have used convenience samples of research participants, most often college [*781] students, who do not accurately represent the pools from which real jurors are drawn. The quality of the evidence presented in mock trials varies widely, from brief, written descriptions of a trial to lengthier audiotaped and videotaped presentations. In contrast to an actual trial, the mock jurors do not make decisions with real-world consequences, and many of the studies collect information from the individual participants without having them deliberate in panels. Consequently, doubts exist about whether the information gathered from mock trials really applies to actual trials.

Archival data about actual verdicts along with census data from the trial venues allow researchers to seek correlations between verdicts and awards and the venues’ population characteristics. Such information, however, is limited. Not all jurisdictions’ verdicts are available, and the collected information usually reports only the verdict, the size of any award, and the general type of case. The characteristics of the juries are not catalogued, and the correlations with the census data are valuable only with the assumption that the demographic data accurately represent the jury pools and trial juries. Furthermore, as Overland states, “a major drawback of such data is the lack of any control over the types of cases heard or the evidence presented. . . . [B]ecause these data sets include only the most general information about each trial – typically just a broad classification, such as ‘medical malpractice’ or ‘product liability’ – little or no control is possible over the variations presented in each case” (pp.37-38).

Overland further maintains that much of the research suffers from unsophisticated statistical analyses and concludes that good “data [about jury decision-making] are extremely rare, and this scarcity of reliable data may have led researchers to some incorrect conclusions about the relationship between jurors’ characteristics and their verdict decisions” (p.39). He contends that trustworthy information about civil juries is especially scarce. He concedes that the bulk of research on civil and criminal trials alike finds little evidence of correlations between those characteristics and verdicts, but he also reminds us that the majority of jury research has concerned criminal trials. He notes that the decisions jurors make in civil cases may allow juror demographics to play a greater role than in criminal verdicts.

Only a small percentage of filed cases result in a trial on either side of the justice system, but the prosecutor has great control over which criminal cases go to trial. As a result, the evidence in criminal trials usually favors the prosecutor, and, not surprisingly, most often the result is a guilty verdict. In contrast, neither civil party has predominant power in settling cases. As a result, civil liability verdicts tend to be equally split between plaintiffs and defendants. Since civil cases are often more closely contested than criminal ones, it is possible, as Overland maintains, that “jurors must rely to a greater degree on their own intuitions, experiences and personal judgments when reaching a [civil] verdict, [and] their personal beliefs will have the greatest impact on verdicts” (pp.13-14). [*782]

In addition, in civil cases jurors are often asked to do more than merely determine what happened. In a negligence case, the jurors must decide whether the defendant acted as a “reasonable” person under the circumstances; in a products liability case, the jurors may have to determine whether a product was “unreasonably dangerous.” This qualitative assessment of “reasonableness,” Overland concludes, allows for an “inherent subjectivity in the civil verdict decision [that] opens the door for jurors’ personal views to play a greater role than they might in a criminal trial” (p.44).

Overland’s summary of jury research, its limitations, and the differing nature of civil and criminal verdicts, while unsurprising to those familiar with jury studies, should be of interest to others. The real value of THE JUROR FACTOR, however, is the new data presented about juror decision-making in civil cases. They come from the archives of a litigation consulting firm that runs and analyzes mock trials for corporate clients facing major civil litigation. Through random-digit telephone dialing and similar methods, mock jurors from the actual trial venues were recruited to provide a representative sample of the jury pool. Participants supplied both demographic information and, through multiple-choice questions, their attitudes about such things as politics, lawsuits, and corporations. Then through videotapes or live performances, attorneys presented condensed versions of the evidence to the mock jurors, with presentations varying from one-hour summaries to three-day sessions. Finally, the participants answered questions about their reactions to the case, including their assessment of who should win.

Three different sets of civil cases, analyzed separately, generated the data. In eight mock trials, plaintiffs claimed that accidents with resulting personal injuries were caused by a design defect in an automobile. In nine trials, plaintiffs alleged that a prescription medicine caused severe side effects that were not disclosed by the manufacturer. And in seventeen instances, plaintiffs argued that an accounting firm’s malpractice caused financial losses.

In all three data sets, a higher percentage of blacks than whites and a higher percentage of women than men favored the plaintiffs, although the differences varied by the case. A multivariate analysis, however, that considered income, education, age, political ideology, and attitudes towards big business and lawsuits showed a more complex picture. Each factor, varying by the specific case, correlated with the verdicts. The strongest factor in each case was the attitude toward business. The more a juror trusted big business, the more likely the juror was to find for the defendant. The second strongest factor in both the car and drug cases was the attitude towards litigation. The less a juror thought lawsuits were an effective way of resolving disputes, the less likely the juror was to find for the plaintiff. However, even after other factors were controlled for, race and gender still correlated with the verdicts.

In addition to these mock jury studies, THE JUROR FACTOR also presents new data about the deliberations of juries. Other research has cast doubt upon the importance of deliberations. [*783] These studies show a “majority effect” where the ultimate verdict usually follows the jurors’ first vote, indicating that deliberations after the first ballot seldom affect the outcome. Overland summarizes one such study that found if the initial ballot favored conviction, the jury convicted in 151 of 160 trials, while if that first ballot favored acquittal, the jury acquitted in 37 of 49 trials.

Overland’s deliberation data come from post-verdict juror interviews after eleven trials involving an alleged automobile design defect. Nine of the verdicts followed the majority’s first vote, eight for the defense and one for the plaintiff. The other two juries were equally split on the first ballot, and both found for the defendant. Overland also reports that non-white jurors were more likely to switch votes during deliberations than whites, but the correlation was small. No significant correlation was found between gender and vote-switching. In addition, those initially favoring the plaintiff were more likely to change their vote than those first finding for the defendant.

The literature suggests two reasons why jurors may switch their votes. Jurors in the minority might succumb to social pressure and switch their votes. Or jurors may change their opinions as they gain new information and perspectives from other jurors. In Overland’s sample, jurors overwhelmingly said that new information caused their vote change.

Furthermore, Overland suggests that his data may show a “leniency bias” in civil cases. This term comes from criminal jury studies reporting that when a jury is equally split on the first vote, it is more likely to return a verdict of acquittal. Since the two 6-6 civil splits also ended with defense verdicts, the author suggests that a leniency bias may also affect civil cases.

Classifying a verdict for an automobile company as a leniency bias, however, ought to give pause. The data come from jurors who rendered ten defense verdicts out of eleven. This is not generally representative of civil cases, but seemingly a sample where the evidence strongly favored the defendants, and if so, generalizations from it are suspect. Thus, it does not seem surprising in such a data set to find that jurors who first voted for plaintiffs were more likely to switch their votes than the other group. Surely, if the data had come from trials where plaintiffs won 90% of the time, jurors initially favoring the defense more likely would have changed to favor the plaintiff. That the two equally split juries returned verdicts consistent with the overwhelming majority of the other juries perhaps only shows that those favoring the defense had the better of the arguments, not that a leniency bias was operating.

In contrast, the mock trial data do seem more reliable than the data from many other jury studies, as Overland maintains. “The use of a representative sample of ‘real’ people recruited from the trial venue, combined with real attorneys and realistic courtroom conditions, increases confidence in the validity of the conclusions drawn from an analysis of the data” (p.57). However, while the data from each of the three kinds of trials were reported separately, it is not clear that the trials within each set could be validly aggregated. In each set there appear to be multiple, uncontrolled [*784] variables. For instance, the mock trials were held in different parts of the country. Can we assume that jurors with similar demographic and attitudinal characteristics from Bethesda, Maryland, Laredo, Texas, Union County, New Jersey, and Riverside, California (some of the locations for the mock drug trials) all behave the same, so that the data from these disparate places can be validly lumped together? Or do the locations of the jurors matter, as many attorneys seem to believe? Overland does not tell us.

The evidence presentations within each set seem to have varied from a one-hour videotape to an extended three-day session with live attorneys. We cannot tell from the presented data whether the differing forms correlated with outcomes. The plaintiffs’ cases were not all the same. For example, in the prescription drug cases, the severity of the claimed injury varied. Moreover, the length of time the plaintiffs used the drug and their pre-existing conditions were not constant among the trials. Overland does not report in detail what those variations were, but it is possible that cases of significantly differing evidentiary strength were presented in this set. For example, if two plaintiffs claimed that a drug caused heart arrhythmia, but one used the drug for a week and had a pre-existing heart condition while another used the drug for a year and did not have the pre-existing problem, jurors, because of the evidence, might react differently to the two cases. Such a possibility could have affected the conclusions reached by Overland. For example, 1% of the Charleston, West Virginia, participants were black, while in Houston 60.8% of them were black. If, however, the Houston evidence presented a stronger plaintiff’s case than did the Charleston one, then aggregating the two might lead to the conclusion that blacks were more likely to favor the plaintiffs when the varying strengths of the cases actually produced the apparent racial effect.

In other words, even though Overland’s mock trials do seem more realistic than many other ones, variables that were not controlled could have affected the data. It simply cannot be concluded that the results Overland found truly apply to actual trials. This is a fundamental issue for mock jury studies generally.

The author, however, certainly wants to conclude that his data are generalizable. He converts regression coefficients into “verdict probabilities.” In the car accident case, for example, he concludes that holding other factors constant, women would find for the plaintiff 49% of the time compared to 32% of the men; blacks would find for plaintiffs at a 22% higher rate than other racial groups; and low-income jurors would favor plaintiffs 47% of the time compared to those with high income at only 31%. The author also concludes that the attitudinal variables produce even more striking differences, and he notes, “These are large substantive differences, and directly challenge the findings in the literature that jurors’ demographic characteristics have no effect on their verdicts in civil trials” (p.73).
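For readers unfamiliar with this kind of conversion, the following sketch illustrates the general technique of turning regression coefficients into predicted probabilities, assuming a logistic model. The coefficients below are hypothetical numbers chosen for illustration; they are not Overland’s estimates.

```python
import math

def verdict_probability(intercept, coefs, juror):
    # Logistic model: P(plaintiff verdict) = 1 / (1 + exp(-(intercept + sum of b_i * x_i))).
    logit = intercept + sum(coefs[k] * juror[k] for k in coefs)
    return 1 / (1 + math.exp(-logit))

# Hypothetical coefficients for illustration only -- not taken from the book.
intercept = -0.4
coefs = {"female": 0.7, "black": 0.9, "trusts_big_business": -1.1}

# Holding the other factors at their baseline (zero), compare two jurors
# who differ only in gender.
p_male = verdict_probability(intercept, coefs,
                             {"female": 0, "black": 0, "trusts_big_business": 0})
p_female = verdict_probability(intercept, coefs,
                               {"female": 1, "black": 0, "trusts_big_business": 0})
print(round(p_male, 2), round(p_female, 2))
```

Percentage-point gaps of the kind the book reports (49% versus 32%, for instance) are produced in just this way: fix every other variable at a reference value, toggle the characteristic of interest, and read off the two predicted probabilities.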

Certainly such a conclusion has an important consequence for our justice system. If juror attitudes and demographics correlate with verdicts, then lawyers selecting juries will wish to eliminate potential jurors based on these characteristics. Attorneys have to act [*785] with the available information, and information about potential jurors’ attitudes is often limited while demographic factors are often obvious. This leads Overland to ask, “Would striking jurors based on their race or gender – given that those easily observed traits may serve as rough proxies for attitudes – constitute unwanted and unfair discrimination. . . .?” (p.80).

The author then summarizes the Supreme Court cases holding that neither party in either a civil or criminal case can exclude potential jurors based on their race or gender through the use of peremptory challenges, a discussion which should be useful to those not familiar with these opinions. Critics, however, have maintained that the Court’s framework for interpreting and enforcing this principle has made it too easy to circumvent and that lawyers continue to challenge based on race and gender. Perhaps so, but meaningful data on this issue have not been collected.

This issue, however, highlights a difficulty about drawing inferences from mock jury studies. Jurors only serve after having passed the challenge process. Any potential juror who cannot impartially decide a case based on the evidence to be presented can be struck for cause. In addition, attorneys have peremptory challenges to strike a set number of potential jurors who have survived the for-cause process. Attorneys can eliminate biased jurors, and this may include those with the kind of characteristics that most strongly correlate with verdicts in jury studies. Furthermore, if jury studies merely produce findings that coincide with the folk wisdom of attorneys, they will have little effect on jury selection. If the intuitions of plaintiffs’ attorneys in Overland’s cases, for example, were that whites and those who had favorable opinions of big business were less likely to find liability than others, then their use of peremptory challenges will tend to balance out the defense’s use of the challenges. Potential jurors whose characteristics most strongly indicate a particular verdict will be eliminated.

And, of course, real jurors do deliberate. While the so-called “majority effect” can be interpreted to conclude that deliberations hardly matter, the data reported here indicate that verdicts in 10% of the cases do not follow the initial ballot, indicating deliberations can affect a large number of cases. Furthermore, the prospect of jury deliberations must be considered. Jurors expect deliberations to be important, and they know they will have to take a stand, justify their position, and perhaps try to persuade others. They cannot expect justifications based on gender, economic status, or ethnicity to be convincing. Instead, what the jurors all share is the trial itself. They expect to discuss that, and this can affect how they pay attention to and process the presented evidence and arguments. As I have written elsewhere, if our system returned verdicts based upon the majority vote after the first ballot, jurors would only have to decide how they were going to vote. “They would not have to prepare themselves to justify their decision or persuade others to endorse it. Surely jurors’ behavior would be different under such a system. Because they would not have to construct defensible and convincing positions from the evidence, they would not have to pay much attention to it. The evidence would have less primacy, allowing [*786] extralegal factors more play. Thus, the expectation of deliberations brings a critical focus on the evidence” (Jonakait 2003: 232).

Several studies support this. Research has shown that after deliberations jurors recall the evidence more accurately than without deliberation. Mistaken views of the evidence by an individual juror are corrected by others. In other words, the jurors debate and focus on the evidence.

Indeed, Overland reports another study that found the composition of the jury mattered in deliberations. More information was exchanged in racially diverse juries than in all-white ones. The author notes, “[J]ust the prospect of deliberating with a heterogeneous group seemed to affect white jurors’ judgments, as white jurors in diverse groups were more lenient toward black defendants in their pre-deliberation verdicts than were white jurors who knew they would be part of an all-white jury deliberation panel” (p.128).

This last finding could be important in a number of ways. It indicates that at least for race, how the information from trial is absorbed may be affected by the prospect of deliberating with a heterogeneous group. Perhaps that dynamic is more extensive. Are the first votes of those who are pro-big business affected when they know they will deliberate with those who hold other views? Are the first votes of women affected when they expect to deliberate with men? If so, mock jury studies that collect verdict preferences when the participants do not expect to deliberate may be overstating any correlations uncovered between the jurors’ characteristics and the outcomes.

We do know a crucial fact about outcomes after deliberations that should not be overlooked. Overland’s data about verdict probabilities suggest that, if twelve women comprised the jury in the car accident case, the jurors would split equally on liability. We know, however, that even when juries must be unanimous, a verdict is almost always rendered. Hung juries are rare. A prediction based on jury characteristics that half the jury would find for the plaintiff and half for the defendant almost always turns out to be false.

We still do not know how mock jury studies relate to real trials or whether the work of jury consultants can affect verdicts, and it is disappointing that THE JUROR FACTOR does not do more to advance knowledge on these issues. The presented mock jury data were not amassed as an academic exercise, but apparently collected to help attorneys representing corporations in real trials. If trials did result, presumably the mock jury data were used to make predictions about specific potential jurors that were considered in the jury selection. What predictions were made? What, if anything, can be said about the predictions in light of verdicts actually rendered? If there were a number of trials, presumably the juries’ demographic and attitudinal compositions were not all the same. Did results vary? If so, how, and how did those variations jibe with any predictions made about potential jurors?

If such additional information were available, it would have been valuable both in assessing the external validity of the jury studies and in learning whether lawyers aided by jury consultants can truly affect verdicts. Even so, however, [*787] Sean Overland’s book presents new, interesting data and summarizes well existing research. Thus, it is valuable to those interested in the way juries behave.

Jonakait, Randolph N. 2003. THE AMERICAN JURY SYSTEM. New Haven: Yale University Press.

© Copyright 2009 by the author, Randolph N. Jonakait.