WHY JUDGES APPLYING THE DAUBERT TRILOGY NEED TO KNOW ABOUT
THE SOCIAL, INSTITUTIONAL, AND RHETORICAL (AND NOT JUST THE
METHODOLOGICAL) ASPECTS OF SCIENCE
Abstract: In response to the claim that many judges are deficient in their understanding of scientific methodology, this Article identifies in recent cases (i) a pragmatic perspective on the part of federal appellate judges when they reverse trial judges who tend to idealize science (i.e., who do not appreciate the local and practical goals and limitations of science), and (ii) an educational model of judicial gatekeeping that results in reversal of trial judges who defer to the social authority of science (i.e., who mistake authority for reliability). Next, this Article observes that courts (in the cases it analyzes) are not interested in pragmatically constructing legal science, but rather attempt to ensure that science itself, conceived pragmatically (i.e., without idealizing science), is appropriated in law. This Article concludes that trial judges who fail to appreciate the social, institutional, and rhetorical aspects of science tend to reject reliable (albeit pragmatic) science, welcome unreliable (albeit authoritative) science, and thereby create a body of legal science that is out of sync with mainstream science.
[M]any of the luminaries of physics, from Bohr and Heisenberg on down, took the radical step of denying the existence of an independently existing physical world altogether, and, surprisingly, got away with it. In other, i.e. nonscientific, contexts, the difference between those who are committed to an independently existing reality and those who are not is roughly correlated with the distinction between the sane and the psychotic.1
In Rebecca Goldstein's popular novel about quantum physicists, her protagonist Justin Childs is enraged by "the nonsense . . . that measurement creates reality, so that it is simply meaningless to ask what's going on when no measurement is taking place."2 Later in the story, however, he learns (from his mentor, who failed in his objectivist challenge to Bohr and Heisenberg) something about the way science works:
[T]he last thing in the world I ever expected was to be ignored. . . . I thought that it was only the objective merits of the work itself that mattered, especially in science. If not in science, then where else? . . . I didn't know how things really work . . . , how it gets decided what should be paid attention to . . . . The big shots decide and the little shots just march lock-stepped into line.3
These brief literary representations capture what is going on nowadays in the so-called "science wars"4: on one side are the believers in science as an enterprise that reports on natural reality, or at least successfully represents nature with models that correspond to reality; on the other side are those who view science as a social, rhetorical, and institutional enterprise that only manages to convince us that it deals in natural reality. Because the latter position, that reality is constructed (not discovered) by scientists, is so counterintuitive, it sounds nonsensical, almost psychotic, to believers in science. And yet, if the social, institutional, and rhetorical structures of the scientific enterprise, rather than nature, effectively determine what gets paid attention to,5 then reality as we know it is to some extent constructed.
Does this academic debate among philosophers, historians, and sociologists of science really matter? After all, science progresses without regard to the science wars, and scientists are likely oblivious to the concerns of social constructivists, who do not seem to be providing useful insights to the scientific enterprise. We will argue, however, that the science wars are significant for law: the issues raised in that debate provide insights as to what trial judges need to know about science to carry out their gatekeeping role with respect to proffered expert testimony. Moreover, the position a judge takes, perhaps unwittingly, with respect to the status and authority of science, actually matters: cases are often won or lost on the basis of scientific evidence, and appeals are so costly that a trial judge's understanding of science is often determinative.
Given the privileged position of science in law as a stabilizer of legal disputes, one might assume that in the regime created by the Daubert Trilogy, the courtroom is closely aligned (with respect to the science wars) with the believers in science.6 Indeed, some commentators have suggested that after the 1993 U.S. Supreme Court decision in Daubert v. Merrell Dow Pharmaceuticals, Inc., the legal culture must assimilate the scientific culture;7 Michael Saks has even suggested that admissibility decisions as to most scientific evidence should be treated as matters of law: "the facts of science have quite a trans-case and law-like nature."8 Such comments suggest that expertise is grounded in reality, and is decidedly not a matter of rhetoric or social construction. Courts should therefore, from this vantage, defer to science. Conversely, some commentators suggest that science is just another cultural activity, like law, such that deference is not appropriate; the law can and should construct its own legal science, which need not be considered inferior, because mainstream science is also a construction.9 In a similar formulation, some argue that scientific knowledge is reconstructed and framed in court, where the scientific method is a representational device that, like other normative images (for example, "general acceptance," or "peer review and publication"), is better understood as an "ex post facto explanation" and "a professional rhetoric."10 The science wars, it appears, have arrived in legal discourse.
In our view, the science wars present a false dichotomy to which the law should not submit. Believers in science idealize the scientific enterprise to a degree that the inevitable social, institutional, and rhetorical aspects of science (its pragmatic features) are neither acknowledged nor discussed. Legal commentary on the Daubert Trilogy is dominated by such idealization,11 thereby marginalizing social studies of science in legal scholarship.12 Oddly, however, although social constructivists do not idealize science, they do idealize the social, institutional, and rhetorical aspects of science to a degree that the successes of science are either ignored or eclipsed. Neither option is particularly attractive, which leads many philosophers and social analysts to conclude that science is both productive of knowledge about the world and a social, institutional, and rhetorical enterprise.
Commenting on the origins of Western science, anthropologist Johannes Fabian has written that historically:
Western science derives from an earlier art of rhetoric, chronologically (i.e., with regard to the sequence of developments in our tradition), as well as systematically (regarding the nature of scientific activity). Paul Feyerabend goes as far as declaring that propaganda belongs to the essence of science, a view also held, but less outrageously formulated, by T. S. Kuhn in his theory of scientific paradigms. Far from dismissing science as mere rhetoric (a hopeless attempt in view of its practical and technological triumphs), this position states the obvious fact that all sciences, including the most abstract and mathematized disciplines, are social endeavors which must be carried out through the channels and means, and according to the rules, of communication available to a community of practitioners . . . .13
Although that perspective does not represent either extreme in the science wars (it does not sufficiently idealize either science or social determinants), recent fieldwork in the public understanding of science confirms that "[l]ay attitudes towards science, technology and other esoteric forms of expertise . . . tend to express the same mixed attitudes of reverence and reserve, approval and disquiet, enthusiasm and antipathy, which [many] philosophers and social analysts . . . express in their writings."14 Even more surprising than the fact that the public's reaction to science is mixed is our finding that many federal judges are just like the public: (i) more willing to view science as an enterprise with local and practical goals and limitations, therefore (ii) less willing to idealize or defer to science than are the believers in science (in the science wars), but (iii) nevertheless willing to appropriate science, as a pragmatic enterprise, when it is a reliable producer of useful knowledge.
In Part I of this Article, we enter the discourse of what judges need to know about science by reference to (and criticism of) the recent survey concluding that gatekeeping judges are deficient in their understanding of scientific methodology.15 In Part II, while observing the occasional lapse in methodological understandings, we identify in recent cases a particular pragmatic perspective among many federal appellate judges; that pragmatic perspective is further elucidated by our analysis of cases in which trial judges tended to idealize science.16 These trial judges were reversed because they were too strict and did not appreciate the practical goals and limitations of science. In Part III, we introduce the parallel notion that in the Daubert Trilogy, the U.S. Supreme Court adopted an educational, rather than a deferential, model of judicial gatekeeping; we then demonstrate that some appellate judges in recent cases recognize that the social authority of expertise can often become disengaged from its reliability, thus confirming that they are not idealizing science.17 These appellate judges reversed trial judges who were too lenient due to their idealization of scientific authority. In Part IV, we observe that courts (in the cases we analyze) are not interested in pragmatically constructing legal science, but rather attempt to ensure that science itself, conceived pragmatically (i.e., without idealizing science), is appropriated in law.18 Part V explores this apparent breakdown of the methodological/social dichotomy (which persists in legal scholarship's version of the science wars) in recent federal jurisprudence.19 We conclude that trial judges who fail to understand and appreciate the social, institutional, and rhetorical aspects of science tend to (i) reject reliable, albeit pragmatic, science, (ii) welcome unreliable, but authoritative, science, and (iii) thereby create a body of legal science that is out of sync with mainstream science.
Survey results demonstrate that . . . . many of the judges surveyed lacked the scientific literacy seemingly necessitated by Daubert [v. Merrell Dow Pharmaceuticals, Inc.].20
The ongoing discourse concerning what judges need to know about science in order to evaluate the admissibility of expert testimony21 is premised on the notion that the Daubert Trilogy has been problematic: judges have difficulty operationalizing the Daubert criteria and applying them, which highlights the potential for inconsistent application of the Daubert guidelines.22 Indeed, the very nature of the Daubert revolution seems to be a matter of serious disagreement among the 400 state trial court judges who were recently interviewed for an article entitled Asking the Gatekeepers:
One third of the judges surveyed . . . believed that the intent of Daubert was to raise the threshold of admissibility for scientific evidence, whereas 23% . . . believed that the intent was to lower the threshold . . . . Just over one-third . . . believed that the intent . . . was to neither raise nor lower the threshold . . . . The remaining judges (11% . . . ) were uncertain as to the Supreme Court's intention.23
As to science itself, the same survey indicated that most judges lack a clear understanding of falsifiability and error rate, leading the authors to conclude that judges "need to be trained to be critical consumers of the science that comes before them."24
Provincially speaking, of course, judges need to know more about everything, including science. The authors of the aforementioned survey, nevertheless, have produced a striking picture of confusion in the wake of Daubert.25 The blame, according to the survey authors, lies partly with the Daubert opinion (and the Court's failure to provide guidance as to the gatekeeping role), and partly with judges who generally lack scientific literacy.26 Offering scientific training for judges impliedly solves both problems, because an understanding of science makes the Daubert guidelines clear (assuming the Daubert guidelines represent science).
From the perspective of those who do fieldwork in the public understanding of science, this recent survey is typical of the "deficit model" in traditional, quantitative studies.27 Science is presumed, in such research, to be secure and measurable knowledge that the ignorant public lacks and needs; the remedy is usually conceived to be more science education. This conception re-invokes the image of cognitive content to be delivered into a repository characterized by its social or communal features.28 More recent interpretationist, qualitative fieldwork indicates that the public uptake of science involves two communities, (i) the scientific enterprise and (ii) the local public being advised, each of which possesses socially grounded, conditional and value-laden knowledge.29 The public, these studies have shown, is not simply ignorant, but also suspicious about the interests of scientists, and aware of scientific controversies, inconsistencies, and errors.30 This critically reflective model of the public understanding of science is an attempt to identify the social relations of trust and credibility that affect the public reception of scientific knowledge. Judges not trained in science, who are used to seeing experts disagree, are more like the public than like amateur scientists, and their relationship with science is more complex than the deficit model, exemplified in the recent survey, suggests.
How does one determine what judges are doing with the Daubert Trilogy?31 And how does one assess how well they are doing it? Of course, the second question (evaluation) cannot be asked until the first is answered, and so it is sensible to begin by trying to find out the empirical facts: What are the judges doing? Of course, finding such facts is not easy, but scholars have a duty to inquire, to attempt to discover what is happening. The six scholars who reported their findings in Asking the Gatekeepers made one such attempt to find the empirical facts of the matter by asking judges what they do.32 In the course of this Article, we will be highly critical of that study: we think that the six asked the wrong questions, and furthermore, we think that the basic methodology of their survey is flawed. Yet even though we are critical, we think that the study is important, because we also think that the views presented in that study represent an important position, one that has some social power. And because we think those powerful views are erroneous, we wish to take them seriously.
In order to ask judges what they are doing, one must proceed in a systematic way, and the six scholars that we wish to confront certainly satisfy the requirement of proceeding systematically. They describe their methodology with admirable clarity.33 First, they used the standard sources to generate a representative sample of the judiciary. Their sample is what statisticians would call a stratified random sample, and we agree that using a stratified sample, rather than a simpler sampling technique, was the correct way to proceed.34 As they state, by using a stratified sample they were able to ensure that their sample was representative of "[both] geographical distribution of judges and levels of court."35
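For readers unfamiliar with the technique, proportional stratified sampling can be sketched in a few lines of code. The strata, judge identifiers, and sampling rate below are our own hypothetical illustrations, not the survey's actual sampling frame or rates; the point is only to show how stratification guarantees that every region/court-level combination is represented in proportion.

```python
import random

def stratified_sample(frame, rate, seed=0):
    """Draw a proportional stratified random sample.

    frame: dict mapping stratum name -> list of members
    rate:  sampling fraction applied within each stratum
    """
    rng = random.Random(seed)
    sample = {}
    for stratum, members in frame.items():
        # Sample each stratum separately, so none is left out by chance.
        k = max(1, round(len(members) * rate))
        sample[stratum] = rng.sample(members, k)
    return sample

# Hypothetical frame: judges grouped by region and level of court.
frame = {
    "Northeast/trial": [f"judge_{i}" for i in range(200)],
    "Northeast/appellate": [f"judge_{i}" for i in range(200, 250)],
    "South/trial": [f"judge_{i}" for i in range(250, 500)],
    "South/appellate": [f"judge_{i}" for i in range(500, 560)],
}

sample = stratified_sample(frame, rate=0.10)
for stratum, judges in sample.items():
    print(stratum, len(judges))
```

A simple random sample of the same overall size could, by bad luck, miss the smaller strata entirely; sampling within each stratum removes that risk, which is the virtue the survey authors invoke.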
The six also pre-tested their survey instrument, which is a standard precaution. They tried it out on focus groups of judges who were attending classes at the National Judicial College and the Judicial Studies Program. As it turns out, the preliminary version of the survey ruffled the feathers of the judges because they thought they were being tested.36 Of course, the judicial reaction was accurate; the whole point of the survey was to test how well the judges understood their job. On the face of the survey, it was a simple empirical inquiry; but a fair reading of the study will reveal that the six do not wish merely to report on what judges are doing; they also wish to evaluate judicial performance. But discretion is often wise, and so the six revised their survey questions to make them more diplomatic.
The next step was to try to get the sample of judges to cooperate. To do this, the six proceeded in a sensible way. They sent out an introductory letter and then followed up with a phone call. In the phone call, the agenda was to persuade the judges to participate in an interview and to schedule an interview if the persuasion was successful. For the most part, it appears that the persuasion and the scheduling went well.37
The tricky part of the process was to train the interviewers how to ask the questions and how to code the answers. Asking the questions was no simple matter. Of necessity, the questions were open-ended, and so the interviewers had a difficult task. Depending on what the answer was, different sorts of follow-up questions were appropriate. We recognize that it is extremely difficult to execute well the complicated agenda that the six set for themselves in the conduct of the interviews and the coding of the answers, and we do not wish to quibble about administrative details. We are quite confident that we could do no better. Our criticisms of their study do not go to the technical details of administration. We wish to criticize the questions they asked, not the details of how well they asked these questions.
The question that the six authors address in their survey is: How well do the judges understand the Daubert criteria?38 As two of the four criteria, peer review and general acceptance, do not cause anyone any problems, the survey reduces to the question: Do judges understand the concepts of falsifiability and error rate? Because we think there is a subtle but important error here, let us quote a key passage from the study:
In order [for a judge's responses] to be coded as "judge understands concept" for any Daubert criterion, the judge had to refer to the central scientific meaning of the concept. For example, with respect to falsifiability, in order for a response to be coded as "judge understands concept," the judge's response had to make explicit reference to "testability," "test and disproof," "prove wrong" a theory or hypothesis, or "proof/disproof."39
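The coding rule quoted above is, in computational terms, simple keyword matching. The sketch below is our own illustration of that logic; the function name and keyword list paraphrase the study's description rather than reproduce the authors' actual instrument. It also makes vivid the point we argue next: such a rule rewards recall of approved vocabulary, not skill in applying the concept.

```python
# Paraphrased keyword list; the real coding manual is not reproduced here.
FALSIFIABILITY_KEYWORDS = [
    "testability",
    "test and disproof",
    "prove wrong",
    "proof/disproof",
]

def code_response(response: str) -> str:
    """Code a judge's free-text answer about falsifiability.

    A response counts as understanding only if it contains one of the
    approved phrases, regardless of how the judge would apply the
    concept to actual proffered testimony.
    """
    text = response.lower()
    if any(keyword in text for keyword in FALSIFIABILITY_KEYWORDS):
        return "judge understands concept"
    return "judge does not understand concept"

print(code_response("It means a hypothesis must offer testability."))
print(code_response("Science is whatever the experts agree on."))
```

Note that a judge who rules shrewdly on falsifiability disputes in court, but paraphrases the idea in unapproved words, would be coded as not understanding the concept under this scheme.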
On the face of the matter, perhaps nothing seems radically wrong with the quotation, but we wish to argue that the six are proceeding in the wrong way. To begin, one should note that the project proceeds on the assumption that there is such a thing as "the central scientific meaning" of the relevant concepts, which seems to us a highly oversimplified view of science. Although scientists do adhere to the ideal of falsifiability, the actual concrete meaning of this difficult concept varies across time and across disciplines.40 As we understand it, no scientist would argue that falsifiability is not a crucial concept, and yet one can observe (if one only looks) that scientists disagree vehemently at times about whether a particular hypothesis has or has not been falsified.
The next point to make is that the six are proceeding on the assumption that the judges somehow lack essential knowledge if they do not understand the concept of falsifiability the way that the six understand it. Again, we refer to this assumption as the "deficit model."41 Those who believe in the deficit model postulate an ideal science, and all non-scientists are assumed to be defective to the degree that they do not understand this scientific ideal. But how can one be defective for not understanding the ideal concept if the actual practice of science departs from the idealistic norm?42 As we have already pointed out, when one looks at the actual practice of science, one sees profound disagreements over whether particular hypotheses have or have not been falsified. Is this evidence that scientists themselves understand the concept in different ways?
We imagine that the six might respond to these two criticisms in ways that many will find cogent, for example: "We six understand that the concept of falsifiability is applied in different ways, but the concept itself is not meaningless, even when it is abstract." Assuming that they would say something like this, we willingly agree. "And we six think that we have captured the abstract meaning of the concept." Once again, we are happy to agree. But notice where this leaves the debate. All agree on the abstract meaning (which the six capture reasonably well). The disagreements are in the application. If this is a fair statement of the matter, then one must notice that the real work is being done at the moment the concept is applied. And if the real work is done when the concept is applied, then one must consider the following question: Are all applications of the concept equally good? We will assume that we and the six would agree on a negative answer to this question. Some scientists are better than others. Some apply the concept of falsifiability with more insight than others do.
If one can agree with us that the most important understanding of a concept is demonstrated by one's skill in applying it, then perhaps one can agree with us that the six have made a fundamental mistake in the design of their empirical inquiry. They ask whether judges understand such concepts as falsifiability, yet they are satisfied to evaluate that understanding with no more evidence than the judges' abstract statements of what the concept means; they do not try to observe how the judges apply the concept in court.
Could the six offer the response that the judges could not possibly apply the concept well if they do not at least understand the abstract meaning? We do not believe that this is a good response. Our first objection to this response is to remind the reader of the social context in which the judge operates: judges perform their tasks in a courtroom, not in a study or a laboratory. At trial, the judge would never apply the concept unaided and singly. An expert would testify that a theory being used is truly scientific because it is both falsifiable and has not yet been falsified. The judge would have the benefit of the expert's explanation of this thesis. Furthermore, the expert's explanation would be tested by cross-examination and by the testimony of other experts, who might agree, qualify, or disagree. So the relevant question is not whether a judge can understand the concept unaided, but whether the judge can respond appropriately to disputes over falsifiability when aided by what happens in court. We see no reason to assume that there would be anything other than a totally random correlation between abstract understanding prior to trial and the concrete understanding that would follow a trial.
Our second objection to the response that abstract knowledge is necessary is that even outside the courtroom such an assumption is false. We rest our objection on the well-known distinction between "knowing how" and "knowing that." To take a trivial example, most healthy children are able to learn how to ride a bicycle. But what do they know when they know how to ride a bike? The healthy child does not know that, nor do we, nor do the six, we expect. It takes a highly trained scientific mind to gain an abstract knowledge of the facts of bike riding. Furthermore, the example is not off the point merely because it involves a physical skill. Intellectual skills have the same quality, and for them too, one can know how to do something without knowing what it is that one knows. Merely because one is a highly skilled trial lawyer does not mean that one has a conscious intellectual understanding of what it is that one knows. Indeed, we understand that one of the major research projects in cognitive psychology is to understand precisely what it is that people know when they know how to do all of the things that they are able to do.43
For all of these reasons, we think that the six survey authors have asked the wrong questions. But their fundamental error, for the purposes of this Article, is that they have constructed an abstract ideal of science and then have used that abstraction to criticize what is done with science in the courtroom. Our fundamental disagreement is that one should not start by making this mistake.
We have set forth above our objections to the methodology that the survey authors use in their study of the judges' understanding of science; we would like next to point out the mistakes that we think they make about the law. Recall that their study purports to test whether the judges know enough about scientific methodology to administer the Daubert test. But the study will be misconceived if they misunderstand the law established by Daubert. We think that they do misunderstand the law, and so we wish now to establish this thesis.
We begin with a scruple about terminology: we do not like the term "the Daubert criteria." Instead, we prefer to refer to the Daubert Trilogy. We think that it is an error to focus single-mindedly on the lead case in this trilogy because we believe that such a focus gives an undue emphasis to the four factors announced in Justice Blackmun's 1993 opinion.44 There are clues in Blackmun's opinion that one should not overemphasize the four factors, and the subsequent cases in the trilogy make it clear that the caveats should be taken just as seriously as the four factors. Anyone who overemphasizes the four factors will have an overly simplistic view of the relevant legal principles.
The ignored caveats begin in the paragraph in Daubert that prefaces the discussion of the four factors. After stating that the trial judge must determine two things, whether "the reasoning or methodology underlying the testimony is scientifically valid" and whether "that reasoning or methodology properly can be applied to the facts in issue," Blackmun goes on to write two sentences that are not quoted often enough: "Many factors will bear on the inquiry, and we do not presume to set out a definitive checklist or test. But some general observations are appropriate. Ordinarily . . . ." Here Blackmun begins listing the four factors, starting with testability.45
We think that this language is important. Take it in reverse order. Consider the word "ordinarily." The most plausible meaning of the word is that the factors apply in some, but not all, cases. The word "ordinarily" begins the paragraph that discusses testability, and so in context, one can read this first factor (or the four as a group) as dispensable in some cases. Consider also the phrase "general observations." Are we to discern the meaning of this sentence by contrasting "general" with "specific"? A plausible reading is that Blackmun means to establish general principles as distinguished from specific rules, and yet the six tend to treat the criterion of testability as though it were a specific rule that trial judges must administer.
Finally, note carefully the first sentence quoted above.46 We are told that many factors will bear on the determination that there is valid science that was properly applied. Furthermore, we are told that the list of four factors that follows is not "a definitive checklist or test."47 We find it hard to imagine how Blackmun could have sent a clearer signal that one should not focus obsessively on the list of four factors. And one may recall that after the four factors are listed, Blackmun returned to this theme by reiterating that "the inquiry . . . is . . . a flexible one."48 Given all of this, there would seem to be no grounds for complaining that the four factors are not specific or precise. Of course they are not, which is precisely what Blackmun intended.49
We do not wish to be ill-tempered in this complaint, as in law review commentary it is customary practice to take language from a judicial opinion and treat it as though it were intended to be definitive. Many of these articles proceed hypothetically: if one takes the language quite literally, what would the consequences be? Such a hypothetical inquiry can have considerable value, although it runs the risk of becoming irrelevant. Judges regularly refuse to be tied down by their own words; they retroactively reinterpret what they have said to avoid inconvenient consequences. One cannot wish away inconvenient language in a statute, but judges do wish away words they themselves have written.
At any rate, however one reads Blackmun's initial statement of the four factors, when one reads the entire Daubert Trilogy it becomes clear that his caveats are in fact important. Consider, for example, how one puts together two assertions in Blackmun's opinion that seem to be inconsistent. Blackmun writes that the trial judge must focus "solely on principles and methodology, not on . . . conclusions,"50 and yet he also states that whether the methodology "properly can be applied to the facts in issue" is a matter that the judge must decide.51 We think that one can conclude only one thing: these two sentences do not go well together. So how is this apparent inconsistency to be resolved?
In 1997, in General Electric Co. v. Joiner, the second case in the trilogy, the Supreme Court resolved the tension in favor of the application thesis.52 In this case, the trial judge emphasized the language that asserts that one must decide whether the expert is properly applying the scientific methodology and principles to the case at hand, and thus excluded the expert's testimony.53 The court of appeals reversed, putting emphasis on the methodology and not on the conclusions drawn.54 Either decision is reasonable, but only one can be right. In Joiner, Chief Justice Rehnquist sided with the trial judge by stating that "conclusions and methodology are not entirely distinct from one another."55 This "not entirely" thesis was reinforced and given bite by the following sentence: "A court may conclude that there is simply too great an analytical gap between the data and the opinion proffered."56
One should note that the expert did not use improper methodology in Joiner, so it would appear that the survey authors' focus on methodology is too narrow. The expert in Joiner based his opinion on laboratory animal studies and epidemiological studies.57 All of the judges who looked at the case agreed that this methodology meets the test of "scientific." Yet this was not the problem; the problem was the "analytical gap."58
Unfortunately, the true import of Joiner is less than clear because the procedural posture of the case was a peculiar one. The precise issue before the U.S. Supreme Court was whether the court of appeals had used the proper standard to review the district court's decision to exclude the expert.59 The court of appeals thought that it should review what the trial judge had done de novo, i.e., it should ask whether the trial judge got it right.60 The U.S. Supreme Court disagreed, saying that the customary standard for reviewing evidentiary rulings, which should be followed, was abuse of discretion, i.e., one should ask whether the trial court was in the ballpark.61 Consequently, one could easily be puzzled by how seriously one should take the Joiner opinion.
This puzzle disappeared when the third case in the trilogy was decided. The 1999 U.S. Supreme Court decision in Kumho Tire Co. v. Carmichael establishes that one must take the Joiner case very seriously indeed, and the overall consequence is that one must give equal emphasis to all three cases in the trilogy if one is to understand the law.62 We can start by noting that Kumho Tire begins by declaring that the issue on which the six are focused (i.e., what is the difference between science and non-science?) is often unimportant for judging the admissibility of expert testimony.63 The trial judge must subject all experts, whether they be scientific or not, to the gatekeeping screening.64 At the very outset of the opinion, Justice Breyer stated:
We . . . conclude that a trial court may consider one or more of the more specific factors that Daubert mentioned when doing so will help determine that testimony's reliability. But, as the Court stated in Daubert, the test of reliability is "flexible," and Daubert's list of specific factors neither necessarily nor exclusively applies to all experts or in every case. Rather, the law grants a district court the same broad latitude when it decides how to determine reliability as it enjoys in respect to its ultimate reliability determination.65
In Kumho Tire, an engineer had offered an opinion on how and why a tire had failed, which was relevant to whether the failure was the fault of the manufacturer in the case.66 Justice Breyer restated his opening theme in the following passage, in which he speaks to the diversity of expert testimony:
Engineering testimony rests upon scientific foundations, the reliability of which will be at issue in some cases. In other cases, the relevant reliability concerns may focus upon personal knowledge or experience. As the Solicitor General points out, there are many different kinds of experts, and many different kinds of expertise. . . . We agree with the Solicitor General that "[t]he factors identified in Daubert may or may not be pertinent in assessing reliability, depending on the nature of the issue, the expert's particular expertise, and the subject of his testimony." The conclusion, in our view, is that we can neither rule out, nor rule in, for all cases and for all time the applicability of the factors mentioned in Daubert, nor [*PG18]can we now do so for subsets of cases categorized by category of expert or by kind of evidence. Too much depends upon the particular circumstances of the particular case at issue.67
One could read Kumho Tire rather narrowly as saying that its language has relevance only to people such as engineers; for real science the four factors remain the key. But this reading seems erroneous. Whenever science comes into the courtroom, it comes in not as pure theory, but as applied science, and thus looks much like engineering. Why did this bridge fall down? Is the blood found at the scene of the crime the defendant's blood? And so forth. In all such cases, one travels a long path from pure theory down to a technician in the lab, and the expert in court may combine theory, lab results, personal observations, informed judgment, and more, so as to offer relevant and reliable opinions that can aid the trier of fact. Justice Breyer was correct in believing that assessing the use of science in the courtroom is both more complex and more subtle than a focus on the four factors might suggest.
Without questioning the need for judges to understand more about science, we challenge the assumption that Daubert represents the scientific enterprise. Accordingly, we question the twin notions (i) that if judges clearly understood the Daubert guidelines (for example, falsifiability and error rate), then they would possess scientific literacy, and (ii) that the problem of inconsistent admissibility decisions is caused by the failure to understand the Daubert guidelines. Granting that some recent cases clearly confirm that trial judges need more understanding of the Daubert guidelines, many others confirm the need to understand science as an enterprise with social, institutional, and rhetorical features not captured in the survey authors' idealistic picture of science.
Our recurring reference below to the social, institutional, and rhetorical features of science, as opposed to its methodological features, merits a preliminary explanation. Social aspects of science include its communal, rather than individualistic, structures, such as historical background, experimental conventions, shared standards of legitimacy, negotiation and consensus-building techniques, and the notion of an audience that evaluates the production of knowledge. [*PG19]Science's institutional features, which are also social, include training, credentialing, and gatekeeping by way of granting degrees, positions, funding, or publicity. The rhetorical features of science include its narrative and textual aspects, such as techniques of persuasion, governing metaphors, and linguistic conventions. Although these social, institutional, and rhetorical aspects often fade into the background when the focus is on methodological features such as testing or rates of error, there is no reason to assume that they are dispensable or insignificant in the final methodological analysis. Indeed, methodology relies on social, institutional, and rhetorical conventions. Significantly, however, there is no reason to assume that, simply because of these social, institutional, and rhetorical features, nature or reality has nothing to do with scientific knowledge. Then again, the understandable sense that one must choose between a social and a natural explanation for scientific progress, which we deem a false dualism, helps explain why the social, institutional, and rhetorical aspects of science often are not discussed in Daubert Trilogy jurisprudence.
Throughout this Article we focus on recent federal appellate court opinions reversing or rejecting a trial judge's decision on the admissibility of an expert. In some cases, the judge allowed scientific testimony that the appellate panel found inadmissible, and in others the trial judge disallowed testimony found on appeal to be admissible.68 These types of cases, we believe, usually generate more careful and detailed opinions than do affirmances.69 Our brief analysis of each case identifies the problem with the trial judge's understanding of scientific expertise, as explained by the appellate panel. Specifically, we ask whether a lack of understanding of the Daubert guidelines caused [*PG20]the problem, or whether there was a failure to understand the social, institutional, or rhetorical aspects of science. The mixed results of our analyses suggest that an understanding of science includes both an understanding of the methodological aspects of science, exemplified in the Daubert Trilogy and the Federal Rules of Evidence,70 and an understanding of the social, institutional, and rhetorical aspects of science.
We recognize that our use of the term pragmatism to denote a trend is problematical in at least three senses. First, and most obviously, defining pragmatism as an orientation or approach is as difficult as defining formalism or realism. Anthony D'Amato attempts to introduce legal pragmatism, in his Analytic Jurisprudence Anthology, by offering helpful excerpts from John Dewey, Oliver Wendell Holmes, Richard Rorty, Richard A. Posner, and his own work.71 Posner's is particularly familiar and succinct:
Pragmatism in the sense that I find congenial means looking at problems concretely, experimentally, without illusions, with full awareness of the limitations of human reason, with a sense of the localness of human knowledge, the difficulty of translations between cultures, the unattainability of truth, the consequent importance of keeping diverse paths of inquiry open, the dependence of inquiry on culture and social institutions, and above all the insistence that social thought and action be evaluated as instruments to valued human goals rather than as ends in themselves.72
[*PG21]Interestingly, Posner associates pragmatism with the "scientific spirit . . . of inquiry, challenge, fallibilism, open-mindedness, respect for fact, and acceptance of change."73 The paradoxical respect for fact alongside open-mindedness hints at a pragmatic perspective on science as neither realist (facts equal nature) nor relativist (facts as merely social constructs), but oriented to local, practical problem-solving.
Second, the implied rejection by pragmatists of over-arching theoretical frameworks destabilizes any attempt to define pragmatism as a theoretical framework. In D'Amato's formulation: "It's hard to define Pragmatism, and well it might be, because Pragmatists dislike definitions. Definitions are themselves formal, suggesting logic and exactitude. . . . A definition, to a Pragmatist, is just a rule of thumb."74 D'Amato's introduction to pragmatism is thus itself pragmatic: a matter of tendencies that can only be captured in specific solutions to particular problems.
Third, because we want to distinguish a pragmatic perspective on science from philosophical or legal pragmatism generally, we need to construct that pragmatic perspective even though there is no unified view among philosophers of science as to what pragmatism in science entails. Thomas Nickles, for example, reads Kuhn's paradigm theory:
as retreating from a realist, "Truth Now" account to a sort of pragmatism in which the solved problem rather than the true theory becomes the unit of achievement in science. . . . [Kuhn's] [*PG22]stress on the local contexts of research and the constraints they impose on thought and action are very important.75
Michael Ruse identifies his own tendency to be "somewhat of a pragmatist, a nonrealist of a kind," as he thinks "[to] advance means getting one's theory more in tune with epistemic values like consilience than progress towards knowledge of a metaphysical reality."76 Finally, Karin Knorr-Cetina, a sociologist of science, considers the focus on scientific practices, in contrast to producing normative philosophy of science or rational accounts of theory choice, to be pragmatic; her statement that "you don't always try to find the mechanisms behind things without considering what is on the surface" characterizes her position.77 These aphorisms, by emphasizing local practices and contexts instead of global reality or truth, are useful in our own assessment of a pragmatic trend in law/science relations.
We should also distinguish our use of the term from that of David Resnik, who recently proposed a pragmatic approach to the demarcation problem (i.e., how to distinguish science from non-science), and even offered it as a framework for judicial analyses of scientific validity.78 Resnik confirms that the demarcation problem remains a site of controversy not only among historians, philosophers, and sociologists of science, but also in practical settings such as the use of scientific testing in the courtroom.79 Positivistic verifiability criteria gave way to Popper's falsification thesis (scientific theories are testable), but critics argued that this thesis "does not provide conditions that are necessary or sufficient for classifying statements as scientific."80 Resnik then briefly surveys historical, sociological, political, psychological, and epistemological approaches, none of which develop necessary and sufficient conditions for distinguishing between science and non-[*PG23]science.81 Science, he concludes, cannot be defined in this way, because "[w]e distinguish between science and non-science in the context of making practical decisions and choices."82 Resnik states that to distinguish between science and non-science, "we must know who is seeking to make the distinction and why. . . . We can reject some definitions because they do not do a good job of promoting our goals and interests . . . ."83 In the legal system, a conservative and rigid definition of science, emphasizing reliability and rational consensus, seems to set useful limits on the costs and durations of trials, and to prevent mistakes such as wrongful convictions; but that definition might prevent an innocent person from gaining access to theories, concepts, and data that could exonerate that person.84 In any event, we should evaluate definitions of science in light of their probable effects on justice, due process, efficiency, and other goals of the legal system.85 Lest that sound relativistic, note that Resnik does not claim that the definition of science rests only on practical concerns:
There are some common themes that should run through these different definitions of science, . . . . [including] testability, empirical support, progressiveness, problem-solving ability, and so on. . . . [O]ne can hold that there are some general criteria for distinguishing between science and non-science while holding that particular judgments . . . depend on contextual features, such as practical goals and concerns.86
Resnik's proposed pragmatism is similar to, but does not quite capture, the pragmatic view of science that we identify in recent federal cases applying Daubert v. Merrell Dow Pharmaceuticals, Inc.
Resnik seems to be recommending pragmatism on the part of judges when they choose their definition of science: they should evaluate definitions in light of their probable effects on justice, due process, efficiency, and other goals of the legal system.87 Thus, in Resnik's account, they might choose a rigid, conservative view [*PG24]of science, or a more liberal one that emphasizes problem-solving ability, testability, or other, less rigid criteria.88 Resnik's argument is reminiscent of social constructivist arguments in the wake of Daubert; consider Margaret Farrell's argument that law should construct its own truths rather than follow scientific constructs, because facts serve different purposes in each field.89 That style of pragmatism, whatever its merits, does not seem to be in fashion among federal judges. Rather, judges seem to be adopting a pragmatist view of the scientific enterprise: naturalistic but representational, useful but model-based, rigorous but approximate, social (institutional, rhetorical) but empirical, evidence-based but probabilistic. That framework, and its contrast with both (i) an idealized (i.e., realist, verificationist, or rationalist) view of science and (ii) Resnik's pragmatically constructed legal science, will become clearer in our analysis of some recent cases.
In Cooper v. Carl A. Nelson & Co., a case brought by an injured worker seeking damages for an accident at a construction site, the plaintiff's medical experts relied on the plaintiff's statements about his past medical history as the basis for their diagnosis.90 The trial court decided the testimony was not admissible because the physicians had an inadequate foundation (no scientific basis) for evaluating the cause of Mr. Cooper's injury.91 The court of appeals reversed, finding that the district court assumed an "overly aggressive role as gatekeeper": "[I]n clinical medicine, the methodology of physical examination and self-reported medical history employed by Dr. Richardson is generally appropriate. . . . [The defendant] suggests no alternative that could be employed by the conscientious clinical physician in this situation."92 Whether the doctor failed to consider other factors in the plaintiff's life related to the onset of the condition, and whether the medical history was accurate, are both quite susceptible to exploration on cross-examination; they go to the weight, not the admissibility, of the testimony.93 The methodology of a physician employing the accepted diagnostic tool of examination accompanied by physical history as related [*PG25]by the patient was acceptable under Daubert.94 The appellate panel in Cooper therefore emphasized the actual practices of clinical physicians rather than setting up, as the trial judge did, an idealized scientific basis as a standard to be met. Sometimes the data relied upon by experts comes from interviews, and as long as that is a standard "investigat[ing] technique" in the field, as in arson cases, we should not demand more.95
In Walker v. Soo Line Railroad Co., an appellate panel likewise focused on the actual practices of experts to reverse a trial judge's exclusion of testimony as lacking a scientific basis.96 The plaintiff, Richard Walker, claiming injury on a tower during an electrical storm, tried to introduce the testimony of three experts: Dr. Pliskin (a psychologist), Dr. Capelli-Schellpfeffer (an expert on electrical trauma), and Dr. Uman (an expert on electrical safety).97 Dr. Pliskin's evaluation of the decline in Walker's Intelligence Quotient (IQ) was excluded at trial because Pliskin relied on the medical, educational, and professional histories reported by Walker and his girlfriend, some of which the trial judge found to be inaccurate.98 The appellate panel, however, noting that "[m]edical professionals reasonably may be expected to rely on self-reported patient histories," found Pliskin's scientific methodology acceptable under Daubert (and again, that any inaccuracies could be explored on cross-examination).99 Moreover, the defendant's argument on appeal that Dr. Pliskin's testimony was inadmissible because he did not state definitively that the electrical trauma caused the drop in Mr. Walker's IQ was rejected, suggesting that admissible testimony (which might be useful to the jury) does not imply certainty as to ultimate issues.100
Dr. Capelli-Schellpfeffer's testimony was also excluded, seemingly because she relied on the work of her team members in forming her [*PG26]opinion that Walker suffered from post-traumatic stress disorder.101 The appellate panel again found that practice to be common: "Medical professionals have long been expected to rely on the opinions of other medical professionals in forming their opinions. . . . Indeed, courts frequently have pointed to an expert's reliance on the reports of others as an indication that their testimony is reliable."102 The appellate panel also rejected the defendant's argument that Capelli-Schellpfeffer's testimony was unreliable because she relied on Pliskin's work but disagreed with his conclusion: "That two different experts reach opposing conclusions from the same information does not render their opinions inadmissible. Merely because two qualified experts reach directly opposite conclusions using similar, if not identical, data bases . . . does not necessarily mean that, under Daubert, one opinion is per se unreliable."103 Finally, the appellate panel confirmed that, although Capelli-Schellpfeffer was not a psychiatrist, her testimony about post-traumatic stress disorder was admissible because she was the leader of a clinical medical team:
The team approach to medical diagnosis and treatment is employed to ensure that all relevant disciplines work together for the good of the patient. The leader . . . reconcile[s], when necessary, competing perspectives. In short, the expertise of the team leader is the capability to evaluate, in light of the overall picture, the contributions of each member of the team.104
That picture of experts from various disciplines with competing perspectives, each of whom offers limited contributions, portrays science as not only methodological, but also as social, institutional, and rhetorical.105
The trial judge in Walker also barred the testimony of Dr. Uman, concerning the different ways that lightning could have penetrated the tower in which the plaintiff was stationed, as too speculative.106 The appellate panel, however, found his testimony scientifically valid, because "[e]xperts are allowed to posit alternate models to explain [*PG27]their conclusion."107 All of these conclusions by the court of appeals regarding Walker's experts suggest that the trial judge mischaracterized the scientific enterprise as a field of objective measurement, definitive (or non-contradictory) conclusions, individual achievement, and singular explanatory models; for the appellate panel, however, the practices of experts often involve data from subjective narratives, an inability to conclude (and even contradictory conclusions), teamwork, and alternative explanatory models. Like the pragmatists that they are, scientists work with what they have, and with others, to produce the best and most useful knowledge.
In Jahn v. Equine Services, PSC, a veterinary malpractice case, the plaintiff's experts could not identify with any degree of certainty the specific physiological cause of a race horse's death, and one of them lacked relevant surgical experience.108 The trial judge therefore ruled their testimony inadmissible under Daubert, but the appellate panel held that the district court's Daubert analysis "both mischaracterized the methodology employed by the experts and ultimately employed a standard of admissibility more stringent than that expressed in Rule 702."109 The court then stated:
In order to be admissible on the issue of causation, an expert's testimony need not eliminate all other possible causes of the injury. . . . Daubert and Rule 702 require only that the expert testimony be derived from inferences based on a scientific method and that those inferences be derived from the facts of the case at hand . . . .110
Because the defendant's medical records were not complete, "[c]ertainty is not to be found in this case."111 Although the district court viewed the experts' testimony as "stacking one guess on top of another," both experts (by necessity) based their opinions on the facts with which they were presented.112 Had the trial judge explored whether the testimony reflected the same level of intellectual rigor that characterizes veterinary practice, it would have been [*PG28]clear that the experts used a methodology derived from scientific medical knowledge, although limited by the information provided to them.113 Moreover, the trial judge's suspicion of testimony that contradicted a pathologist's report was inappropriate: "[D]etermining which is more credible should be left for the finder of fact and should not be considered when ruling on Rule 702 admissibility."114 Finally, looking at test results and physical symptoms to infer the presence of an infection "is not a methodologically unsound assumption or guess; it is a diagnosis."115 Here again, compared to the trial court's image of scientific knowledge, the view of the appellate court seems deflationary: sometimes science is less than certain, sometimes scientists necessarily piece together a probable series of events under less than ideal circumstances, and sometimes their admissible conclusions are shaky, challengeable, or less persuasive than at other times.116
In Smith v. Ford Motor Co., the plaintiff proposed to call two experts to support his claim that the steering mechanism in his van malfunctioned, causing an accident.117 The district court concluded (i) that neither was qualified to testify, because they were not automotive engineers, and (ii) that their methodologies were unreliable because they had not been peer reviewed.118 On the first point, the appellate panel held that, although their expertise did not concern the ultimate issue, which could require an automotive engineer, their expertise nevertheless could be relevant to evaluating other factual matters.119 On the second point, peer review, the appellate panel held that the district court "did not indicate whether publication is typical for the type of methodology these experts purported to employ. The district court merely recited the failure of the experts to publish and concluded that their testimony was unreliable."120 The key issues for the appellate panel were whether (i) well-established engineering practices were applied, and (ii) the methodology was based on extensive [*PG29]practical experience, not whether a single, potentially irrelevant, criterion was met.121 Ideally, publication in peer-reviewed journals is relevant, but the actual practices in the relevant engineering and accident analysis communities are sometimes more relevant.122 Again, the trial judge's idealization of formal scientific practices eclipsed any inquiry into how experts actually work.
Finally, in United States v. Smithers, the district court excluded the testimony of Dr. Fulero, an expert on eyewitness identification, on the basis that his opinion was not "scientifically valid."123 A divided appellate panel reversed, confirming that psychological studies of the limitations of perception and memory in eyewitness identification are now a scientifically sound and proper subject of expert testimony.124 The strong and lengthy dissenting opinion in support of the trial judge's skepticism is interesting because of its criticism not only of research on the deficiencies of eyewitness identification but also of social science generally:
The trepidation with which nearly all appellate courts have treated [expert testimony on eyewitness identifications] is representative of a broader reluctance . . . to admit the expert testimony of social scientists with the same deference given to the testimony of those in the physical sciences. . . . [D]isagreements between dueling experts in the physical sciences . . . typically focus on the data . . . which [are] subject to objective analysis. The certainty of the testimony of social scientists, however, is limited by the nature of their field.125
A majority of the panel, however, was less reluctant, which suggests less idealism concerning the hard sciences together with a pragmatic acceptance of the limitations of the social sciences. Science is not characterized by its certainty, but rather its methodology; conclusions are often tentative, contradictory, probabilistic, or impossible under the circumstances. These do not signal unreliability, but rather are the typical conditions under which scientists work to produce useful knowledge.
[*PG30] The foregoing five cases each involved a trial judge whose decision not to admit certain expert testimony was reversed. We have also introduced our argument that the misunderstandings on the part of the trial judges were not primarily methodological, but rather reflected misunderstandings as to the social, institutional, and rhetorical aspects of science. Of course, we concede that some recent cases exemplify the need for trial judges to better understand scientific methodology. For example, the trial judge in Hardyman v. Norfolk & Western Railway Co., a Federal Employers' Liability Act suit based on a carpal tunnel syndrome (CTS) injury, excluded the testimony of the railroad worker's experts on causation.126 One such expert, Dr. Linz, employed differential diagnosis methodology (considering all potential causes of symptoms and then, by tests and examinations, eliminating likely causes until the most probable one is isolated) to reach his conclusion that plaintiff's CTS was caused by work activities.127 The trial judge, after acknowledging the acceptability of differential diagnosis, "failed to recognize that Dr. Linz applied [that] method . . . [and] seemed actually to reject this method."128 The appellate panel therefore reversed, convinced that the rationale of the district court did not justify exclusion of "Plaintiff's expert testimony."129
In contrast to the trial judge in Hardyman, who did not recognize or accept sound methodology, the trial judges in the five cases we discussed above exemplify a different problem, which might be called the idealization of scientific methodology. That is, the reason those judges did not recognize the practical goals and limitations of science (its reliance on self-reported medical history, uncertainty, competing explanations, and conventional practices) was their idealized image of the features of the scientific enterprise: objective measurement, definitive conclusions, and unanimous consensus in peer-reviewed publications. This idea that trial judges should have a somewhat deflated image of science might, we understand, sound counterintuitive. It would seem that to appropriate the best science in law, judges should set a very high standard. Because we want to challenge that notion, we turn now to six recent cases that specifically highlight the problem of idealizing science, namely that this approach may keep the best science out of the courtroom. Again, in all of these cases, trial judges were reversed by [*PG31]appellate panels who understood the social, institutional, and rhetorical context of expertise.
For example, in Alfred v. Caterpillar, Inc., a products liability action against the manufacturer of a paver, the plaintiff's safety expert was excluded because the trial judge found that his "opinion is simply not competent under Daubert . . . [i.e.,] it is not supported by sufficient testing, experience, background, education, or thought."130 The appellate panel, however, was persuaded that "[his] testimony . . . was reliable . . . because it was the result of his having researched and applied [well-accepted] standards [in the engineering community]."131 The trial judge's comments that the expert's opinion was "very limited," and backed by "very little work" and "very limited expertise," suggest that the judge wanted more science than this expert could offer; the appellate panel's reaction was to look at what the community of such experts thinks and does.132 Expertise thus reflects a social practice, not just an abstract methodological ideal. Indeed, even methodological ideals are local and dependent on the relevant community's standards. Thus, in Charles Alan Taylor's formulation, an empirical conclusion by a scientist is itself pragmatically contingent on wider configurations of practices:
My point here is not that all interpretations of the facts are of equal legitimacy. . . . [but we should reject the] claim that the relative legitimacy of a given interpretation is a natural condition of the material to be interpreted, rather than a function of an audience's [for example, a scientific community's] evaluations of the evidence adduced on its behalf . . . .133
Recourse to the social and institutional aspects of science, in contrast to abstract ideals, is also evident in five more cases where a trial judge's idealizations of science were corrected.
In United States v. Finley, the trial judge in a criminal trial excluded the defendant's psychological expert's testimony on the basis that the testimony would not be helpful to the jury.134 According to the [*PG32]appellate panel, the trial judge seemed troubled by the fact that the psychological tests did not reveal a conclusive diagnosis, and by the fact that the expert based his opinion on his belief that "[the defendant] was not faking or being deceptive."135 The expert even admitted at the Daubert hearing that his diagnosis was "extremely gray."136 Reversing the conviction, the appellate panel implied that the trial judge was asking for too much:
It appears from the record before us that [the expert] based his diagnosis on proper psychological methodology and reasoning. . . . [He] did not base his conclusions solely on [the defendants] statements; rather, he used his many years of experience . . . .
. . . .
. . . Based on his clinical experience and [the] facts, [the expert] concluded that [the defendant] was not faking or lying.
A belief, supported by sound reasoning, . . . is sufficient to support the reliability of a mental health diagnosis. . . .
. . . .
. . . We have recognized that concepts of mental disorders are "constantly-evolving conception[s]" about which the psychological and psychiatric community is far from unanimous . . . .137
Here again is a picture of science as inconclusive, based on reasonable belief, evolving, and subject to internal disagreements; this is not, however, a critical assessment of scientific reliability, but an acknowledgment that science is a social enterprise with institutional supports (for example, standardized diagnostic categories) and debates that betray rhetorical strategies of persuasion.
Trial judges who want more from science, we might say, need to understand more about its limitations, and their exclusive focus on idealized methodological aspects, like testing or data, might be misleading them. For example, in Pipitone v. Biomatrix, Inc., a patient who contracted a salmonella infection after receiving a knee injection brought a products liability action against the manufacturer of the [*PG33]synthetic fluid Synvisc.138 At trial, the testimony of an infectious disease expert was disallowed as unreliable because (i) no epidemiological study was performed, (ii) no published study supported his opinion, and (iii) other potential sources for an infection had not been eliminated.139 As to the first point, the appellate panel agreed with the expert that an epidemiological study is not necessary or appropriate in a case such as this, in which only one person is infected.140 As to the lack of peer-reviewed literature supporting the expert's opinion, the appellate panel observed that where:
there is no evidence that anyone has ever contracted a salmonella infection from an injection of any kind into the knee, it is difficult to see why a scientist would study this phenomenon. We conclude . . . that the lack of reports . . . supports, rather than contradicts, [the expert's] conclusion that the infection did not arise due to . . . [a] source not related to Synvisc.141
Even the third concern, that other sources were not eliminated, was rejected by the appellate panel: "[The expert] methodically eliminated the alternative sources of the infection as viable possibilities. After doing so, he stated that he was 99.9% sure that the source of the salmonella was the Synvisc syringe."142 Significantly, the expert seemed to fare badly under the four Daubert guidelines: (i) he did not test his hypothesis, (ii) no "known or potential rate of error or controlling standards [were] associated with [his] hypothesis," (iii) there was no relevant scientific literature, and (iv) only his diagnostic principles, not his particular hypothesis, were generally accepted in the relevant scientific community.143 Nevertheless, the appellate panel deemed it appropriate for the trial court to "consider factors other than those listed in Daubert to evaluate . . . reliability. . . . In this case, the expert's testimony is based mainly on his personal observations, professional experience, education and training."144
A similar evaluation by an appellate panel appeared in Furry v. Bielomatik, Inc., where a safety engineer's testimony was excluded because he did not offer specific designs for safety features that he [*PG34]identified as necessary.145 The appellate panel vacated the summary judgment because the trial court's evaluation "appears to have been based upon an overly expansive view of [the expert's] role as a safety expert, as well as an overly technical application of the factors articulated in . . . Daubert."146 In Pipitone and Furry, the Daubert guidelines emerge as ideals that must be mediated by pragmatic concerns: not every hypothesis will have been the subject of extensive testing, well-established standards or error rates, peer-reviewed publications, or even consensus (except in the most general sense of consensus regarding methodological principles).
Two other recent cases also highlight the limitations under which science pragmatically, though not ideally, operates. In Lauzon v. Senco Products, Inc., the district court excluded the testimony of a forensic engineer who testified often in pneumatic nail gun cases.147 The trial judge found that (i) the testing of the expert's theory was inadequate (the expert was unable to duplicate the events of the accident on which the case was based), (ii) the relevant peer-reviewed literature was inadequate, (iii) the expert's theory was not widely accepted, and, impliedly, (iv) the expert's research was not sufficiently independent of litigation.148 The appellate panel found the expert's testing and the literature (and therefore general acceptance) sufficient, but also observed that the expert's involvement with past litigation did not infect his research:149 "[T]he slight negative impact of [the expert's] introduction to the field of pneumatic nail guns through litigation is outweighed by his independent research, independent testimony, and adherence to the underlying rationale of the general acceptance factor, scientific reliability."150 General acceptance cannot be found for every reliable hypothesis, nor can many reliable hypotheses be found outside the litigation context. Moreover, as explained in Metabolife International, Inc. v. Wornick, a study commissioned by a party, not subjected to peer review, and incomplete, is not by those facts alone ren[*PG35]dered unreliable: "Rather than disqualify the study because of incompleteness [the overall project was ongoing, but all of the relevant data had been gathered in final form] or because it was commissioned by Metabolife, the district court [on remand] should examine the soundness of the methodology employed."151
These cases suggest that science is not pure: there is always funding from somewhere, and there is always a social or contextual reason to study something. In Pipitone, there was no reason to study salmonella knee infections until the injury occurred; in Furry, there was no history of extensive testing, and therefore no well-established error rate or consensus, concerning the safety of paper converting machines; and in Lauzon and Metabolife, the relevant research was driven by litigation. These circumstances do not signal unreliability, but rather constitute social, institutional, and rhetorical features of science. In effect, the trial judge in each of these cases understood the methodological ideals of science, but not its historical, communal, and economic dimensions. To the extent that the trial judges at least recognized these social features of science, they viewed them as problems or impurities rather than conventions or inevitabilities; that is what we mean by the tendency to idealize science. Although no one doubts that the scientific enterprise rests on historical, social, institutional, and rhetorical structures, some trial judges tend to see methodology as a check on their effects. The notion that methodology itself is social or rhetorical is therefore counterintuitive.
Judicial idealization of science, in each of the cases just discussed, resulted in reversals because the experts, engaged in a pragmatic enterprise with practical goals and limitations, did not live up to the trial judges' ideals. In the next Part, we discuss a parallel problem: trial judges are often reversed for deferring to the social authority of experts, even when the experts lacked methodological reliability. As we will show, the debate about whether judges should defer to science out of ignorance, or, conversely, should demand that experts educate the judge, is not new. We argue that when trial judges understand the social, and not just methodological, authority of science, as well as the potential disconnect between social authority and reliability, they tend to adopt an educational model of their gatekeeping responsibilities.
I think that judges can become comfortable with science or scientists if they know more about how they operate. . . . [T]here has been this notion that science is beyond us, in another world entirely, and that we cannot handle it. I just do not buy that idea.152
The above remark, made by Chief Judge Markey at a conference on science, technology, and judicial decision making over twenty-five years ago, anticipated the turn in Daubert v. Merrell Dow Pharmaceuticals, Inc. toward a more active gatekeeping role for judges with respect to expert scientific testimony.153 Indeed, the notion that judges under the Frye v. United States154 regime routinely deferred to scientists is belied by the proceedings of that conference, which could easily be mistaken for a contemporary discussion of the problems of how to define science, the distinction between "hard" and "soft" science (as well as between scientific and non-scientific expertise), the differences between the legal and scientific enterprises with respect to standards of proof, and the perceived need for science advisors or panels to aid judges in their evaluations of experts.155 Nevertheless, one recurring theme in the conference, expressed by Chief Judge Bazelon and others, was that judges are not equipped to handle scientific and technical disputes:
It is hard to imagine a less likely forum for the resolution of technological disputes than our trial courts. Participation in litigation is controlled by the parties who call the witnesses. The information is developed by rules and the strict admission of evidence. The finder of fact, whether it be a judge or [*PG37]a jury, obviously has no claim to expertise in resolving the scientific questions . . . .156
Judge Spaeth agreed that judges are "starting to become very uncomfortable about whether they are being asked to make decisions that really they should not be asked to make because they are not well equipped to make them,"157 a skepticism echoed by Chief Justice Rehnquist in Daubert158 and by some commentators after Daubert.159 In retrospect, however, Chief Judge Markey's optimism has prevailed: "We need to develop some understanding of scientists and scientific methods: how they think, how they work, how they arrive at this view and do not arrive at that one. . . . I think judges have to learn that scientists do not have two heads. They are not ten feet tall."160 An educational, not deferential, model is suggested here for law/science relations.
On the eve of the 1993 U.S. Supreme Court decision in Daubert, Ronald J. Allen and Joseph S. Miller summarized the ongoing "deference or education?" debate regarding experts, and tentatively concluded that an education model was preferable if not exclusive.161 Allen and Miller reformulated the debate between Ronald Carlson162 and Paul Rice163, over whether the facts or data grounding an expert's opinion should be admissible (Rice) or not (Carlson), as one over "the extent to which they are willing to defer to experts":164
Carlson's fact finder . . . can only attach value to the expert's opinion on the basis of that expert's perceived credibility; the restriction on basis testimony, then, functions to turn the expert into a super-fact finder capable of producing admis[*PG38]sible substantive evidence (an opinion) from inadmissible evidence. Rice prefers that the fact finder be allowed to hear and to use the facts or data that support the expert's opinion to the same extent that the expert uses them.165
Professor Rice, in short, is not as deferential as Professor Carlson. Professor Imwinkelried's position in the debate, building on Judge Learned Hand's suggestion (in 1901) that experts inform the jury of general principles to apply to facts, is strikingly deferential to scientists (though not with respect to the facts of a case).166 Allen and Miller, in response, highlight the problem of conflicting expert testimony,167 noting that Frye provided a check on irrational choices between experts:
[A] system designed to encourage education has considerably less need of such a check, for the check will come from the pedagogical process itself. As the fact finder becomes informed about an area of knowledge, charlatans will be exposed. The Federal Rules thus do not embrace Frye just because they are considerably less dedicated to deference than their common law predecessors. Education is clearly permitted, perhaps encouraged . . . .168
Finally, Allen and Miller criticize Peter Huber and Richard Epstein as overly deferential to science, the latter of whom (prior to Daubert) questioned the idea that the Federal Rules do not adopt the deferential perspective of Frye.169
Without revisiting the disputes over the proper interpretation of Frye, one sensible reading of that opinion would be that judges in that regime were to decide two questions, one normative and one [*PG39]empirical.170 First, is the witness representing a group worthy to be called scientists (for example, astrologers: no; chemists: yes)? If that normative question can be answered affirmatively, then the empirical question is whether the proffered testimony represents that which is generally accepted in the field. If so, then the judge defers by declaring the testimony admissible.171 As to what the jury does with that testimony, Frye does not address whether the jury is then educated by or deferential to the expert. Trial lawyers presenting such testimony obviously want both: a deferential jury that understands and is persuaded by the testimony. As to whether the judge was exercising a gatekeeping function under Frye (by asking a normative and an empirical question), the Daubert Trilogy confirms that a more aggressive gatekeeping function is now required of federal judges. Daubert held that Frye did not survive the Federal Rules,172 and in General Electric Co. v. Joiner, the Court observed that "nothing . . . requires a district court to admit opinion evidence that is connected to existing data only by the ipse dixit of the expert. A court may conclude that there is simply too great an analytical gap between the data and the opinion proffered."173 That sentence was quoted approvingly in Kumho Tire Co. v. Carmichael, just after the Court explained that:
[*PG40]no one denies that an expert might draw a conclusion from a set of observations based on extensive and specialized experience. Nor does anyone deny that, as a general matter, tire abuse may often be identified by qualified experts . . . . [But] the question before the trial court was specific, not general[:] . . . [W]hether this particular expert had sufficient specialized knowledge to assist the jurors in deciding the particular issues in this case.174
The best way to read these excerpts, we believe, is that judges should not admit an expert's testimony unless the judge understands its logic, which implies education by the expert as a prerequisite to admissibility. In several recent cases, that emphasis on the educative role of experts recurs.
In Elcock v. Kmart Corp., a case involving injuries sustained by a department store patron, the trial judge admitted the testimony of Dr. Copemann, an expert in vocational rehabilitation, and Dr. Pettingill, an economist.175 As to Dr. Copemann, the appellate panel held that the trial judge should have held a Daubert hearing (an understandable error before Kumho Tire), and that a fuller assessment of Copemann's analytical processes would have revealed their weaknesses.176 Specifically, Copemann's methodology in reaching a conclusion of the plaintiff's 50-60% disability was neither testable nor reproducible; at best, it was a novel synthesis of two widely used methods, but Copemann did not demonstrate that this hybrid approach bore a logical relationship to the established techniques.177 Nor, looking at Copemann's description of his methodology, did it seem to the appellate panel that a reasonable explanation could be provided.178 Because of the disconnect between the stated nature of Copemann's methods and the results they produced when the facts of the case were plugged into their machinery, the appellate panel hesitated to find Copemann's method reli[*PG41]able.179 Copemann seems to have made "a subjective judgment . . . in the guise of a reliable expert opinion," which, in the terminology of Joiner and Kumho Tire, is an ipse dixit statement.180
As to Dr. Pettingill, the appellate panel likewise scrutinized his testimony on earning capacity, and found that his conclusions were based on faulty assumptions: for example, that Elcock was 100% disabled, that she would have earned twice her pre-injury earnings but for the injury, that she had no post-injury income (she did), and that her life expectancy was average (she had diabetes).181 In a lengthy footnote exploring the interstitial gaps among the federal rules, the court explained:
[A] lost future earnings expert who renders an opinion . . . based on economic assumptions not present in the plaintiff's case cannot be said to assist the trier of fact, as Rule 702 requires. . . . [Moreover,] it is not a stretch from the [Rule 703] requirement that other experts in the particular field would reasonably rel[y] on such data in forming opinions . . . on the subject to suggest that an expert should not depend on fictional or random data . . . .
. . . Rule 402 sets forth a liberal admissibility standard for "[a]ll relevant evidence," defined in Rule 401 as evidence having "any tendency to make more probable or less probable the existence of any fact . . . of consequence . . . ." Under this framework, an economist's testimony concerning a reliable method for assessing future economic losses can be deemed relevant only insofar as a jury can usefully apply that methodology to the specific facts of a particular plaintiff's case.182
Elcock thereby provides an educational model both for judges applying the testability standard when conducting a Daubert hearing (Copemann's testimony) and for juries applying an expert's methodology to the facts (Pettingill's testimony).
In Goebel v. Denver & Rio Grande Western Railroad Co., the district court admitted the testimony of Dr. Daniel T. Teitelbaum, who purported to establish a causal link between the plaintiff's cognitive brain damage and his exposure to diesel exhaust in a train tunnel.183 The defendant on appeal characterized Teitelbaum's testimony as relying solely upon "the ipse dixit of the expert," and because the district court did not hold a Daubert hearing or make specific findings on the record regarding Teitelbaum's reasoning and methodology, the appellate panel found an abuse of discretion in admitting the testimony.184 In short, the trial judge should not have deferred to even a credentialed expert's belief that "on the basis of . . . fundamental physiology," the cognitive defect was caused by exposure to pulmonary irritants at high altitude which produced swelling in the brain.185 The gatekeeping role requires, in various formulations, that district courts vigilantly make "detailed findings," and that they "carefully and meticulously" review the proffered scientific evidence.186
Finally, Libas, Ltd. v. United States, which reviewed a trial court's determination concerning the weight rather than the admissibility of expert testimony, is significant because it confirms that general acceptance or widespread use (the proxy for reliability under Frye's deferential regime) is not enough to signal reliability under Daubert.187 A key issue at trial was whether a fabric was power loomed, and the trial judge relied entirely on the results of a Customs Service test that was generally viewed as accurate.188 The appellate panel held that the trial court should have also taken into account testability, peer-reviewed publications, potential rate of error, or other factors "to assure itself that it has effectively addressed the important issue of reliability."189 Once the reliability of a generally accepted technique is effectively [*PG43]challenged, as it was by testimony that the fabric came from a village in India with no power looms, a searching analysis is required.190
Together, these cases represent a shift away from deference (to conclusory opinions or generally accepted techniques) and toward a pedagogical model for expert testimony: trial judges need to see and understand the logic or reasoning (the connections) from principle to application to conclusion, and juries need to apply methodologies to the facts before them.
What, then, do trial judges need to know about science to carry out their gatekeeping responsibilities? Several recent federal cases, again, support the notion that trial judges need to be more rigorous in applying the Daubert guidelines. In Ueland v. United States, for example, a prisoner brought a claim under the Federal Tort Claims Act for back and neck injuries sustained in a collision between a prison van and its chase car.191 The principal medical testimony came from Jason Wilson, "a college dropout who claims to be a chiropractor" with a practice limited to acupuncture.192 The judgment in favor of the United States was reversed on several grounds, among them that:
The district judge refused to apply Rule 702 or conduct a Daubert inquiry, ruling instead that Wilson's lack of credentials and experience concerns only the weight to be accorded to his testimony. That ruling is wrong. On remand, a Daubert inquiry must be conducted, and Wilson's testimony may be received only if . . . [his] testimony is based upon sufficient facts . . . , the testimony is the product of reliable principles and methods, and . . . the witness has applied the principles and methods reliably . . . .193
Likewise, in Lloyd v. American Airlines, Inc. (In re Air Crash on June 1, 1999), an airline passenger was awarded damages for injuries sustained during an American Airlines crash.194 The trial judge admitted [*PG44]the testimony of a Dr. Harris, who testified that the plaintiff's post-traumatic stress disorder was due to a brain dysfunction, but who also acknowledged that he did not carry out certain tests that would have revealed biological changes in the plaintiff's brain.195 The appellate court concluded that a Daubert issue was raised:
Unfortunately, the district court does not appear to have considered any of the Daubert factors . . . . The district court merely noted that Harris was a qualified psychiatrist, and then stated: "It's beyond my competence. I don't know whether . . . there is research material that shows brain changes as a result of this syndrome." This inquiry was not adequate to satisfy the district court's essential gatekeeping role under Daubert.196
Because no tests were performed, there was no connection established between the alleged physical brain changes and the plaintiff's condition.197
Finally, in Boncher ex rel. Boncher v. Brown County, the estate of a prisoner (who had committed suicide) brought a 42 U.S.C. § 1983 action against jail officials alleging deliberate indifference to the risk of the prisoner's suicide.198 The trial judge allowed a criminologist to testify that the number of suicides in the defendants' jail was unusually high, but the appellate panel found that his evidence was useless and should have been excluded under the Daubert standard.199 Indeed, the expert admitted that he had neither conducted nor consulted any studies that would have enabled him to compare the defendants' jail suicide rate with that of the free population in the county or that of other jails.200 Such cases demonstrate the need for trial judges to be more sophisticated regarding scientific methodology, and perhaps even the need for more judicial training in science, although Judge Posner in Boncher seized the opportunity in his opinion to educate judges on the spot as to normal variance:
[*PG45]It would not be sound to condemn a jail administrator if the rate of suicide in his population was within one or two standard deviations of the rate elsewhere, for so small a variance might well be due to chance, or at least to factors over which he had no control. Every statistical distribution has an upper tail, and there is no constitutional principle that whoever is unlucky enough to manage the prisons in the upper tail of the distribution of suicides must pay damages.201
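Judge Posner's point about normal variance is, at bottom, statistical. A minimal numerical sketch (using entirely hypothetical suicide rates, not data from Boncher) shows why a jail whose rate is the highest of its comparison group may still lie within two standard deviations of the group mean, and thus be unremarkable as a matter of chance:

```python
# Illustration only: hypothetical per-jail suicide rates, not data from the case.
import statistics

# Hypothetical annual suicide rates (per 100,000 inmates) at ten comparison jails.
comparison_rates = [40, 55, 47, 62, 38, 51, 44, 58, 49, 53]

mean = statistics.mean(comparison_rates)
sd = statistics.stdev(comparison_rates)  # sample standard deviation

defendant_rate = 65  # hypothetical rate at the defendant's jail
z = (defendant_rate - mean) / sd  # how many standard deviations above the mean

# A rate within one or two standard deviations of the mean is the kind of
# variance Posner says "might well be due to chance": every distribution
# has an upper tail, and some jail must occupy it.
print(f"mean={mean:.1f}, sd={sd:.1f}, z={z:.2f}")
print("within two standard deviations" if abs(z) <= 2 else "beyond two standard deviations")
```

Under these made-up numbers, the defendant's jail has the highest rate in the group, yet its z-score is still under 2, illustrating why the bare fact of an above-average rate proves nothing about deliberate indifference.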
The recent decision in Chapman v. Maytag Corp., however, presents an interesting contrast to the above three cases.202 The trial judge seemed to understand quite clearly that a mechanical engineer's testimony, in support of a wrongful death suit for electrocution by a kitchen range, was less than scientific, but nevertheless the testimony was allowed.203 The trial judge found that the engineer "failed to specify the details supporting his opinion that [the deceased] would have been electrocuted, regardless of whether the outlet was properly grounded." Moreover, the court stated that the lack of any scientific testing presented a serious problem for the status of "[the] testimony as expert opinion."204 A new trial was required because the district court "failed to assess whether [the engineer's] theory is scientifically valid," but the question remains why a judge who seemingly raised the right questions (insufficient details supporting the opinion, lack of any scientific testing) allowed the testimony.205 Two other recent federal opinions provide a possible answer: the social and institutional, not methodological, authority of science in law sometimes interferes with judicial evaluations of experts.
In Elsayed Mukhtar v. California State University, Hayward, a Title VII suit by a professor alleging race discrimination in a denial of tenure, the plaintiff presented Dr. David Wellman as an expert on racism.206 The district court admitted Dr. Wellman's testimony without any discussion of its reliability,207 and the jury awarded the plaintiff $637,000 in damages.208 On appeal, in response to the plaintiff's argument that it was harmless error to admit Dr. Wellman's testimony without a reli[*PG46]ability finding (because six of the plaintiff's colleagues testified that he was qualified for tenure), the University's counsel argued that Dr. Wellman's testimony was not harmless because it was "cloaked in authority . . . ," and that without his testimony, the plaintiff's evidence could show only "[an evenly divided] difference of academic opinion" regarding his tenure qualifications.209 The appellate panel agreed and vacated the judgment, stating that Dr. Wellman "drew the inference of discrimination for the jury . . . [which] more probably than not was the cause of the result reached."210 That notion of being "cloaked in authority," even when there may be no reliability, highlights the potential disconnect between social (or institutional) authority in science and methodological reliability, which the trial judge in Elsayed Mukhtar perhaps failed to appreciate.
A similar misunderstanding appeared in Jinro America Inc. v. Secure Investments, Inc., a breach of contract, fraud, and racketeering case in which a purported expert on Korean culture and business practices was allowed to testify.211 The appellate panel agreed with the plaintiff, who lost at trial, that the expert's "ethnically biased, xenophobic . . ." testimony was objectionable and "completely improper."212 His "sweeping generalizations, derived from his limited experience and knowledge . . . were unreliable and should not have been dignified as expert opinion."213 But they were, because he came before the jury "cloaked with the mantle of an expert. This is significant for two reasons: First, it allowed him . . . to couch his observations as generalized opinions . . . . Second, . . . his statements were likely to carry special weight with the jury."214 We think it is also significant that the appellate panel did not limit its review to the methodological deficiencies (there were many) of the purported expert in Jinro, but also discussed a social and institutional aspect of science: its authoritative force, its cloak and mantle that (due to science's epistemological status) can sometimes get separated from its validity without awareness on the part of the judge or jury.215 To understand science is to understand that authority is not a natural phenomenon, but a rhetorical accomplish[*PG47]ment on the part of those who study nature. At its best, science represents nature in compelling and useful ways, but to understand science is to recognize that methodological advances can be lost without social authority, as in the case of novel theories that have not yet gained (but should gain) general acceptance; conversely, methodological mistakes can go unnoticed because of social authority, as in the case of Jinro.
For example, in Peabody Coal Co. v. McCandless, a Black Lung Benefits Act case reviewing a Benefits Review Board order, the Administrative Law Judge (ALJ) had been faced with conflicting evidence of pneumoconiosis.216 The pathologist performing the autopsy opined that the miner had pneumoconiosis, but five other physicians disagreed.217 The ALJ placed more weight on the opinion of the pathologist who performed the autopsy,218 but an appellate panel found that decision irrational: "Although we understand why the ALJ . . . wanted to avoid the medical controversy, . . . . [a] scientific dispute must be resolved on scientific grounds, rather than by declaring that whoever examines the cadaver dictates the outcome."219 Although this outcome on appeal can be explained as an example of a judge who does not understand methodology, it is also an example of how social, institutional, and rhetorical authority gets separated from methodological reliability: but for the appeal, the authority of the pathologist was persuasive enough to establish pneumoconiosis.
The potential disconnect between social authority and methodological reliability is also significant for the debate over whether legal science is different from science itself. Because judges simply are not scientists, or because the goals of litigation (for example, finality) are so different from the goals of practicing scientists (for example, criticism and refutation), some would argue that legal science is not the same as, or never can be, genuine science. Indeed, our arguments about science as a practice, for which automatic deference is inappropriate, might be viewed as the basis for an appeal to a legally constructed science which would co-exist alongside socially constructed science. The third trend that we briefly identify (in the next Part), however, is the ten[*PG48]dency to demand genuine science, with all of its own pragmatic limitations, in court.
Recall Resnik's proposal that courts should choose their definition of science on pragmatic grounds: criteria "should match the goals and concerns of the courtroom."220 This formulation implies that science's own standards of validity should be different from legal standards of scientific validity. Brian Leiter takes this position in his critique of Heidi Feldman's argument that Daubert v. Merrell Dow Pharmaceuticals, Inc. appropriately adopted a revised empiricist philosophy of science (thereby bringing law into line with actual scientific practice).221 For Leiter, admissibility criteria need not follow the dominant, or even the correct, philosophy of science:222
Courtrooms, after all, are not laboratories, and judges are not scientists. . . . The rules of evidence serve [not only the discovery of truth but also] . . . the promotion of various policy objectives . . . and the efficient and timely resolution of disputes. . . .
. . . .
We plainly want our science in the courtroom to bear some relation to real science . . . . But this goal must be pursued in light of the serious epistemic limits of courts: intellectual, temporal, material.223
Admissibility questions are, for Leiter, questions of social epistemology: "[U]nder the real-world epistemic limits of a particular social process for the acquisition of knowledge, what epistemic norms actually work the best?"224
Again, that style of pragmatism, which inevitably presumes there is some real laboratory science that never quite makes it into court, is not the pragmatism we identified among federal appellate judges. Rather, the view of real science itself as already a pragmatic practice (perhaps a minor devaluation of science) is combined with an educational model of what scientists do in court to create an actual scientific discourse in [*PG49]law. In effect, for these judges, science is not as complex, and courts are not as limited, as Leiter suggests. Indeed, if scientific knowledge is always approximate and probabilistic, it is not so different from law. Science, too, has, in Leiter's terminology, serious "intellectual, temporal, [and] material" epistemic limitations.225 Although some education is necessary, and some translation warranted, the standards for science in law are to mirror the standards for scientists generally. Courts therefore generously cite Chief Judge Posner's requirement that when scientists testify in court they "adhere to the same standards of intellectual rigor that are demanded in their professional work."226
In any truly public battle, those arguing for constructivism in general will lose to those arguing for reality in general. What is necessary is first an at least rhetorical concession to the power of the argument for reality, and second, a demonstration of the way particular uses of the constructivist position are humanly helpful and consistent with a rigorous science.227
Although we will not attempt to introduce or revisit the polarizing debates about the role of social interests in the production of scientific knowledge, it is important to acknowledge that those debates keep many historians, philosophers, and sociologists of science busy. In the "science wars," the rational and the social are dichotomized, and the debate is about which ought to be given primacy in accounting for scientific knowledge.228 Sociologists of science, for example, identify and emphasize "the practices and processes that . . . succeed in ratifying some content . . . as knowledge in a given community"; science, on this account, is "a process of developing . . . new accounts of natural processes in such a way as to effect general assent to those accounts."229 Unfortunately, that emphasis seems to destabilize or undermine science as a reliable source of knowledge, because of the overt dependency on community or general assent. Scientific representations, we might say, [*PG50]should be caused by nature or reality, not by communal assent, and certainly not by rhetorical techniques. A scientist's account might therefore emphasize the processes and practices that justify knowledge independently of community practices:
One can speak of the knowledge of an individual as the intersection of what the individual believes (justifiably) and the set of all truths [for example, concerning nature], or the knowledge of a community as the intersection of what is accepted (justifiably, or as a consequence of normatively sanctioned practices) by a community at a time with the set of all truths.230
Unfortunately, again, that emphasis on truth seems to idealize science by ignoring the historical evolution of scientific knowledge, as well as the social, institutional, and rhetorical aspects of "justification" and "normatively sanctioned practices."
Numerous theorists have concluded, therefore, that the choice between a view of science as fundamentally social (or cultural) and a view of science as fundamentally rational (or methodological) is a false dualism. Helen Longino, for example, concedes that justification is contextual, but it need not be arbitrary or subjective, as it is dependent on rules and procedures immanent in the context of inquiry. "Contextualism is the nondichotomist's alternative to [sociological] relativism and [rationalistic] absolutism regarding justification."231 Bruno Latour argues that scientific facts are indeed constructed, but they cannot be reduced to the social dimension; networks of scientific knowledge are "simultaneously real, like nature, narrated, like discourse, and collective, like society."232 In another formulation, Slavoj Žižek asks: "[I]s historicist relativism (which ultimately leads to the untenable solipsist position) really the only alternative to a naive realism (according to which, in the sciences . . . , we are gradually approaching the proper image of the way things really are out there, independently of our consciousness of them)?"233 What is missed in that dichotomy, Žižek suggests, was indicated by Thomas Kuhn when he claimed that the shift in a scientific paradigm is "MORE than a mere shift in our (external) perspective on/perception of reality," but nonetheless "LESS than our [*PG51]effectively creating another new reality."234 Although such notions are complex and sometimes counterintuitive (hence the tendency toward false dichotomies), they are particularly useful in explaining some of the confusion in Daubert Trilogy jurisprudence. The inevitable social, institutional, and rhetorical aspects of science are not the opposite of scientific methodology; they provide its context.
In our analysis of federal appellate court opinions, we identified various instances where trial judges did not appreciate the social, institutional, and rhetorical aspects of science. In reversing district court admissibility decisions, the appellate panels identified the social authority of scientists that can interfere with methodological evaluations, as well as the pragmatic limitations on science that arise, for example, because not all hypotheses have been the subject of well-established standards, past research interests, or extensive testing. The mantle of authority is not a marker of reliability, but neither are the pragmatic limitations on science markers of unreliability. Science is a network of communities, institutions, persuasion, and consensus-building; methodological norms can sometimes provide a check on these features, but such norms are also part of the network.
Determining just what constitutes a sufficient level of scientific understanding for the judiciary is a question for future study and policy development.235
Why do trial judges governed by the Daubert Trilogy need to understand the social, institutional, and rhetorical, and not just the methodological, aspects of science? First, if they are unduly focused on methodological factors, they risk idealizing science and consequently keeping reliable science out of court because of its pragmatic goals and limitations. Second, they risk deferring to science and consequently allowing unreliable science into court because of its social authority, an authority that does not necessarily signal reliability. Third, by making such mistakes, they risk constructing a legal science that is out of sync with mainstream science. Conversely, an appreciation of the inevitable social, institutional, and rhetorical features of the scientific enterprise, as well as the methodological ideals of that enterprise, helps judges (i) to recognize reliable science even when it [*PG52]appears as a demystified practice, (ii) to recognize unreliable science even when it appears as authoritative, and (iii) to appropriate science itself, and not a legalistic shadow of science, into court.
Relying primarily on recent federal appellate opinions that reversed trial judges' evaluations of the admissibility of scientific evidence, we identified three tendencies in the wake of Daubert v. Merrell Dow Pharmaceuticals, Inc.: (i) a pragmatist orientation with respect to ongoing philosophical disputes concerning the nature and reliability of the scientific enterprise; (ii) an orientation to an educational, rather than a deferential, model of the relationship between science and gatekeeping judges; and (iii) a merger of legal and scientific discourse, that is, a tendency to resist the notion that, in determinations of validity, law and science operate on justifiably different grounds. We conclude that these trends ought to lead away from false dichotomies: between nature and culture, between methodology and social contexts, and even between "genuine" and "junk" science when experts disagree.
The Advisory Committee Notes to Federal Rule of Evidence 702 state that:
When facts are in dispute, experts sometimes reach different conclusions based on competing versions of the facts. The emphasis in [Rule 702] on "sufficient facts or data" is not intended to authorize a trial court to exclude an expert's testimony on the ground that the court believes one version of the facts and not the other.236
Although the foregoing might seem obvious, the court of appeals in Pipitone v. Biomatrix, Inc. viewed the trial court's exclusion of the plaintiff's infectious disease expert as the precise problem identified above.237 Perhaps there is a tendency in law to see every dispute as having two sides, only one of which will win. Perhaps it is a scientistic culture, and not legal culture, that is responsible for the sense that when two scientific experts disagree, one of them must be unreliable. Scientific debate, however, can:
be understood as an ongoing process of critical interaction that both prevents closure where it is inappropriate and helps to establish the limits of (relative) certainty. . . .
. . . .
[*PG53] . . . [I]t makes no sense to detach measurements and data descriptions from the contexts in which they are generated[.] . . . [A]s soon as one does, one creates a new context relative to which they are to be assessed and understood.238
Again, however, the sense in scientistic culture that contexts are unstable leads some consumers of science, including trial judges, to want more than social authority, institutional gatekeeping, and rhetorical ornaments; they want reality. Indeed, that distinction between context and reality gives the social, institutional, and rhetorical features of science a pejorative connotation. Science should, we might say, represent nature, not funding interests, lofty credentials, communal assent, or good argumentative techniques. The same distinction, a false dualism, has made its way into training manuals for attorneys who cross-examine experts: bias, interests, and motivations are bad, which implies that genuine expertise is unbiased, disinterested, and unmotivated. Although it is true that some experts may be biased toward a pet theory, financially invested in their client's cause, or motivated by greed, the very notion of disembodied, detached, asocial, and acontextual science is a product of an unjustified dichotomy. Fidelity on the part of scientists to contemporary methodological conventions is a bias, an interest, and a motivation, but that is what we want. Moreover, involvement on the part of scientists in the social, institutional, and rhetorical features of their profession can help them generate reliable science, and that is also what we want.