Intelligent Belief in Evaluation
“Educating for Intelligent Belief in Evaluation” by Thomas A. Schwandt, University of Illinois at Urbana-Champaign
This version printed with the author’s permission.
My topic concerns what it means to educate for intelligent belief in evaluation, understood here as a particular attitude and outlook on self and society. Intelligent belief in evaluation is demonstrated in a thorough understanding of what is involved in evaluative reasoning as well as a robustly held, warranted conviction that such reasoning is vital to our well-being. Intelligent belief in evaluation is closely allied with the idea of public reason. As noted in a white paper produced by the Poynter Center for the Study of Ethics and American Institutions at Indiana University:
A commitment to public reason is a key educational goal if one is concerned about educating individuals for citizenship. ‘Public reason’ as we are using it here is a doctrine about the terms on which dialogue should occur. It says that the reasons we offer in dialogue with others should make a good faith effort at being ‘public’—that is, at being intelligible across a range of traditions, beliefs, and practices, and open to criticism and revision based on information that meets the same test of good faith intelligibility. (Crouch, Miller, & Sideris, 2006).
Possessing (and acting upon) intelligent belief in evaluation is a special obligation of evaluators—those who claim to be well prepared in the science and art of making distinctions of worth. Thus, it is incumbent upon those of us involved in educating individuals desiring to do evaluation work to help them acquire and develop this kind of intelligent belief. It is also part of the professional responsibility of all evaluators to cultivate this belief in the citizenry, for the res publica—the common good—depends on it.
Diagnosing the state of thinking in society
The idea of educating for intelligent belief in evaluation, while undoubtedly something we all find reasonable, seems exceptionally salient and particularly urgent of late. We are facing a rather ominous brew of developments affecting practical intellectual life in modern society and the very well-being of society itself. By practical intellectual life I mean not simply the life of the embodied mind, and the importance of thinking, reasoning, and understanding, but how those cognitive endeavors play a role in the achievement, maintenance, and enhancement of the good society. I’ll leave it to those of you who are better historians than I to explain how this mixture has developed. For present purposes, I simply want to point to some noteworthy developments in this regard that bear on what I believe is a pressing need to cultivate intelligent belief in evaluation in both aspiring evaluators and the citizenry.
The substitution of spin for reasoned assessment
Spin is the act of casting someone’s remarks or relating a story in such a way as to influence public opinion. It implies being disingenuous and deceptive, and it often relies on strategies such as cherry-picking, euphemism, and doublespeak. Spin on both political and scientific issues has been raised to an art form. It is clearly a bipartisan undertaking, and it is, paradoxically, both fostered and checked by the proliferation of web blogs. Examples of how all quarters of society are infected by spin abound: In July 2007, the Washington Post (DeYoung, 2007) reported that the US Joint Forces Command paid the Rand Corporation $400,000 for a study entitled “Enlisting Madison Avenue: The Marketing Approach to Earning Popular Support in Theaters of Operation.” According to the clinical psychologist who authored the report, the key to boosting the image and effectiveness of US military operations around the world involves shaping both the product and the marketplace and then establishing a brand identity that places what you are selling in a positive light. The study concluded that the military’s “show of force” brand had limited appeal to Iraqi consumers, and that a more attractive branding of the military’s efforts would be “we will help you.”
In 2007, the television program 60 Minutes drew attention to the work of Richard Berman, the former labor management attorney and restaurant industry executive who now works as a Washington lobbyist and serves as executive director and president of the Center for Consumer Freedom, which claims to be devoted to defending “the right of adults and parents to choose what they eat, drink, and how they enjoy themselves.” According to an article in the American Prospect (Sargent, 2005), Berman wages a never-ending public-relations assault on doctors, health advocates, scientists, food researchers, and just about anyone else who highlights the health downsides of eating junk food or being obese.
He also targets groups that want animal-treatment standards for the meat industry and trial lawyers who want to sue the food industry. Such people, Berman notes on the center’s website, are “food cops, health care enforcers, militant activists, meddling bureaucrats and violent radicals who think they know what’s best for you.” However, while Berman presents himself as a defender of consumers against overbearing bureaucrats and health zealots, he is really defending the interests of another group: restaurant chains, food and beverage companies, meat producers, and others who stand to see profits hampered by government regulations, or even by increased health awareness on the part of consumers. Berman’s spin campaigns have been mounted against efforts to deal with obesity, mercury in fish, raising the minimum wage, union organizing, and the attempt by MADD to lower the blood alcohol limit for drivers.
Serious issues will always be framed from different perspectives. That is not the point of these two examples. What they signify, to me at least, is the belief that the public is a bunch of dummies that can readily be influenced by bombast over argument, spin over substance, image over reality. In sum, there really is no need to think carefully about an issue, weigh up evidence, and come to a reasoned position. Thinking can be easy.
‘Easy think’ flourishes in a climate of apathy, distrust, and cynicism. Goldfarb (1991) argued years ago that cynicism—the prevailing political wisdom—undermines rational public debate. Cynical condemnation of public life or public reason as a sham, manipulation, ideology, or self-interest does not lead to critical thought; it simply readies the ground for the growth of easy think.
The political manipulation of science in the interest of ideological conviction
A second, related, development concerns the troubled relationship at present between science, politics, and democracy. That relationship has never been an easy one, and it has been marked by at least a 50-year history of political manipulation of science in the interest of ideological conviction; at present, however, the situation is acute, as noted in editorials that have appeared in the past two years in the Washington Post and the Christian Science Monitor, among other media outlets. Lewis Branscomb (2004), Professor Emeritus in Public Policy and Corporate Management at Harvard and a former appointee to various government scientific committees under Presidents Johnson, Nixon, Carter, and Reagan, has observed that the number, intensity, and scope of reports of political interference with the processes for bringing scientific information and advice to government policy decisions is simply unprecedented in recent years.
I do not want to belabor the obvious point here and will simply note a few examples of criticism emanating from different sectors of society. Many readers are no doubt aware of the work of Congressman Henry Waxman, chair of the House Committee on Oversight and Government Reform in compiling allegations of abuses including suppression or manipulation by officials of information bearing on public health and the environment, and the replacement of experts on advisory committees when their views conflicted with industry or ideological interests.
In February 2004 the Union of Concerned Scientists issued the statement, “Restoring Scientific Integrity in Policy Making.” Initially signed by sixty-two leading scientists, as of January 2008 the statement bore the signatures of more than 12,000 scientists from the US, including fifty-two Nobel laureates, sixty-two National Medal of Science winners, 194 members of the National Academy of Sciences, and science advisors to both Republican and Democratic presidents dating back to Eisenhower. The statement declares that across a broad range of issues—from childhood lead poisoning and mercury emissions to climate change, reproductive health, and nuclear weapons—political appointees have distorted and censored scientific findings that contradict established policies. In some cases, they have manipulated the underlying science to align results with predetermined political decisions. They have also undermined the independence of scientific advisory panels by subjecting panel nominees to political litmus tests that have no bearing on their expertise, and by nominating under- or unqualified individuals—some of whom have industry ties that could represent a conflict of interest. Other scientific advisory committees have been disbanded altogether.
In July 2007, the Washington Post carried an article on former surgeon general Richard Carmona’s claims that the administration muzzled him on sensitive public health issues (Lee, 2007). His was but the latest in a string of complaints from government employees at NASA, the FDA, the NIH, and other agencies that ideology is trumping science.
What do we make of this? This is not Bush administration bashing. It is symptomatic of a much larger social issue concerning the relationship between science, politics, and society. Branscomb, whom I mentioned earlier, has argued that the long tradition of American political pragmatism—wherein voters evaluate their leaders by objective assessments of what they achieve for the lives of people in society—has long outweighed the influences of ideology, religion, and elite connections. It is because of this tradition that government officials for many years have encouraged the best and brightest in science to advise the government on science-based public policy. But this is changing. Voters are increasingly focusing on social, religious, and patriotic values and are persuaded, as my comments about spin indicate, by images that present the appearance of pragmatic measures of achievement. Branscomb (2004, p. 59) approvingly quotes Sheila Jasanoff’s observation from her 1990 book, The Fifth Branch: Science Advisors as Policy Makers: “In the closing decades of the 20th century the intellectual and technical advance of science coincides with its visible decline as a force in the rhetoric of liberal-democratic politics.”
The tendency to let ideology trump science is not simply a development in political life but part of our own doing as evaluators and scientists as well. It is partly the result of our insistence on maintaining a sharp distinction between the natural world of fact and the cultural and social world of value, and our failure to fully come to terms with the awareness that evaluative and scientific knowledge and their expressions are rooted in historical and political contexts. This, it seems to me at least, was a central message of House and Howe’s (1999) examination of the idea of deliberating facts and values to reach an all-things-considered evaluative judgment.
The growing threat of technical professionalism
Turning our sociological gaze in yet another direction reveals a further development—one that surrounds our understanding of the professions. It is hardly a closely held secret that government affairs and social life in general are increasingly dominated by market logic and the logic of consumption, wherein the public is depicted as customers who relate to their government and to each other on the basis of an economic, rather than a social, contract (Hall, 2005). In such a climate, as William Sullivan (2004) observed in his book Work and Integrity, professionalism, understood as a duty to the public, or as Sullivan puts it, “the demand that a professional work in such a way that the outcome of the work contributes to the public value for which the profession stands” (p. 23), is at risk. The powerful trend of market logic works to dilute, if not completely strip away, any moral relationship between profession and society except that of commercial exchange. As the ideal of professionalism as a kind of ‘social trusteeship’ erodes, it is replaced with the notion of technical professionalism, with the professional reduced to a supplier of expert services. Without aiming to sound too pastoral here, we face the danger of a divorce of technique from calling.
The propensity to reach for all-or-nothing solutions to social and political problems
Ironically, in this climate, professionals are increasingly being called on to offer advice to decision makers that frame their options as dichotomous choices or all-or-nothing solutions. This development is obvious in several situations well known to evaluators. In the fields of mental health, medicine, social work, and education the familiar bid to develop evidence-based practice is often (quite incorrectly) framed as a dichotomous choice—either a practice is technically based (that is, it consists of the application of scientifically validated knowledge) or it is hopelessly (critics claim) judgment-based. In the arena of international development evaluation we often find that solutions to country-wide and region-wide problems such as the impact and spread of HIV/AIDS are framed as all-or-nothing—either one invests in political leadership and government and NGO cooperation or one invests in civil society, in community groups, grassroots organization, and the like. Or we find a single methodology or approach to evaluation touted as the Holy Grail as is evident in the near total convergence on impact evaluation as the only means of determining the value of development projects. In health care, there is a strong preference for using economic evaluations comparing the costs and benefits of two or more treatments to identify the single preferred option over the choice of allocating treatment based on patient-specific preferences.
This trend to value only optimal solutions is disturbing for several reasons: It is a kind of intellectual arrogance—the attitude that Thomas Huxley (1866) so recklessly expressed in advocating Darwin’s newly proposed theory, when he said of empiricism that there is but one kind of knowledge and but one way of acquiring it. It ignores all that we have learned about complexity, about bounded rationality, and about the virtues of seeking a satisfactory outcome versus a maximal one (a lesson taught years ago by Herbert Simon and recently revived by Michael Feuer, Executive Director of the Division of Behavioral and Social Sciences and Education at the National Academies, in his book advising educational researchers and policymakers, Moderating the Debate: Rationality and the Promise of American Education). This trend is disturbing because it reflects a lack of tolerance for ambiguity. This trend is troubling because it ignores the important fact that considerable variation in perspectives is not simply inevitable but actually productive of new ways of thinking. This trend is alarming because it shows an unwillingness to fashion solutions to problems in inevitably contingent and complex circumstances. Finally, it is distressing because, at least in the policy arena, it is based on an overly simplified view of the social order and a tendency to discount the capacity of citizens to create meaningful solutions to problems (Cook & Pickus, 2002).
The substitution of assurance for evaluation
Another clear trend is the eclipse of evaluation by assurance. Evaluation was conceived as an undertaking useful in a highly interpretable social environment—what Karl Popper called an open society and Don Campbell renamed the experimenting society. This is a society in which we ask serious and important questions about what kind of society we should have and what directions we should take. This is a social environment indelibly marked by uncertainty, ambiguity, and interpretability. Evaluation in such an environment is a kind of social conscience; it involves serious questioning of social direction; and it is a risky undertaking in which we endeavor to find out not simply whether what we are doing is a good thing, but also what we don’t know about what we are doing. So we experiment—we see what we can learn from different ways of knowing in evaluation, we try to work from the top down (so to speak) using what policy makers say they are trying to do as a guide, as well as from the bottom up, doing evaluation that is heavily participant oriented or user-involved. All of this unfolds in an atmosphere of questioning, of multiple visions of what it is good to do, of multiple interpretations of whether we as a society are doing the right thing.
In the past twenty years or so, largely as a result of the rise of managerialism (Clarke & Newman, 1997), the role to be played by evaluation in a vigorous debate on social direction has been radically changing. Managerialism is a way in which economic rationalism is implemented in government and non-profit agencies. Managerialism is a set of beliefs, attitudes, values, and activities that support the view that management is a problem-solving approach that avoids conflict and argument and focuses on the rational assessment of problems (e.g., gathering information, weighing alternatives, evaluating consequences, choosing the best course of action). Management framed in this way is grounded in a kind of technicist belief in the power of human mastery—we imagine a fairly unbounded human ability to solve social, economic, and political problems with the use of the right scientific tools and methods and the right technologies. A managerialist way of framing evaluation is evident in the U.S. Office of Management and Budget’s evaluation tool known as the Program Assessment Rating Tool (PART), which was developed “to assess and improve program performance so that the Federal government can achieve better results. A PART review helps identify a program’s strengths and weaknesses to inform funding and management decisions aimed at making the program more effective” (http://www.whitehouse.gov/omb/part/).
One consequence of managerialism is that society begins to view evaluation as simply one of the technologies needed for assuring the effective and efficient management of society. That is, evaluation comes to be regarded principally as a technical undertaking—a job that involves the successful application of tools, systems, or procedures for determining goal attainment, outcomes, or effects of policies and programs. Hence, society comes to take for granted the framing of policies and programs within the dominant managerial discourse. That, in turn, leads to a gradual erosion of evaluation as an independent kind of questioning and informed critical analysis. I worry that we are witnessing the displacement of the experimenting society by the audit society.
The market transformation of the modern research university
A final contemporary development has to do with what is happening to the modern research university—the site where university education in evaluation takes place. We are witnessing ever-greater pressures to corporatize and vocationalize higher education. While, to be sure, acquiring new knowledge and skills for success in the labor market is a good thing, when taken to an extreme this development threatens the very idea of higher education as a public good and as an autonomous site for the development of a critical and productive democratic citizenry. At my own university, we have developed a corporate arm called the Global Campus, with first-year funding of $9 million, that will offer highly profitable (or so it is assumed) on-line professional master’s degree programs to educational consumers. None of this is to say that there cannot be ways in which economic and entrepreneurial, academic and intellectual values can coexist—if I did not believe that, I would not be a department chair (although I admit there are days I truly question my decision about that).
However, I worry when I hear the mantras of many university deans, presidents, and boards of trustees who talk about their faculty as academic entrepreneurs, who pursue the development of benchmarks and metrics for academic unit performance, and who refer to students as customers. I worry that what we are witnessing is the erosion in the public imagination of a vocabulary for understanding higher education as a potential source of political or social transformation, as a source of critical education, and as a source of democratically inspired thought. I worry that we will no longer look to research universities as the source of inventive notions of social agency and means of critical examination (such as evaluation) that help to expand the meaning and purpose of democratic public life.
This development is significant for the kind of education in evaluation that is and will be encouraged in university settings. Will we continue to vocationalize educational preparation in evaluation? Will education in evaluation increasingly come to be viewed as training, that is, solely as the acquisition of knowledge and skills required for employment? Will the connection between evaluation and the liberal democratic ideals of the pursuit of a just society through debate, questioning, and deliberation simply be regarded as a relic of past thinking?
Implications for educating for belief in evaluation
So there you have it—a foreboding mix of spin, political manipulation of science, an eroding sense of civic professionalism, a fondness for dichotomous thinking, a growing preoccupation with assurance rather than evaluation, and university education increasingly moving in response to market forces. This is all very worrisome, for each development signals, in its own way, a degradation of the cultivation of, capacity for, and necessity of reasoned evaluative criticism in the practical intellectual life of society. This set of circumstances is highly relevant to teaching and learning evaluation, not simply because it describes the conditions under which we work, but because it speaks to a particular obligation that we must meet with respect to our work—an obligation to the public and to future generations of evaluators.
The concoction of developments I have just sketched makes it plain that we cannot simply teach about the ways and means of doing evaluation; we must also convey the idea of evaluation as a practical, intellectual disposition and outlook on social and political life. Simply equipping more and more individuals, with whatever methods, to do more evaluation will not help address the circumstances we find ourselves in as a society. Evaluators have an obligation to educate the public about the idea of evaluation as a way of reasoning and of being in society. My thesis is that it is a professional obligation of evaluators not simply to deliver evaluation services with integrity but to teach this notion of intelligent belief in evaluation both to aspiring evaluators and to the public. In what follows, I briefly sketch several components of intelligent belief.
Understanding the value of evaluation
One critical aspect of intelligent belief in evaluation is understanding and communicating the value of evaluation itself. We are all familiar with the idea of value defined in terms of utility (or instrumental value)—that is, something is considered of value because it is for the sake of something else. It is commonplace that evaluations are promoted for their instrumental value—evaluation is useful in determining whether a program or policy works, that is whether it has achieved desired aims, outcomes, or impacts. Evaluation has instrumental value because it generates credible empirical evidence that figures prominently in the determination of the effectiveness of different means to agreed-upon ends.
On the other hand, we also make judgments of intrinsic value—for example, we say that knowledge is of value for its own sake, that traditionally under-represented peoples are always entitled to an equal voice in community decision making, that the pursuit of truth is worthy in and of itself, that the rights of children are sacrosanct, that participation in democratic decision making is a moral good. Importantly, the intrinsic value of something is generally thought to generate a moral duty or obligation on the part of those who hold it to protect it. In other words, if I believe that truth telling is of intrinsic value, then I have a moral duty to defend and protect that idea. Intelligent belief in evaluation—broadly understood as an appraisal of whether, as a society, we are doing the right thing and doing it well—means conveying to the public not simply that evaluation is of instrumental value but that evaluation is of intrinsic worth—it is something we hold and defend as important to our sense of society and self.
Understanding the nature of evidence
Intelligent belief in evaluation also entails knowing what evidence is and what one can and cannot do with it in evaluation as well as being able to competently communicate those understandings to the public. This is a very involved topic, and I simply want to point out a few key ideas here. First, there is no such creature as foundational, natural, unprocessed, or uninterpreted evidence. As philosophers of science have noted for years, evidence arises in the context of a particular set of beliefs.
Consider a simple example (Avis & Freshwater, 2006): Imagine cross-examining a witness whose testimony to the fact that it was raining the night of the accident is key to a case. What the cross-examination entails is probing the complex set of beliefs on which the apparently simple evidentiary statement, “it was raining that night,” rests. While the cross-examination may not involve questioning beliefs about the causes of water falling from the sky (although this is certainly a key set of beliefs entailed in the claim), the cross-examiner will undoubtedly check other aspects of the web of beliefs, including the meaning of terms that are essential to understanding the evidence—“What do you mean ‘raining’? Was it drizzling, misting, was it a downpour, a shower?” The good cross-examiner will also go after other aspects of the web of beliefs that have to do with understanding what constitutes a reliable observer—“Were you on any medication that night? Were you drinking and perhaps hallucinating? Are you sure you aren’t simply mistaken?” What this simple example illustrates is that the reason for holding one belief—in this case a belief about evidence that it was raining—rests upon other beliefs. In other words, evidence cannot be separated from the beliefs that influence its production and interpretation.
From this way of thinking, it follows that there is no species of evidence that can ever decisively determine such a thing as outstanding employee performance, the impact of an intervention, the outcomes of treatment, the effectiveness of a reading curriculum, and so on absent beliefs about what the very terms outcomes, performance, impact, and effectiveness mean, as well as beliefs about the utility and credibility of various methods to determine these states of affairs, as well as beliefs about who constitutes a credible observer, and so on.
Furthermore, empirical evidence may help us determine the relative likelihood of well-defined outcomes, harms, and benefits, but it does not help us determine the relative importance of those outcomes, benefits, or harms. Determining the latter is a moral-political matter. This is not something that most evaluators want to deal with, for fear that it compromises their objectivity. However, we cannot avoid responsibility for addressing questions of social desirability by hiding behind claims that our job is only to employ methods that generate empirical evidence. Given the influence of managerialism on evaluation, there is a strong tendency to look for methodological solutions to what are largely moral-political problems—problems such as framing the ‘right’ questions to ask about a policy and program; problems such as linking evidence to decision making in very complex and complicated environments of socio-political actors. When discussion turns to the question of the ‘best’ methodology, we are often witnessing a bid for an all-or-nothing solution, taking a very simplistic view of the social order, and distrusting anybody but experts to create effective solutions to problems.
As evaluators, we are up to our necks in this kind of milieu. Intelligent belief in evaluation means having the moral courage to enter the fray—to be capable of patiently and persuasively arguing that methodological matters cannot be neatly disentangled from moral-political matters. It means recognizing the situations in which we find ourselves. For example, sometimes, but only sometimes, the important question on the table is one of what the evidence tells us. Sometimes, even when that is the question, we still have to face the issue of whether what is evidently apparent is something we find desirable. Those who call themselves evaluators must have something intelligent to say about the social desirability of various outcomes and effects of policies and programs.
Understanding evaluation as argument
Intelligent belief in evaluation means that in the education of evaluators, alongside learning about evaluation models, methods of generating evidence, means of checking the quality of that evidence, and so on, evaluators learn—and become capable of explaining to the public—that an evaluation is an argument. My concern here is that in the press to master methods of generating data we ignore the idea of developing a warranted argument—a clear chain of reasoning that connects the grounds, reasons, or evidence to an evaluative conclusion. Instead of providing a warrant for our claims, we often rehearse the research methods used or state something like “conclusions are justified because they were based on the use of” (you fill in the blank): “multi-faceted and richly detailed data,” “multiple methods,” “standard experimental procedures,” and so on. However, those are not warrants for the credibility and validity of an argument. A warranted argument might appeal to an established understanding of a research design—as, for example, in determining causal claims via an RCT—but a warranted argument is not reducible to any particular method of data collection. Developing a warrant means asking the question ‘what else might this mean?’ and then convincingly ruling out plausible rival explanations of value to come to the conclusion that this is the best explanation of apparent value we have at present.
Moreover, I am concerned that we do not attend carefully enough to the characteristics of an evaluation argument—a case made by Ernie House on several occasions (House, 1980; 1995). The word argument here signifies that an evaluative judgment is not a matter of logical demonstration but a matter of persuading a particular audience (using reason and evidence) that something is the case. The characteristics of such an argument are that it is:
(1) Practical & Presumptive—The term practical signifies that we are dealing with decisions incapable of being made in an algorithmic way. Presumptive means that the argument is about what is considered most likely and reasonable in the circumstances—rather than a matter of proof.
(2) Contextual—An evaluation argument is contextualized in two senses. First, the context determines, in part, what comprises reasonable evidence, criteria, and data. In other words, for example, the value of a program is studied in a particular context of debate, conflict of opinion, value preferences, criticism, and questioning about the relative merits of those opinions, values, preferences, and criticisms. Second, it is with reference to a context composed of some particular client(s) and stakeholders that the evaluator aims to make a persuasive case for her or his judgment. Evaluation arguments are always indexed in this way to some particular context of contentious ideas.
(3) Dialectical—The argument that the evaluator constructs is dialectical because it is designed to respond to particular doubts that clients might raise about the credibility, plausibility, and probability of the evaluator's conclusion. In addition, evaluators have an imagined or real meta-evaluator or peer group in mind in constructing their arguments. They develop their judgments while asking, "Would this stand up to the scrutiny of my peers?"
(4) Finally, an evaluative argument involves both persuasion and inquiry—The evaluator aims to persuade clients of her or his conclusion or point of view on the value of the evaluand. Thus, the rhetoric of the written or oral argument—its clarity, thoroughness, organization, and so on—matter, for the evaluator always asks, “How can I put my case so that others will not misunderstand?” At the same time the argument is based on inquiry; it is a knowledge-based or evidentiary argument.
Understanding normative logics
This brings me to yet another component of intelligent belief in evaluation, namely, understanding that valuing in evaluation involves deliberation among normative logics. The latter is a cumbersome phrase but a necessary one, for it signifies something beyond the idea of inventorying stakeholder perspectives.
Understanding what constitutes credible evidence as well as understanding what comprises a convincing argument in evaluation rests, in part, on recognizing that multiple normative logics are involved in assessing value in evaluation. This essential feature of evaluation is in danger of being overshadowed by the current preoccupation with the importance of scientific evidence for determining what should be done in our social practices.
What is happening is something like this: A fairly simplistic effort is underway to expand the logic of evidence-based medicine from laboratory and hospital based medicine—arenas in which evidence of the etiology and treatment of disease is fairly well understood—into practices where that kind of certainty and clarity do not exist—for example, criminal justice, occupational health, teaching, social work, counseling, and so on.
As Vos, Willems, and Houtepen (2004) argue, the standard assumption of evidence-based medicine (EBM) is that scientific evidence needs to be collected, translated into guidelines, protocols, instructions, and procedures, and then infused into the world of medical professionals. The key word here is "infuse," signifying that the criteria, norms, and values of the medical scientific world enter into and dominate the worlds of medical professionals.
The problem, as identified by these authors, is that this idea of infusing scientific knowledge into practice does not hold in fields of practice like teaching, social work, occupational health, and so on. In these fields, different groups of professionals and lay persons have to collaborate, and their different perspectives have to themselves be evaluated and coordinated. For example, in deciding what constitutes an effective and desirable intervention in, say, social work, one must take into account the logic of science (so to speak), that is, what the evidence reveals about what works; the logic of social work practice (what are we able and capable of in a given practice situation, what has our experience taught us); and the logic of clients' worlds (what are their goals, how do they determine what living well in some particular set of circumstances means, and so on). Each is a normative logic, that is, each is a way of deciding what is good and right to do, and each is a way of deciding what constitutes credible evidence. We need to help the public understand that an evaluative argument requires engaging these normative logics, and that scientifically generated evaluative data are not by definition a better and more authoritative source of insight.
Acknowledging complexity & defending practical rationality
Intelligent belief in evaluation is also a matter of both embracing and explaining to aspiring evaluators and the public the complexity of social systems and the limitations on our ability to predict, plan, and control their behavior. It may indeed mean that in the face of such complexity we need an experimenting society committed to innovation, social reality testing, learning, self-criticism, and the avoidance of self-deception, as Don Campbell so elegantly formulated it. Yet an experimenting society is not one in which policymakers and their evaluation allies seek to manage social and economic affairs in an apolitical, managerialist, and scientized manner. To put it bluntly, policymaking and evaluation are not exercises in social technology. An experimenting society is an evaluating society, not a scientific society or an audit society.
In such a society we are modest in our expectations to solve social problems through policy supported by evaluation efforts. We recognize that the processes involved in the formulation, implementation, and evaluation of policies and programs are not exercises in scientific thinking. Rather, as Sanderson (2006) has noted, they are essentially communicative acts, involving dialogue and argument shaped by rules, conventions and power structures. Moreover, in such a society, as Sanderson explains, we are fully aware that the policy maker and the service provider do not simply seek to deal with uncertainty on a technical basis using evidence, but rather seek to cope with ambiguity on a practical basis, making wise judgments about the appropriateness of their actions in relation to a range of technical, political, moral and ethical concerns.
Thus, intelligent belief in evaluation means educating evaluators-in-training and the public to an expanded sense of rationality (Sanderson, 2006). Expansion is needed along two bearings: First, in the direction of greater cognitive pluralism—acknowledging that evaluation knowledge and expertise are not the sole basis for rational choice. Second, as Ernie House (House & Howe, 1999) has argued, an extension of rationality to cover consideration of values, ends, and ethical-moral choices. Only this kind of practical rationality can guide us toward appropriate action in complex and ambiguous social contexts.
Successfully managing complexity and exercising practical reason require a set of traits or dispositions—dispositions that should be most obvious among those who call themselves evaluators, but also dispositions that evaluators aim to cultivate in others. For an evaluating society to flourish, it needs citizens and professionals who are marked by their capacity to be inquisitive, systematic in their inquiry, judicious in their claims, truth seeking, analytical, intellectually humble, sympathetic to opposing points of view, self-critical, and open-minded—not simply open-minded in the sense of being tolerant of other points of view, but open-minded in the sense of recognizing the challenges to one's own way of seeing things that arise from others' ways of making distinctions of worth. These are the dispositions of a critical thinker (Paul & Elder, 2007; see also http://www.criticalthinking.org). For more than 30 years, Michael Scriven has been reminding us that evaluation involves critical thinking.
Education for intelligent belief in evaluation is perhaps our highest responsibility as an association and certainly as university educators of evaluators, yet I often feel it has been our most troubling failure. Given the characteristics of the social and political environment that I sketched at the outset, I worry that we have yet to teach those who aim to be evaluation practitioners and many of our fellow citizens as well that evaluation is so much more than a collection of means, however cleverly and competently employed, for assessing effect and impact. I worry that we have not adequately taught that evaluation is about experimenting, social criticism, constant questioning, objectivity, candor, intellectual honesty, and modesty in our claims to know what is of value.
The last time I checked Google I found over two hundred thousand hits for the phrase ‘evaluation toolkits’. Tools and procedures for evaluative inquiry are wonderful things, but manualizing and proceduralizing evaluation in society are not. As an association and as university educators, we need to develop more evaluation-thinking kits. That is what educating for intelligent belief in evaluation demands. That is what our society needs.
References
Avis, M. & Freshwater, D. (2006). Evidence for practice, epistemology, and critical reflection. Nursing Philosophy, 7: 216-224.
Branscomb, L.M. (2004). Science, Politics, and U.S. Democracy, Issues in Science and Technology, Fall, 53-59.
Clarke, J. & Newman, J. (1997). The Managerial State. London: Sage.
Cook, B.J. & Pickus, N.M.J. (2002). Challenging policy analysis to serve the good society. The Good Society, 11(1).
Crouch, R. A., Miller, R.B., & Sideris, L.H. (2006). Intelligent Design, Science Education, and Public Reason. Bloomington, IN: The Poynter Center for the Study of Ethics and American Institutions. Available at http://poynter.indiana.edu/science.shtml
DeYoung, K. (July 21, 2007). The Pentagon gets a lesson from Madison Avenue. The Washington Post, A01.
Goldfarb, J.C. (1991). The Cynical Society: The Culture of Politics and the Politics of Culture. Chicago: University of Chicago Press.
Gorard, S. (2002). Fostering scepticism: The importance of warranting claims. Evaluation and Research in Education, 16(3).
Hall, K. D. (2005). Science, globalization, and educational governance: The political rationalities of the new managerialism. Indiana Journal of Global Legal Studies 12 (1), 153-182.
House, E.R. (1980). Evaluating With Validity. Beverly Hills, CA: Sage.
House, E.R. (1995). Putting things together coherently: Logic and justice. In D. Fournier (Ed.), Reasoning in evaluation: Inferential links and leaps. New Directions for Evaluation No. 68. San Francisco: Jossey-Bass.
House, E.R. & Howe, K.R. (1999). Values in Evaluation and Social Research. Thousand Oaks, CA: Sage.
Huxley, T.H. (1866). "On The Advisableness Of Improving Natural Knowledge." A lay sermon delivered in St. Martin's Hall on Sunday, January 7th, 1866, and subsequently published in the Fortnightly Review. Retrieved July 12, 2007 from http://infomotions.com/etexts/gutenberg/dirs/etext01/thx1410.htm
Lee, C. (July 11, 2007). Ex-surgeon general says White House hushed him. Washington Post, A01.
Paul, R. & Elder, L. (2007). A Critical Thinker’s Guide to Educational Fads. Dillon Beach, CA: Foundation for Critical Thinking.
Sanderson, I. (2006) Complexity, ‘practical rationality’ and evidence-based policy making. Policy & Politics, 34 (1): 115-132.
Sargent, G. (January 3, 2005). Berman's battle. The American Prospect (web version). Retrieved July 25, 2007 from http://www.prospect.org/cs/articles?articleId=8984
Sullivan, W. (2004) Work and Integrity: The Crisis and Promise of Professionalism in America 2nd ed. San Francisco: Jossey-Bass.
Vos, R., Willems, D., & Houtepen, R. (2004). Coordinating the norms and values of medical research, medical practice and patient worlds—the ethics of evidence based medicine in orphaned fields. Journal of Medical Ethics, 30: 166-170.