
The report on history is concerned mainly with those persons—inside, but also outside, the historical profession—who are engaged in historical work of a social science character, and with that part of historical study and training that falls within the scope of social science. This focus has no invidious implications. On the contrary, the diversity of historical work reflects the diversity of the historian’s interests and of the evidence available to him; this diversity is a valuable, even indispensable, feature of the discipline.

Because history is not a unitary discipline, however, an inquiry of this kind assumes a special character. It cannot simply be addressed by the profession to the outside world. Instead it is addressed on behalf of one segment of the profession to both the discipline and the outside world. We have tried to convey the state of that part of history that is or would be social science, and to offer recommendations that would promote and improve this kind of work. As will be seen, many historians are inclined to greet such recommendations with doubt, scorn, anxiety, or hostility.

We believe the promotion of social scientific history is in the interest of all historians. The changing character of historical evidence, the development of new techniques and concepts in related disciplines, the growing body of research by nonhistorians into historical problems—all these imply that even those historians who are not themselves working in social science have to learn to read it and use it, if only to teach their students.

What is more, most of the material facilities required to promote social scientific history are by their nature facilities for the entire discipline. Better libraries, easier retrieval and dissemination of data, more generous arrangements for pre- and postdoctoral research, and similar improvements redound directly or indirectly to the benefit of all.

In return, these gains are dependent on the cooperation of all, for students of history as social science will always need training in all aspects of the discipline. If anything, the growing sophistication of social scientific techniques makes it all the more important for practitioners of these techniques to know and appreciate the humanistic approach to historical knowledge. We cannot afford to gain a world of numbers and models, only to lose our historical souls in the process.

There is already a large body of literature on the nature and method of history. There have been published in recent years a number of essays on the relation of history to social science. Many of these raise difficult epistemological questions about the nature of truth and evidence that we prefer to avoid. We have barely touched the classic questions of historical knowledge: To what degree can the historian ever free himself from the biases of his own time and place? Should he? Is there a special mode of historical knowledge based on empathy—the ability to put oneself into the skins of other people in other times and places? Are there laws, cycles, repetitions, irreversible trends in history? We have not seriously examined the role of historical thinking and materials outside the discipline of history—an important question in a day when economists, sociologists, political scientists, and many others are attempting to work with historical evidence. Instead, we have concentrated our attention on history as a discipline and profession, with special attention to the social scientific sector, loosely defined. The kinds of questions we ask are: Who are the historians? What do they do? How do they work? What do they want and need? And what can be done about it?

The first large section of the report treats the discipline of history in general and seeks to define the characteristics of social scientific history in terms of ideal types. It includes summary findings of a survey of about six hundred working historians which the panel undertook in the spring of 1968. The next section describes some of the varieties of social scientific history, their achievements, limitations, and promise. Then we turn to the resources, working, and needs of the profession—first in teaching, then in research. A special section is devoted to library problems; another, to the role of foreign scholars. Finally, we sum up the observations and recommendations made along the way.

The nature of history

“The contribution of history is perspective. This is no small matter.”

If we are to concentrate on history as social science, we need some sense of what sets history off from other social sciences. The contribution of history is perspective. This is no small matter. It is only too easy and tempting for each generation (especially the more sensitive members of each generation) to see the tests and troubles of its own time as unique. For many, what is past is past, what matters is now and sometimes later. This is particularly true of social engineers who, however much they may be motivated by the recollection of past wrongs, do not want to be discouraged by the record of past mistakes. In defense of this “ostrich approach,” it must be admitted that history has been misused as a stick to beat reformers and block change.

Yet never is the perspective of history so valuable as when men try to shape their destiny, that is, try to change history. Then, if ever, man has to know how he came to this pass; otherwise he is condemned to repeat his errors or at best to blunder through one difficulty only to arrive at another. In this sense history, if read correctly, should help make men wise. Not everyone would agree. There has always been a body of opinion within the historical profession that has denied the possibility of an objective history—for the very cogent reason that it is simply impossible for the historian to perceive the past except through eyes distorted by personal values and sympathies. Each man, in this view, is his own historian. As for the lessons of history, men choose these to their purpose. De Gaulle called on France’s tradition of greatness and power to justify his break with NATO; his adversaries pointed to the experience of two world wars to show the necessity of European cooperation. Israelis cite Jewish history to demonstrate the justice and passion of their attachment to the Holy Land; Palestinians point to their own history—as recorded in the Bible—to argue that they were there first. Some supporters of the American military intervention in Vietnam have drawn an analogy to Munich and the appeasement of the 1930s to justify firmness in the face of totalitarian aggression; some of their opponents have gone back to ancient Athens for lessons in the folly of arrogance.

History is not alone in this respect. One could cite any number of other examples of self-serving analogy, even of conflicting inferences from the same body of evidence, from any of the behavioral and social sciences. Indeed, a lawyer might remark that this is the human condition: people will always see things differently—that is what keeps the courts busy.

It would be a serious mistake, however, to infer from these difficulties that our ignorance is inevitable and irreducible. Just as courts have developed over time adversary procedures and principles of evidence designed to promote the pursuit of truth and justice, so social scientists, including historians, have invented techniques for the collection, verification, and appraisal of evidence as a means of understanding man’s motivations and behavior. The understanding that results cannot be complete or definitive: the social scientist typically deals with a realm of probability, but as his techniques have become more refined and powerful, the probabilities and usefulness of his answers have increased.

The gains have been greatest in those areas where the social scientist has been able to simplify his problems by exclusion of all but a few paramount variables. The best example is economics. History, by comparison, has had, and always will have, a hard time: the matter to be studied is inherently complex (some would say, infinitely complex) and resistant to simplification. That, however, only makes the task harder and the results of inquiry necessarily looser. It does not rule out a closer approach to the goal of truth.

Historians have often treated the complexity and particularity of their material as a good in itself. They have pursued the essential wisdom by immersing themselves in a particular time and place until they absorb its ethos, its rules of action, its everyday routines. The deep immersion sometimes produces marvelous reconstructions of the past, as when a Jacob Burckhardt takes us to Renaissance Italy. It also has a “privatizing” effect: historical work becomes an intensely personal thing and hence indivisible, noninterchangeable, perhaps even incommunicable.

This point of view has interesting consequences for the historian’s attitude toward research as an activity competing with other activities for scarce time. If the product of research is personal, it is not necessarily cumulative or additive. Some research is worth doing because of the subject and the person doing it, but much work is a waste of time, the writer’s and the readers’. Hence the remark of one of our correspondents:

“We need Malthusian restraint in research, not expansion, support, or encouragement. Demand quality and accept no substitutes.”

For similar reasons historians are often suspicious of courses in methodology and hostile to any kind of normalization of research procedure. If historiography is art, it cannot and must not be reduced to some kind of routine.

These values have, to be sure, a strong intellectual justification. Insofar as history attempts to see things whole, it is more dependent than other disciplines on individual perceptions. Interpretation and understanding are never routine; there are too many variables to reduce the analysis to some kind of procedure. Hence it is important that each scholar dig down to bedrock. He comes with new questions and concepts to old material as well as new; and if he permits himself to rely entirely on the ruminations of others, he has given half the game away.

It is one thing to justify this attitude in principle, however, and another to establish it as a moral absolute. Nothing comes free, and the insistence on “original” research is bought at a price. No other discipline builds so slowly, because the members of no other discipline are so reluctant as historians to stand on the shoulders of others. All historians can recall criticisms of colleagues and students on the ground that their work was too derivative at one point or another, that it relied too heavily on secondary sources.

A look at historians

Does our picture of the historical profession seem exaggerated? What do individual historians say about their conditions? To get some idea, we asked them.

In April and May, 1968, the History Panel mailed a short questionnaire to about one thousand regular members of the history departments of twenty-nine American colleges and universities. Over the next six weeks, roughly six hundred of those historians returned usable questionnaires, forty sent word of their refusal or inability to answer, one hundred replied in some other form, and two hundred and sixty did not respond at all. In the selection of departments the Panel intentionally emphasized large, prestigious graduate departments, but also included six good institutions where there was little or no training of graduate students in history. The twenty-nine departments together gave 64 percent of the PhDs in history granted in the United States during 1960–66.

The sample therefore provides a fairly good picture of what is going on in the institutions giving the bulk of American historians their advanced training, even if it seriously underrepresents the smaller and less prestigious departments.

The topographic map of the profession that emerges shows a rough, uneven terrain. Four fundamental features are shown by the data: (1) a rather unequal distribution of historical specialties among different sorts of departments and academic positions; (2) wide variation in research interests, needs, and support according to special field, type of institution, and position within the institution; (3) a standard life cycle of research experience; (4) some change in these matters from one generation of historians to the next. Let us examine each of these briefly.

Within the sample, African and Asian historians are disproportionately concentrated in the institutions with highest prestige, West European historians in the smaller liberal arts colleges, intellectual historians in both, rather than in the departments of middling reputation.

Historians of Asia, Africa, and Eastern Europe fairly frequently had affiliations with research centers in their institutions, while historians of the United States or Western Europe rarely had such affiliations. Yet the latter groups show the highest proportions of full professors, while the newer fields of East European, Asian, African, and Latin American history include many of lesser rank. Likewise, diplomatic historians, economic historians, and the much younger group of historians of science are concentrated in the senior ranks, while political, social, and intellectual historians generally occupy the junior positions. Thus an economic historian of Asia (to take an extreme case) is likely to hold senior appointments in both a department of history and a research center in a high-prestige institution, and a historian of European science is likely to hold a similar position without the research affiliation, while the odds are better that an American political historian will hold high rank, without research appointment, in a less distinguished institution, and that a Latin American social historian will hold a similar appointment at a lower rank. Since rank, quality of institution, and research affiliation all affect the historian’s ability to get his work done, the problems faced by specialists in the various fields differ considerably.

There are, for example, marked variations in research funds available to historians in different fields, as shown by the data on funds received from outside the university between 1964 and 1967, reported by historians in the twenty-nine departments surveyed. Except for the history of science (which is the best-supported field in almost every respect), the specialties receiving heavier outside support are generally those connected with interdisciplinary research centers. Among geographical specialists, historians of Latin America, Africa, and Asia do best. That is partly because such specialists are more likely to undertake expensive forms of research in the first place; it is also because more money is available for research on “exotic” areas or in interdisciplinary fields like economic history. Over and above these differences by field, our data show the decided advantage (not only in outside grants, but also in university support, teaching load, and time released for research) of the historian in a high prestige institution or with a research appointment.

On the whole, with the important exception of historians of science, the kinds of historians who are best supported also show the closest ties to the behavioral and social sciences. About a tenth of the historians in the sample have undergraduate degrees in behavioral and social science fields other than history; around 7 percent have PhDs in those fields; and roughly a third claim “substantial training” in at least one of them.

“We find that we can divide our historians into three categories: uninterested, involved, and frustrated.”

About three quarters of the historians queried said that at least one social science field was “particularly important” to their own fields of interest, about two thirds expressed interest in a social science summer training institute, and just over half would choose a social science field for a full year’s additional training. Distinguishing between fields, we find that we can divide our historians into three categories: uninterested, involved, and frustrated.

Historians of science and intellectual historians, especially those dealing with Europe and North America, typify those with little social science training, little current contact with social science fields, and little desire to change in this regard. Economic historians of the United States, Latin America, or Asia provide a good example of the involved: likely to have substantial formal training in economics, staying in contact with economics and economists, and interested in extending their knowledge of social science. The frustrated are those with little previous social science training who have come to think that it is vital to their own work: social historians of the Americas tend to fall into this category. While in this case everything depends on the definitions, it would not be outrageous to label a fifth of the historians answering our questionnaire uninterested, another third involved, and nearly half of them frustrated.

A fairly standard life cycle of research also appears in the findings. Within the sample the men just getting started tend to have heavy teaching loads, course assignments alien to their research interests, and poor support for their research. Those who are farther along begin to acquire funds, time off, and greater control over their teaching assignments, but also begin to feel the pinch of administrative responsibilities and outside commitments to writing and public service. The most senior historians are less likely to be involved in large and expensive research, although they continue to bear the burden of administrative and outside commitments.

The more distinguished the institution and the closer the affiliation with a research institute, the earlier the historian achieves the perquisites of seniority.

Finally, some features of the historical landscape are changing with time. Judging by age, year of acquiring the PhD, or academic rank, we find senior historians concentrated in the traditional fields of North American and West European history (especially diplomatic, intellectual, and political history), and junior men in the newer specialties of East European, African, Asian, and Latin American history. These latter fields include very few scholars who earned PhDs before 1945. In recent years Eastern Europe appears to have lost favor, but all the others have more than their share of PhDs earned since 1962. So the very fields that involve their practitioners most heavily in the behavioral and social sciences are the ones that are growing and are currently staffed with junior men.

The younger men have a different outlook on their profession. In answer to our (leading) questions, “Do you think of yourself as a social scientist, humanist, or something of both? Why?” a senior historian at a Midwestern university gave this thoughtful answer, characteristic of the older generation:

“Principally as a humanist because I believe history is principally made of the ideas and actions of men, oftentimes unpredictable, and cannot be measured in statistical or ‘scientific’ terms.”

One of his more irritable West Coast colleagues added, in capitals:

“THERE IS NO SUCH THING AS A SOCIAL ‘SCIENCE,’ ONLY MEN WHO BELIEVE THERE IS ONE.”

On the other hand, an Asian specialist from the Midwest said:

“I consider myself a social scientist. I was trained as one and view my work as a historian as developing and testing social science theory and method with historical data.”

A young historian of Latin America replied flatly: “As a social scientist,” but then added this comment on historical method that few humanistic historians would disagree with:

“I do not think that the method is different, but the application of the method by historians certainly differs from that of, for example, sociologists or anthropologists. I think that the historian is generally more penetrating in his search for evidence, and is more rigorous in his application of the method.”

The correlation of outlook with age and (more particularly) specialty is strong, if not perfect.

As new fields of inquiry flourish within history, the division of opinion is changing, and the genteel poverty of historical researchers may change as well. An increasing number of historians are working in fields that bring them into interdisciplinary research centers and other forms of contact with more favored disciplines. Since they are better financed and equipped than their fellows, they inevitably produce a kind of demonstration effect on the rest of the profession.

Some conclusions and recommendations

“What we are proposing, to both audiences, is a bigger and better bridge.”

Historians—at least many historians—have not yet learned to live with these uncomfortable intruders on a world of art, intuition, and verbal skill. Hence our concern to stress the fact that we speak here for just one branch of the historical profession and that the changes we recommend are complementary to, rather than competitive with, other branches of historical scholarship. Social scientific research will make history richer, more exciting, more valuable, more relevant (that much overused word!) to contemporary concerns and problems. But it is not alone in possessing these merits, and much of what it has to contribute is dependent on its incorporation within the discipline of history. The flow of knowledge and insight here runs two ways. History has always been a borrower from other disciplines, and in that sense social scientific history is just another example of a time-honored process; but history has always been a lender, and all the social sciences would be immeasurably poorer without knowledge of the historical record. The social sciences are not a self-contained system, one of whose boundaries lies in some fringe area of the historical sciences. Rather the study of man is a continuum, and social scientific history is a bridge between the social sciences and the humanities. What we are proposing, to both audiences, is a bigger and better bridge.


The following recommendations sum up those offered throughout the report:

  1. That departments of history diversify and enrich the present program of instruction: by building more courses around analytical themes (war, population, urbanization, etc.); by providing training in the techniques and concepts of social science (including quantitative methods and computer analysis); and by adding to the instructional staff, on a part-time and full-time basis, specialists in these techniques and concepts. Training in these areas should be required of those students intending to specialize in social scientific history; but all history concentrators and graduate students should be required to do a substantial portion of their work in some other discipline or disciplines.
  2. That universities and colleges, with the support of public and private funding agencies, increase the support available for graduate study in history to a level commensurate with that found in the other social sciences. In particular, support is needed for the extra time required for training in related disciplines and quantitative techniques; for the application of these methods in research (equipment, computer time, photographic work); and for a more flexible arrangement of field research.
  3. That departments of history organize a substantial part of graduate education, for those students who desire it, around the continuing workshop-seminar. Such a seminar would be an analogue to the teaching laboratory of the natural sciences. It would have its own premises, its own specialized library and store of research material, its own research equipment, and it would unite faculty, staff, and students in a changing variety of individual and team research projects built around a common interest, more or less broadly defined. The members of such a seminar could also serve as the staff for undergraduate courses in its area of interest, thereby gaining experience in teaching as well as research.
  4. That universities and colleges, with the support of public and private funding agencies, make it easier for historians to continue learning and research after the doctorate. Specifically, we recommend a loosening of leave arrangements to allow for both shorter and longer leaves than those currently permitted; the establishment of the postdoctoral research appointment as a normal option for the first step on the academic ladder (as it already is in other disciplines); a program of retraining grants in combination with the establishment of interuniversity training institutes in fields important to historical research (statistics, computer programming, psychoanalysis); and increased support for such research centers as the Center for Advanced Study in the Behavioral Sciences and the creation of new ones, both in this country and abroad.
  5. That universities and colleges, with the assistance of public and private funding agencies, promote those forms of cooperation that will enrich their programs of instruction and facilitate research: specifically, interuniversity research consortia, conferences, discussion groups, collaborative teaching, joint degrees, division of labor in the acquisition of equipment and materials. All these forms of cooperation are already established in various places on both an ad hoc and standing basis; but there is still a great deal that can be done, particularly on an international level.
  6. That public and private funding agencies promote cooperation between American and foreign historical scholarship by linking counterpart grants for foreign scholars to American travel stipends; and that American scholars working abroad similarly promote cooperation where possible and appropriate by affiliating themselves with foreign academic institutions, by involving native scholars in their research, and by communicating their techniques and findings to the scholars and students of the host country.
  7. That the federal government commit itself to the maintenance and growth of our major libraries and archives as a precious national resource; and that it develop additional regional libraries so that colleges and universities throughout the country will be within convenient reach of a major repository. Further, the federal government should finance the preparation of a machine-readable union catalogue of all library holdings in the United States, including eventually articles in periodicals and collective works—entries to be retrievable by subject as well as by author and title.
  8. That libraries, archives, and museums widen the range of their collections to include those everyday records and artifacts that are now disposed of and destroyed but that will one day be the staple source material of social scientific history; that historians join with librarians and curators in developing a program for the systematic collection, storage, and retrieval of such material; and that public and private funding agencies finance both the planning and implementation of this expanded curatorial function.

This program of reform would obviously open paths to social scientific work in history which simply do not exist within the traditional confines of historical teaching and research. More important, it would enrich the study of every variety of history.

 


This essay consists of excerpts from the report of the History Panel of the Behavioral and Social Sciences Survey. These selections are reprinted (with minor changes) by permission of the publisher from History as Social Science, edited by David S. Landes and Charles Tilly, to be published on March 19, 1971 (Englewood Cliffs, NJ: Spectrum Books, Prentice-Hall, Inc.). Copyright © by Prentice-Hall, Inc., 1971.

This essay originally appeared in Items Vol. 25, Issue 1 in the spring of 1971.

Science, these days, is political. Few people would disagree with this sentiment. And, yet, this represents a conundrum: many also would agree that science is supposed to be value free—objective, and certainly independent of political influence. In what ways does politics influence scientific knowledge, and why does this influence occur? This article sets out to answer these questions, providing an overview of various political influences on the production, communication, and acceptance of scientific knowledge. The potential scope of such a discussion is admittedly very broad. To provide a detailed accounting of politics and science within the bounds of this research article, the focus is largely restricted to the U.S. context. Focusing on one nation has the advantage of allowing for an integrated discussion of relevant actors in society—scientists, government officials, journalists, and the broader public—who react to one another, as well as to their shared history, as they shape scientific knowledge.

The structure of the essay is as follows. It begins by offering a definition of the politics of scientific knowledge and then proceeds to explain why science, which is supposed to be value free, is so often imbued with political meaning. The remainder of the essay discusses four groups of actors and the distinct ways in which they influence scientific knowledge at its various stages: government officials; scientists; journalists; and the public. This article aims mainly to provide a dispassionate, objective description of political influences on scientific knowledge. That said, at points the essay indicates where normative theorists generally endorse or denigrate political influences on scientific knowledge, and the end of the essay takes up normative questions in a more direct manner.

Defining “the Politics of Scientific Knowledge”

It is important to understand what is meant by the politics of scientific knowledge. This term (and closely related ones, such as the politics of science) carries a wide range of meanings—and not only because scholars disagree over how to define “politics” and “science.” Unless one intends to signal any way in which politics and science intersect, greater specificity is needed. The word politics in the above phrase could be understood as a noun (i.e., particular types of activities engaged in by scientists) or an adjective (i.e., an attribute of science). Further, the adjectival use could imply that science is an instrument used to influence politics, is influenced by politics, or simply has political implications or effects (Brown, 2015, p. 6).

This essay discusses one piece of the politics of scientific knowledge. Building on a framework introduced in Suhay and Druckman (2015), it focuses on political influences on the production, communication, and acceptance of scientific knowledge (and not the reverse causal relationship—scientific influences on politics). Politics, here, is not used in the broad sense of power relations in society. Rather, it is used in the more formal sense: describing government actors, government activities, and—among the public—both preferences and actions related to how government should be structured and what it should do. Scientific knowledge means conclusions drawn by scientists from systematic empirical study in their areas of expertise that are formally communicated to the scientific community (and, normally, the public).

This article focuses on scientific knowledge, as opposed to the scientific process that produces it, for two linked reasons. As noted in the next section, science is political because it is powerful, and its power ultimately rests in the knowledge it produces, its epistemic authority (Douglas, 2009). For this reason, discussion of political influences on science that are not associated with concern over the topics studied by scientists and/or the conclusions they draw with respect to those topics is largely avoided. A prominent example of political influences on science that largely fall outside of this purview would be efforts by government actors to ensure the integrity of the scientific process among scientists who hold government grants (see Guston, 2000). But focusing on scientific knowledge does not solely narrow our purview; it also allows us to extend beyond the formal products of the scientific process to examine how those products are communicated to others and how they are understood by nonscientists, the lay public.

Finally, it is important to differentiate the definition of the politics of scientific knowledge employed in this article from a related concept commonly discussed today—the politicization of science. Numerous definitions of this term have been used in published work (e.g., Bolsen & Druckman, 2015; Fowler & Gollust, 2015). While the precise definitions vary in their details, they overlap in asserting that some actors have purposely imbued a scientific topic with political meaning and that this outcome is normatively problematic. The topic of this article is broader, encompassing unintentional actions (such as motivated cognition, bias that often operates below the level of conscious awareness) as well as political influences on scientific knowledge that are welcome from a democratic perspective. Of course, the term politicization is used when relevant.

Why Do Politics Influence Scientific Knowledge?

Many philosophers argue that understanding and describing real-world phenomena as they exist is a different endeavor from advocating for a particular phenomenon. In other words, “is” should not be confused with “ought” (Hume, 2000). Believing that one can deduce what should be done from what is true is called the “naturalistic fallacy” (Moore, 2004). Given that science is the province of “is” (or fact) and politics of “ought” (or values), this suggests that science and politics should not mix.

Yet they do, even when science proceeds in an objective manner. While scientific knowledge cannot directly dictate values, it can indirectly bolster or undermine them. Scientists often investigate phenomena thought to be problematic, raising the question of how something has come to be defined as a problem. What values and/or whose interests are threatened? Given limited resources, scientists cannot investigate all problems, which raises the further issue of whose problems are considered (by scientists or their sponsors) worthwhile enough to pursue. In addition to influencing which scientific studies are carried out, values are also involved in translating scientific findings into societal action. Values influence when evidence for a specific threat is deemed sufficient to justify action. Finally, scientists inevitably advance certain preferences and interests at the expense of others when they attribute blame for a problem to specific individuals or groups as well as when they argue certain corrections are more promising than others. In sum, for a variety of reasons, scientific study to determine what “is” is often intertwined with “ought” (see Douglas, 2015; Jasanoff, 2012).

The general nature of this relationship between societal values and science is not country specific; however, in the United States, it has become stronger and more formalized over time as the federal government has increasingly recognized the importance of incorporating scientific knowledge into policymaking. The government’s reliance on scientific advice grew rapidly in the 20th century. By the end of the century, science advising could be considered a “fifth branch of government” (Jasanoff, 1990). In turn, the U.S. government has sought to foster scientific study thought to be in the public interest. This special status of science within the U.S. government not only exemplifies the critical role such knowledge plays in many policy decisions, it also represents a new locus of power over government action.

With science’s role in studying societal problems and influencing collective action (including government action) now in view, it becomes easier to understand why so many actors wish to influence scientific knowledge, in ways that often go far beyond what philosophers of science find appropriate (e.g., see Douglas, 2009, 2015). As Suhay and Druckman (2015) write, “individuals with strong convictions regarding which societal goals are most important and how those goals ought to be achieved … have an interest in what is accepted as ‘fact’” (p. 8). This interest sometimes motivates problematic attempts to get science on one’s side—such as miscommunicating scientific findings to shore up an argument in a public forum or simply resisting new scientific information that undermines a strongly held viewpoint. As indicated, not all attempts to influence scientific knowledge are normatively objectionable, however. For example, a citizen group concerned over a particular problem may try to influence the scientific agenda such that a solution might be discovered (Bucchi & Neresini, 2008). Here, individuals wish to direct the topic of study but do not wish to bias the resulting knowledge. Below, we discuss these—and many other—examples of political influences on scientific knowledge.

Government Influences on Scientific Knowledge

Government officials and the policies they create are the most obvious place to look for political influences on science. Sometimes officials act in the pursuit of personally held values. More often, their actions are driven by the preferences of colleagues, interest groups, and constituents. In other words, government policy is a route through which a variety of actors directly and indirectly influence scientific knowledge. This section first provides a brief history of the relationship between the U.S. government and scientists as well as the origins of contemporary left-right disagreements over the value of government-sponsored science. Then, in keeping with the remainder of the essay, it discusses in a more fine-grained manner the specific ways in which government actors can influence scientific knowledge—in terms of the agenda that establishes topics of study, the actual doing of science, and the communication of scientific knowledge.

A Short History of (20th Century) Government-Science Relations

While the U.S. government employed scientists in many capacities in the 19th and early 20th centuries, the close relationship between government and science we know today was forged during and shortly after World War II (Douglas, 2009; Kevles, 2006). President Franklin D. Roosevelt greatly expanded the role of the federal government, and this expansion included increased attention to scientific research and training (Kevles, 2006, p. 768). This greater focus on science under Roosevelt would expand dramatically during World War II, when the United States found itself greatly in need of technical assistance for the war effort—not only for the purpose of creating armaments, but also for developing new medicines and information-gathering technologies (Douglas, 2009). Large numbers of scientists were hired to work directly for the government in government labs and indirectly via contract research grants. New government-science institutions were created, including the important Office of Scientific Research and Development, which coordinated many scientific endeavors that supported the war effort. Scientists also began to play a greater role in advising government officials, particularly the president. The most influential of these advisors was Vannevar Bush, who not only advised President Roosevelt but also was one of the primary architects of the new government-science institutions of the period (Douglas, 2009; Guston, 2000).

After World War II ended, the United States found itself with a greatly expanded scientific capacity but a less-than-clear scientific mission. Government officials and scientists embarked on an intense period of collaboration in repurposing this capacity for a post-war world. While scientists generally were eager for the close relationship between government and science to be made permanent after the war, they resisted continuing the hands-on government control that had been necessary in wartime. In Science: The Endless Frontier—technically a report to President Truman, but widely read—Vannevar Bush (1945) advanced a vision of government-science relations popular among scientists. This report argued that science is essential to the public welfare, but that scientific productivity is best ensured by preserving scientists’ autonomy—specifically, by investing in independent colleges, universities, and research institutes carrying out basic research. Bush writes: “Scientific progress on a broad front results from the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown” (p. 7). The advice of Bush and his compatriots was heeded to a considerable degree. As detailed by Guston (2000), a social contract for science emerged. Trusted in large part because of their essential contributions to the war effort, scientists were given a great deal of deference, autonomy, and funding.

In the years following World War II, many new federal government institutions were created to both sponsor and oversee research. Prominent examples included the Office of Naval Research (founded in 1946), the Atomic Energy Commission (1947), the Research Grants Office of the National Institutes of Health (1946), and the National Science Foundation (1950). The role of scientists as policy advisers would become more formalized during this period as well. The most prominent of the new advisory groups was the Science Advisory Committee, initiated by Truman in 1951, which would provide advice to the federal government, especially the President (Douglas, 2009; Kevles, 2006).

Federal science would continue to expand at least through the 1970s, responding to national and international events. The creation of the National Aeronautics and Space Administration (NASA) in 1958 and heavy investment in the space program were a direct response to Sputnik, a satellite launched by the Soviet Union, within the context of the Cold War. The Environmental Protection Agency (EPA) (1970) was a response to a new environmental movement among Americans. These expansions were joined by even greater reliance by government officials on scientists—with respect to “virtually every technically related area of government policymaking” (Kevles, 2006, p. 769). This growing role for science in government was supported by both Democrats and Republicans in government, a consensus that largely held until the end of the Cold War (Kevles, 2006).

Confidence in government-sponsored science among political leaders on the right and left began to decrease in the 1970s. It was in this era that science began to be politicized in a manner we are familiar with today, and the elevated status of scientists that had brought them considerable autonomy (in addition to esteem) was diminished. In a sense, government-sponsored science would be a victim of its own success. Scientists were playing a greater role than ever before in directing government policy. In addition, new medicines and technologies were being developed and brought to market at a rapid pace. In both capacities, science was becoming increasingly intertwined with Americans’ lives (Kevles, 2006). All manner of interest groups and activists took notice of the power of science and technology, publicly lauding scientific reports that confirmed their perspectives, opposing those that undermined them, and in some cases opposing the reach of science and technology, period. All of this fed scientific controversy (Kevles, 2006; Nelkin, 1995).

The slow fracturing of the bipartisan consensus on government science was uneven, however. Those on the right eventually became far more critical of government-sponsored science than those on the left. This difference is best understood within the context of growing ideological differences between the two parties. During the 1960s and 1970s, the Democratic Party shifted to the left, increasing its support for government intervention in a range of issue areas, most prominently civil rights and the environment (Noel, 2014). In supporting the government’s work in these and other areas, the left almost necessarily supported the technical experts on whose knowledge government action was based (Kevles, 2006). In reaction to the Democrats’ leftward shift (and to the growing power of the federal government in general), the Republican Party moved in the opposite direction, becoming increasingly conservative and opposed to federal regulation (Noel, 2014).

The challenge to government science from the right would take two forms. At the least, conservatives argued, government funding for science should decrease in an effort to control the federal deficit. As conservatives had argued in the past, government-funded science should be limited and practical in nature; government funding of basic research was largely superfluous. Private funding of science should replace much federal funding (Kevles, 2006, p. 772). A newer argument would emerge, however, that was more damaging to the scientific endeavor and a direct response to science’s hand in federal power: conservatives seized on the idea that they could challenge government’s increasing intrusion into Americans’ lives by challenging the science on which it was based (Jasanoff, 2012, pp. 12–13; Kevles, 2006; Oreskes & Conway, 2010).

The end of the Cold War was perhaps the nail in the coffin for conservative support for large-scale, government-sponsored science. For many Republican members of Congress, the worth of scientists rested largely in their ability to counter foreign threats, as they had during World War II and the Cold War; with the fall of the Soviet Union, this impetus disappeared (Kevles, 2006). Republicans’ ability to act on this increasing skepticism of government-sponsored science would also grow in subsequent years due to electoral successes at the congressional and presidential levels.

How Government Actors Influence Scientific Knowledge

With this brief history in place, let us discuss in more detail some of the ways in which government actors influence scientific knowledge—scientific agendas, discoveries, and communication. Before beginning, it is important to recognize that both the President and Congress have the ability to substantially influence government-sponsored science. The President is the formal head of the bureaucracy and exercises power through appointees as well as direct executive actions, such as executive orders. Congress writes the legislation that shapes the parameters of bureaucratic institutions, controls the purse strings of those institutions, and exercises oversight (Lowi, Ginsberg, Shepsle, & Ansolabehere, 2014).

It is no secret that the U.S. government plays an enormous role in setting the scientific agenda of the nation. It does so in large part through the expenditure of research dollars, much of which is distributed through grants. The government’s most powerful agenda-setting tool with respect to scientific knowledge is simply its ability to drastically expand or contract research funding in general. Yet, despite this blunt power over the overall growth or contraction of scientific knowledge, it is rarely exercised. Sarewitz (2013) shows that the relative size of the Research and Development (R&D) budget remained remarkably stable in the years following World War II and particularly since the 1970s. Since that time, total R&D (including military and nonmilitary) has ranged from 13 to 14% of discretionary spending. In constant dollars, the amount of federal science spending has increased during this period, but this increase has been in concert with an increase in federal spending overall. This stability stems at least in part from an institutional quirk of the U.S. budgeting system for science: it is highly decentralized, thus resisting strategic planning by ideological presidents or members of Congress (see Sarewitz, 2013). The one clear aberration in the overall size of the R&D budget over the last six or so decades occurred in the 1960s, when nondefense R&D briefly doubled as a percentage of nondefense discretionary spending. This occurrence is linked to one very expensive priority of the Kennedy Administration, however: sending people to the moon (Sarewitz, 2013, p. 15).

Presidential administrations and members of Congress tend to exercise their budgeting powers by setting priorities within a relatively stable budgeting pie. The most obvious shift in priorities appears when comparing funding for NASA with funding for the National Institutes of Health (NIH). Even after the Apollo mission to the moon had concluded, the NASA budget far exceeded the budgets of other science-related agencies. However, the NIH reached parity with NASA in the 1980s, and, today, its budget is 2.5 times that of NASA. NIH currently has a budget of $30 billion, with 80% of those funds awarded through grants to nongovernment researchers (National Institutes of Health, 2015). Other institutions, such as the Department of Energy, have seen their budgets rise and fall as well (Sarewitz, 2013, p. 16). To some extent, these changes carried out by government officials reflect the public interest: a growing, then waning, perceived need to compete with the USSR; greater desire to devote monies to public health as medical technologies continue to advance; greater, and then lesser, worry over access to energy.

However, agenda setting by government officials is not only due to widespread public concern. Interest group lobbying—carried out on behalf of particular groups of researchers, industries, and private citizens—can noticeably influence the size of specific government-science institutions’ budgets as well as the subjects to which they allocate funds (Greenberg, 2001). The concerns of individual members of Congress also can play a role. For example, beginning in 2009, Republican members of Congress sought to reduce social science funding within the National Science Foundation (NSF) and sought to completely eliminate Political Science funding (see Coburn, 2011; Sides, 2015). The efforts, spearheaded by former Senator Tom Coburn, did result in reduced spending in these areas. While part of a general crusade against wasteful spending, Coburn’s intense interest in eliminating Political Science funding in particular suggests some idiosyncratic personal beliefs were at play. As Sides (2011) points out, the Political Science program Coburn sought to eliminate cost only $5 million (around 0.1% of NSF’s multibillion-dollar budget), hardly a big contributor to the deficit.

These decisions to increase or decrease government investment in particular research agendas have an enormous influence on where scientific discoveries occur. For example, recent increases in government funding of biosciences via the NIH have led to greater expertise in these subjects and many important discoveries, such as our still rapidly growing understanding of the intricacies of the human genome. Such funding also has important ripple effects. Graduate students are trained, and researchers build expertise they continue to draw on even after their grants expire. Furthermore, the excitement surrounding such discoveries, and the research dollars attached to them, attract the attention of scholars in other disciplines. Not only are there more people working within the biosciences today due to increased government funding, but also a range of other disciplines—including in the social sciences and humanities—have begun to incorporate biological approaches into their work (e.g., see Krimsky, 2013).

Agenda setting is just one aspect of government influence over scientific knowledge—one that (depending on the reason for influence) need not necessarily be worrisome. More problematic are efforts by government actors to change the conclusions and public reports of scientists working on behalf of the government.1 Jasanoff discusses how the regulatory process in particular is vulnerable to political influences, as scientists within agencies must meticulously deconstruct knowledge claims to examine their strength and certainty, which invites politically motivated arguments over the strength of the evidence for or against a particular policy (Jasanoff, 1987). Although political meddling in regulatory science and other technical areas of government occurs with some frequency, the George W. Bush administration stands out as unusually politicized. An extensive investigation carried out by the Union of Concerned Scientists, discussed in two reports (Union of Concerned Scientists, 2004, 2005), found that the Bush administration had been consistently suppressing and distorting research findings at federal agencies on a wide range of topics. Those topics included environmental concerns (climate change, endangered species, forest management, strip mining); health concerns (HIV/AIDS, breast cancer, contraception, abstinence-only education); and the war in Iraq (whether the Iraqi government was building weapons of mass destruction).

Presidential administrations are not the only ones who try to influence the conclusions of government-sponsored research. When faced with a federal agency generating inconvenient scientific conclusions, members of Congress may threaten to decrease or eliminate an agency’s funding or, short of that, conduct hearings or subpoena information in an effort to discredit or harass scientists. For example, at the time of this writing in late 2015, Rep. Lamar Smith (R-Tex)—a well-known “climate skeptic” and also chairman of the House Committee on Science, Space and Technology—had recently issued a subpoena for internal deliberations of scientists working for the National Oceanic and Atmospheric Administration (NOAA) who had worked on a well-regarded study published in Science that refuted claims that global warming had slowed in the preceding decade. While Smith stated that his intentions were to investigate whether scientists had rushed the study and published it despite important flaws, many scientists and administrators, including the head of NOAA, interpreted the subpoena as politically motivated, with the goal of intimidating scientists (Rein, 2015a, 2015b). This pattern of unusually aggressive interference with, and skepticism of, government-sponsored scientists by Republican leaders in recent decades—particularly surrounding the issue of climate change—has led some to declare that there exists a Republican war on science (Mooney, 2005; also see Kolbert, 2015; vanden Heuvel, 2011).

Political influences on science in the federal government extend to science advising as well. From the politician’s perspective, science advisers serve two functions: to help him or her make quality policy decisions (evidence-based decisions), and to provide justification for already made policy decisions to colleagues, the media, and the public (what we might call, more critically, decision-based evidence). Befitting the post-war bipartisan consensus regarding scientists’ important contributions to the public welfare, Presidents Eisenhower and Kennedy both emphasized the first role, listening intently to their science advisers (Douglas, 2009). Yet, throughout history, “both presidents and Congress latched onto technical views that suited their political purposes” (Kevles, 2006, p. 762). Nixon famously disbanded the President’s Science Advisory Committee when its members refused to rubber-stamp his war-related initiatives (Kevles, 2006, p. 763). More recently, the George W. Bush administration continually applied political litmus tests to appointees on advisory committees, thus ensuring ahead of time that those advisers would be on the President’s side (Kevles, 2006). Congress competes in the contest for science-backed credibility as well, with Democrats and Republicans cherry-picking scientists based on known perspectives to appear as supportive experts in Congressional hearings. Because of these behind-the-scenes efforts to control which scientific perspectives are expressed in public forums, even politically neutral scientists who agree to speak in such venues can add fuel to the fire of political debate, particularly where value differences between opposing sides are high and scientific certainty is low (Pielke, 2007).

The Influence of Political Values on Scientists at Work

The previous section discussed various ways in which government institutions and actors influence scientific knowledge. However, it should not be presumed that scientists themselves do not also have political commitments that may influence their work. This said, writing about such influences is a challenging endeavor. The process of research usually is not observed by anyone beyond the researcher or research team, and, even if scientists are observed in action, the observer cannot peer into their minds to understand their thought processes. While it is certainly possible to shed some light on scientists’ likely motivations via empirical research (e.g., observation, personal interviews, textual analysis of notes and publications), conclusions normally must be somewhat tentative.

The field of the sociology of scientific knowledge (SSK) has contributed the most to collective understanding of various influences on scientists’ work, including the topics scientists pursue, the methods they employ, and the conclusions they draw. As one learns in classic works in the field, such as Latour’s Science in Action (1986), creating knowledge bears little resemblance to the overly concise and stylized way scholarly publications portray research. Scientists make critical decisions based on competition with other scientists, power dynamics, and miscellaneous epistemic values,2 such as a preference for novelty, theoretical simplicity, or particular methodologies (also see Douglas, 2009). For the most part, these influences are not political, at least according to the relatively formal definition used here.

Yet, politically relevant values and interests do sometimes play a role in coloring scientific research. These influences can enter research at the agenda-setting stage as well as in the “internal stages of scientific reasoning” (Douglas, 2015, p. 122)—planning and carrying out a study and interpreting its evidence.

To begin, political influences on research agendas are not only produced indirectly through funding. While some scientists pursue subjects out of intrinsic intellectual appeal, scientists’ values often also influence their research agendas to some degree. P. B. Medawar, in Advice to a Young Scientist, insists that scientists must study problems in which “it matters what the answer is—whether to science generally or to mankind” (Medawar, 1979, p. 13). Knowledge that matters to mankind is certainly bound up with values. Some of these values are widely shared and pursued via a range of projects, such as improving humans’ health and happiness via medical or consumer safety research. In other cases, value priorities may differ considerably between individuals, or people may share societal goals but disagree over how best to get there (Rokeach, 1973). Such contested values are apparent in—and divide—the social sciences. For example, it is fairly well known that American sociologists are, on average, considerably more liberal than economists. It is likely that students who are relatively left-leaning are drawn to a field (sociology) explicitly concerned with social ills, such as racial discrimination, whereas students who are relatively right-leaning are drawn to a field (economics) which, at least until fairly recently, held that economic markets are most efficient when they are free of government regulation.

When performing and interpreting a research study, scientific norms dictate the importance of avoiding any direct influence of social or ethical values (beyond those values that outline ethical scientific procedures, such as the treatment of human or animal subjects). This means that scientists should not design studies in a way that guarantees a desired conclusion will be reached. It especially means that, when interpreting evidence, scientists should not allow themselves to be influenced by what they wish the result to be. One’s personal values simply are not appropriate evidence (see Douglas, 2009). Most professional observers of the scientific process would argue that scientists generally strive to adhere to this ethos.

This said, scientists—particularly government scientists working in a regulatory capacity—consistently do (and should) take values into account indirectly in their work (Douglas, 2009). Based on concern over real-world risks, scientists must consider whether the weight of the evidence they have before them justifies an affirmative scientific claim, which may include a recommendation for some type of collective action, or whether more evidence should first be gathered to increase certainty. Given that no scientific study ever claims to have 100% settled an empirical question, scientific uncertainty is a focal point of much political disagreement related to science-backed government policy. For example, some may see a risk, such as children’s ill health due to low-level exposure to lead, as unacceptable and recommend efforts to remove all lead from children’s environments even if the evidence of ill health effects remains uncertain. Others may be more tolerant of such a risk and demand further study to demonstrate more conclusively that the ill health effects of low-level lead exposure are consistent and substantial prior to additional regulation (see Douglas, 2009). The climate change debate offers a different type of example. Most climate scientists argue that there is enough persuasive evidence for the catastrophic effects of climate change that steps must be taken immediately to counteract it. A handful of climate scientists, in many cases connected to industries that stand to lose a great deal if their productivity is curbed by government regulation, have argued that the science is as yet too uncertain to impose the costs of regulation on American industries and consumers (Oreskes & Conway, 2010).

Thus far, only explicit (or conscious) political influences on scientists have been described. Such influences play a role not only in scientists’ decisions regarding their field (and topics) of study but also in their assessments of whether the strength of evidence justifies a public conclusion and perhaps a recommendation for societal action. This said, borrowing from research in psychology and the sociology of scientific knowledge, politically relevant values and interests likely also influence the doing of science—designing, conducting, and interpreting studies—to some extent subconsciously, via motivated cognition as well as background assumptions. Motivated cognition involves, in essence, wishing for a particular scientific conclusion (or fearing one) due to values or interests, and (unknowingly) allowing this desire to influence one’s interpretation of evidence. Motivated cognition consists of two key behaviors: increased skepticism of evidence that undermines one’s point of view, and searching the information environment, or one’s memory, for facts that bolster one’s perspective (Lodge & Taber, 2013). While it appears as though experts are less likely than others to engage in this style of thinking (Kahan et al., 2016), it likely exists in some form among scientists. Background assumptions operate differently. These are not values but, rather, factual beliefs about the world that are taken for granted. Such perceived facts unavoidably differ among people, leading Barker and Kitcher (2014) to avoid calling them a “bias.” Instead, all knowledge is “situated” in light of the unique perspectives of the observer. Background assumptions may have a political flavor when they take the form of stereotypes of social groups or other distinct perceptions of the world that stem from a person’s socioeconomic status or political alliances. Such assumptions can influence scientists’ work by making it easier to “see” evidence that fits an expected pattern (Barker & Kitcher, 2014).

Below, these influences—motivated cognition and background assumptions—are discussed within the context of scientific debates over biological influences on human characteristics and behaviors. The study of biological inheritance (i.e., genetics), in particular, has strong political implications (see Suhay & Jayaratne, 2013 for an overview), meaning that scientists’ own political commitments may influence their work more here than with respect to other topics. While this makes it easier to spot cases of bias, it is important to note that the examples described below are likely not representative of scientific research generally, including within the biosciences today.

Prior to World War II, the work of U.S. (as well as many British and European) geneticists appeared to be influenced both by motivated reasoning and by problematic background assumptions. These scientists, most of whom were upper-class, Christian men of Western European descent, made a number of important discoveries related to genetic inheritance but also advanced a related set of claims that have since been discredited by biologists. The most notorious of these was the belief that a wide range of people—southern and eastern Europeans, Africans, Asians, Jews, women, the poor, those with addictions, and the mentally ill—were genetically inferior to people like themselves. It is not an exaggeration to say that many of these scientists advocated eugenic practices—many of which were acted upon by the U.S. government—including forced sterilization of some individuals and greatly reduced immigration (Beckwith, 2002; Kevles, 1985; Paul, 1998). These early geneticists did not appear to be consciously skewing their conclusions for political reasons, however. In an era of rising inequality and immigration, “the raison d’etre of the eugenics movement was the perceived threat of swamping by a large class of mental defectives” (Paul, 1998, p. 125). Problematic background assumptions about people with whom they had little interaction—people from different social classes and nations and of different races and ethnicities—appeared to be influencing these scientists. Many geneticists of the era genuinely believed that large swaths of the masses were mentally disabled and, in having many children, threatened to diminish future Americans’ wellbeing.
Barker and Kitcher (2014) also suggest that motivated reasoning influenced the geneticists’ extreme conclusions: “Historically, research aimed at finding innate biological differences that underlie and explain existing social inequalities has enjoyed intense interest and often won acclaim … Bad or sloppy science may be tolerated if it leads to comfortable conclusions” (2014, pp. 107–108). Whatever the reasons for their conclusions, the American eugenics movement caused substantial human suffering in the United States and perhaps beyond. While historical counterfactuals are impossible to trace with certainty, it is well established that Hitler drew heavily on the ideas of both American and British geneticists and eugenicists when formulating Nazi racial ideology (Black, 2003; Kühl, 1994).

After World War II, in the wake of the Holocaust, the ideological ground surrounding biological research shifted considerably. For example, Provine (1973) documents how geneticists “changed their minds about the biological effects of race crossing” (i.e., miscegenation) after the war even though the store of relevant scientific evidence on the subject (there was, in fact, very little) had not changed. Before the war, many geneticists had warned that the offspring of two parents of different “races” would likely exhibit physical defects; after the war, genetic scientists reversed this claim, arguing that defects were highly unlikely. Segerstrale (2000) describes a general post-war taboo on using biology to explain human behavior because of concern that such theories could be used to justify prejudice and discrimination against vulnerable groups in society. Those scholars who did make claims about biological influences on human characteristics and behaviors, such as famed sociobiologist E. O. Wilson, were often met with intellectual attacks whose ferocity suggested motivations that were more than merely academic.

In more recent years, the ideological nature of debates over the origins of human differences has dissipated. But new scientific controversies continue to arise in this arena, and sometimes the political motivations of the actors involved are quite apparent (e.g., see Dreger, 2016). Interestingly, in the contemporary era, the coalitions arguing in favor of nature vs. nurture have been reshuffled to a degree. While the academic disciplines most associated with egalitarian value orientations continue to be relatively pessimistic about research on human genetics (Hochschild & Sen, 2015), empirical findings in support of innate influences on sexual orientation specifically have been eagerly communicated by some socially progressive researchers (e.g., Bailey et al., 2016). As the belief that people are “born gay” has increasingly become associated with tolerance for diverse sexual orientations among the public (see, e.g., Garretson & Suhay, 2016), some academics may be motivated to present such evidence in a favorable light to further advance gay rights (Pitman, 2011; Walters, 2014). In sum, Provine’s conclusion several decades ago, in a very different context, continues to ring true today: “the science of genetics is often closely intertwined with social attitudes and political considerations” (1973, p. 796).

A discussion of scientists’ political biases would be incomplete without mention of the handful of scientists who knowingly distort scientific truths for political ends. By all accounts, the relative number of such individuals is exceedingly small. However, because such scientists tend to be highly outspoken, their small number belies their impact. A remarkable account of one such group is provided by Oreskes and Conway (2010). The authors document the activities of a small cadre of fervently anti-communist and libertarian scientists who knowingly misled the government, the media, and the public on the science behind a range of topics, from the risks of smoking, to Reagan’s Strategic Defense Initiative, to various environmental concerns (including the ozone layer, acid rain, and climate change). For these individuals, fierce opposition to communism and anything resembling it (i.e., government regulation) justified lying. Their strategy has been to attack any science they do not like as “junk science”—as science either driven by politics or full of mistakes (or both). In some cases, they have trumpeted dubious studies produced by themselves or their allies. Because these individuals are accomplished scientists (usually in fields other than those they critique, however), they are trusted. These unscrupulous scientists have played a key role not only in influencing government policy but also in fostering the undeserved public perception that much government-sponsored science is biased (Oreskes & Conway, 2010).

Public Engagement with Science

A broader view of scientific knowledge considers the communication of scientific knowledge to the public, public perceptions of what is (or is not) settled scientific knowledge, as well as ways in which the public itself can influence the production of scientific knowledge.

The (Political) Science of Science Communication

It is difficult to separate the public’s understanding of science from the communication of science. For nonscientists, science is a mediated reality. “Their exposure to science and scientists … is not a direct one, but indirect through mass or online media” (Scheufele, 2014, p. 13587). While many types of people engage in science communication via media (including scientists themselves), this section focuses on science communication by entities that politicize scientific communication with some frequency: journalists and interest groups.

Science journalism began in earnest in the United States between the world wars (Lewenstein, 1995; Weingold, 2001). The field’s focus has long been simply to translate scientific findings for the general public, with the added aim of clarifying how those findings may be—or may become—relevant to lay people’s lives (Lewenstein, 1995). In this latter sense, science reporting has long had a value dimension. This said, science reporting has recently become more explicitly politicized, for several reasons.

First, the ranks of science journalists have been thinning due to shrinking news budgets and associated newsroom cuts. As a result, when scientific topics are covered, they are often covered by nonspecialists, including political reporters and columnists. These individuals are more likely to frame scientific issues in a political manner (Nisbet & Fahy, 2015).

Second, over the last several decades, controversy has become a craft norm of the news media (Weingold, 2001). As with other media stories, framing scientific findings as politically controversial increases audience interest. Examples stretch far beyond the well-known case of climate change reporting to include medical scientists’ health recommendations (Fowler & Gollust, 2015) and genetic discoveries (Garretson & Suhay, 2016), among others. Where scientific knowledge is contested among scientists, emphasizing controversy may be even more advantageous for journalists. In such cases, most journalists are unlikely to understand the scientific or technological issue well enough to judge which claims in a scientific debate are well founded and which can be safely ignored (or have been debunked). Further, in covering the controversy rather than adjudicating between competing claims, journalists often are trying to appear objective, in the sense of a balanced presentation of all sides of a debate (a long-held craft norm). Of course, when the view of a minority of scientists is presented in media reports as just as credible as that of an overwhelming majority of practicing scientists, public perceptions of the current state of scientific knowledge are greatly distorted (Oreskes & Conway, 2010, p. 243).

Yet a third way in which politics can influence science reporting is less well known to those outside the media. Scheufele (2014) describes behind-the-scenes strategic efforts by a variety of policy stakeholders—including interest groups, corporations, scientific associations, and others. These groups compete for access to the news agenda and, not surprisingly, work hard to ensure that their science or technology issue of interest is framed in the way they want (Nisbet & Huge, 2006). One method of gaining access to the news agenda under favorable terms is to provide information subsidies to news organizations (Weingold, 2001, p. 181). In a striking parallel to the influence of lobbyists on Capitol Hill (see Drutman, 2015), perpetually rushed journalists are sometimes relieved to be able to draw heavily on a press release provided by an interest group.

Fourth and finally, politics sometimes also enters science reporting simply due to the political goals of a particular reporter or news outlet. Partisan news outlets have flourished amidst the fragmentation of the media (Levendusky, 2013; Stroud, 2011). Such outlets are certainly less interested than others in neutral reporting. A content analysis of climate change coverage on several cable news channels (Fox News, CNN, MSNBC) between 2007 and 2008 demonstrated that Fox was more dismissive of climate change and interviewed more climate change doubters than the other cable channels (Feldman, Maibach, Roser-Renouf, & Leiserowitz, 2012). This said, note that political bias practiced by such outlets is not necessarily carried out by presenting falsehoods. Rather, politically motivated news outlets and journalists may cherry-pick the studies they discuss—only reporting ones with results that support their perspective—or express greater skepticism of studies that undermine their perspective.

All of these political influences on science journalism shape public understanding of science in some way, of course. Even when reporters themselves have no political agenda, simply alerting media consumers to the fact that there are different sides to a debate (and clarifying which political values or identities are associated with which side) will tend to encourage motivated cognition in the public. Where scientific knowledge has identified a threat to the public that justifies government intervention, politicized reporting also has a status quo bias. For example, Fowler and Gollust (2015) found that when media coverage of the HPV vaccine emphasized political conflict over its use, support for the vaccine and for a state immunization program decreased. Covering the controversy also tends to erode public trust in scientists, as their motives are implicitly portrayed as political (Fowler & Gollust, 2015). More marked political biases influence public understanding of science in predictable ways. The previously mentioned study of cable news climate change coverage also found that Fox News viewers were less likely to believe in climate change than viewers of other channels, even after controlling for possible confounds (Feldman, Maibach, Roser-Renouf, & Leiserowitz, 2012). A follow-up study suggests that this media influence was mediated by changes in viewers’ trust in scientists (Hmielowski, Feldman, Myers, Leiserowitz, & Maibach, 2014).

Finally, those who wish to communicate about science do not need to rely on journalists. Oreskes and Conway describe a number of such direct efforts, including a pamphlet called “A Scientific Perspective on the Cigarette Controversy” sent to 176,800 American doctors in the 1950s. The intellectually dishonest pamphlet, funded by the tobacco industry, challenged existing evidence that smoking causes cancer so that doctors would not recommend that their patients quit smoking (Oreskes & Conway, 2010).
