Recently, one of the departments on my campus invited an academic “expert,” who, among other specializations, “advise[s] on the ethical aspects of telescope siting,” to give a talk entitled “How Research Harms.”
The advertisements for the event summarized the speaker’s perspective with the declaration, “We ought to be restricting research based on a number of unique and underacknowledged harms … [which] are poorly understood and lack clear definitions.” Prominent among these “harms” are unspecified “psychological, social and moral hazards.”
This is but one example of a growing phenomenon in higher education. The perspective in question—that some sizable quantity of scientific research is causing undefined harms and must therefore be prevented on ethical grounds—has become widespread. It marks a significant departure from an earlier academic culture that celebrated the open-ended pursuit of truth as the fundamental value of higher education.
This transformation has been underway for some time. The creation of the IRB (Institutional Review Board) system marked the first stage of the effort to exert overarching ethical control over scientific and academic inquiry. When, two decades ago, I arrived on the campus where I am employed, I was informed by our IRB that any research I would do, as well as any classroom exercises involving human subjects, must protect participants from “a variety of types of risks.”
Of course, projects that expose subjects to real physical, legal, or financial harms should be carefully vetted. But chief among the risks from which human subjects must be protected, according to the IRB documents I received, was a broad range of “psychological risks,” including “stress, embarrassment [and] boredom.” How would it be determined that a given research project could produce “stress” in subjects, and how much “stress” was acceptable? The self-appointed experts on the IRB would make the call.
Yet the very justification for IRBs’ existence is based on ethical exaggerations and falsehoods about abuses. A founding document for the creation of IRBs, the Belmont Report, presents “respect for persons,” “beneficence,” and “justice” as the three ethical principles that must guide scientific research. As an exemplary violation of these principles (and therefore as justification for systematic ethical oversight of research), the report cites the Tuskegee study of black syphilis patients, which took place between the early 1930s and the early 1970s. The purpose of that research was to investigate potential racial differences in syphilitic infection, with the goal of improving outcomes in the affected group.
Historically exaggerated and outright inaccurate claims about the Tuskegee project have become common knowledge. The anthropologist Richard Shweder wrote a remarkable analysis of the facts of the case showing that much of what is claimed about it is distorted or wrong. It is true that the subjects were treated paternalistically, but such paternalism was, in that era, an ingrained element of the doctor-patient relationship in general. It was not a simple residue of racism against blacks.
These suffering men were denied treatment, critics claim. But effective treatment—penicillin—did not become the medically established norm until more than 20 years after the start of the study.
Okay, the critics continue, but the fact that these men were not given penicillin even when it was finally available constitutes a grave, objective medical harm. Yet syphilis is curable with penicillin only in its early stages. Administration of that drug would not have cured the patients, nor would it have lowered their contagiousness or improved their symptoms.
A good deal of the narrative about the case, which, again, is presented as an obvious justification for IRB moral policing, proceeds from basic ignorance about the normal, natural course of syphilis, charged up with largely misplaced ethical outrage.
This does not mean that Tuskegee is not a morally complex case. A large amount of medical research is. Yet ethical hyper-simplicity too often drives contemporary understanding of such cases and fuels the perceived need for draconian restrictions on research.
A central argument in Shweder’s article is that there is no evidence that a 1932 IRB would have found the Tuskegee project unethical. It is only in hindsight, using an ethical framework that is ours alone, that we find it problematic. Why, then, should we be confident that contemporary efforts to restrict research are not overlooking things that will in the future be seen as objectionable? And why should a distorted reading of this particular case assure us that contemporary efforts to limit what can be studied will not morally overreach and prevent useful research?
IRBs are by now just one part of a well-developed academic apparatus for ethically guiding scholarly work and teaching. The shifting of higher education’s raison d’être from the tireless pursuit of truth to the social-justice quest to eliminate all suffering reaches into every corner of institutions. Exaggerated concern for “justice” has spread throughout the disciplines, and the professoriate grows more committed to the new vision of the university every year.
Students, too, are increasingly eager to play a role in the censoring of ideas and academic work that are judged to be potentially harmful. In my class on sociological theory, more than a few students invariably react with outrage when I present to them the evidence that the warping of scientific inquiry to fit moralizing discourses has produced at least as much harm as has ethically problematic science.
I use examples from the Marxist “science” of Lysenkoism, which led to the jailing, professional destruction, and execution of many of its critics, as well as to the deaths of some 15 to 55 million Chinese in the Great Famine of 1959-61.
These students are perfectly confident that their moral desires to prevent exploration of, for example, the possible consequences of human genetic diversity have nothing in common with the Marxian crusade against “bourgeois” science. Yet experts in human genetics have decried moralizing efforts to enforce woke sensibilities on such research and have pointed to immense potential harm in the failure to tailor policy to the search for scientific truth.
The National Institutes of Health (NIH) now limits access to certain bodies of genetic data for projects that (it claims) might produce “stigmatizing” outcomes for some groups. Genetic contributions to intelligence, education level, income, and many behavioral traits such as drug abuse are thus ruled off-limits to study.
Stuart Ritchie, a psychologist who studies IQ and genetics, summarizes the situation pithily: “The NIH allows researchers to use the genetic data they host to do research … but not research that might offend people.”
Does the mere possibility of offense outweigh the possible goods of scientific inquiry? We already know that some diseases are specific to populations with a given ancestral ecological history. Tay-Sachs and sickle cell anemia are but two examples of illnesses concentrated in populations with shared long histories in given ecologies. Knowing who is genetically most at risk for such diseases is of immense value for effective screening.
There is evidence, too, that certain populations face unique health risks in given environments, and knowing the genetics involved will help us prevent and treat related diseases. Perhaps half of all Americans are vitamin-D deficient, but the rate of deficiency is substantially higher among blacks.
Some of this is attributable to diet, but a significant contribution is made by skin-color differences. The darker skin of American blacks evolved under specific environmental conditions, and members of that community now live in an ecologically very different part of the world. Owing to the melanin in their skin, they often do not get enough sunlight to make the necessary amount of vitamin D. Vitamin-D deficiency increases the risk of diabetes, cardiovascular disease, and cancer, and blacks have elevated rates of all three. Vitamin-D supplements for blacks are being investigated as a way to help mitigate these risks.
If susceptibility to disease differs across regional populations because of genetics, why would we assume that those population differences make no contribution to other features of the organisms involved—for example, behavioral predilections and the distribution of psychological traits? Ruling out in advance the study of genetic differences that might affect outcomes across groups is an anti-scientific move. If our goal is to understand what causes specific outcomes in the human world so that we can intelligently intervene to alter them, we should not close down viable modes of inquiry because of self-righteous moralizing agendas. This is a bad moral move in addition to being a bad scientific one.
There are legitimate criticisms to be made of scientific research. Much of what gets published in scholarly journals and then taught in university courses does not replicate. Still more of what happens in the contemporary social sciences amounts to straightforward propaganda. These are real problems. But the idea that scientific research needs to be overseen by social-justice committees trained to find abuse and offense should be denounced as the evident and dangerous foolishness that it is.
Alexander Riley is a professor of sociology at Bucknell University in Pennsylvania.