
Tech megastar Peter Thiel, in a recent interview, said that he no longer trusts science. Lesser luminaries and no small segment of the public have said much the same. The reasons for this distrust are many: fraud (Thiel mentioned a particularly egregious case involving the president of Stanford University); thousands of retracted papers (most justified, some not); massive failure to replicate studies; and weak experimental and analytic methods. Not to mention the authoritarian failure of the medical-science establishment over the Covid epidemic. Many fields, especially in the social sciences, have split into multiple subfields with no mutual contact, so truth has in many cases become relative, different for each bubble. Overall, the pace of scientific advancement also seems to be slowing.
Among the causes of these problems are bad incentives on campus and elsewhere, a system that encourages careerism over love of knowledge, and epistemologically irrelevant race-and-sex policies (aka DEI). Another problem, however, is how basic science is funded.
Here are a few suggestions on how scientists and their proposals for support might be better evaluated.
Science: Applied and Basic
There are two types of science, applied and basic. The two areas often overlap; nevertheless, there is a clear difference between them. Applied science involves projects: plans to achieve well-defined objectives. Biomedicine, forensics, and materials science provide many examples of projects to solve a medical-device problem, discover the cause of a specific phenomenon, cure a particular medical condition, or strengthen an artificial fiber. All engineering is applied science.
If the aim is to improve an alloy, for example, well-established physical and chemical laws and findings are available for the task. Once the objective is known, the course of action in such a case will be pretty well laid out. The questions to be asked and the probable answers are well-defined. A lengthy research proposal is justified. The success or failure of applied projects is relatively easy to evaluate: Does the engine work? Did the creatively designed new bridge hold up in a storm?
Basic science is quite different. Here, the aim is to understand nature through novel insights and the discovery of new phenomena. Topics such as the structure and function of microorganisms, the relationship between different types of plants, and the sources of genetic and behavioral variation, not to mention much of social science, are much more open-ended, with no objective that can be clearly defined in advance.
In applied science, the goal and, for the most part, the questions to be asked are given in advance. But in basic science, the goal is vague, and the questions to be asked emerge from the work; they cannot be detailed in advance. Yet now, proposals of both kinds are evaluated by government agencies in exactly the same way.
Most academic science is funded by the National Institutes of Health (NIH) and the National Science Foundation (NSF). These agencies make no real distinction between basic- and applied-science proposals. All are treated as well-defined projects with a series of anticipated experiments and probable conclusions, all described in some detail. This process is now incredibly onerous: My first NIH proposal, several decades ago, was about 10 double-spaced pages. Now, 100-plus-page, single-spaced proposals are routine.
Evaluating an applied-science project is fairly straightforward. The aim is to develop a more efficient jet engine or battery, say. An applied-research proposal can usually lay out the steps to be taken and their costs. The proposal may be lengthy because many details can be specified in advance. The aim is well-defined, and judging success or failure is relatively straightforward.
But how do you evaluate a basic-science proposal? Preferably by the results: What new phenomena were discovered? How important are they? What new understanding was achieved? But understanding of what? And what new phenomena, and when? Obviously, these things are themselves tough to evaluate and cannot be specified in advance. And here's another problem: These new findings may take many years to be realized, and most hypotheses will turn out to be false. Most of the major findings of basic science, from Darwin's theory to the periodic table, have required many years to achieve. Others, like the discovery of X-rays or penicillin, were accidental: Wilhelm Roentgen was smart enough to notice the unexplained darkening of photographic film, and Alexander Fleming the failure of microbes to grow near a fungus in a Petri dish.
The relevant variable, beyond the basic qualifications of the researcher and the proposed budget, is the quality of the scientist. I intentionally refer to an individual, not a research team. Creative products, whether in art, literature, or science, are almost invariably made by an individual or a small group (and even Watson and Crick were unusual). A team may well be necessary to confirm a discovery, look for exceptions, and so on. But the original finding, achieved either through creative thought or serendipitous perception, is usually due to a single thoughtful observer (yes, again, usually a man).
Hence, the creativity, curiosity, perseverance, and skills, observational as well as cognitive, of the scientist are, in fact, the main factors in the success of basic science.
Sadly, government science agencies now make no effort whatsoever to assess these qualities as part of the grant-proposal process. This was not always the case. NIH at one time gave long-term, even lifetime, salary as well as research support to a few people with outstanding track records. These awards supported the individual rather than the project. Today, long-term individual support like this has essentially ceased to exist. Now, support is limited to five years and usually provides salary only.
The old program had its limitations, of course. One problem was that scientists in many areas are at their most creative when they are young, before they have had a chance to build a track record. But there has been no effort to assess scientific potential by other means, although much is known about the character and talents of gifted scientists like Richard Feynman, Francis Crick, and B. F. Skinner. Perhaps an appropriate psychological profile should be part of a young scientist's basic-research proposal? The proposal itself could be quite short (just a budget, a CV, and a page or two), unlike the current system, in which most grant-supported scientists spend as much time writing proposals as doing science.
Competitive Evaluation
It makes sense to evaluate applied-science proposals in a directly competitive way. A review committee, specialized in the relevant area, can ask questions: How important is the goal of the project? How technically qualified is the applicant? How likely is the proposed approach to succeed? This approach can work for applied science.
Unfortunately, as an approach to basic science it is misguided, because the most important variable is the talent of the investigator. His broad area of inquiry can be evaluated, and some areas are more interesting and potentially important than others, although these judgments are necessarily uncertain: Who knew that Roentgen's darkened film would lead to X-rays? Who knew that Darwin's eight-year struggle to understand the evolution of barnacles would give him a better understanding of evolution generally? The evaluative point is this: Would the kind of person capable of attending to such apparently irrelevant things have an edge in our current funding system? Probably not, since the relevant psychological profile is never created.
Another problem is that the time horizon for basic science is usually much more uncertain than for applied. Just when should basic research be evaluated? How long should we wait?
So we need a better system for evaluating basic science. It should be both competitive and long-term. It should support individuals, not projects, but the competition should be between programs rather than individuals: between different ways of selecting promising scientists, not between the scientists themselves.
Here is one way it might work. Imagine that there are, within NSF, two basic-research programs. Both evaluate individual basic-science applicants and start out with the same budget. The total budget is fixed, apart from politically driven adjustments each year. Then, year by year, the size of each program's budget is adjusted according to the collective discoveries its grantees deliver. If Program A does well, its budget for the next year is slightly increased, at the expense of Program B. Over time, the program that is good at picking winners grows and its competitor shrinks, though not to zero; there might well be some turnover of personnel. But both programs should coexist, since some competition is essential to the long-term stability and effectiveness of the system.
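To make the arithmetic of this reallocation concrete, here is a minimal sketch in Python. It assumes, purely for illustration, a fixed total budget, a yearly "discovery score" for each program, a small transfer rate, and a floor that keeps both programs alive; none of these numbers or names reflect any actual NSF process.

```python
# Hypothetical sketch of the two-program budget-reallocation rule described above.
# The transfer rate, floor, and "discovery score" metric are illustrative assumptions,
# not an actual NSF mechanism.

TRANSFER_RATE = 0.05   # fraction of the total budget shifted per year
FLOOR = 0.20           # neither program may shrink below 20% of the total


def reallocate(budget_a: float, budget_b: float,
               score_a: float, score_b: float) -> tuple[float, float]:
    """Shift a small slice of the fixed total toward the program whose
    grantees produced the better collective results this year."""
    total = budget_a + budget_b
    if score_a == score_b:
        return budget_a, budget_b  # a tie leaves the split unchanged
    shift = TRANSFER_RATE * total
    if score_a > score_b:
        budget_a, budget_b = budget_a + shift, budget_b - shift
    else:
        budget_a, budget_b = budget_a - shift, budget_b + shift
    # Keep both programs alive: the competition requires two players.
    floor = FLOOR * total
    budget_a = min(max(budget_a, floor), total - floor)
    budget_b = total - budget_a
    return budget_a, budget_b


if __name__ == "__main__":
    a, b = 50.0, 50.0  # both programs start with the same budget (say, $ millions)
    yearly_scores = [(3, 1), (2, 2), (4, 1), (1, 3)]  # made-up discovery scores
    for year, (sa, sb) in enumerate(yearly_scores, start=1):
        a, b = reallocate(a, b, sa, sb)
        print(f"Year {year}: Program A = {a:.1f}, Program B = {b:.1f}")
```

Run on a few made-up years of scores, the toy example simply shows the split drifting toward whichever program picked better, while the floor guarantees that the competitor never disappears, consistent with the requirement that both programs coexist.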
The point is that basic science needs to be evaluated over the long term and depends more on individuals than teams. Basic science cannot be judged project by project, but competition between different approaches to the selection of investigators seems like a promising start.
This is an off-the-cuff suggestion, of course. The system needs to be rethought, and this is just one approach. Unfortunately, NSF, far from favoring individual scientists, has increasingly emphasized team research, which has begun to lead to support for projects that are more political than scientific: for example, the study of “campus intersectionality” or “increasing diversity” in computing. The point is, the scientific-review process needs to be reexamined. The present system has become sclerotic and political. We need something better.
John Staddon is James B. Duke Professor of Psychology and Professor of Biology Emeritus at Duke University. He was profiled in the Wall Street Journal in January 2021 as a commentator on the current problems of science. His book Science in an Age of Unreason (Regnery) came out in 2022.