
President Trump continues to ramp up the pressure on Harvard: The university has lost $2.7 billion in funding from the National Institutes of Health (NIH) and the National Science Foundation (NSF), and the president is now attempting to bar Harvard from enrolling international students.
Harvard is, of course, not alone in these forfeitures: The NIH has terminated over 400 grants to Columbia University, Cornell University has received 75 stop-work orders from the Department of Defense, and the administration has frozen $1.7 billion destined for Brown, Northwestern, and Princeton Universities, as well as the University of Pennsylvania. But Harvard has attracted the most attention, both from the general public and from the administration, because it has resisted the president’s measures most forcefully. Whereas Columbia, for example, has sought to accommodate the administration’s demands, Harvard has defied them.
By nearly universal consent, the federal government’s cuts to the universities’ research budgets portend disaster: The national commentary is unanimous in suggesting that the threatened cutbacks to U.S. university science will damage Americans’ health and economic wellbeing. Here, however, I argue that, if the administration will only introduce its cuts in a measured way, the consequences will be wholly beneficial.
The most important fact in science policy is that the United Kingdom led the world through the first industrial revolution in the absence of significant government money for research. By contrast, the French and German governments were then funding their nations’ research capaciously, yet their GDPs per capita failed to converge on the UK’s. Government funding for research is thus neither necessary nor sufficient for economic progress.
The country that did converge on the UK, and in 1890 overtake it, was the U.S., whose government also did not fund research significantly. In those days, the British government and the U.S. federal government funded only “mission research”: research undertaken by government agencies such as the Library of Congress or the Coast Survey in the service of their missions. In comparison with the vast government labs funded in France and Germany, UK and U.S. mission research was always narrowly focused and modestly funded. All other research was privately funded.
The two largest U.S. research missions were defense and agriculture, but neither had substantial economic impact. Defense research in peacetime was always small: The federal government did create research agencies during the Civil War and WWI, but they were defunded on the resumption of peace. Equally, the Office of Scientific Research and Development, which had been established in 1941 and had funded the Manhattan Project, MIT’s Rad Lab, and other vast WWII research missions, was shuttered in 1947.
The other significant mission of the day was agriculture, yet agriculture’s problem was over-production, which impoverished the farmers; research that raised agricultural productivity would only have deepened the glut, not solved their problem. The federal and state governments’ motives in funding agricultural research were, therefore, essentially political: the farmers were poor, there were many of them, and they had votes, so it seemed wise for politicians to be seen to be doing something.
In short, as late as 1940, the federal and state governments’ investment in research amounted to only 23 percent of U.S. R&D and 10 percent of U.S. basic science, and the nature of that investment could have had little or no impact on American economic growth or health: defense R&D yields almost no economic benefit, while the agricultural R&D was surplus to requirements.
That story of research laissez-faire made a U-turn in 1950 when, to meet the scientific needs of the Cold War, the National Science Foundation was created and the National Institutes of Health’s budgets were expanded. Following the launch of Sputnik in 1957, moreover, new agencies such as the Defense Advanced Research Projects Agency (DARPA) were created in conscious imitation of Soviet research policies.
And those initiatives would transform the leading universities. Most American universities were then essentially liberal-arts colleges, but the Truman and Eisenhower administrations funneled much of their increased research funding into them. In the 1950s, though, professors needed to be enticed into writing grants, and, as Fred Stone of the NIH recounted of the post-war years, “it wasn’t anything to travel 200,000 miles a year” soliciting grant applications from the universities.
Once the universities had tasted federal grant money for research, they became dependent on it, and the Ivy League today is a research league with a bit of teaching attached. That is something President Eisenhower, the great progenitor of the transformation, grew to regret. In his Farewell Address, he lamented the passing of “the free university, historically the fountainhead of free ideas,” which had been replaced by institutions in which “a government contract becomes virtually a substitute for intellectual curiosity” and where a “scientific-technological elite” had been fostered that might threaten the workings of democracy itself.
The federal government’s conversion from laissez-faire in science to dirigisme presented the administration with an ideological problem, though: relying on government for research suggested that USSR-style socialism was superior to free-market capitalism. The RAND Corporation addressed that problem, however, by paying Richard Nelson and Kenneth Arrow to write the two key papers that are universally portrayed as the bedrock of the economics of science. In their papers, Nelson and Arrow made two major arguments.
First, they supposed that knowledge is explicit and so spills over between companies. Such spillovers will supposedly deter companies from investing in their own research, so government has to compensate by funding science. Yet we have long known that knowledge is tacit, not explicit. In reality, therefore, knowledge spills over only to other entities doing similar research, because only they understand how to access it. Companies actively seek to copy each other’s advances, but they cannot copy unless they are also active researchers (only cutting-edge pharmaceutical companies can copy cutting-edge advances in pharmacology; only cutting-edge cell-phone manufacturers can copy cutting-edge advances in cell-phone electronics), so companies are forced to do research if they are to keep up with their competitors. This is how the market funds all the research it needs: it is the tacitness of knowledge that incentivizes the private funding of science.
Nelson and Arrow’s second argument was even more bizarre, and very hard for non-economists to understand: they argued that industry should not fund science, and that only governments should, because companies’ proprietary knowledge would render the market “imperfect.” The concept of the “perfect market” is dear to some economists, but it is a fictitious construct in which an infinity of companies sells an infinity of goods to an infinity of consumers, knowledge is so-called “perfect,” and there are no real profits. It bears no resemblance to reality, and it should be professional misconduct for an economist to extrapolate arguments from perfect markets to an unsuspecting real world.
Such fictitious arguments are, however, useful to lobbyists, because in empirical reality there is no evidence that the government funding of research has ever stimulated economic growth. There is, instead, a wealth of evidence that it merely crowds out private funding: the best researchers leave industry for the universities and government research labs, leaving industry with lesser, less profitable researchers who consequently deliver less economic, technological, and health progress.
In other words, there are huge opportunity costs when government funds research.
The core function of a university is not research, which can be undertaken elsewhere; it is teaching, which should be untrammeled by any concern other than the transmission of truth. In support of their teaching, universities should of course promote scholarship (for a scholar is more likely to approach truth). Equally, universities should promote research, since science professors will probably teach better if they are also active researchers. But such scholarship and research should be funded out of the universities’ own endowments and by grants from private foundations.
Harvard and the other research universities have damaged their own academic freedom by making themselves dependent on government funding, as they are now realizing under the onslaught from Trump. In an ideal world, they would now work with the federal government to wean themselves off federal money at a pace that allows the private sector to absorb the newly available researchers without disrupting the research enterprise in aggregate.
Terence Kealey is the author of The Economic Laws of Scientific Research and other books. Since 2014 he has been a scholar (now adjunct) at the Cato Institute.