The Research Crisis in Higher Ed

Mark Edwards

The modern American research university is in crisis. Perverse rewards and incentives create an unhealthy “hyper-competition” among research scientists and encourage unethical behavior that can lead to bad science. So say Mark A. Edwards, the Virginia Tech professor best known for exposing the high levels of lead in the water in Flint, Mich., and Siddhartha Roy, a Ph.D. candidate at Virginia Tech.

“If the practice of science should ever undermine the trust and symbiotic relationship with society that allowed both to flourish, our ability to solve critical problems facing humankind and civilization itself will be at risk,” they warn in a paper, “Science Is Broken,” in the digital publication Aeon. The Aeon article is abridged from a longer paper published in Environmental Engineering Science.

The pursuit of tenure influences almost all the priorities and decisions of young faculty at research universities, write the authors. Recent changes in academia, including an increased emphasis on quantitative performance metrics, “harsh competition” for federal funding, and the implementation of “private business models” at public and private universities, are producing undesirable outcomes and unintended consequences.

Some examples of unintended consequences:

Incentive: Researchers rewarded for increased number of publications.
Intended effect: Improve research productivity, provide a means of evaluating performance.
Actual effect: Avalanche of substandard, incremental papers, poor methods, and increase in false discovery rates.

Incentive: Researchers rewarded for increased number of citations.
Intended effect: Reward quality work that influences others.
Actual effect: Extended reference lists to inflate citations; reviewers demand citation of their own work during peer review.

Incentive: Researchers rewarded for increased grant funding.
Intended effect: Ensure that research programs are funded, promote growth, generate overhead.
Actual effect: Increased time writing proposals and less time gathering and thinking about data; overselling of positive results and downplaying of negative results.

Incentive: Reduced teaching load for research-active faculty.
Intended effect: Free up time to pursue additional competitive grants.
Actual effect: Increased demand for untenured, adjunct faculty to teach classes.

The list goes on.

The traditional university culture relied more extensively upon the “old boy network” for hiring and advancing tenure-track professors. That system lent itself to criticism for bias against women and minorities. But Edwards and Roy say that the quantitative-metric approach has created a new set of abuses. “All these measures are subject to manipulation as per Goodhart’s law, which states, ‘When a measure becomes a target, it ceases to be a good measure.’ The quantitative metrics can therefore be misleading and ultimately counterproductive to assessing scientific research.”

Edwards and Roy also find fault with the way federal research grants are handed out. “The grant environment,” they write, “is hypercompetitive, susceptible to reviewer biases, skewed towards funding agencies’ research agendas, and strongly dependent on prior success as measured by quantitative metrics. … These broad changes take valuable time and resources away from scientific discovery and translation, compelling researchers to spend inordinate amounts of time constantly chasing grant proposals and filling out increasing paperwork for grant compliance.”

Most concerning of all:

There is growing evidence that today’s research publications too frequently suffer from a lack of replicability, rely on biased data sets, apply weak or substandard statistical methods, fail to guard against researcher biases, and overhype their findings.

Science is expected to be self-policing and self-correcting. But incentives induce stakeholders to “pretend misconduct does not happen.” There is no clear mechanism for reporting and investigating allegations of research misconduct.

The system “presents a real threat to the future of science,” they say. Academia is at risk of creating a “corrupt professional culture” akin to the doping scandal in professional cycling in which athletes felt they had to cheat to compete. “We can no longer afford to pretend that the problem of research misconduct does not exist.”

Bacon’s bottom line: The inability to replicate results from many scientific studies is widely acknowledged to be a real problem. Likewise, the risk is very real that the public could lose faith in science, especially when scientific research intersects with public policy. The tendency of government agencies to favor and fund research projects that bolster their policy agendas (admittedly a minor point in the Edwards-Roy essay) should concern all Americans.

As research scientists, the authors are most concerned with how the system affects the integrity of the scientific process and the advancement of tenure-track faculty. But their thoughts raise issues of interest to non-scientists who focus on cost and quality issues in higher education. The perverse incentives, along with the research-university business model, have virtually severed top faculty from the task of teaching undergraduate students. Universities hire more subalterns, at extra cost, to handle the job of teaching. From the perspective of students and parents, superstar research faculty are superfluous overhead.

An important question left unanswered is the extent to which students and parents are funding this dysfunctional system through their tuition. How much tuition revenue goes to supporting this massively inefficient research edifice, in which an increasing share of faculty time is spent applying for grants? Perhaps none at all. But perhaps quite a lot. The public doesn’t know. It’s entirely possible that university administrations don’t know either; higher-ed accounting could be more transparent. As students, parents and taxpayers, we should insist upon finding out.

(Hat tip: Reed Fawell)