The president of Stanford University, the neuroscientist Marc Tessier-Lavigne, has announced his resignation following an investigation into allegations of fraud and fabrication in three of his lab's scientific papers, including one cited as the most important result on Alzheimer's disease in twenty years. The report cleared him of committing the fraud himself but found that he had failed to correct the errors once they were brought to his attention.
The vast majority of scientists are honest, but recent years have seen many cases of scientific misconduct come to the surface, implying there is a systemic problem. The financial and reputational rewards that come with headline-generating results make research fraud all too tempting. High-profile papers on stem cells, superconductivity, psychological priming, drug efficacy and ocean-heat content have been retracted.
Retraction Watch, an organization that pushes journals to withdraw dodgy studies, estimates that about 5,000 papers are retracted each year, a tiny fraction of the number that should be. And its founders argue that most scientists who retract papers suffer no career setback, while "the ones whose papers haven't been retracted have even fewer worries."
Gloriously, in June this year, a study of honesty itself was accused of being dishonest. Professor Francesca Gino of Harvard Business School had claimed that people who signed truthfulness declarations relating to tax or insurance at the top of a page were more honest than those who signed at the bottom. Her co-author says he has been shown "compelling evidence" of data falsification. Gino denies the accusation and last week filed a lawsuit against Harvard.
Last year the journal Science retracted a paper by the marine ecologist Danielle Dixson claiming that rising carbon dioxide levels can alter the behavior of coral-reef fish. An investigation by the University of Delaware found that Dixson had got implausibly strong results on impossibly short timescales. When challenged, she produced data files with "patterns of copying and pasting [that were] signatures of fabrication and falsification."
Ivan Oransky and Adam Marcus of Retraction Watch say that most journals are reluctant to retract papers even when a strong case is made. In Tessier-Lavigne's case, he did try to pursue corrections to two papers in 2015, but the journals did not publish them: Cell said a correction was not necessary; Science said it would publish his corrections but then failed to do so. Oransky and Marcus say that Science "has a history of failing to prioritize retractions and not just in this case." Universities are similarly reluctant to look into frauds that might tarnish their reputations, and when they do, they prefer to investigate in secret.
But outright fraud is only the tip of the iceberg. Exaggeration of results is a far commoner reason why scientific publications cannot be treated as holy writ. "P-hacking" is widespread: scientists torture their data until it confesses to a statistically significant result, which is often nothing more than a chance outcome.
In 2015 the science journalist John Bohannon ran a deliberately misleading study purporting to show that chocolate could cause weight loss and, writing from a fake institute, submitted it to multiple journals to see how many would publish it. It was a real study, but its design, with a small sample size and a large number of variables tested, was a "recipe for false positives." It was accepted within twenty-four hours by a journal that boasts it "reviews all papers in a rigorous way," and published unchanged. With the help of a press release, it was soon all over the media, for which any diet story is irresistible clickbait.
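To see why such a design is a recipe for false positives, here is a minimal sketch of the arithmetic, assuming, for illustration, eight subjects per group and eighteen measured outcomes, numbers loosely in line with the chocolate study's reported design. Even when no effect exists at all, the chance that at least one of eighteen independent tests crosses p < 0.05 is about 1 − 0.95^18, roughly 60 percent:

```python
# A minimal simulation of the "many outcomes, tiny sample" recipe for
# false positives. Group size and outcome count are illustrative
# assumptions, not the study's exact figures.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_studies = 10_000   # simulated studies
n_per_group = 8      # small sample in each arm
n_outcomes = 18      # weight, cholesterol, sleep quality, ...

hits = 0
for _ in range(n_studies):
    # Both arms are drawn from the same distribution: no real effect.
    control = rng.normal(size=(n_outcomes, n_per_group))
    treated = rng.normal(size=(n_outcomes, n_per_group))
    # Test every outcome; call the study "positive" if any p < 0.05.
    pvalues = stats.ttest_ind(treated, control, axis=1).pvalue
    if (pvalues < 0.05).any():
        hits += 1

print(f"Studies reporting a 'significant' effect: {hits / n_studies:.0%}")
# Prints roughly 60%, close to the analytic 1 - 0.95**18 ≈ 0.60.
```

The standard remedy, correcting the significance threshold for the number of comparisons or pre-registering a single primary outcome, is exactly what a p-hacked study omits.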
Data dredging of this kind is probably the main cause of the "replication crisis": John Ioannidis of Stanford University published a paper in 2005 arguing that most published research findings are false. In 2016 a Nature survey of 1,576 researchers found that more than 70 percent had tried and failed to replicate another lab's experimental results, but journals have proved reluctant to publish such negative studies. Replication is vital to science, as shown by the current rush to test a South Korean team's recent claim to have found a material that superconducts at room temperature and pressure.
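Ioannidis's argument rests on a simple expected-value calculation, which can be sketched as follows (the numbers here are illustrative, not his). If R is the pre-study odds that a tested hypothesis is true, 1 − β the study's statistical power and α the significance threshold, then the probability that a "significant" finding is actually true, the positive predictive value, is:

PPV = (1 − β)R / ((1 − β)R + α)

With, say, R = 0.1, power of 0.8 and α = 0.05, the PPV is 0.08/0.13, or about 62 percent; factor in the bias and p-hacking that inflate the effective α, and it falls below one half: most positive findings would then be false.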
I once bumped into an academic acquaintance and asked him what he was up to: his answers were all about the grants he had won and the conferences he had attended; nothing about content. The main incentive in organized science is to publish more papers and win more grants. This encourages the "salami-slicing" of findings to generate more papers. Since the 1990s Chinese scientists have been paid cash bonuses for publishing papers in good journals. One professor in Heilongjiang managed to publish 279 papers in five years in a single journal, Acta Crystallographica Section E.
An alarming recent example is the case of the "pangolin papers": four studies hurriedly published in February 2020, conveniently purporting to show that a handful of pangolins smuggled into China in 2019 had been infected with coronaviruses similar to SARS-CoV-2. My co-author Dr Alina Chan of the Broad Institute of MIT and Harvard soon spotted that all four relied on data that had already been published the previous year, and that one paper had simply re-described four biological samples under new names.
It took the journal Nature six months to print a correction to that paper, in which the authors confessed to multiple errors. By then the pangolins had done their job: media coverage had got the public thinking a natural source of the virus had been found, when it had not. (A couple of pangolins might have been infected somehow, but with a different virus.) The editors at Nature either did not care or realized that the longer they stalled, the less attention there would be on how they had mishandled the papers.
As this example shows, the real scandal in science is not the criminal frauds, of which there are always a small number, nor the data dredging and fire-hose publishing, but the gate-keeping, groupthink and bias that politicize some fields, turning them into the dogma known as "the science." The pandemic provided a glimpse of just how far senior scientists will go to bend conclusions to a preferred narrative and suppress debate.
On the efficacy of masks, on whether the Covid vaccines prevented transmission, on the effectiveness of lockdowns, on the accuracy of epidemiological models and on other issues, the scientific establishment proved willing to suppress alternative views. The skeptics on these points were not necessarily all right, but they deserved to be heard.
“In retrospect, maybe it wasn’t so smart to hand the keys of public health over to mad-scientist virologists, hypochondriacal epidemiologists and megalomaniacal science bureaucrats,” tweeted Professor Jay Bhattacharya of Stanford Medical School recently. He was one of the authors of the Great Barrington Declaration, calling for focused protection rather than society-wide lockdowns. Regarding that declaration, “There needs to be a quick and devastating published take down of its premises,” wrote Francis Collins, head of the National Institutes of Health, to Anthony Fauci, head of the National Institute of Allergy and Infectious Diseases, in October 2020. “Is it under way?” It was.
The most shocking case concerns the “Proximal Origin” paper that shut down the debate on the origin of Covid-19 for the best part of a year. Published by Nature Medicine in March 2020, it ruled out “any type of laboratory-based scenario,” deceiving me and many others. Emails and Slack messages released by a congressional subcommittee last month show how the five authors of the paper thought in private that several types of laboratory-based scenarios were indeed possible, even “friggin’ likely.”
They continued to think this secretly even as they drafted their paper, edited it in response to pressure from “higher ups” and journal editors to make what it said even more dogmatic, then published it and responded to media inquiries, while celebrating its influence. The lead author astonishingly told Congress two weeks ago that publishing one view while thinking the opposite is “simply the scientific process.” But the fact that the heads of their main funding agencies were part of the conversation, even suggesting edits, and were keen to (in Collins’s words) “put down this very destructive conspiracy” seemingly influenced what they wrote.
Last month forty-seven scientists wrote a letter to the editor of Nature Medicine requesting retraction of the Proximal Origin paper and arguing that "the authors' statements show that the paper was, and is, a product of scientific misconduct." So far the editor, Joao Monteiro, has refused to consider retraction on the grounds that it was just an opinion piece, despite the fact that it was peer-reviewed and hailed as a case-closing study.
Ah, peer review, that laying on of hands that renders a profane paper scientifically sacred. In practice, peer review has become less a means of challenging papers than a way of keeping out heretics while waving through true believers. In 2019 the late science writer Sharon Begley exposed how a powerful cabal of professors used peer review to ensure that Alzheimer’s research remained in thrall to the hypothesis that amyloid plaques are a cause rather than a symptom of the disease. Grants and publications were denied to heretics of this faith.
A common trick, currently being played by defenders of the Proximal Origin paper, is to say to the heretics: how come you have not published your critiques in a peer-reviewed journal? To which the answer is: because you have used peer review to keep them out. In an egregious case of gate-keeping, Alina Chan wrote a detailed review of the data from the Huanan seafood market in Wuhan, showing that it was unlikely to be the origin of the virus. After nearly two years of peer-review rejections, she asked the most recent journal to reject it for permission to publish the two anonymous reviews online; one was highly complimentary, while the other attacked her credentials and made a series of comically misinformed criticisms. The journal said that publishing the reviews would breach copyright laws.
Most editors of scientific journals took an early and strong line against even considering a lab leak in Wuhan and are now reluctant to publish evidence that they were wrong. The editor of Science, Holden Thorp, wrote in response to one highly revealing leaked document: “Missteps by researchers and funding agencies… have provided fodder for conspiracy theorists… None of these miscues says anything substantive about the science and the conclusion that the virus is almost certainly of zoonotic origin.” Open-minded? Not much.
Gate-keeping matters because it is often people from outside the club who bring scandals to light. In 2018 the independent, self-funded British statistician Nic Lewis reanalyzed the data behind a paper in Nature that had found the oceans were absorbing heat faster than previously thought. Lewis found major flaws in the work, tried in vain to engage with the lead author, and then published his critique on the blog of a retired climatology professor, Judith Curry. Eventually the paper was retracted, largely unnoticed by the media that had lionized it.
The Tessier-Lavigne case was pursued by a first-year Stanford undergraduate, eighteen-year-old Theo Baker, who wrote for the campus newspaper. On the origin of the virus, many significant findings or critiques came not from professional academics but from unpaid amateurs like Jeet Ray in India, Francisco Ribera in Spain and Gilles Demaneuf in New Zealand, or private-sector scientists like Yuri Deigin in Canada, Alex Washburne in America and Steven Quay in Taiwan.
The pandemic showed how science could be reformed. Many results were posted online as “pre-prints” before being peer-reviewed. This allowed all of us, expert or otherwise, to analyze the evidence and if necessary tear the conclusions to shreds — without hiding behind anonymity. Some of the best “peer reviewers” in this public sense were people outside the conflicted priesthood of virology or epidemiology. Such radical transparency will be vital to the reform of science, just as it was to the Church in Martin Luther’s day. “If we are not able to ask skeptical questions, to interrogate those who tell us that something is true, to be skeptical of those in authority, then we’re up for grabs for the next charlatan, political or religious, who comes ambling along,” said Carl Sagan.
This article was originally published in The Spectator's UK magazine.