Human beings make science. They collect the data. They analyze the data. Other human beings peer review the science, edit the science, and publish the science, and still other humans report the science to the public, who then go around telling each other about the science on Twitter, and at the dinner table, and at the gym, and who knows where else.
Somewhere in this Chain of Fact, things can go wrong. As a recent spate of incidents—involving falsified data and deceived journalists—should remind us, the route from the laboratory to public knowledge is not a perfect conduit of abstract truth, much as we modern humans often imagine (or at least wish) it to be. Instead, that route is made up of people working within particular systems, some of which, it seems, are not receiving the oversight they ought to.
Two recent cases highlight some recurring problems. The first has been the prolonged implosion, over the past few weeks, of a major study purporting to show that one-on-one canvassing can substantially shift voters’ opinions on same-sex marriage. The study was published in the journal Science, and it received widespread media coverage, including on This American Life.
Also, the data turned out to be fake.
In part, this was a case of a rogue graduate student, Michael LaCour, who seems to have broken nearly every imaginable rule of research ethics. But it was also a case of a duped community. Donald Green, a prominent political scientist, helped with the analysis and put his name on the study (“I am deeply embarrassed that I did not suspect and discover the fabrication of the survey data,” Green told New York magazine, whose coverage of the situation has been excellent). Princeton offered LaCour a faculty position. And it took six months after publication for David Broockman and Joshua Kalla, graduate students at UC Berkeley, to publicly identify the fundamental problems with a landmark study in a major field of social-scientific inquiry.
The second case is even weirder. Earlier this year, the science journalist John Bohannon set up a diet study—an intentionally terrible piece of research. What made it so terrible? For one thing, Bohannon had only twelve subjects, broken into three groups. For another, he had no clear hypothesis he was testing. He altered the diets of some of his subjects, measured a long list of outcomes, and then mined the results for anything statistically significant. With that few subjects and that many measurements, something was bound to cross the p < 0.05 threshold by chance alone. And, of course, something did. Eating a chocolate bar per day contributed to faster weight loss, according to Bohannon’s meaningless data.
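To see how reliably that kind of data-mining produces a “finding,” consider a quick simulation (a minimal sketch in Python; the group size follows the numbers above, while the eighteen-outcome count and the stop-at-the-first-hit strategy are illustrative assumptions, not a reconstruction of Bohannon’s actual analysis):

```python
# Simulating p-hacking: there is no real effect anywhere, yet mining
# many outcomes in a tiny sample reliably "finds" something.
# (Illustrative sketch; the outcome count and stopping rule are
# assumptions, not Bohannon's actual protocol.)
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_studies = 1000   # simulated diet "studies"
n_per_group = 4    # twelve subjects, three groups, as above
n_outcomes = 18    # weight, cholesterol, sleep quality, ...

lucky_studies = 0
for _ in range(n_studies):
    for _ in range(n_outcomes):
        # Both groups are drawn from the SAME distribution, so any
        # "significant" difference between them is pure noise.
        chocolate = rng.normal(size=n_per_group)
        control = rng.normal(size=n_per_group)
        if ttest_ind(chocolate, control).pvalue < 0.05:
            lucky_studies += 1
            break  # stop at the first publishable-looking result

print(f"No-effect studies yielding p < 0.05 somewhere: "
      f"{lucky_studies / n_studies:.0%}")
# With 18 independent tests at the 0.05 level, that comes out to
# roughly 1 - 0.95**18, or about 60% of studies.
```

In a real study the outcomes are correlated, which changes the exact rate but not the underlying trap: test enough things on a small sample and chance alone will hand you a headline.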
Bohannon published the study in an open-access journal under the name Johannes Bohannon, made up a website for a fake research institute, and wrote a press release. Journalists interviewed him about his work. Bild, a major German newspaper, ran the story on its front page. Other publications, including Shape magazine and the Daily Express, covered it as well. Nobody caught him; Bohannon came clean about the stunt last week.
The problem of fake science getting a pass is an old one, and cheating is as human as anything else, but these aren’t isolated incidents. As a New York Times op-ed noted yesterday, “cheating in scientific and academic papers is a longstanding problem, but it is hard to read recent headlines and not conclude that it has gotten worse.”
Another recent Times op-ed, written by the editors of the blog Retraction Watch, argued that pressure to publish was leading to small-but-concerning rates of faked data, much of it slipping through peer review—or else, as in Bohannon’s case, getting swallowed up by gullible media.
What’s going on? Well, human nature, of course; there are cheaters in this world. But the system is designed, at least in theory, to catch them. And while there isn’t a single, soundbite answer for how to avoid these kinds of situations, it’s worth highlighting a couple of the more blatant issues.
The first is that the incentive to accept high-profile or flashy findings tends to be high, while the incentive to challenge them is generally low. This is especially true for journalists, for whom a dramatic, surprising result—chocolate aids weight loss!—is excellent copy. The relationship can also be mutually beneficial for researchers and journalists, who get good press and good stories, respectively (Bohannon speaks, accurately, of “the diet research-media complex”).
Challenging a scientific finding, on the other hand, takes time and expertise, and it involves a substantial risk. Not everyone wants to be the journalist who tells the PhD that his research sounds fishy.
That same incentive structure can exist within scientific communities, too, if a bad paper slips through peer review (aka the anonymous system that’s supposed to catch bad research). The LaCour paper had the imprimatur of Donald Green, a leading scholar in the field of political persuasion research. Challenging the paper involved substantial professional risk, while the personal cost of saying nothing was low. Broockman, one of the graduate students who uncovered the issue, told New York magazine that an older scholar, and friend, had discouraged him from pursuing his initial suspicions. As Jesse Singal at New York put it, describing the potential payoffs:
The moment your name is associated with the questioning of someone else’s work, you could be in trouble. If the target is someone above you, like Green, you’re seen as envious, as shamelessly trying to take down a big name. If the target is someone at your level, you’re throwing elbows in an unseemly manner. In either case, you may end up having one of your papers reviewed by the target of your inquiries (or one of their friends) at some point…Moreover, the very few plum jobs and big grants don’t go to people who investigate other researchers’ work—they go to those who stake out their own research areas.
Scientific findings, like everything else, come within the context of a particular culture, with its own, very human, set of pressures.
An even thornier issue, though, is the peculiar kind of trust that we place in scientists.
Looking at all the creationism, climate change denial, anti-vaccination activism, and GMO skepticism, it’s tempting to conclude that people don’t trust scientists. But dig into these issues and the picture gets subtler. Even the skeptics ground their arguments in whatever science (or pseudoscience) they can find. And when they look for an authority figure to back their claims, again and again they reach for a scientist. It’s probably more accurate to say that people distrust certain scientific findings, which they consider bad or corrupted examples of the actual science they otherwise esteem.
In other words, while we may argue over what good science actually looks like, we all want the science on our side. And when we encounter scientific findings that we do like, we may trust them completely. Chocolate will help you lose weight! You can change people’s minds on gay marriage, just by talking to them! Altering your posture will make you more successful (unless it doesn’t)! Yes, of course. Science says so.
So should we trust scientists less? I don’t think that’s the solution, exactly. Science is a powerful tool for understanding reality, with a remarkable system of checks and balances for preventing and catching error. When scientific consensus emerges on a specific phenomenon, such as climate change, we need to heed it.
But we can stop framing the issue in terms of Inalienable Science vs. Stupid Naysayers, and be more open about the particular, very human process by which laboratory findings become public knowledge. That openness might include teaching kids a bit of philosophy as part of their science classes. It will require scientists, and those outside the scientific community, to talk more publicly about the skewed incentives that confront many researchers and journalists. And it will certainly require wider acknowledgement that a single study rarely provides perfect proof of anything (especially in social science). Perfect solutions may be tempting, but we should all be careful when we go hunting for miracles.