Because of the replication problems facing biomedical science and psychology, much attention in recent years has focused on scientific integrity. How can scientists ensure that the data they publish are accurate and reliable?
A new report that partially addresses that issue has been released by the National Academies. It was reviewed by Physics Today, which said that, among other things, the report "advocates stricter policies for scientific authorship attribution, increased openness in scientific work, [and] the reporting of negative findings." These recommendations are fine, but they do little to address the general public's concern: Is science reliable?
Increasingly, the public is becoming skeptical of science itself. This is the natural outcome of a society that is hyperpartisan, ensconced in social media echo chambers, distrustful of expertise and authority, leery of corporations, and quite eager to embrace conspiracy theories. High-profile cases of scientific fraud, though exceedingly rare, haven't helped.
We shouldn't be surprised, therefore, that research perceived as even mildly controversial is immediately met with the criticism, "Follow the money!"
But this is a ridiculous and offensive thing to say. Right off the bat, it assumes that scientists are frauds and shills. Yet that is highly unlikely, not because scientists are incorruptible, but because the scientific method is inherently self-correcting. If Scientist X tries to build upon Scientist Y's research and is unable to, he or she has uncovered a clue that something is wrong in the scientific literature. However -- and this is crucial -- it is not (yet) evidence that something nefarious has happened. Why?
Applying Hanlon's Razor
Because in science (as in life), it is best to assume that people are well intentioned, until there is sufficient reason to believe otherwise. In law, we call this "innocent until proven guilty," and on the Internet, we call this Hanlon's Razor: "Never attribute to malice that which is adequately explained by stupidity." Keep in mind that the stupidity might be your own.
When the results of scientific research do not meet our expectations, applying Hanlon's Razor would require scientists (and the public) to go through this thought process, in this order:
#1. I screwed up. The authors are correct, but I don't understand their methodology, results, or conclusions. I need to work harder to understand them.
#2. The data is right, but the conclusions are wrong. The researchers' methodology is sound, but the conclusions they draw or the policy recommendations they make do not necessarily follow from the data.
#3. The scientists don't know what they're doing. Though the researchers are well intentioned, their methodology is sloppy, and their conclusions are wrong or sensationalist.
The vast majority of the time, your thought process can stop here. Almost all scientists are well intentioned, and most of them are smart, too. Only if there is substantial evidence to the contrary should you continue with this thought process:
#4. The scientists are pushing an agenda. Some scientists believe so firmly in an idea that they are willing to disregard any evidence to the contrary. They will publish research that stands in direct contradiction to the scientific literature, and they make little if any effort to entertain the notion that they might be wrong. These scientists may not be unethical, but they are certainly driven by ideology, which is corrosive to the scientific enterprise. Now is the appropriate time to ask the question, "Who funded the research?" The answer may help shed light on why such shoddy research exists.
#5. The scientists are committing fraud. If all of the other possibilities have been thoroughly exhausted, it is possible that the scientists are doctoring the data or engaging in other acts of deception or fraud. Thankfully, this is exceedingly rare.
Everybody wants to be treated fairly. We view ourselves as honest (even if we aren't), and that is true of scientists as well. Treat them as you would like to be treated by giving them the benefit of the doubt -- until extraordinary evidence indicates otherwise.