As the saying goes, "There are three kinds of lies: lies, damned lies, and statistics." We know that's true because statisticians themselves just said so. A jaw-dropping study reveals that nearly 1 in 4 of them report being asked to remove or alter data to better support a hypothesis. That is called scientific fraud.
According to pharmacologist Ray Dingledine, good science is hard to do because of (1) "our drive to create a coherent narrative from new data, regardless of its quality or relevance"; (2) "our inclination to seek patterns in data whether they exist or not"; and (3) our failure to "always consider how likely a result is regardless of its P-value." The good news is that these habits can be fixed.
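Dingledine's third point is essentially a base-rate argument: a "significant" P-value can still accompany a probably-false finding if the hypothesis was unlikely to begin with. A minimal sketch of that arithmetic (the prior, power, and alpha values below are illustrative assumptions, not figures from the article):

```python
# Positive predictive value of a "significant" result:
# PPV = (power * prior) / (power * prior + alpha * (1 - prior))
# All numbers below are illustrative assumptions.

def ppv(prior: float, power: float = 0.8, alpha: float = 0.05) -> float:
    """Probability that a statistically significant finding is true,
    given the prior probability that the tested hypothesis is true."""
    true_pos = power * prior
    false_pos = alpha * (1 - prior)
    return true_pos / (true_pos + false_pos)

# If only 1 in 10 tested hypotheses is true, p < 0.05 is far from proof:
print(round(ppv(prior=0.10), 2))  # 0.64: roughly a third of "hits" are false
```

The takeaway: the same p < 0.05 threshold is far more trustworthy when the hypothesis was plausible to begin with, which is exactly what "consider how likely a result is regardless of its P-value" means.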
More than 2/3 of animals are transported on just four airlines: Alaska, Delta, American, and United. United was responsible for transporting a plurality (27%) of all animals in 2017, so we would expect, from sheer volume alone, more pets to die on United flights. So the question is, "Do a statistically disproportionate number of animals die on United?" In 2017, sadly, the answer was yes.
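One way to formalize "statistically disproportionate" is an exact binomial test: if United carried 27% of animals, did it account for significantly more than 27% of deaths? A minimal sketch using only the standard library (the death counts below are placeholder assumptions for illustration, not the official DOT figures):

```python
from math import comb

def binom_sf(k: int, n: int, p: float) -> float:
    """One-sided P-value: probability of k or more 'successes'
    out of n trials when each trial has probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Placeholder assumption: 18 of 24 reported animal deaths occurred on an
# airline that carried 27% of all animals.  Under the volume-only
# hypothesis, that airline's deaths should follow Binomial(n=24, p=0.27).
p_value = binom_sf(18, 24, 0.27)
print(f"{p_value:.2e}")  # far below 0.05: the share of deaths is disproportionate
```

With 27% of the volume, the expected death count here would be about 6.5 of 24; observing 18 would be wildly improbable under the volume-only hypothesis.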
Pollsters have taken a beating the last few years. Getting Brexit and the 2016 U.S. presidential election wrong were spectacular failures that shook the public's faith in prediction models. The media is largely to blame. People like Nate Silver are often portrayed as oracles and polls as divinely inspired. Anyone who questions their accuracy is attacked for rejecting science. But polls aren't science. Instead, they are some combination of fancy math (statistics) and art. If the underlying assumptions are wrong, or if the sampling methods are biased, then polls will be inaccurate.
Even with 32 G.O.P. lawmakers retiring, the media narrative about a "wave" of Republicans leaving Congress is wrong. Here are the stats behind that assessment.
An investigation by Business Insider found that "United had more pet deaths in 2016 than any other major US airline." Given United's recent public relations debacle, is this true, too? Technically yes, but statistically no, because it's the statistics that matter, not the raw numbers.
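The "statistics, not raw numbers" point is about normalizing by volume: the airline that flies the most animals can have the most deaths while still being the safest per animal carried. A hedged sketch with made-up counts (the figures below are illustrative assumptions, not the 2016 DOT data):

```python
# Illustrative, made-up figures: (deaths, animals transported).
airlines = {
    "Airline A": (9, 110_000),   # most deaths in absolute terms
    "Airline B": (2, 15_000),    # fewer deaths, far fewer animals
}

for name, (deaths, transported) in airlines.items():
    rate = deaths / transported * 10_000  # deaths per 10,000 animals
    print(f"{name}: {deaths} deaths, {rate:.2f} per 10,000 animals")

# Airline A has more raw deaths (9 vs. 2) but a lower death *rate*
# (0.82 vs. 1.33 per 10,000), so the raw count alone is misleading.
```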
Difference-in-differences is a statistical technique used in observational studies. It can provide insight, but don't be fooled by numbers and p-values into believing its conclusions are necessarily true.
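For readers unfamiliar with the technique: difference-in-differences compares the change over time in a treated group against the change in an untreated control group, and attributes the gap between the two changes to the treatment. A minimal sketch (the group means below are illustrative assumptions):

```python
# Difference-in-differences: (treated change) minus (control change).
# All means below are illustrative assumptions.

def diff_in_diff(treat_pre: float, treat_post: float,
                 ctrl_pre: float, ctrl_post: float) -> float:
    """DiD estimate of the treatment effect from four group means."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Treated group rose from 10 to 18; the control rose from 10 to 14 over
# the same period, so only 4 of the treated group's 8-point rise is
# attributed to the treatment.
effect = diff_in_diff(10, 18, 10, 14)
print(effect)  # 4
```

The key (and untestable) assumption is "parallel trends": absent treatment, the treated group would have moved like the control group. That is exactly why a clean-looking DiD estimate with a small p-value can still be wrong.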
Statistics is difficult, and choosing the proper tools becomes more challenging as experiments become more complex. That's why it's not uncommon for large genetics or epidemiological studies to have a biostatistician as a co-author. Perhaps more biomedical studies should follow suit.
There are a lot of Seattle Seahawks haters out there. Apparently, a popular insult hurled at the NFL team is that it is a "Johnny-come-lately" franchise supported by a bunch of fair-weather fans, now that the team is good. The problem for the haters, however, is that statistics show it's not true.
Anyone remotely familiar with the scientific method understands that just like a ruler or a telescope, statistics is a tool. Scientists use the tool primarily for one purpose: To answer the question, "Is my data meaningful?" Properly used, statistics is one of science's most powerful tools. But used improperly, statistics can be highly misleading.
Nate Silver, statistician and election forecaster, told ABC News that election forecasts that gave Hillary Clinton a 99% chance of winning didn't "pass a common sense test." That is certainly true. What he left unsaid -- possibly because it wouldn't be good for his career -- is that no election forecast that provides a "chance of winning" passes the science test.
What is a scientific poll? First, the term is a misnomer: there is nothing scientific about a poll. Second, a "scientific" poll is simply one conducted using sound statistical techniques. What's more, savvy politicos know that not just any poll will do.
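One of those sound techniques is reporting a margin of error, which for a simple random sample depends only on sample size and the observed proportion. A hedged sketch at 95% confidence with the worst-case proportion p = 0.5 (real polls also adjust for weighting and design effects, which this ignores):

```python
from math import sqrt

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion from a simple random sample
    of size n; p = 0.5 gives the worst (widest) case."""
    return z * sqrt(p * (1 - p) / n)

# A typical 1,000-person poll:
print(f"+/- {margin_of_error(1000):.1%}")  # about +/- 3.1%
```

Note the square root in the denominator: quadrupling the sample size only halves the margin of error, which is why most national polls stop near 1,000 respondents.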