P-Hacking — Part 03: 4 Interesting Experiments on p-hacking
1. FiveThirtyEight — “You Can’t Trust What You Read About Nutrition”

The website FiveThirtyEight surveyed 54 people, collected over a thousand variables, and through p-hacking was able to find statistically significant correlations between eating cabbage and having an innie bellybutton; between drinking iced tea and believing Crash didn’t deserve to win Best Picture; and between eating raw tomatoes and Judaism!
“…Whatever you’re worried about, there’s no shortage of diets or foods purported to help you. Linking dietary habits and individual foods to health factors is easy — ridiculously so — as you’ll soon see from the little experiment we conducted…”
“ …To show you why, we’re going to take you behind the scenes to see how these studies are done. The first thing you need to know is that nutrition researchers are studying an incredibly difficult problem, because, short of locking people in a room and carefully measuring out all their meals, it’s hard to know exactly what people eat.…”
“The problems with food questionnaires go even deeper. They aren’t just unreliable, they also produce huge data sets with many, many variables. The resulting cornucopia of possible variable combinations makes it easy to p-hack your way to sexy (and false) results…
We ended up with 54 complete responses and then looked for associations — much as researchers look for links between foods and dreaded diseases. It was silly easy to find them.”
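The mechanics behind these spurious hits are easy to reproduce. The sketch below is an illustration, not FiveThirtyEight’s actual analysis: it generates pure random noise for 54 “subjects” and a thousand unrelated “dietary habits” (the counts mirror their survey, everything else is an assumption for the demo), then counts how many pass the conventional p < 0.05 bar.

```python
import math
import random
from statistics import NormalDist

random.seed(0)

N_SUBJECTS = 54      # same sample size as FiveThirtyEight's survey
N_VARIABLES = 1000   # roughly "over a thousand variables"
ALPHA = 0.05

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Pure noise: a random "health outcome" and 1000 unrelated "habits".
outcome = [random.gauss(0, 1) for _ in range(N_SUBJECTS)]

false_positives = 0
for _ in range(N_VARIABLES):
    habit = [random.gauss(0, 1) for _ in range(N_SUBJECTS)]
    r = pearson_r(outcome, habit)
    # Fisher z-transform: atanh(r) * sqrt(n - 3) is ~N(0, 1) under the null,
    # giving an approximate two-sided p-value without any external libraries.
    z = math.atanh(r) * math.sqrt(N_SUBJECTS - 3)
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    if p < ALPHA:
        false_positives += 1

print(f"{false_positives} of {N_VARIABLES} pure-noise variables "
      f"look 'significant' at p < {ALPHA}")
```

By construction about 5 percent of these pure-noise tests clear the 0.05 bar, i.e. on the order of 50 fake “findings” per thousand variables tested; a p-hacker only has to report those and stay quiet about the rest.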

In a 2013 analysis published in the American Journal of Clinical Nutrition, researchers selected 50 common ingredients at random from a cookbook and looked for studies evaluating each food’s association with cancer risk. It turned out that studies had found a link between cancer and 80 percent of the ingredients, including salt, eggs, butter, lemon, bread, and carrots. Some of those studies pointed to an increased risk of cancer, others suggested a decreased risk, but the sizes of the reported effects were “implausibly large,” while the evidence was weak.

(Source: You Can’t Trust What You Read About Nutrition)
2. TED Global stage 2012
Another example is the second-most-viewed TED talk of all time, delivered on the TED Global stage in 2012. Amy Cuddy, a psychologist and Harvard Business School professor, became famous for her work on power posing. But a series of errors and inconsistencies in her study were later discovered. Other researchers pointed out serious flaws in her methodology, and today the entire theory of power posing is in question. Some teams that tried to reproduce her work failed, while others got exactly the opposite result. Even Professor Cuddy’s co-author, herself a prominent academic, has completely disavowed the study, saying, “I do not believe that power pose effects are real.” The co-author acknowledged the problems in how the study was conducted: tiny sample sizes, flimsy data, and selectively reported findings, exactly the unsuitable conditions mentioned earlier in this series.
3. 2012 Cookbook experiment

In 2012, a group of researchers randomly chose 50 ingredients from a cookbook, everyday things like milk and eggs. They then reviewed the studies published in previous years on these ingredients to see what they could learn about whether each one did or did not cause cancer. In the end they found that for every one of the ingredients (wine, tomatoes, milk, eggs, corn, coffee, butter, and so on), there was at least one study arguing that the ingredient caused cancer, and at least one study arguing that it prevented cancer.
The same pattern was seen in the health care sector. As the same source reports: “We saw that the authors of the vast majority of clinical trials reported in top medical journals silently changed the outcomes that they were reporting. So, they said they were going to study one thing, but they reported on another.”
4. Pentaquark experiment
If you think this is just a problem for psychology, neuroscience, or medicine, consider the pentaquark: a subatomic particle consisting of four quarks and one antiquark bound together (five quarks in total, as opposed to the regular three in protons and neutrons). Particle physics employs particularly strict requirements for statistical significance, referred to as 5-sigma, or roughly one chance in 3.5 million of getting a false positive. In 2002 a Japanese experiment found evidence for the Theta-plus pentaquark, and in the two years that followed, 11 other independent experiments looked for and found evidence of that same pentaquark with very high levels of statistical significance. From July 2003 to May 2004, a theoretical paper on pentaquarks was published on average every other day. But it was later discovered to be a false discovery: experimental attempts to confirm the Theta-plus pentaquark using greater statistical power failed to find any trace of its existence. The problem was that those first scientists weren’t blind to the data. They knew how the numbers were generated and what answer they expected to get, and the way the data was cut and analyzed, or p-hacked, produced the false finding. (Source: Veritasium)
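For scale, the 5-sigma figure itself is easy to verify. This is a back-of-the-envelope illustration using the standard normal distribution, not the collider analysis itself:

```python
from statistics import NormalDist

# One-sided tail probability beyond 5 standard deviations of a
# standard normal distribution: the physicists' 5-sigma threshold.
p_five_sigma = 1 - NormalDist().cdf(5)

print(f"5-sigma one-sided p-value: {p_five_sigma:.3g}")  # ~2.87e-07
print(f"about 1 in {1 / p_five_sigma:,.0f}")             # ~1 in 3.5 million
```

Compare that with the p < 0.05 convention common in psychology and nutrition research: 5-sigma is stricter by a factor of more than a hundred thousand, yet the pentaquark episode shows that even this bar does not protect against analysts who are not blind to their data.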
Continue reading about the dangers, new improvements, and the future of experiments in the next article.