Neurocopiae takes a closer look at the carefully crafted pizza study survey by the Wansink lab.
When it comes to reheating leftover pizza, opinions are typically divided. I like cold pizza better, because a reheated slice of pizza gets soggy. That soggy slice is a fitting metaphor for the next chapter in the Wansink pizzagate saga. I was a bit reluctant to write another post on the sad downfall of Ig Nobel laureate Brian Wansink, head of the Food & Brand Lab at Cornell University [Mindless publishing garnished with social science apologies], but I had to take a look at the now infamous pizza buffet data myself. A couple of days ago, Wansink posted a statement re-emphasizing that “[he], of course, take[s] accuracy and replication of our research results very seriously.” More importantly, Wansink finally granted access to the data underlying the four papers that came under fire months ago: “My team has also worked to make the full anonymized data and scripts for each study available for review.” This is awesome because everything is settled now, right? Move on, methodological terrorists, nothing to see here. Well, almost. Continue reading “When you handle trash, do you still have to handle it with statistical care?”
Many things may go wrong, but we can count on the standard error to be on the safe side, can’t we? Neurocopiae digs into the data to unearth common sources of error that are not “standard” errors.
One more week has passed since I posted the first part of my take on the presidential upset in the US elections. First of all, I want to say that I was pleasantly surprised to see that it received good attention and was picked up by the editors of scienceseeker.org (thanks!). Once you wake up on the wrong side of the error bar, you start to wonder if there is any chance to do better next time. What worked in Trump’s favor has also led to erroneous estimates of brain activation clusters in fMRI research. Correlated errors are omnipresent in data, yet hardly ever present in statistical models, regardless of the domain; I covered this aspect in Part 1. In the second part of my post, I will deal with two more statistical issues that surfaced after the election but are not echoed in common data-handling practice in neuroscience: 1) misconceptions about what the margin of error truly reflects, and 2) the gap between a sample (what you got) and the underlying population (what you want to get at). Continue reading “When the margin of error is decisive: Trump’s victory as a lesson for neuroscience, part 2”
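Why do correlated errors matter so much? The textbook formula for the variance of the mean of n equally correlated measurements shows that averaging never drives the variance below the shared-error floor. A minimal sketch (not code from the post; the numbers are hypothetical, assuming unit error variance and a pairwise correlation of 0.3):

```python
def var_of_mean(sigma2, n, rho):
    """Variance of the mean of n measurements with variance sigma2
    and pairwise error correlation rho:
    Var(mean) = sigma2/n + (1 - 1/n) * rho * sigma2."""
    return sigma2 / n + (1 - 1 / n) * rho * sigma2

# Independent errors: averaging 100 polls shrinks the variance 100-fold.
print(var_of_mean(1.0, 100, 0.0))  # 0.01

# Correlated errors (rho = 0.3): variance barely shrinks past rho * sigma2.
print(var_of_mean(1.0, 100, 0.3))  # 0.307
```

With rho = 0.3, even averaging 100 polls leaves roughly 30 times more variance than the independent-errors calculation promises, which is one way a poll aggregate (or a cluster of correlated fMRI voxels) can look far more certain than it is.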
The world is not the same after Trump’s election and this blog is no different. Neurocopiae explores how we can learn from the failure of prediction models.
If making predictions is your bread and butter, you know how hard it is to be spot on. Luckily, in most cases it does not matter when we happen to be a bit off target, because the implications are modest at best. This is why every prediction comes with a margin of error or a confidence interval. Still, when Trump defied the odds of poll predictions on election night and edged out a victory, I felt deeply troubled. Stats let me down on this important occasion and it was tough to take. Continue reading “When the margin of error is decisive: Trump’s victory as a lesson for neuroscience, part 1”
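For a concrete sense of what that margin of error means, here is the standard normal-approximation formula for a sample proportion, applied to a hypothetical poll (the support level and sample size are made up for illustration; this is not a calculation from the post):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a sample proportion p from n respondents,
    using the normal approximation (z = 1.96 gives a 95% interval)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: 50% support among 1,000 respondents
moe = margin_of_error(0.5, 1000)
print(f"{moe:.3f}")  # 0.031, i.e. about +/- 3 percentage points
```

Crucially, this +/- 3 points covers only random sampling error under ideal conditions; it says nothing about correlated or systematic errors such as a shared polling bias, which is exactly the gap the election exposed.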