Spoiler alert: it is not easy, but worth it.
Four years ago, I obtained my PhD, which by now seems like a very long time. As scientists, we are trained to do a lot of different things, but the training in how to run a lab is rather informal and largely based on trial and error. There have been a couple of nice write-ups (e.g., https://users.fmrib.ox.ac.uk/~behrens/Startingalab.htm) by far more influential and experienced researchers than me, but I thought it might still be helpful to publicly document my humble insights as they arise and evolve over time. Continue reading “Five lessons I have learned in starting a new lab”
All of our precious results deserve the same love. But it might take more than love to open a file drawer.
Science is confronted with an apparent paradox. Although almost every researcher agrees that falsification is the key to progress and that publishing null results is pivotal to the cumulative process of building knowledge, many more “positive” results are published than would be expected based on unbiased estimates of effect sizes. Obviously, there are many reasons for this discrepancy. A couple of days ago, Anne Scheel posted a very thoughtful blog post about why we should love null results. If you haven’t read it yet, make sure to do so; it is certainly worth the 10-minute read. While I second many of her conclusions, I don’t think that having the same love for null effects that we have for significant effects will help in tackling publication bias unless there is a level playing field. Perhaps this pessimism is the take-home of several classics from literature classes taught back in school: think of Romeo and Juliet or Intrigue and Love, which essentially show that love is at times not enough to overcome a rigid system of family lineage, class, and seniority rule. Any association with academia is, of course, purely coincidental. Continue reading “Why same love is sadly not enough: Confessions of an ECR about the file drawer”
It’s about magic, or something very close to magic: statistical control. Neurocopiae talks covariates.
In psychology, there is no such thing as a perfect experiment. Often, it is clear from the get-go that there are certain problems (“confounds”) you will not be able to eliminate, no matter how sophisticated your design might be. The remedy is simple and straightforward: measure what you can measure and try to statistically control for these variables. Judging by some enthusiastic applications, the number of covariates appears to be proportional to how much you want to show other researchers how deeply you care about your data. As a psychologist, I can say that we love and embrace covariates (and subtle, yet significant interaction terms, though that practice has had some bad press lately). The belief in statistical control via covariates thus seems to be deeply rooted in practice, a bit of everyday numerical magic. Continue reading “Covariate magic part 1: That has been accounted for by the covariate!”
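For readers who want to see what statistical control is supposed to buy you, here is a minimal sketch in Python with entirely made-up numbers (the confound, group, and effect sizes are my own invented illustration, not data from any study discussed here). A confound drives both group membership and the outcome, so the naive group comparison is biased; adding the measured confound as a covariate recovers something close to the true effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical scenario: a confound (say, age) nudges people into the
# "treatment" group AND independently raises the outcome.
confound = rng.normal(size=n)
group = (confound + rng.normal(size=n) > 0).astype(float)
outcome = 0.5 * group + 1.0 * confound + rng.normal(size=n)  # true effect: 0.5

def ols(X, y):
    """Ordinary least squares coefficients via lstsq."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

ones = np.ones(n)
naive = ols(np.column_stack([ones, group]), outcome)              # no covariate
adjusted = ols(np.column_stack([ones, group, confound]), outcome) # with covariate

print(f"naive group effect:    {naive[1]:.2f}")     # inflated by the confound
print(f"adjusted group effect: {adjusted[1]:.2f}")  # close to the true 0.5
```

Of course, this only works when the confound is measured well and enters the model in the right functional form, which is exactly where the “magic” starts to wear thin.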
There is a lot of buzz around brain stimulation, but new problems start to surface. Neurocopiae reviews news on bad practices and poor reliability.
It hasn’t been a very good week for proponents of the popular brain stimulation method called transcranial direct current stimulation (tDCS). tDCS is a non-invasive technique that uses electrodes to deliver weak current to a person’s forehead. Numerous papers have claimed that tDCS can enhance mood, alleviate pain, or improve cognitive function. Such reports have sparked interest in tDCS at a broader scale. When you search for tDCS on YouTube, you will find DIY tutorials on how to assemble a device so that you can amp up your brain at home, including enthusiastic reports of the resulting changes in brain function. To put it in Richard Dawkins’ words: Science? It works, bitches. In particular, it works when you know what the outcome should be. Continue reading “Amping up control? Bad research practices and poor reliability raise concerns about brain stimulation”
Neurocopiae takes a closer look at the carefully crafted pizza study survey by the Wansink lab.
When it comes to reheating leftover pizza, opinions are typically divided. I like cold pizza better because a reheated slice gets soggy. This soggy slice of pizza is a fitting metaphor for the next chapter in the Wansink pizzagate saga. I was a bit reluctant to write another post on the sad downfall of Ig Nobel laureate Brian Wansink, head of the Food & Brand Lab at Cornell University [Mindless publishing garnished with social science apologies], but I had to take a look at the now infamous pizza buffet data myself. A couple of days ago, Wansink posted a statement re-emphasizing that “[he], of course, take[s] accuracy and replication of our research results very seriously.” More importantly, Wansink finally granted access to the data on which the four papers that came under fire months ago were based: “My team has also worked to make the full anonymized data and scripts for each study available for review.” This is awesome because everything is settled now, right? Move on, methodological terrorists, nothing to see here. Well, almost. Continue reading “When you handle trash, do you still have to handle it with statistical care?”
There is a new diet in town and neurocopiae is trying to maintain healthy dopamine release on carbs.
Tom Kerridge has a captivating story to tell. The popular chef and presenter of BBC’s Proper Pub Food and Best Ever Dishes lost 70 kilograms (down from 190 kg), and many viewers watched him slim down without knowing his secret recipe for success. Motivated by the growing interest, Tom Kerridge wrote a book that recently entered the top ten book sales list at amazon.co.uk. It could have been another simplistic take on a low-carb diet, but the publisher decided to go a different route: they dubbed it “Tom Kerridge’s dopamine diet”. Continue reading “Losing weight with loose ideas? Try the dopamine diet now”
Life is hard, science is harder, social science is impossible? Neurocopiae has to digest a bottomless dump of “fun” results.
Last time, I wrote a post about how difficult it is to do good research on nutrition and health (Cereal killer: Is eating breakfast the new smoking?). A couple of weeks later, as pizzagate unfolds, we painfully learn more about these intricacies slice by slice. At the center of attention is Brian Wansink, who “is Professor and Director of the famed Cornell University Food and Brand Lab, where he is a leading expert in changing eating behavior“. If you have missed the start of the controversy and feel like you need to catch up on the full narrative, I have linked a good summary by Andrew Gelman. Briefly, Wansink wrote a post on his blog offering the career advice to never say no to your supervisor’s proposals, because this is how you will get tenure by publishing numerous papers. Even if you have a dataset at hand that does not yield the expected result, you can torture it for a while until it finally surrenders and provides one or more significant results. All it takes then is a little more deep diving into the data and a pinch of wild story-telling, and there you go: you have successfully inflated your list of publications. Treated in this do-or-die way, every study turns into the scientific equivalent of the bottomless soup bowl that Wansink became famous for. Continue reading “Mindless publishing garnished with social science apologies”
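Why does torturing a null dataset “until it surrenders” work so reliably? A toy simulation makes the arithmetic concrete (this is my own illustration with fabricated null data, not anything from the Wansink lab): if you slice one effect-free dataset into 20 subgroup comparisons, each tested at α = .05, the chance that at least one comes out “significant” is 1 − 0.95²⁰ ≈ 64%.

```python
import math
import numpy as np

rng = np.random.default_rng(42)

n_datasets = 1000        # simulated null datasets
n_subgroup_tests = 20    # e.g., men/women x age bands x menu items ...
n_per_group = 30
alpha = 0.05

def z_test_p(a, b):
    """Two-sided z-test for a mean difference (unit variance assumed)."""
    z = (a.mean() - b.mean()) / math.sqrt(2 / len(a))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

false_alarms = 0
for _ in range(n_datasets):
    any_hit = False
    for _ in range(n_subgroup_tests):
        a = rng.normal(size=n_per_group)  # null "treatment" slice, true effect 0
        b = rng.normal(size=n_per_group)  # null "control" slice, true effect 0
        if z_test_p(a, b) < alpha:
            any_hit = True
    if any_hit:
        false_alarms += 1

rate = false_alarms / n_datasets
print(f"datasets with at least one 'significant' subgroup: {rate:.0%}")
# roughly 1 - 0.95**20, i.e. about two thirds of purely null datasets
```

Add a pinch of story-telling to whichever slice happened to cross the threshold and the publication practically writes itself.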