Spoiler alert: it is not easy, but worth it.
Four years ago, I obtained my PhD, which seems like a very long time by now. As scientists, we are trained to do many different things, but training in how to run a lab is rather informal and largely based on trial and error. There have been a couple of nice write-ups (e.g., https://users.fmrib.ox.ac.uk/~behrens/Startingalab.htm) from far more influential and experienced researchers than me, but I thought it might still be helpful to publicly document my humble insights as they arise and evolve over time. Continue reading “Five lessons I have learned in starting a new lab”
All of our precious results deserve the same love. But it might take more than love to open a file drawer.
Science is confronted with an apparent paradox. Although almost every researcher agrees that falsification is the key to progress and that publication of null results is pivotal to the cumulative process of building knowledge, many more “positive” results are published than would be expected based on unbiased estimates of effect sizes. Obviously, there are many reasons for this discrepancy. A couple of days ago, Anne Scheel posted a very thoughtful blog post about why we should love null results. If you haven’t read it yet, make sure to do so; it is certainly worth the 10-minute read. While I second many of her conclusions, I don’t think that having the same love for null effects that we have for significant effects will ultimately help in tackling publication bias unless there is a level playing field. Perhaps this pessimism is due to the take-home message of several classics taught in literature classes back in school: think of Romeo and Juliet or Intrigue and Love, which show that love is at times not enough to overcome a rigid system of family lineage, class, and seniority rule. Any association with academia is of course purely coincidental. Continue reading “Why same love is sadly not enough: Confessions of an ECR about the file drawer”
Life is hard, science is harder, social science is impossible? Neurocopiae has to digest a bottomless dump of “fun” results.
Last time, I wrote a post about how difficult it is to do good research on nutrition and health (Cereal killer: Is eating breakfast the new smoking?). A couple of weeks later, as “pizzagate” unfolded, we painfully learned more about these intricacies slice by slice. At the center of attention is Brian Wansink, who “is Professor and Director of the famed Cornell University Food and Brand Lab, where he is a leading expert in changing eating behavior”. If you have missed the start of the controversy and feel like you need to catch up on the full narrative, I have linked a good summary by Andrew Gelman. Briefly, Wansink wrote a post on his blog offering the career advice to never say no to your supervisor’s proposals, because publishing numerous papers is how you will get tenure. Even if you have a dataset at hand that does not yield the expected result, you can torture it for a while until it finally surrenders and provides one or more significant results. Then all it takes is a little more deep diving into the data and a pinch of wild story-telling, and there you go: you have successfully inflated your list of publications. Treated in this do-or-die way, every study turns into the science equivalent of the bottomless soup bowl that Wansink became famous for. Continue reading “Mindless publishing garnished with social science apologies”