A couple of weeks ago, I published my first preprint on bioRxiv. Although there are many good reasons to publish preprints, we can put it simply and plainly: they are an awesome opportunity to put your work in the spotlight immediately. No paywall, no editorial evaluation of potential impact, no Reviewer #2. The only drawback is that they are not yet peer reviewed, so you basically have to read the paper as if you were reviewing it for a journal. Luckily, in times of rising submission numbers, this quickly becomes a habit anyway. Besides posting our drafts as preprints, what is the best thing we can do to support the cause? Talk about preprints, of course.
This is what I will try to do here on a regular basis. And the future is now. One recent submission caught my eye because it has all the right ingredients for me to cook up a Neurocopiae blog post: simple questions, deceitful lies, and pretty blobs. The work was conducted by Morteza Pishnamazi et al. and focuses on an fMRI correlate of an interesting behavior with high face validity: when participants have to tell a lie, they take longer to respond on average.
Fast truth, slow lies?
In the study by Pishnamazi et al., 20 healthy volunteers answered 20 autobiographical questions. First, they answered all of them truthfully. Second, participants freely picked half of the questions. Third, they were instructed to lie on this selected subset of questions for the remainder of the experiment. Fourth, participants practiced the procedure for five minutes before they entered the scanner. Inside the scanner, they were confronted with five repetitions of the 20 initial questions within an event-related design (16-min run). For the analyses, the experimenters then classified each answer as “truthful”, “false”, or “mistake” (if the reported answer violated the given instructions). Moreover, they calculated a response time (RT) measure of the cost of lying, which Pishnamazi et al. dub the “relative appended lie reaction time”: [(RT_lie − RT_truth) ⁄ RT_truth]. Notably, this measure is not correlated with overall RT, which is why both effects can be modeled at the same time. Lastly, Pishnamazi et al. enter baseline RT (RT_truth) and the lying-cost RT as predictors in an attempt to disentangle the brain processes involved in merely generating responses versus lying on a question. In the discussion, they thus argue: “Therefore, if a brain region correlates with RT-cost measure but does not correlate with baseline speed measure the cognitive function of such region is probably exclusively employed for providing intentional false responses, but not truthful answers.”
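To make the measure concrete, here is a minimal Python sketch of the relative appended lie RT with made-up response times (these numbers are hypothetical, not the authors' data or code):

```python
import numpy as np

# Hypothetical per-trial response times (in seconds) for one participant
rt_truth = np.array([1.10, 0.95, 1.20, 1.05, 1.15])  # truthful answers
rt_lie = np.array([1.45, 1.30, 1.60, 1.40, 1.55])    # deceptive answers

baseline = rt_truth.mean()                        # RT_truth: baseline speed
lie_cost = (rt_lie.mean() - baseline) / baseline  # relative appended lie RT

print(round(lie_cost, 3))  # 0.339, i.e. lying slows this person down by ~34%
```

Because the cost is expressed as a proportion of the participant's own baseline, a generally slow responder does not automatically get a large lying cost, which is why the two predictors can enter the model side by side.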
At the brain level, there are two main findings. Several brain regions show a positive correlation with overall RT, for example the paracingulate cortex (extending into the preSMA) and the right IFG. Moreover, there are brain regions positively correlated with the cost of lying. These regions also tend to show a positive correlation with overall RT, but despite some overlap of the clusters, there are distinct areas evident in the dorsal anterior cingulate cortex (dACC) and the left IFG. Conversely, other brain regions show a negative correlation with the cost of lying, for example the mid cingulate (sorry, but I would not call this PCC) and the insula. Since I am not an expert on the lateralization of function within the IFG, I am not sure how much weight to put on the dissociation between the left and right IFG, so I will focus on the potential implications of the dACC result.
Figure 2a: Group-level results of [lie > truth] BOLD contrast and correlations with behavioral reaction time (RT) measures. Z statistic images were thresholded using clusters determined by Z > 2.3 and a corrected cluster significance of p < 0.05. Locations of slices are indicated by the x, y, and z coordinates as per the MNI coordinate system. (a) Brain regions where the BOLD signal difference between lying and truth telling conditions correlated with at least one of the behavioral indices. Average speed in answering questions truthfully [RTtruth] was used as representative of participants’ baseline speed. Relative appended lie RT [(RTlie − RTtruth) ⁄ RTtruth] was used as representative of RT-cost of lying. This measure is an indicator of how participants’ RTs changed while lying.
Pishnamazi et al.(2016) @bioRxiv
Is the dACC involved in deception?
The straightforward answer to this question would be: I guess so, based on the data. However, we might be deceived into believing that this is a specific contribution of the dACC to the neural process at hand, namely lying. In fact, I think it is conceivable that such a conclusion would overstate the evidence. One aspect the authors mention is that the marked lying-induced increase in RT could translate into corresponding increases in the amplitude of the BOLD response to the incorrectly answered events. To put it simply, if you do the same thing for a longer time inside the scanner, you would expect to see a greater brain response based on the hemodynamic response model applied to the data. This would be picked up as positive correlations with baseline RT and the cost of lying, and I think it is great that Pishnamazi et al. direct attention to this potential confound, which is not easy to tackle within the selected design of the study.
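The duration confound is easy to demonstrate with a toy forward model. Below is a rough sketch (not the authors' analysis pipeline): convolving a longer boxcar with a crude double-gamma HRF yields a larger predicted peak, so a 1.5 s "lie" event produces a bigger modeled BOLD response than a 1.0 s "truth" event even if the underlying neural activity per unit time is identical.

```python
import numpy as np
from scipy.stats import gamma

# Crude double-gamma HRF (canonical shape, not a calibrated model)
t = np.arange(0, 30, 0.1)
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)
hrf /= hrf.max()

def predicted_response(duration_s, dt=0.1, total_s=60):
    """Convolve a boxcar of the given duration with the HRF."""
    n = int(total_s / dt)
    boxcar = np.zeros(n)
    boxcar[: int(duration_s / dt)] = 1.0
    return np.convolve(boxcar, hrf)[:n] * dt

fast = predicted_response(1.0)  # e.g. a quick truthful answer
slow = predicted_response(1.5)  # e.g. a slower lie (+50% RT)

# Longer events yield a larger predicted BOLD peak
print(slow.max() > fast.max())  # True
```

For event durations in the 1–2 s range, the predicted amplitude grows roughly linearly with duration, which is exactly why RT differences between conditions can masquerade as amplitude differences.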
Also, we have to keep in mind that the colored blobs, which appear to indicate the dissociation, are based on thresholded maps of the brain. In other words, it is possible that the overall correlation with RT just gets a bit weaker along the posterior-to-anterior axis while the correlation with the cost of lying gets a bit stronger. This could be a rather marginal change in the absolute correlation values, which the display greatly amplifies. Any stronger claim about function should therefore be backed up by an interaction analysis that does not hinge on first-pass thresholds.
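This is the classic "the difference between significant and not significant is not itself significant" trap, and a few hypothetical numbers make it tangible. With n = 20 participants, two neighboring correlations of r = .45 and r = .40 fall on opposite sides of the p < .05 line, yet their difference is nowhere near significant (here tested with a simple independent-samples Fisher z comparison, purely for illustration):

```python
import numpy as np
from scipy import stats

n = 20  # sample size, as in the study

def corr_p(r, n):
    """Two-tailed p-value for a Pearson correlation r with n observations."""
    t = r * np.sqrt(n - 2) / np.sqrt(1 - r**2)
    return 2 * stats.t.sf(abs(t), df=n - 2)

r_a, r_b = 0.45, 0.40  # hypothetical correlations in two neighboring voxels
p_a, p_b = corr_p(r_a, n), corr_p(r_b, n)
# One voxel crosses p < .05, its neighbor does not...
print(p_a < 0.05, p_b < 0.05)  # True False

# ...but the difference between the two correlations is far from significant
z = (np.arctanh(r_a) - np.arctanh(r_b)) / np.sqrt(2 / (n - 3))
p_diff = 2 * stats.norm.sf(abs(z))
print(round(p_diff, 2))  # 0.86
```

A thresholded map would paint only the first voxel, suggesting a dissociation that the data simply do not support; only a direct interaction test can license that claim.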
Another possibility is that the dACC correlate is a general indication of the greater difficulty involved in inverting an answer from, say, a true “yes” to an incorrect “no”. According to popular explanations of the lying RT-cost effect, the truth is the “default” answer, and the RT cost is caused by the active inhibition of the correct response that is required to commit the lie. Pishnamazi et al. discuss this aspect as potentially problematic as well, but primarily with regard to overall response time. However, this could also be a confound driving the association with RT cost at the group level. Perhaps those individuals with no measurable RT cost of lying do not even show a heightened dACC response when they are lying. If so, wouldn’t that indicate that the dACC is not specifically involved in lying, given that everyone in the study is lying in the end, just with differential costs? I think second-level RT correlates alone cannot make a strong case for the functional specificity of the dACC signal in lying.
What’s in a lie? Choice difficulty and expected value of control as established dACC correlates
To me, it all comes down to one question here: Do we need to assume a specific contribution of the dACC to lying as a unique mental process? Or could we explain the results solely by referring to the general involvement of the dACC in the recruitment of additional cognitive control when choices are difficult (Shenhav et al., 2014)? To answer this question, we would need to model choice difficulty in addition to the response cost of lying, and one might succeed in disentangling the two with a clever design of the questions. At the same time, this illustrates the limitation of the simple RT-cost measure of lying for dichotomous autobiographical questions. Certainly, if there was one important take-home from the recent #cingulategate (“the dACC is selective for pain”; Lieberman & Eisenberger, 2015), it is that selectivity or specificity of the dACC for one particular domain is virtually impossible to establish. Whereas Pishnamazi et al. are wise not to make such far-reaching claims, I think it would be more parsimonious to assume that the lying RT correlate is just one instance of a more general process until proven otherwise. Thus, it is definitely too early to call out the dACC as the deceptive part of the cingulate.
Taken together, Pishnamazi et al. uploaded a solid paper that raises several interesting questions for follow-up studies. I like that it is very clear and to the point. Also, I second the call to take RTs and RT differences into account more thoroughly in future fMRI studies. If I were Reviewer #2, I would have a few more concerns, though. First, the sample is quite small, especially if one goal was to test for inter-individual differences in RT and their brain response correlates. Second, I wonder why the trial-based RTs were not used for analyses at the first level. Wouldn’t we expect RTs to be informative within a given subject as well? Third, why were the questions repeated so many times? Couldn’t one use a bigger set of questions to avoid repetitions? What is the effect of repeatedly lying about the same question on the brain response and the RT difference? Does it decrease because the discrepancy between the truth and the lie diminishes with each repetition of a lie? Fourth, why did Pishnamazi et al. decide to use a rather low cluster-forming threshold, which has been shown to be prone to false positives? Would similar conclusions be obtained with a more conservative first-pass threshold?
I guess in the end, there is definitely some truth in the idea that the dACC is involved in deception. Either it is directly involved in lying blatantly, or it masquerades in a way that it seems as if it is truly involved in lying. That would be quite a smart move, I must say. Until we know for sure, you better watch out. Never trust a dACC response in the meantime.