
Deconstructing the most sensationalistic recent findings in Human Brain Imaging, Cognitive Neuroscience, and Psychopharmacology



    I should be preparing for a Very Important Presentation at an upcoming meeting. But I'm not. I'm sitting at home on a Saturday night, blogging about self-sabotage.

    "Self Sabotage is when we say we want something and then go about making sure it doesn't happen."

    I do have a lot of clever ideas and useful data that are relevant for the meeting in question, I just haven't been able to start preparing my presentation yet. Am I afraid of failing? Angry at the complete lack of incentive structures at my workplace (which is organized and run in such a laughably inept manner as to be totally demotivating)?

    Psychology of Self-Handicapping

    What is behind the act of setting yourself up for failure, for unconsciously compiling a list of excuses for why you didn't perform at your best? What motivates this behavior?

    It's an act of self-preservation, actually, to have external reasons for why you didn't achieve what you set out to accomplish. That way, you're not a complete and total failure as a person. It protects your fragile self-esteem, but this comes at a price.

    Zuckerman and Tsai (2005) found the long-term costs of this strategy include a loss of perceived self-competence, negative mood, increased substance use, and a decline in motivation. Self-handicapping can be an effective strategy in the short-term, but eventually you'll suffer the consequences and end up a failure anyway.

    Anatomy of Self-Handicapping

    A group of Japanese researchers (Takeuchi et al., 2013) wanted to determine the neuroanatomical correlates of self-handicapping behavior, to see what sets this population apart from others. They used voxel-based morphometry (VBM) to quantify individual differences in brain anatomy across a large group of healthy students (94 men and 91 women). The participants were administered a Japanese version of the self-handicapping scale, along with assessments of self-esteem and depressive mood. The scale included questions like these (PDF):
    • When I do something wrong, my first impulse is to blame circumstances.
    • I always try to do my best, no matter what.
    • I tend to put things off until the last moment.
    • I would do a lot better if I tried harder.

    Regional gray matter volumes (rGMV) were quantified in a whole-brain analysis and related to scores on the self-handicapping scale with age, sex, total brain volume, intelligence, self-esteem, and depression as covariates.
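    As a rough illustration of the statistical model just described (not the authors' actual VBM pipeline), here is a minimal Python sketch of regressing gray matter volume at one voxel on the self-handicapping score while controlling for nuisance covariates. All variable names and values are synthetic, invented for the sketch.

```python
import numpy as np

# Hypothetical illustration: voxel-wise gray matter volume modeled as a
# linear function of the self-handicapping score plus nuisance covariates
# (age, sex, total brain volume). All data here are simulated.
rng = np.random.default_rng(0)
n = 185  # 94 men + 91 women, as in the study

# Synthetic covariates
age = rng.normal(21, 1.5, n)
sex = rng.integers(0, 2, n).astype(float)
tbv = rng.normal(1200, 100, n)        # total brain volume (arbitrary units)
sh_score = rng.normal(50, 10, n)      # self-handicapping scale score

# Simulate rGMV at one voxel with a true positive effect (0.5) of the score
rgmv = 100 + 0.5 * sh_score + 0.02 * tbv - 0.3 * age + rng.normal(0, 1, n)

# Design matrix: intercept, score of interest, then covariates
X = np.column_stack([np.ones(n), sh_score, age, sex, tbv])
beta, *_ = np.linalg.lstsq(X, rgmv, rcond=None)

print(f"estimated effect of self-handicapping on rGMV: {beta[1]:.3f}")
```

    The coefficient on the score (here recovered near its true value of 0.5) is the "partial" effect after the covariates are accounted for, which is the logic behind including age, sex, total brain volume, intelligence, self-esteem, and depression in the model.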

    The major finding is that self-handicapping was positively correlated with rGMV in a portion of the subgenual cingulate gyrus (sgCG), or Brodmann area 25. This general area has been dubbed the "sad cingulate" by some, because it's the region targeted by deep brain stimulation for severe intractable depression by Helen Mayberg, Andres Lozano and colleagues (e.g., Riva-Posse et al. 2012).1

    Fig. 1a (adapted from Takeuchi et al., 2013). Anatomical correlates of self-handicapping tendency. The region of correlation is overlaid on a single subject T1 image. rGMV in sgCG was correlated with individual self-handicapping tendency. Results are shown with P < 0.05 after correction for multiple comparisons at voxel-level FWE at the whole brain level.

    The extent of this correlation did not differ between males and females (see fig below). No other regions showed positive or negative correlations with self-handicapping scores. It might seem implausible that such a circumscribed area is the only one whose size correlated with the tendency for self-sabotage, but there you go.

    Fig. 1b (adapted Takeuchi et al., 2013). Scatter plot of the relationship between the self-handicapping scale score and rGMV values at the peak voxel (x, y, z = −5, 11, −16). The blue line represents the regression line for males, while the red line represents that for females.

    This result is counterintuitive, as it stands in contrast with previous studies of depressed individuals, who show smaller rGMV in sgCG (Drevets et al., 2008). In the present study, higher self-handicapping correlated positively with depression symptoms and negatively with self-esteem. But remember, this was a non-clinical population of 21-year-old students, not treatment-resistant patients with severe depression. In fact, it would be interesting to follow this population longitudinally, to see if continued use of self-handicapping tactics eventually wears down mood and sgCG volumes to pathologically low levels.

    After a lifetime of self-sabotage, the fill-in-the-blank answer to...

    "When I do something wrong, my first impulse is to _____"

    ...might change from "blame circumstances" to "blame myself for being such a miserable failure." When there's no self-esteem left, why try harder? What's the point?


    1 However, the sgCG region in the present study seems inferior and posterior to the DBS target (Riva-Posse et al. 2012).


    Drevets WC, Savitz J, Trimble M. (2008). The subgenual anterior cingulate cortex in mood disorders. CNS Spectr. 13:663-81.

    Riva-Posse P, Holtzheimer PE, Garlow SJ, Mayberg HS. (2012). Practical considerations in the development and refinement of subcallosal cingulate white matter deep brain stimulation for treatment resistant depression. World Neurosurg. Dec 12 [Epub ahead of print].

    Hikaru Takeuchi, Yasuyuki Taki, Rui Nouchi, Hiroshi Hashizume, Atsushi Sekiguchi, Yuka Kotozaki, Seishu Nakagawa, Carlos Makoto Miyauchi, Yuko Sassa, Ryuta Kawashima (2013). Anatomical correlates of self-handicapping tendency. Cortex doi: 10.1016/j.cortex.2013.01.014

    Zuckerman M, Tsai FF. (2005). Costs of self-handicapping. J Pers. 73:411-42.


    Maureen O’Connor, former mayor of San Diego and heir to her late husband Robert O. Peterson’s Jack-in-the-Box fortune, won over $1 billion playing video poker over the course of 9 years (2000-2009), according to U-T San Diego. However, she lost an even greater amount during that time, resulting in a net gambling debt of $13 million. To cover some of these losses, she transferred $2 million from her husband's nonprofit foundation to her personal bank account. She was recently charged with misappropriation of funds in federal court.

    In 2011, O'Connor had surgery to remove a large brain tumor:
    The tumor was in an area of the brain that involves "logic, reasoning and judgment," said O'Connor's attorney, Eugene Iredale.

    Is It Possible That Maureen O’Connor’s Gambling Problem Was Caused by the Brain Tumor?

    Can a tumor cause irrational economic decision-making (Koenigs & Tranel, 2007) and insensitivity to future consequences (Bechara et al., 1994)? In cases of orbitofrontal meningiomas, the answer is yes.

    T1 + contrast MRI scan shows a large olfactory groove meningioma affecting the medial orbitofrontal cortex. Image source: Radiopaedia.

    While I cannot speak to Ms. O'Connor's specific case, there are a number of reports in the neurological literature of patients who incurred large gambling debts during the time a slow-growing, non-fatal tumor impinged upon the frontal lobes. Specifically, a meningioma (a relatively common and “benign” non-infiltrating tumor in the meninges, or membranes that cover the brain) in the region of the orbitofrontal cortex (OFC) can grow to be the size of an orange over decades before it is discovered (Tomasello et al., 2011).1

    Eslinger and Damasio (1985) reported the case study of patient EVR, who had surgery to remove a large meningioma affecting medial OFC bilaterally. Although EVR showed intact cognitive function through standardized neuropsychological testing, he made a series of unwise decisions that led to very negative consequences in his life. His business went bankrupt after he took on an unsavory business partner. He drifted from job to job, often being fired for his unreliability. He got divorced, remarried against the advice of others, and then divorced again shortly thereafter.

    Bechara et al. (1994) developed what came to be known as the Iowa Gambling Task (IGT) to assess the decision-making capacity of patients like EVR. In the task, participants are shown 4 decks of cards (real or virtual) from which they are allowed to draw in a series of gambles. They are told they can win money, but might also lose money, and will be informed of the consequences of their choice only after picking a card from one of the decks. Unbeknownst to the subjects initially, Decks A and B pay out $100 but also incur larger penalties on an unpredictable schedule ("disadvantageous decks" resulting in a net loss) while Decks C and D only pay $50 but result in smaller penalties ("advantageous decks" resulting in a net gain). In the long run, patients with lesions in medial OFC (aka ventromedial prefrontal cortex, or VMPFC) preferred the decks with the higher immediate payoff over the safer decks, while controls showed the opposite pattern.

    In other words, EVR (and 6 other patients like him) chose from the disadvantageous decks significantly more often than control participants, who appeared to better learn the good and bad nature of the decks. Although the IGT is not without its critics in terms of the cognitive and affective processes necessary for optimal task performance, other studies suggest that VMPFC is indeed important for future-oriented thinking (Fellows & Farah, 2005).2
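    The deck contingencies described above can be sketched in a few lines of Python. The per-block penalty totals below follow the commonly cited schedule from the original task; the deck table and helper function are invented for illustration.

```python
# A minimal sketch of the Iowa Gambling Task contingencies: per block of
# 10 cards, decks A and B pay $100 per card but lose $1250 in penalties,
# while decks C and D pay $50 per card and lose only $250 in penalties.
DECKS = {
    "A": {"reward": 100, "penalty_per_block": 1250},  # disadvantageous
    "B": {"reward": 100, "penalty_per_block": 1250},  # disadvantageous
    "C": {"reward": 50,  "penalty_per_block": 250},   # advantageous
    "D": {"reward": 50,  "penalty_per_block": 250},   # advantageous
}

def net_per_block(deck: str, block_size: int = 10) -> int:
    """Expected net outcome over one block of draws from a single deck."""
    d = DECKS[deck]
    return d["reward"] * block_size - d["penalty_per_block"]

for name in "ABCD":
    print(name, net_per_block(name))
# Decks A/B net -$250 per 10 cards; decks C/D net +$250.
```

    The larger per-card reward of decks A and B is what makes them seductive despite their negative expected value, which is exactly the trade-off VMPFC patients fail to learn.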

    O'Connor's Plea Bargain

    In court, Ms. O'Connor pleaded not guilty to money laundering under the terms of a deferred prosecution, according to U-T San Diego. As part of the deal, she has two years to pay back funds "borrowed" from the nonprofit foundation, and she must attend treatment for gambling addiction:
    The resolution of the case takes into account her poor health but also requires O’Connor to acknowledge she misappropriated the money and obligates her to pay it back and any tax penalties, [Assistant U.S. Attorney Philip Halpern] said.

    She also has to get psychiatric treatment for gambling addiction. [Defense attorney] Iredale said that O’Connor’s doctors have said it’s possible her brain tumor pressed on centers of the brain that affect judgment and reasoning, and could explain in part her gambling addiction.

    Prosecutors dispute that. “We believe the gambling preceded her medical condition,” Halpern said.

    ABC 10 News reported:
    If she does not obey all laws, she could face 10 years in prison.

    All parties agreed that O'Connor's medical condition renders it highly improbable -- if not impossible -- that she could be brought to trial.

    "We think largely as a result of the brain tumor, she had engaged in a period of compulsive gambling in which she systematically gambled away an inheritance that was left to her of several million dollars," said Iredale.

    CBS News aired an interview with the former mayor. O'Connor said that video poker was "electronic heroin. You know, the more you did, the more you needed and the more it wasn't satisfied."
    As mayor she was always in control. Her gambling was out of control.

    "I thought I could beat that machine," she said. "And when it got worse, I didn't know I had the silent grenade in my head that could go off at any time."

    The "silent grenade" was a golf ball-sized tumor doctors removed from her brain. They discovered it two years ago when she started hallucinating. She says she believes the slow-growing tumor contributed to her gambling addiction. "It's not an excuse for my gambling, but I think that was, yes, a part of it. You lose your sense of control," she said.

    How slow-growing?

    Prosecuting attorney Halpern was skeptical of the tumor explanation, saying "she began her gambling run in 2001 -- a decade earlier. It would have to be a pretty slow-growing tumor."

    But as we've seen, meningiomas can be very slow-growing. Neurosurgeon Dr. Katrina Firlik presented the case of a giant olfactory groove meningioma on her website (the MRIs alone are worth checking out):
    This patient presented with a several year history of depression, which was, in retrospect, most likely related to this benign tumor. This type of tumor typically grows slowly, over years or even decades.

    Now, it bears repeating that I do not know whether Ms. O'Connor had this type of tumor. However, her symptoms could be seen as consistent with an olfactory groove meningioma affecting the OFC, including the visual hallucinations (perhaps due to pressure on the optic nerve). Visual disturbances can also be seen in medial sphenoid wing meningiomas, but these are not generally associated with such extreme behavioral changes (Sughrue et al., 2013).

    O'Connor also had a stroke at some point and shows signs of memory loss, difficulty reading, and occasional language comprehension problems, according to her doctor. The latter symptoms are not consistent with an OFC tumor but could be due to the stroke.3

    Finally, it's important to note that O'Connor no longer feels compelled to gamble now that the tumor has been removed: "After the tumor was taken out and I started healing, I have no desire to gamble."


    1 Olfactory groove meningiomas that exceed 6 cm in diameter are known as "giant olfactory meningiomas" (d'Avella et al., 1999). The largest one in this case series was 9 cm in diameter, the size of an orange (shown below).

    Left: preoperative and Right: early postoperative T1-weighted MRI.

    2 In this study (Fellows & Farah, 2005), patients with VMPFC lesions demonstrated a dissociation between future time perspective (which was impaired relative to controls) and temporal discounting, or "the subjective devaluation of reward as a function of delay" (which was intact). Thus, VMPFC damage did not result in unusual discounting of rewards given in the future, relative to those given in the present.

    3 An infarction of the left posterior cerebral artery, for example, could result in damage to the left hippocampus (memory loss) and left ventral temporal and/or occipital cortices (reading difficulties).

    Additional coverage: Can a Brain Tumor Turn You Into a Gambler?


    Bechara A, Damasio AR, Damasio H, & Anderson SW (1994). Insensitivity to future consequences following damage to human prefrontal cortex. Cognition, 50 (1-3), 7-15 PMID: 8039375

    d'Avella D, Salpietro FM, Alafaci C, Tomasello F. (1999). Giant olfactory meningiomas: the pterional approach and its relevance for minimizing surgical morbidity. Skull Base Surg. 9:23-31.

    Eslinger PJ, & Damasio AR (1985). Severe disturbance of higher cognition after bilateral frontal lobe ablation: patient EVR. Neurology, 35 (12), 1731-41. PMID: 4069365

    Fellows LK, Farah MJ. (2005). Dissociable elements of human foresight: a role for the ventromedial frontal lobes in framing the future, but not in discounting future rewards. Neuropsychologia 43:1214-21.

    Koenigs, M., & Tranel, D. (2007). Irrational Economic Decision-Making after Ventromedial Prefrontal Damage: Evidence from the Ultimatum Game. Journal of Neuroscience 27 (4), 951-956.

    Sughrue ME, Rutkowski MJ, Chen CJ, Shangari G, Kane AJ, Parsa AT, Berger MS, McDermott MW. (2013). Modern surgical outcomes following surgery for sphenoid wing meningiomas. J Neurosurg. Feb 22. [Epub ahead of print]

    Tomasello, F., Angileri, F., Grasso, G., Granata, F., De Ponte, F., & Alafaci, C. (2011). Giant Olfactory Groove Meningiomas: Extent of Frontal Lobes Damage and Long-Term Outcome After the Pterional Approach. World Neurosurgery, 76 (3-4), 311-317 DOI: 10.1016/j.wneu.2011.03.021

    03/10/13: The Purring Center in Cats

    Large black spots show points from which stimulation elicited purring. Small black spots show points in these sections which were stimulated without eliciting purring. Numerous other points in other sections were stimulated with negative results so far as purring was concerned (Gibbs & Gibbs, 1936).

    A 1936 study by Gibbs and Gibbs identified the infundibular region (which connects the hypothalamus and the posterior pituitary) as the purring center in the cat's brain:
    In the course of a study which we conducted on the convulsion threshold of various parts of the cat’s brain, a region was found which when stimulated caused purring. This reaction was so striking and the region from which it was obtained so definitely localized that we consider it worthy of a special report.

    Our experiments were conducted on 400 cats.

    . . .

    The points stimulated in our 400 experiments were fairly well scattered through the brain (Gibbs and Gibbs, ’36). In only three cases, however, did we obtain purring as a response to stimulation. In each this was the first response to weak stimulation; it was obtained with the secondary coil at 10 cm. or more from the primary. In all three cases the tip of the needle lay in the infundibular region (see figures).


    Purring can be elicited by electrical stimulation in the infundibular region of the cat’s brain.

    400 cats!

    Some of the 400 cats that were rescued from a market in Tianjin. 
    Photograph: China Photos/Getty Images.

    ADDENDUM (March 11 2013): Just to be crystal clear, the main reason the authors conducted the study in the first place was to determine seizure thresholds in different parts of the cat brain, not to find the purring center. They did not lay out the rationale for the seizure study in the purring paper, but see abstract below.

    Gibbs FA, Gibbs EL. (1936). The convulsion threshold of various parts of the cat's brain. Arch Neurol Psychiat 35:109-116.

    In this investigation we have attempted to determine the relative ease or difficulty with which convulsions can be produced by electrical stimulation of various parts of the cat's brain. The problem has significance because it bears directly on the question of whether or not a special part of the brain is concerned with the production of convulsions, a question of major importance to those interested in the etiology of epileptic seizures.

    According to Wikipedia:
    Frederic Andrews Gibbs (1903–1992) was an American neurologist who was a pioneer in the use of electroencephalography (EEG) for the diagnosis and treatment of epilepsy.


    Gibbs EL, Gibbs FA. (1936). A purring center in the cat's brain. Journal of Comparative Neurology 64: 209–211.

    Basal view of a human brain
    (Infundibulum labeled third from the top on right).


    "It depends upon what the meaning of the word 'is' is."
    -President Bill Clinton, August 17, 1998

    Dr. Vaughan Bell at Mind Hacks wrote a terrific post on The history of the birth of neuroculture as a follow-up to his Observer piece on Folk Neuroscience. That article explained how neuro talk has invaded many aspects of everyday discourse. In the new post he briefly covers the history of modern neuroscience, a necessary prelude to contemporary neuroculture:
    Neuroscience itself is actually quite new. Although the brain, behaviour and the nervous system have been studied for millennia the concept of a dedicated ‘neuroscience’ that attempts to understand the link between the brain, mind and behaviour only emerged in the 1960s and the term itself was only coined in 1962. Since then several powerful social currents propelled this nascent science into the collective imagination.

    To me, those dates seem quite recent in relation to brain research that has been conducted for centuries. Was there no neuroscience research prior to the 60s? My general perception is that ‘neuroscience’ research has been around a lot longer than that, even if it wasn't called by that precise name. It might have been called psychobiology (Yerkes, 1921), neurobiology (Brodmann, 1909),1 neurophysiology (1938) or neurochemistry (Lewis, 1948), but the types of questions asked and the experiments performed appear to be in line with much of what passes as a dedicated neuroscience in modern times. Here's Dr. Nolan D.C. Lewis speaking at the 96th Annual Session of the American Medical Association, Atlantic City, NJ, June 13, 1947 (Lewis, 1948):
    The actual nature of the thought processes is annoyingly elusive. What is the nature of thought? It is probably a manifestation of energy, but one can ask many questions about this. ... Do small areas of intact brain produce thoughts? Does the brain produce the mind independently or is it an instrument used by some other somatic processes or agents in the body? Does the brain itself think or is it a transmission center utilized by some other force? Is the mind the product of cerebral matter or is it dependent on something else which governs it? Can matter think? Either matter can produce mind or it cannot. Is mind a unique form of matter different from any other known forms of matter? While these questions and problems are probably not solvable by means of present technics, they are challenging, approachable and must eventually become elucidated if we are to get to the core of mental disorders.2

    What's in a name?

    I became curious enough to investigate whether the term ‘neuroscience’ was actually coined in 1962. @AliceProverbio confirmed that "Francis Schmitt used the term Neuroscience for the first time in 1962 to name his Neuroscience research group [at] MIT". I found the paper in the Journal of the History of Neurosciences that clearly recognizes the role of Schmitt, but it also opined that the word might have been invented earlier (Adelman, 2010):
    ...the word might have been coined by Ralph Gerard in the early 1950s...

    Does it really matter when the word itself was first used? No, not for Vaughan's history of the birth of neuroculture. I'm not going to get to the bottom of who should get credit, either. But I do find it interesting to see how the word is used in various historical contexts.

    Not to be outdone by MIT, Harrison (2000) reviews the contributions and recollections of Five Scientists at Johns Hopkins in the Modern Evolution of Neuroscience, including those of pioneering neurophysiologist Professor Vernon Mountcastle:
    ‘In the 1940’s, and on, this place [Johns Hopkins University] was red hot for the development of Neuroscience’.

    Noted historian of neuroscience Professor Stanley Finger, in his review on Women and the History of the Neurosciences, named several famous women neuroscientists of the 19th century (Finger, 2002):3
    Women have been underrepresented in the early years of the neurosciences, much as they have been in other scientific endeavors. Nevertheless, the names of many important women contributors stand out if one begins in the latter part of the 19th century...

    Two women, who worked in part with their husbands but also achieved greatness on their own as the 19th century drew to a close and the 20th century began, are Augusta Marie (Dejerine-) Klumpke (1859-1927), who was married to Joseph Jules Dejerine (1849-1917), and Cécile Mugnier Vogt (1875-1962), who was married to Oskar Vogt (1870-1950).

    Three other famous women neuroscientists from the later period are Christine Ladd-Franklin (1847-1930), Maria Michailovna Manasseina (also known as Marie de Manacéine; 1843-1903), and Margaret Floy Washburn (1871-1939).

    But in describing the vision of Professor Francis O. Schmitt in founding the Neurosciences Research Program at MIT, Adelman (2010) gets the last word on ‘neuroscience’:
    Ideally, Schmitt and his colleagues thought, the various physical, biological, and neural sciences could be brought together to attack a single goal, and what a goal — the ultimate one of all science and philosophy — how does the mind/brain work! Every field with some involvement in mind-brain studies would be included, from the molecular and subcellular areas of cell biology to the higher reaches of psychology and psychiatry. Such areas as cognitive psychology might not be able to contribute much to neurobiology; parallel fibers and psychophysical parallelism have little in common. But this field could pose major questions about higher brain function and the mechanisms of thinking, with molecular genetics perhaps providing answers about mechanisms operating at subcellular levels of the nervous system.

    Ha, ha! So much for the modern convergence of brain and behavioral sciences...


    1 Dr. Korbinian Brodmann worked as an Assistant in the Neurobiological Laboratory of the University of Berlin.

    2 It goes without saying that modern techniques have opened up new avenues of study. And that ethical standards for the proper conduct of human and animal research (e.g., The Purring Center in Cats) have improved considerably since then.

    3 To be brutally obvious here, it bears repeating that the name of the journal in which this appears is the Journal of the History of the Neurosciences.


    Adelman, G. (2010). The Neurosciences Research Program at MIT and the Beginning of the Modern Field of Neuroscience. Journal of the History of the Neurosciences, 19 (1), 15-23 DOI: 10.1080/09647040902720651

    Brodmann K. (1909). Vergleichende Lokalisationslehre der Großhirnrinde: in ihren Prinzipien dargestellt auf Grund des Zellenbaues. Leipzig: Barth. (Translation: Laurence J. Garey, 2006).

    Finger S. (2002). Women and the history of the neurosciences. J Hist Neurosci. 11:80-6.

    Harrison TS. (2000). Five scientists at Johns Hopkins in the modern evolution of neuroscience. Journal of the History of the Neurosciences 9:165-79.

    Lewis N. (1948). Suggestive research leads in contemporary neurochemistry. JAMA: The Journal of the American Medical Association, 136(13). DOI: 10.1001/jama.1948.02890300016005

    Yerkes RM. (1921). The relations of psychology to medicine. Science 53(1362):106-11.


    In case you missed it, I had a guest post this week in Nature's SpotOn NYC series on Communication and the Brain (#BeBraiNY), held in conjunction with Brain Awareness Week. The theme concerned the challenges of engaging the public's interest in cognitive sciences, and communicating the knowns (and unknowns) of brain disorders:
    In the current funding climate of budget cuts and sequestration, there’s a wide latitude between overselling the immediate clinical implications of "imaging every spike from every neuron" in the worm C. elegans (as in the proposed Brain Activity Map Project) and ignoring science communication entirely, leaving it up to the university press office.

    Who occupies the middle ground between the industry cheerleader and the disinterested academic? Science bloggers, for one. Scientist bloggers comprise a growing segment of the science communication world.

    Many of us have been critical of how traditional media channels can distort the actual scientific results and mislead the public. With the mainstreaming of neurocriticism, I felt this topic had been discussed extensively in recent months, so I moved on to the responsibilities we face in presenting accurate information. Some examples were drawn from my posts on unusual neurological disorders, including Prosopometamorphopsia (a condition where faces look distorted on one side) and Othello Syndrome (delusional jealousy). Both posts can turn up on the first page of a Google search, so I do feel an obligation to be factual and informative.

    Another example was a critique of public brain scanning on Celebrity Rehab with Dr. Drew. Although I wrote that post (and a follow-up) in 2010, readers were finding them now because former program participants Mindy McCready and Dennis Rodman were in the news, for very different reasons.

    My guest post concludes with:
    Scientist bloggers serve an important function in the continuum of science communication. We should take our responsibility for presenting high quality, ethical information very seriously, to help stem the ongoing flood of neurocrackpottery.

    Amidst the SpotOn NYC series extolling the virtues of science blogging came a new paper suggesting that science blogs are inferior sources of information relative to traditional media (Allgaier et al., in press):
    Scientists may understand that neuroscience stories in legacy media channels are likely to be of higher quality than similar narratives found in blogs. Stories in social channels are often crafted on the fly, without the help of experienced editors who can point out holes in the narrative or who can insist on rewriting and revision. Blog posts also tend to be shorter narratives, bereft of the kind of complexity and nuance possible only in long-form journalism.

    Obviously, there's a lot of high quality "long-form" journalism (which is never defined in the paper), but a huge number of high quality, complex and nuanced blog posts can be found as well. The passage above sparked quite the discussion on social media. Here's one initiated by respected journalist, blogger, and science writer Carl Zimmer:
    Blogs versus journalism in neuroscience--IT LIVES!

    I found passages like the one I just quoted [the one above] to be puzzling on many levels.

    Science blogs pretty much came into existence as a way for scientists themselves to critique bad coverage in traditional media. And, ten years later, that remains a powerful tradition.

    The paper presents a romantic, uncritical view of the press. Speaking as a journalist, I can say this is a view we can ill-afford.

    What's more, neuroscience blog posts are very often deep, nuanced, and more accurate than "churnalism" driven by glib press releases.

    If neuroscientists are indeed avoiding blogs for this reason (no data provided in the paper that this is true), then they are sadly misguided.

    Eight others joined in the discussion, which is worth reading. One of the participants was Dominique Brossard, an author on the article in question.

    In brief, Allgaier et al. (in press) randomly contacted 1,248 "productive" neuroscientists who had published at least 8 articles in the preceding 2-year period. The survey participation rate was 21.3% in the US and 32.6% in Germany.
    The scientists responded to questions about three dimensions of public media channels, both traditional and online: (1) their personal use of these channels to “follow news and information about scientific issues”; (2) their assessment of the impact of scientific information in these channels on public opinion about science; and (3) their assessment of the impact of such information on “science-related decisions made by policymakers.” The respondents answered the questions with respect to a comprehensive list of traditional print or broadcast media, online analogs of those media channels, blogs, and content in social networks.
    Respondents were primarily male (78%) and over 40 (79%). Is this a typical sampling of neuroscientists? Obviously not, since it is gender-imbalanced1 and excludes most grad students and the average post-doc.

    The results in this group of participants suggested a preference for old media:
    The results of our survey indicate that the respondents in both countries remained heavily reliant on journalistic narratives, in both traditional and online forms, for information about scientific issues. Only a modest number of the surveyed neuroscientists reported that they use blogs or social networks to monitor such issues.

    Fig. 1a (modified from Allgaier et al., in press). Media use (in percentages) among neuroscientists in the United States and Germany. For the exact wording of the questions, detailed data, and significance information, consult supplemental table S1, available online at  [not online as of this writing].

    The over-40 crowd was more reliant on newspapers and valued online articles less than the younger set, who used social media more often as a source of popular science news. Women were less reliant on newspapers and printed pop sci magazines for science issue information than men.

    Do we really know if the participants consider blogs and social media to be inferior sources of information for the reasons quoted above? We do not. The authors were speculating, as they were in this paragraph (which elicited howls in the Blogs versus journalism discussion): 
    Finally, we speculate that the scientists in this study may value journalistic narratives because they appreciate that journalism is indifferent to the interests and goals of science. Although this may be perceived as a disadvantage of journalism from the scientists’ point of view, it is actually a key advantage. Their role as external observers affords journalists credibility compared with scientific self-presentation.

    This attitude is quite different from the skeptical neuroblogger view of mainstream science journalism, which is covered in many of the posts below. It seems to me that Allgaier et al.'s sampling method potentially excluded many of these voices, who were not considered "productive" neuroscientists by the authors.

    All posts in the #BeBraiNY series:


    1 According to the Society for Neuroscience:
    Women have been an increasing force within the field, more than doubling over the past 20 years – 21 percent of SfN members were women in 1982 compared to 43 percent in 2011, according to membership surveys.


    Joachim Allgaier, Sharon Dunwoody, Dominique Brossard, Yin-Yueh Lo, & Hans Peter Peters (2013). Journalism and Social Media as Means of Observing the Contexts of Science. BioScience. DOI: 10.1525/bio.2013.63.4.8 {PDF}

  • 03/21/13--03:04: Distrust of Psychology
  • "There is a tendency among physiologists, among natural scientists generally, to look upon psychology with distrust, if not with indifference or scorn."

    -Yerkes (1904)

    Psychology has been having a crisis of confidence lately: blatant and high-profile fraud cases, questions about sloppy methods and statistics, and the increasingly acknowledged file drawer problem of unpublished negative results. For these reasons, I thought it was interesting to take a look back and see similar criticisms of the field over 100 yrs ago.

    Pure Rot
    "Even the honest and sincere defender of psychology, or of the possibilities of such a science, cannot deny that much work which has been placed upon record as experimental psychology is pure rot."

    -Yerkes (1904)

    Yerkes was a primatologist and an editor of the Journal of Comparative Neurology and Psychology. For most of the journal's existence (1891-present), it has been known as the Journal of Comparative Neurology, but "Psychology" was added to the title from 1904 to 1910. The quotes here are taken from one of Yerkes' editorials:
    "The average German physiologist uses very different tones of voice for the “Physiolog” and the “Psycholog.” Some of them apparently feel that psychology is too near akin to metaphysics to be a safe favorite for the natural scientist, while others are evidently satisfied in their own minds that the psychic is not and cannot be material of a natural science. In America too there is a strong prejudice against psychology, among the natural scientists especially, or, if not prejudice, there is a distrustful curiosity which makes the life of the truly scientific student of psychic reactions at times unpleasant. This general distrust and ridicule of psychology is doubtless due, first, to the fact that the naturalistic movement of the last century was accompanied by a wide spreading and deep distrust of the speculative sciences of which psychology was then, and is still by many, reckoned as one; and second, perhaps almost as largely, to the semi-scientific and too often carelessly used methods of that new psychology which called itself experimental."


    Yerkes was a keen observer of Psychology and a strong supporter of its importance as a natural science. Unfortunately, he also promoted eugenics in the 1910's and 1920's.

  • 03/25/13--00:16: Yerkes and Eugenics
  • "Eugenics, the art of breeding better men, imperatively demands reliable measurement of human traits of body and mind, of their inter-relations, and of their modification by environmental factors."

    -Yerkes (1923)

    The previous post on Distrust of Psychology contained several quotes from a 1904 editorial on the dim view of psychology taken by many physiologists of the era. It was written by Robert M. Yerkes, who was the editor of the Journal of Comparative Neurology and Psychology. Yerkes himself was committed to establishing psychology as a respectable field (Yerkes, 1904):
    For those of us who have at heart the establishment and advancement of comparative psychology as a science coordinate with physiology there is the clear duty to make our work eminently worthy of scientific recognition and reliance.

    He was a notable primatologist who later became involved in human intelligence testing as part of America's World War I effort to screen army recruits. In concluding the prior post, I stated:
    Yerkes was a keen observer of Psychology and a strong supporter of its importance as a natural science. Unfortunately, he also promoted eugenics in the 1910's and 1920's.

    This prompted two comments on my knee-jerk reaction to "eugenics". Has the term been rehabilitated, unbeknownst to me? Is it fortunate that Yerkes believed in the racial inferiority of African Americans, based on the culturally biased intelligence tests he developed (Yerkes, 1923)?

    Is modern-day amniocentesis to screen fetal DNA for Down's syndrome and other (usually fatal) trisomies really the same thing as limiting immigration from specific countries based on the population's lower "intelligence" (as assessed by flawed tests)?
    "Far more interesting doubtless to the practical eugenist than occupational differences in intelligence or specifications are the racial differences which appear when the foreign-born American draft is analysed into its principal constituent groups. The difference even of median score or letter grade distribution are so great as to be significant alike to the American people and to the eugenists of the world."

    -Yerkes (1923)

    Recently on Twitter, evolutionary psychologist and provocateur Jesse Bering posed the question of whether a case could be made for modern-day eugenics. I originally thought he was being trollish, or perhaps had taken a page out of filmmaker Lars von Trier's comedy playbook (whose Nazi jokes got him banned from Cannes).

    Since Bering is an openly gay man, I thought the question was especially preposterous. Who gets to decide the traits and "disorders" slated for elimination? But then I read the essay on Chinese eugenics by evolutionary psychologist Geoffrey Miller -- a response to the question WHAT *SHOULD* WE BE WORRIED ABOUT?

    Chinese Eugenics

    China has been running the world's largest and most successful eugenics program for more than thirty years, driving China's ever-faster rise as the global superpower. I worry that this poses some existential threat to Western civilization. Yet the most likely result is that America and Europe linger around a few hundred more years as also-rans on the world-historical stage, nursing our anti-hereditarian political correctness to the bitter end.

    So the resurgence of interest in eugenics is serious? And not just among white supremacists?

    Miller continues:
    The BGI Cognitive Genomics Project is currently doing whole-genome sequencing of 1,000 very-high-IQ people around the world, hunting for sets of sets of IQ-predicting alleles. I know because I recently contributed my DNA to the project, not fully understanding the implications.1 These IQ gene-sets will be found eventually—but will probably be used mostly in China, for China. Potentially, the results would allow all Chinese couples to maximize the intelligence of their offspring by selecting among their own fertilized eggs for the one or two that include the highest likelihood of the highest intelligence. Given the Mendelian genetic lottery, the kids produced by any one couple typically differ by 5 to 15 IQ points. So this method of "preimplantation embryo selection" might allow IQ within every Chinese family to increase by 5 to 15 IQ points per generation. After a couple of generations, it would be game over for Western global competitiveness.

    What do you think, is the BGI Cognitive Genomics Project a menace to "Western civilization" as we know it? Or is Miller's scenario a fantasy contingent upon a vast array of genetic information that is currently unavailable2.... or even unattainable in the foreseeable future?


    1 A very-high-IQ person who didn't understand the implications??

    2 Major problems with one recently published effort are outlined in False discovery: How not to find the genetic basis of human intelligence.


    Yerkes RM. (1923). Eugenic bearing of measurements of intelligence. Eugen Rev. 14:225-45.


    Is it possible for a brain scan to predict whether a recently paroled inmate will commit another crime within 4 years? A new study by Aharoni et al. (2013) suggests that the level of activity within the anterior cingulate cortex might provide a clue to whether a given offender will be rearrested.

    Dress this up a bit and combine with a miniaturized brain-computer interface that continuously uploads EEG activity to the data center at a maximum security prison. There, machine learning algorithms determine with high accuracy whether a given pattern of neural oscillations signals the imminent intent to reoffend that will trigger deep brain stimulation in customized regions of prefrontal cortex, and you have the plot for a 1990s cyberpunk novel.

    But we're getting way ahead of ourselves here...

    Dr. Kent Kiehl outside the mobile scanner his group uses to look at the brains of inmates at New Mexico prisons. Credit: Nature News.

    The actual study in question used functional MRI to scan the brains of 96 male inmates at two New Mexico state correctional facilities while they performed a cognitive task (Aharoni et al., 2013). The task required responding to a frequent stimulus presented 84% of the time ("X") and inhibiting responses to the rare stimulus ("K").

    Fig. S4. (Aharoni et al., 2013). Go/No-Go task.

    The major comparison examined brain activity on incorrect responses to "K" (commission errors) vs. correct responses to "X" (hits). This contrast was restricted to a region of interest (ROI) in the dorsal anterior cingulate cortex (dACC), which has been associated with a wide array of cognitive and emotional control functions (Posner et al., 2007).

    Results from a separate group of 102 age-matched control participants (mean = 33.9 yrs) from Hartford, CT1 determined the a priori ROI, with the peak voxel located at coordinates x = −3, y = 24, z = 33 in the center of a 14 mm sphere. One control ROI was chosen in a more ventral and anterior region of medial prefrontal cortex (mPFC) at 0, 51, −6.

    The most strongly activated voxel in the offender group for the error vs. hit contrast was remarkably close to the one determined from the independent sample and fell well within the a priori ROI (see blue crosshairs in figure below).

    Fig. 2 (modified from Aharoni et al., 2013).(B) Mean hemodynamic response change in offender sample (n = 96) during commission errors vs. correct hits from sagittal (Upper Left), coronal (Right), and axial (Lower Left) orientations. Peak activation located at x = 3, y = 24, z = 33 within the anterior cingulate cortex region of interest (P < 0.00001, FWE).

    The dACC has been strongly implicated in error processing (Simons, 2010), and that was no different in the offenders as a group. Other regions significantly activated by commission errors included bilateral inferior frontal cortex/insula, fusiform gyrus, and cerebellum but these were not discussed.

    Of greatest interest is whether this dACC activity can predict recidivism. For this the authors did a survival analysis:
    First, a Kaplan–Meier survival function was computed to describe the proportion of participants surviving any felony rearrest over the 4-y follow-up period, ignoring the influence of any particular risk factor (Fig. S1). Cox proportional hazards regression was then used to examine (i) the zero-order effects of ACC activity on months to rearrest for any crime, (ii) the shared and unique influence of the ACC and other potential risk factors on months to rearrest for any crime, (iii) for nonviolent crimes, and (iv) the shared and unique influence of the medial prefrontal cortex (mPFC) control region and other potential risk factors on months to rearrest for any crime. ...

    ... A significant association was found whereby, for every one unit increase in ACC activity, there was a 1.39 (i.e., 1/exp[B]) decrease in the probability of rearrest.

    ...Meaning that the participants with greater ACC activity were less likely to reoffend. The mPFC  ROI did not show this association. Then a median split divided the offender sample into high ACC and low ACC groups (survival function shown below).
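    For readers unfamiliar with the method, here is a minimal sketch of how a Kaplan–Meier survival curve is built from follow-up data. The times and censoring flags below are hypothetical stand-ins, not the study's data: the point is only to show the product-limit calculation.

```python
# Minimal Kaplan-Meier (product-limit) estimator. At each time where an
# event (here, rearrest) occurs, survival is multiplied by the fraction
# of at-risk participants who did NOT have the event at that time.

def kaplan_meier(times, events):
    """times: months to rearrest or to end of follow-up;
    events: 1 = rearrested at that time, 0 = censored (no rearrest)."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        rearrests = sum(1 for tt, e in data if tt == t and e == 1)
        leaving = sum(1 for tt, e in data if tt == t)  # events + censored
        if rearrests:
            survival *= 1 - rearrests / n_at_risk
            curve.append((t, survival))
        n_at_risk -= leaving
        i += leaving
    return curve

# Hypothetical: 6 parolees, rearrested at 3, 12, 12, and 30 months;
# 2 still crime-free at the 48-month cutoff (censored).
curve = kaplan_meier([3, 12, 12, 30, 48, 48], [1, 1, 1, 1, 0, 0])
```

A Cox regression then models how covariates (like ACC activity) shift the hazard underlying this curve, which is where the 1/exp[B] odds interpretation comes from.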

    Fig. 1 (Aharoni et al., 2013). Cox survival function showing proportional rearrest survival rates of high (solid green) vs. low (dashed red) ACC response groups for any crime over a 4-y period. Results of this median split analysis were equivalent to that of the parametric model: bootstrapped B = 0.96; SE = 0.40; P < 0.01; 95% CI, 0.29–1.84. The mean survival times to rearrest for the low and high ACC activity groups were 25.27 (2.80) mo and 32.42 (2.73) mo, respectively. The overall probabilities of rearrest were 60% for the low ACC group and 46% for the high ACC group.

    So for all felonies (both violent and nonviolent), a substantial percentage of participants were likely to be rearrested within 4 years. The ACC classification scheme would wrongly condemn the 40% of low ACC parolees who did not reoffend, and would miss the 46% of high ACC parolees who did commit crimes after release. When you look at it that way, it's not all that impressive and completely inadmissible as evidence for decision-making purposes. For nonviolent felonies only, the probability of rearrest for high ACC offenders was 31%, compared to 52% for low ACC offenders.
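    Those base rates make the point concrete with a few lines of arithmetic, using only the percentages quoted above (60% rearrest for low ACC, 46% for high ACC) and assuming, as the median split implies, an even 48/48 division of the 96 offenders:

```python
# Rough accounting for a rule that predicts "will reoffend" for every
# low-ACC parolee, from the rearrest probabilities quoted in the paper.
# The even 48/48 split is an assumption for illustration.
n_low = n_high = 96 // 2
p_rearrest_low, p_rearrest_high = 0.60, 0.46

true_pos  = p_rearrest_low * n_low          # low ACC, rearrested
false_pos = (1 - p_rearrest_low) * n_low    # low ACC, stayed clean
misses    = p_rearrest_high * n_high        # high ACC, rearrested
true_neg  = (1 - p_rearrest_high) * n_high  # high ACC, stayed clean

accuracy = (true_pos + true_neg) / (n_low + n_high)  # barely beats chance
```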

    A number of other variables were considered in the regression models (and singly as predictors), including age at release, drug and alcohol use, scores on the Psychopathy Checklist-Revised (PCL-R) (Hare, 2003), and commission errors. The best predictor was still ACC activity, but age and score on Factor 2 of the PCL-R both came in at around p=.05. On the PCL-R, Factor 1 includes callousness and the inability to experience remorse, guilt, and empathy while Factor 2 includes impulsivity, stimulation seeking, and irresponsibility (Ermer et al., 2012). The authors consider low ACC activity to be a manifestation of impulsivity, but it could just as easily be related to a lack of concern about making mistakes (i.e., irresponsibility).

    Should functional MRI data be used in parole board hearings?

    No, absolutely not. No one is suggesting this, not even Kiehl himself:
    Kiehl isn’t convinced either that this type of fMRI test will ever prove useful for assessing the risk to society posed by individual criminals. But his group is collecting more data — lots more — as part of a much larger study in the New Mexico state prisons. “We’ve scanned 3,000 inmates,” he said. “This is just the first 100.”

    Nonetheless, I was very impressed that fMRI and behavioral data were collected from 96 prison inmates. That's no easy feat. And the total sample size is now up to a staggering 3,000 inmates!!

    Another striking aspect of this paper is that Aharoni and colleagues made their individual subject data available as an Excel spreadsheet that can be downloaded from the PNAS website as supplementary material (Download Dataset_S01, XLSX). It includes the ROI beta weights along with a number of demographic and performance variables.

    In my next post, I'll present the results of some analyses that I've conducted, and what they might suggest about behavioral performance in the Go/NoGo task.


    1 The median income in Hartford is rather low, and 30% of the population lives in poverty. Although not explicitly stated, these participants might be matched to the criminal offenders for socioeconomic status. The mean years of education was not given for either group. One notable difference, however, is that the control group was 52% female while all the offenders were male.


    Aharoni, E., Vincent, G., Harenski, C., Calhoun, V., Sinnott-Armstrong, W., Gazzaniga, M., & Kiehl, K. (2013). Neuroprediction of future rearrest. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1219302110

    Ermer E, Cope LM, Nyalakanti PK, Calhoun VD, Kiehl KA. (2012). Aberrant paralimbic gray matter in criminal psychopathy. J Abnorm Psychol. 121(3):649-58.

    Kiehl KA, Liddle PF, Hopfinger JB. (2000). Error processing and the rostral anterior cingulate: an event-related fMRI study. Psychophysiology 37(2):216-23.

    Posner MI, Rothbart MK, Sheese BE, Tang Y. (2007). The anterior cingulate gyrus and the mechanism of self-regulation. Cogn Affect Behav Neurosci. 7(4):391-5.

    Simons RF. (2010). The way of our errors: theme and variations. Psychophysiology 47(1):1-14.


    Can Brain Activity Predict Criminal Reoffending?  The previous post discussed a functional MRI study suggesting that the level of error-related activation in the anterior cingulate cortex (ACC) might have value in predicting whether a recently released prisoner will be rearrested within 4 years (Aharoni et al. 2013):
    The odds that an offender with relatively low anterior cingulate activity would be rearrested were approximately double that of an offender with high activity in this region, holding constant other observed risk factors. These results suggest a potential neurocognitive biomarker for persistent antisocial behavior.

    However, using ACC activity as a dichotomous variable misclassified 40% of low ACC participants who did not reoffend and 46% of high ACC participants who did commit crimes after release, not exactly the odds you'd want for making parole decisions. Even the senior author was doubtful that an fMRI test would ever be useful for risk assessment purposes on a case by case basis.

    Since Aharoni and colleagues made their individual subject data available as supplementary material (Download Dataset_S01, XLSX), I was interested in how some of the demographic and performance variables might be related to recidivism, since these are obviously cheaper and easier to collect from incarcerated prisoners than MRI scans.

    The cognitive task performed during the fMRI experiment required responding to a frequent stimulus presented 84% of the time ("X") and inhibiting responses to a rare stimulus ("K").

    Fig. S4. (Aharoni et al., 2013). Go/No-Go task.

    The study compared brain activity on incorrect responses to "K" (commission errors) and correct responses to "X" (hits) in a region of interest in the dorsal ACC, which has been implicated in error processing (Simons, 2010), among many other things. The authors framed the results largely in the context of impulse control, but other explanations are possible (as we'll see later).

    Are any of the task performance variables related to recidivism? Starting with some very simple-minded t-tests, the rate of commission errors in the group of participants arrested for nonviolent offenses1 (n=40) did not differ significantly from what was seen in those not arrested again (n=56).2

    Data from (Aharoni et al., 2013). Commission errors in the Go/NoGo task (% incorrect responses on NoGo trials) and omission errors (% missed responses on Go trials) for inmates that went on to commit nonviolent offenses within 4 years after release (Nonviolent) and those that did not (None). The trend for the reoffenders to commit more errors was not significant (p=.09) even without correcting for multiple comparisons.
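    Comparisons like this boil down to Welch's two-sample t test (which does not assume equal variances), easily computed from scratch. The commission-error samples below are simulated stand-ins for the spreadsheet values, purely to show the calculation:

```python
import math
import random

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two
    independent samples with possibly unequal variances."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variance
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    se2a, se2b = va / len(a), vb / len(b)
    t = (ma - mb) / math.sqrt(se2a + se2b)
    # Welch-Satterthwaite approximation to the degrees of freedom
    df = (se2a + se2b) ** 2 / (
        se2a ** 2 / (len(a) - 1) + se2b ** 2 / (len(b) - 1))
    return t, df

# Hypothetical commission-error rates (%) for 40 reoffenders vs.
# 56 offenders who were not rearrested.
random.seed(1)
reoffend = [random.gauss(28, 10) for _ in range(40)]
clean = [random.gauss(23, 10) for _ in range(56)]
t, df = welch_t(reoffend, clean)
```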

    Although there were data from a large control group of nonoffenders (n=102) used to set the ACC ROI, we don't have their behavioral results. I consulted an earlier fMRI paper by Kiehl et al. (2000) that used a very similar Go/NoGo task in 14 control participants. Commission errors occurred on 23.7% of NoGo Trials and omission errors on 3% of Go Trials, which is similar to what was seen in the offenders (overall means of 25.04% and 3.44%, respectively).

    Reaction times (RTs) did not differ between the two offender groups either, suggesting there wasn't a differential speed-accuracy tradeoff (e.g., if the reoffenders were slower yet making marginally more errors).

    Data from (Aharoni et al., 2013). RTs in milliseconds for commission errors (incorrect responses on NoGo trials) and hits (correct responses on Go trials) for inmates that went on to commit nonviolent offenses within 4 years after release (Nonviolent) and those that did not (None). There were no group differences.

    Surprisingly, RTs were slower on commission errors (358 ms) than on hits (346 ms), a small but highly significant difference (p=.0005). This is the opposite of what you'd expect if the errors were due to impulsive responses. If the participants were becoming careless and not fully evaluating the NoGo stimulus, they'd be faster on error trials. This is why I'm not convinced the ACC activations are entirely related to behavioral impulsivity. In EEG studies of error processing, the degree of ACC activity3 is related to the emphasis placed on accuracy (Gehring et al., 1993), so if the reoffenders didn't care as much about accuracy, this could account for their low ACC status. One interesting bit of data for the authors to examine would be RT and accuracy on responses following an error, which indicates the amount of behavioral adjustment after making a mistake. Did the reoffenders show a lower propensity to slow down and become more careful? If so, this might reflect a lack of concern about the consequences of their actions.
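    That post-error adjustment analysis would be straightforward to run on trial-level data. A minimal sketch, with hypothetical RTs and accuracy flags (the study's trial-by-trial data were not released):

```python
# Post-error slowing: mean RT on trials immediately following an error,
# minus mean RT on trials following a correct response. A positive value
# means participants slowed down after mistakes.

def post_error_slowing(rts, correct):
    """rts: per-trial reaction times (ms); correct: per-trial accuracy."""
    after_err = [rts[i] for i in range(1, len(rts)) if not correct[i - 1]]
    after_ok = [rts[i] for i in range(1, len(rts)) if correct[i - 1]]
    return sum(after_err) / len(after_err) - sum(after_ok) / len(after_ok)

# Hypothetical trial sequence: errors on trials 2 and 5.
rts =     [350, 340, 420, 360, 345, 410, 355]
correct = [1,   0,   1,   1,   0,   1,   1]
pes = post_error_slowing(rts, correct)
```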

    However, the most puzzling thing to me were scores on Factor 2 of the Psychopathy Checklist-Revised (PCL-R) (Hare, 2003). Factor 2 is thought to reflect impulsivity, stimulation seeking, and irresponsibility (Ermer et al., 2012). The rearrested and not-rearrested groups were significantly different as expected, but in the opposite direction (unless I'm missing something here) — scores were lower in the group that was rearrested, in comparison to those who were not (p=.001).

    Data from (Aharoni et al., 2013). Beta-weights from the dACC region of interest and a control region in medial prefrontal cortex (mPFC). PCL-R f2 is score on Factor 2 of the Psychopathy Checklist-Revised, normalized using a log transform (p=.001). [ROIs and PCL-R not measured using the same units, obviously.]

    In the paper, Aharoni and colleagues noted that age at release and Factor 2 scores showed predictive effects along with ACC activity. This was only when nonviolent crimes were considered [remember that only nine participants were arrested again for violent crimes]. Some research suggests that the PCL-R may predict violent recidivism, but other work questions this assertion.4 I'm definitely not the expert here, so please weigh in if you have an opinion.

    Returning to behavioral performance on the X/K response inhibition task, this did not clearly differentiate between those inmates who would reoffend after release from those who did not. So we cannot conclude that cognitive factors are related to nonviolent criminal reoffending,5 at least from this one experiment that evaluated one specific executive function.


    1 There were nine participants arrested for violent offenses, and of these six were arrested for both violent and nonviolent offenses. These latter subjects were an inaccurate bunch (42% commission errors), but it's hard to make much of such a small group.

    2 The significance went down further if you controlled for age at release (for instance).

    3 As reflected by the amplitude of the error-related negativity component, which is further modulated by motivational incentives and personality factors.

    4 From my post on The Disconnection of Psychopaths:
    Forensic psychologist Dr. Karen Franklin has written about multiple controversies surrounding the PCL-R, including the failure of Factor 1 to predict violence and Dr. Hare's attempt to block publication of a critical article. Also see this NPR series on Weighing The Value Of A Test For Psychopaths.

    5 WAIS scores were not predictive, either.


    Aharoni, E., Vincent, G., Harenski, C., Calhoun, V., Sinnott-Armstrong, W., Gazzaniga, M., & Kiehl, K. (2013). Neuroprediction of future rearrest. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1219302110

    Ermer E, Cope LM, Nyalakanti PK, Calhoun VD, Kiehl KA. (2012). Aberrant paralimbic gray matter in criminal psychopathy. J Abnorm Psychol. 121(3):649-58.

    Gehring WJ, Goss B, Coles MGH, Meyer DE, Donchin E. (1993). A neural system for error-detection and compensation. Psychological Science 4:385–390.

    Kiehl, K., Liddle, P., & Hopfinger, J. (2000). Error processing and the rostral anterior cingulate: An event-related fMRI study. Psychophysiology, 37(2), 216-223. DOI: 10.1111/1469-8986.3720216

    Simons RF. (2010). The way of our errors: theme and variations. Psychophysiology 47(1):1-14.

  • 04/10/13--21:58: branscannr on drugs
  • Which is better: the generic or the name brand? Now drug companies have a tool to test out the moods induced by the name of their latest drug.


    free brain scans for everyone! Over thirty million served! 1

    Let's start with some benzodiazepines!

    brainscannr results

    This is your true brain, the emotions that run your life!

    Uh oh, not so great for lorazepam. How about for the name brand, Ativan?

    There. Don't you feel more relaxed now?

    Moving right along to some atypical antipsychotics. Let's start with olanzapine.

    Hmm, no psychiatrist wants to see a strip of skulls down their patient's postcentral gyrus. Not to mention a frontal lobe that sleeps 16 hours a day.

    But how about the name brand Zyprexa?

    That's more like it... What a happy frontal lobe! And nothing but love for the motor strip. Who cares if the parietal-occipital region is sad, when there's such a big anterior party going on!

    Let's go for another atypical, aripiprazole. Who can even pronounce that??

    I'm so confused!! Am I happy? Sad? Afraid?

    Abilify must be better, right?

    Hi! Hi there, hello, hi... I'm a little shy, but I feel much better!

    I'd like to end with midazolam, an amnestic benzodiazepine sometimes used before or during surgical procedures.

    It is truly the perfect drug. C'mon Roche, you can't do any better than that. Why even try?

    Oh, I see. You're not only happy on Dormicum, your entire brain is in love. But will you remember such bliss after you wake up in the orthopedic recovery ward?


    1 Not to be confused with brainSCANr, developed by Voytek & Voytek (2012):

    The goal of neuroscience is to discover the relationships between brain, behavior, and disease. Using the Brain Systems, Connections, Associations, and Network Relationships (brainSCANr) engine, you can explore the relationships between neuroscience terms in peer reviewed publications.


    Voytek JB, Voytek B. (2012). Automated cognome construction and semi-automated hypothesis generation. J Neurosci Methods 208(1):92-100.


    Scene from Rabbits by David Lynch

    “In a nameless city, deluged by a continuous rain, three rabbits live with a fearful mystery.”

    The latest "elegant and breathtaking"1 paper in Psychological Science presents a rather muddled view of film aesthetics, continental philosophy, surrealism, mortality salience, and stigmatizing attitudes towards sex work (Randles et al., 2013). Oh, and how Tylenol® brand acetaminophen can ease the existential dread evoked by all of these modern horrors.

    The authors explained the purpose and implications of their study in the APS press release:
    According to lead researcher Daniel Randles and colleagues at the University of British Columbia in Canada, the new findings suggest that Tylenol may have more profound psychological effects than previously thought:

    “Pain extends beyond tissue damage and hurt feelings, and includes the distress and existential angst we feel when we’re uncertain or have just experienced something surreal. Regardless of the kind of pain, taking Tylenol seems to inhibit the brain signal that says something is wrong.”

    Randles and colleagues knew from previous research that when the richness, order, and meaning in life is threatened — with thoughts of death, for instance — people tend to reassert their basic values as a coping mechanism.

    The researchers also knew that both physical and social pain — like bumping your head or being ostracized from friends — can be alleviated with acetaminophen. Randles and colleagues speculated that the existentialist suffering we face with thoughts of death might involve similar brain processes. If so, they asked, would it be possible to reduce that suffering with a simple pain medicine?

    No!!  I think this is a ridiculous assertion that gets away with using language (and dependent measures) that not only lack precision, but also lack an analogical relation to the real phenomenon under discussion. The leaps of logic were so egregious that I don't know where to begin... let's start with the meaning-maintenance model (MMM) that motivated the work. MMM "posits that any violation of expectations leads to an affective experience that motivates compensatory affirmation" (Randles et al., 2013). Any violation?? So all sorts of psycholinguistics experiments that involve syntactic violations2 will motivate compensatory affirmation? If that's the case, then David Lynch films will often "motivate compensatory affirmation."

    But does a David Lynch film “hurt” you?
    ...Lynch’s films have the ability to “disturb, offend or mystify” (Rodley, 2005, p. 245). Insofar as it “hurts” to watch some of Lynch’s films, as it arguably hurts whenever one is assaulted by thoughts and experiences that are at odds with one’s expectations and values, the question arises as to how this uncomfortable feeling is represented in the brain.

    First, David Lynch is one of my favorite directors, and I have never felt "hurt" by watching one of his films. Second, Randles et al. never, at any point in their experiments, address how Lynch-viewing is represented in the brain.

    What did the authors actually do? In brief, they asked ~350 young Vancouverites to participate in one of two experiments. In the first study, 121 subjects wrote about death or about dental pain. In the second study, 228 subjects watched a 4 min clip from Rabbits or from The Simpsons. In each case, half of the participants received acetaminophen, half received placebo. Why? What motivated the choice of acetaminophen, as opposed to aspirin, ibuprofen, or naproxen? This was based on a study by Dewall et al. (2010), another problematic paper3 in Psych Sci. There was no mechanistic reason for the original choice.

    Here's the neuro-rationale for the current study (Randles et al., 2013):
    The present research is predicated on four key findings in the literature: (a) Both physical and social pain are associated with activation in the dACC [dorsal anterior cingulate cortex]4 (e.g., Eisenberger et al., 2003), (b) the dACC is activated in response to anomalies (e.g., Botvinick et al., 2004), (c) social rejection can produce the same compensatory affirmation as other meaning threats (e.g., Nash et al., 2011), and (d) acetaminophen has been shown to reduce physical and social pain, as well as activation in the dACC (DeWall et al., 2010). These findings led us to predict that acetaminophen may also inhibit compensatory affirmation following meaning threats.

    The acetaminophen group in Dewall et al. (dose of 2,000 mg a day for 3 weeks) did show less dACC activity in response to cyberball exclusion, but they did not report lower hurt feelings in that situation. The treatment administered by Randles et al. was quite different: a single acute dose of 1,000 mg Tylenol-brand acetaminophen (Rapid Release formula) or 1,000 mg sugar placebo, given 30 min before the critical manipulation.

    In Exp. 1, writing two paragraphs about what will happen to your body after death was designed to trigger mortality salience, or thoughts about the inevitability of death. This in turn would lead to compensatory affirmation of cultural views. How was this measured? By assessing the severity of punitive attitudes towards women who engage in sex work! This is the worst part of the study, in my opinion.
    Social judgment survey

    Finally, participants read a hypothetical arrest report about a prostitute and were asked to set the amount of the bail (on a scale from $0 to $999). This measure has been used in a number of other meaning-threat studies (Proulx & Heine, 2008; Proulx et al., 2010; Randles et al., 2011; Rosenblatt, Greenberg, Solomon, Pyszczynski, & Lyon, 1989). Participants are expected to increase the bond amount after experiencing a threat, because trading sex for money is both at odds with commonly held cultural views of relationships and against the law. Increasing the bond assessment provides participants an opportunity to affirm their belief that prostitution is wrong.

    The study took place in Vancouver, Canada. What are the laws on prostitution?
    In Canada, the buying and selling of sexual services are legal, but most surrounding activities, such as public communication for the purpose of prostitution, brothels and procuring are offences under the law.

    What are current attitudes towards prostitution in Canada?
    The views of Canadians on prostitution vary greatly according to age and gender, with a large proportion of men and older respondents voicing support for some kind of decriminalization, while most women and younger respondents are not as comfortable with the idea...
    . . .

    As evidenced in surveys conducted by Angus Reid Public Opinion in 2009 and 2010, only about a quarter of Canadians (22%) are aware that exchanging sex for money is legal in Canada, while seven-in-ten (70%) mistakenly believe that the practice is illegal.
    . . .

    Still, there is no clear consensus on how some of these guidelines are currently applied. While 36 per cent of respondents believe the Criminal Code provisions related to communication and brothels are fair to the purpose of protecting the public good, almost half (47%) think the rules are unfair and force prostitutes into unsafe situations.

    Here are my reactions to the Prostitute-Bail dependent measure:

    (1) Yay! Let's stigmatize the prostitute, not the johns!

    (2) Does the baseline for these bail judgments differ by sex? age? religion? ethnicity? As professional polling can attest, attitudes vary greatly along demographic lines. The participant pool was quite diverse, and we know nothing about age.
    We recruited 121 participants (81 women, 40 men). The sample was predominantly of East Asian (45%), European (29%), and South Asian (12%) descent.
    (3) Participants were randomly assigned to one of four groups, but we don't know anything about the randomization - perhaps the most religious and judgmental people ended up in the mortality salience/placebo condition.

    (4) To reiterate, we don't know anything about possible demographic differences in the amount of bail set. And that is the only dependent measure!! We don't know how anyone would allocate money or set a price in another situation that is not "morally laden". Let's say you're selling a used car - what would you charge?

    At any rate, the authors reported that the mortality-salience/placebo group punished the "norm violator" by a significantly larger amount than the other three groups, t(112) = 2.33, p = .02, d = 0.52.

    Fig. 1 (Randles et al., 2013). Results from Study 1: mean bond value set for the prostitute as a function of group (mortality-salience vs. control condition crossed with placebo vs. acetaminophen condition). The scale ranged from $0 to $999. Error bars represent the standard error for each group.

    Moving right along to Exp. 2, we discover that the authors decided to use a different dependent measure for no clearly motivated reason. This makes it impossible to compare the outcome of the mortality-salience manipulation to the David Lynch manipulation.
    We also changed the dependent measure [in Exp. 2]. This study was conducted 3 to 6 months after a well-publicized local riot that followed the Vancouver Canucks’ loss in their bid for the Stanley Cup, and we expected that most students held a negative view of the riot. Thus, we expected that after a threat, participants would affirm this view by calling for stronger punishment for the rioters. Participants were informed that people were debating whether the rioters should be given sentences more lenient than those for comparable individual acts of vandalism, because the rioters had acted impulsively, or should be given stiffer sentences, because they had taken advantage of the city while it was vulnerable. Participants then marked a spot on a line from 0% to 200%. They were told that 0% indicated that rioters should not be fined, that 100% indicated that rioters should receive a normal fine, and that 200% indicated that rioters should receive a doubled fine.

    One initial critique is that the Vancouver hockey riot itself provoked MMM. It was a mob event that people could not explain rationally. The subjects were more likely to have been directly affected by this event (in comparison to the hypothetical sex worker bail), by either knowing someone who participated or who was present, or by witnessing the event live or through social media, or by having a favorite business vandalized. In addition, the assigned fines were relative, not absolute. A 150% fine out of... $100 or $1,000 or $10,000?

    At any rate, the authors reported that participants in the Lynch/placebo group wanted to punish the rioters by a significantly larger amount than did participants in the other three groups, t(203) = 2.64, p < .01, d = 0.43.
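    For readers who want to sanity-check reported effect sizes like these, there is a standard conversion from an independent-groups t statistic to Cohen's d: d = t·√(1/n1 + 1/n2). A minimal sketch (the group sizes below are purely illustrative, since the paper does not report its exact cell counts):

    ```python
    import math

    def d_from_t(t, n1, n2):
        """Cohen's d estimated from an independent-groups t statistic:
        d = t * sqrt(1/n1 + 1/n2)."""
        return t * math.sqrt(1.0 / n1 + 1.0 / n2)

    # Illustrative numbers only, not the paper's actual cell sizes:
    # two equal groups of 50 and t = 2.5 give d = 0.5
    print(round(d_from_t(2.5, 50, 50), 3))
    ```

    Running the conversion with plausible group sizes is a quick way to see whether a reported d is in the right ballpark for the reported t.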

    Fig. 2 (Randles et al., 2013). Results from Study 2: mean preference for the penalty to be given individuals convicted of vandalism or theft during the Vancouver hockey riot as a function of group (threat vs. control condition crossed with placebo vs. acetaminophen condition). The rating scale ranged from 0% (no fine for a conviction), through 100% (a normal fine), to 200% (a doubled penalty).

    Collectively, the results were taken as evidence that Tylenol can potentially treat chronic anxiety disorders, a conclusion that filled me with existential dread:
    The study demonstrates that existentialist dread is not limited to thinking about death, but might generalize to any scenario that is confusing or surprising — such as an unsettling movie.

    “We’re still taken aback that we’ve found that a drug used primarily to alleviate headaches can also make people numb to the worry of thinking about their deaths, or to the uneasiness of watching a surrealist film,” says Randles.

    The researchers believe that these studies may have implications for clinical interventions down the road.

    “For people who suffer from chronic anxiety, or are overly sensitive to uncertainty, this work may shed some light on what is happening and how their symptoms could be reduced,” Randles concludes.

    I have a few final questions for the authors, since this violation of my expectations led to an affective experience that motivated my own compensatory affirmation processes:
    • Why wasn't the dose adjusted by weight? A 45 kg woman got the same dose as a 90 kg man. 
    • Was physical pain assessed in the subjects pre/post-treatment? No it was not. 
    • Did anyone have a headache or any other physical pain before treatment? We don't know... which would be important to know, since relieving physical pain will make you less cranky and irritable. 
    • Is there a single neuroimaging study that has administered acetaminophen at the dose and time course used here? No. 
    • What is the evidence that acetaminophen affects the hemodynamic response in the same exact dACC region hypothesized to control physical, existential, and social pain? 
    • Has there been a single fMRI study in which subjects have watched Rabbits and Simpsons, counterbalanced in a single session while their brains were scanned? 
    • What is the Rabbits> Simpsons neural activation pattern? 
    • Why wasn't there a measure that the Lynch clip was actually "disturbing" or that the Simpsons clip was enjoyable? Actually, none of the manipulations induced changes in affective state on the PANAS.

    To ease my existential dread, it's time to watch Rabbits in its entirety.


    1 Former Psychological Science Editor Robert V. Kail:
    At meetings and via email, authors often asked me, “What sort of paper is a good candidate for Psychological Science?” ... And when feeling particularly candid, I might say that the ideal Psychological Science manuscript is difficult to define, but easily recognized — the topic is fundamental to the field, the design is elegant, and the findings are breathtaking.
    2 For example: "The metal was for refined by the goldsmith who was honored" (Friederici et al., 1996).

    3 For a lengthy exposition on the problematic aspects of the Dewall et al. paper, see Suffering from the pain of social rejection? Feel better with TYLENOL®.

    4 At the risk of sounding like a broken record, the dACC has been associated with a wide array of cognitive and emotional control functions (Posner et al., 2007). In the TYLENOL® post, I said:
    The "shared neurobiological systems" [for social and physical pain] are thought to be located in the dorsal anterior cingulate cortex (ACC), a brain structure that contains discrete regions responsive to physical pain (Kwan et al., 2000). Interestingly, externally applied vs. self-administered thermal pain activate anatomically distinct areas of the ACC (Mohr et al., 2005). Furthermore, it is not at all clear whether the same regions of ACC represent social pain and the affective components of physical pain. In a study designed to dissociate expectancy violations from social rejection, the dorsal ACC was activated when expectations were violated, while ventral ACC (quite distant from the physical pain regions) was activated by social rejection (Somerville et al., 2006).


    Dewall CN, Macdonald G, Webster GD, Masten CL, Baumeister RF, Powell C, Combs D, Schurtz DR, Stillman TF, Tice DM, Eisenberger NI. (2010). Acetaminophen reduces social pain: behavioral and neural evidence. Psychol Sci. 21:931-7.

    Randles, D., Heine, S., & Santos, N. (2013). The Common Pain of Surrealism and Death: Acetaminophen Reduces Compensatory Affirmation Following Meaning Threats Psychological Science DOI: 10.1177/0956797612464786

    Further Reading on Surrealism, Dread, and Tylenol:

    Surrealistic Imaging Experiment #1

    Of Mice and Women: Animal Models of Desire, Dread, and Despair

    Suffering from the pain of social rejection? Feel better with TYLENOL®


    What do we (not) know about how paracetamol (acetaminophen) works? (Toussaint et al., 2010)

    . . .
    From the beginning, the focus of the search for paracetamol’s analgesic mechanism has concentrated on the central nervous system. When administered intraventricularly [i.e., directly into the ventricular system of the brain], acetaminophen produces no significant analgesia (115, 132). This finding led to attempts to inject acetaminophen into the spinal cord (i.t.), which produced marked dose-related antinociception (132).

    Yesterday’s post about Tylenol as a cure for mortality salience and existential dread got me a little worked up. The first author’s public endorsement of acetaminophen as a possible treatment for chronic anxiety disorders was too much to handle (along with the less than stellar experimental rigor). Is watching a 4 min clip of a David Lynch film really the same thing as a clinically diagnosed psychiatric disorder (Randles et al., 2013)? Why Tylenol and not other pain relievers? What is the hypothesized mechanism of action? Wouldn’t we already know by now, from epidemiological studies at the very least, if Tylenol was an effective anti-anxiety medication?

    So I started wondering about acetaminophen's actual mechanism of action. I was quite surprised that it's somewhat mysterious. Randles et al. cited one paper on this:
    Second, acetaminophen affects a number of brain regions, some of which are not directly related to physical or social distress (Toussaint et al., 2010).

    This led me to believe there was evidence from human neuroimaging studies. Turns out there isn't, beyond the Dewall et al. (2010) paper, which states:
    Although the precise mechanisms by which acetaminophen exerts an analgesic effect are still unclear, it is widely accepted that acetaminophen reduces pain through central, rather than peripheral, nervous system mechanisms (Anderson, 2008; H.S. Smith, 2009).

    I would like to point out that the spinal cord is part of the central nervous system. So if it's really true that acetaminophen exerts its pain-relieving effects through synapses in the spinal cord, then what does this say about providing relief from the angst of social exclusion, mortality salience, and existential dread? That it's based on nociceptive spinal cord neurons in laminae I, II, and V? For a visual illustration of this pathway, I highly recommend viewing the animation, Dissection of DLF blocks analgesia, at Neuroscience Online.

    One hypothesis is that Tylenol (acetaminophen) may act on descending serotonergic pathways (purple projection) at the level of the spinal cord (red synapses). Figure modified from Neuroscience Online.

    However, it's not that simple. The review paper by Toussaint et al. (2010) concluded, "No one mechanism has been definitively shown to account for its analgesic activity." For its proposed mechanisms of action, they presented evidence both for and against Cyclooxygenase (EC, COX) inhibition, COX-1, COX-2, 'COX-3', peroxidase, nitric oxide synthase, cannabinoid receptors, and of course serotonin:
    There is substantial evidence that paracetamol’s mechanism of analgesia in some manner involves the descending serotonergic pathway. 5-HT neurons, largely originating in raphe nuclei located in the brain stem (117, 118) send projections down to the spinal cord that synapse on afferent neurons entering the spinal cord. These descending projections exert an inhibitory (analgesic) effect on the incoming pain signal before it is transmitted to higher CNS centres.

    Note that these are not the same serotonergic pathways often implicated in depression. The terminal synapses for the latter are indeed located in the brain and not the spinal cord.

    Last night, in real life, I followed the Watertown news live via @sethmnookin and @taylordobbs (like many others).

    This morning I dreamt that my workplace had transformed into an institutional fortress taken over by a gang of murderous criminals. The actual law enforcement authorities were too busy watching television talk shows to do anything about it. The thugs were threatening and torturing and killing people in the building. I managed to escape down a balcony exit and hid out for a while, avoiding detection but fearful that the thugs would find me and kill me. They were unstoppable, and there seemed to be no way out. I informed an old West-style sheriff, who managed to detain a carload of the evildoers. While continuing to hide, I wondered whether I would be able to shoot them all dead with a fully automatic weapon before they shot and killed me.

    Then an early morning doorbell rang and woke me up. It was an unexpected FedEx delivery. In my barely awake state, I thought it might be a bomb.

    Why am I telling you all this?? Because I find it very hard to believe that Tylenol, a drug that's relatively ineffective for my own headache pain, could possibly alleviate the anxiety caused by this nightmare. Or by the real life nightmare that's affected so many people in Boston.


    Dewall CN, Macdonald G, Webster GD, Masten CL, Baumeister RF, Powell C, Combs D, Schurtz DR, Stillman TF, Tice DM, Eisenberger NI. (2010). Acetaminophen reduces social pain: behavioral and neural evidence. Psychol Sci. 21:931-7.

    Randles, D., Heine, S., & Santos, N. (2013). The Common Pain of Surrealism and Death: Acetaminophen Reduces Compensatory Affirmation Following Meaning Threats. Psychological Science DOI: 10.1177/0956797612464786

    Toussaint, K., Yang, X., Zielinski, M., Reigle, K., Sacavage, S., Nagar, S., & Raffa, R. (2010). What do we (not) know about how paracetamol (acetaminophen) works? Journal of Clinical Pharmacy and Therapeutics, 35 (6), 617-638 DOI: 10.1111/j.1365-2710.2009.01143.x


    Image Credits: fist and brain.

    You might have seen this news story the other day:
    Want to remember something? Clench your fists!

    Giving a speech and need to remember what to say? Just clench your right fist while rehearsing. Then, when it's time to give the speech, clench your left fist, and voila, you’ll recall what you rehearsed! That's what a new study found, which was published April 24 online at PLOS ONE

    Sounds too easy now, doesn't it? And if you're exclaiming, "that's just too good to be true!" then you'd be correct.

    The new study by Propper et al. (2013) has unleashed a torrent of criticism on Twitter, including this starter by @js_simons.

    What motivated such a study in the first place? I'll try to run through the authors' rationale here, starting with statements from the abstract, which are followed by my commentary.

    • Unilateral hand clenching increases neuronal activity in the frontal lobe of the contralateral hemisphere.
    It's true that unilateral hand movement is executed via motor cortex activity in the opposite hemisphere, so the right hemisphere controls the left hand and vice versa.

    • Such hand clenching is also associated with increased experiencing of a given hemisphere’s “mode of processing.”
    This statement is based on EEG studies that have looked at alpha power suppression recorded at scalp electrodes over left and right frontal cortex (Harmon-Jones, 2006). The hypothesis is that left hand contractions "activate" (i.e., suppress alpha waves in) the unhappy right hemisphere, thereby producing negative affect, while right hand contractions activate the happy left hemisphere, which results in positive affect. The affective "modes of processing" aspect of this research isn't directly relevant to the Propper et al. (2013) paper, and further discussion is beyond the scope of this post. I'll just say that attributing EEG activity to a specific cortical region is a dicey proposition, because the spatial resolution of the technique isn't great.1

    • Together, these findings suggest that unilateral hand clenching can be used to test hypotheses concerning the specializations of the cerebral hemispheres during memory encoding and retrieval.
    Here the EEG research on emotion is being applied to memory.

    • We investigated this possibility by testing effects of unilateral hand clenching on episodic memory. The hemispheric Encoding/Retrieval Asymmetry (HERA) model proposes left prefrontal regions are associated with encoding, and right prefrontal regions with retrieval, of episodic memories.
    The Hemispheric Encoding/Retrieval Asymmetry (HERA) model of Tulving et al. (1994) postulates that the left prefrontal cortex encodes information into memory, while the right prefrontal cortex retrieves information from memory. This was back in ye olden days of PET using block designs with 40 seconds of one condition subtracted from 40 seconds of another condition. In other words, poor temporal resolution.

    The HERA model was revisited and confirmed by its proponents using fMRI data (Habib et al., 2003), but the evidence against it was considerable (Owen, 2003). The general consensus is that HERA has been discredited. In fact, noted memory researcher Dr. Jon Simons posted a comment at the PLOS ONE website explaining why the underlying hypothesis of Propper et al. is problematic (among other issues).

    • It was hypothesized that right hand clenching (left hemisphere activation) pre-encoding, and left hand clenching (right hemisphere activation) pre-recall, would result in superior memory. 
    Here we're expecting to see better memory in the R/L condition than in the control condition. There is no mention that the other fist-clenching conditions would result in worse performance than in the control condition.

    • Results supported the HERA model.
    Results did NOT support the HERA model, and I'll explain why below (and you can read the PLOS ONE comment).

    In the experiment, participants studied a list of 36 words, engaged in a filler task, and then recalled as many words as possible. Approximately 10 subjects participated in each of 16 conditions, only five of which are reported in the paper. These involved squeezing a small pink ball in one hand (2 sets of 45 sec) before the encoding and the retrieval phases of the study. The control condition did not involve clenching, but the participants held a small pink ball in each hand.2

    The five conditions are shown below, named by the hand used during encoding/retrieval. You'll notice that the number of participants in each group (n) is pretty small. I calculated standard deviations from the standard error values to determine effect sizes using this effect size calculator.3
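    The arithmetic behind that calculation is simple enough to sketch, assuming the usual relation SE = SD/√n and the pooled-SD version of Cohen's d. The numbers below are made up for illustration and are not the paper's actual values:

    ```python
    import math

    def sd_from_se(se, n):
        """Recover a group's standard deviation from its reported standard error,
        assuming SE = SD / sqrt(n)."""
        return se * math.sqrt(n)

    def cohens_d(m1, sd1, n1, m2, sd2, n2):
        """Cohen's d for two independent groups, using the pooled standard deviation."""
        pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
        return (m1 - m2) / pooled

    # Hypothetical values: SE = 2.0 with n = 10 implies SD ~ 6.32,
    # and a 4-point mean difference then yields d ~ 0.63
    sd = sd_from_se(2.0, 10)
    d = cohens_d(14.0, sd, 10, 10.0, sd, 10)
    ```

    With groups this small (n ≈ 10), even a moderate mean difference produces a large-looking d, which is one reason to treat the reported effect sizes with caution.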

    Although the authors reported the total number of words written down (correct or not) and the number of correct words in Figs. 1 and 2 respectively, the important result is shown in Fig. 3, which takes into account the false alarms, or incorrectly recalled words.

    Figure 3 (Propper et al., 2013). Corrected scores as a function of hand clench condition. [NOTE: NENR = None Encoding/None Recall, or control.]

    The one-way ANOVA for this comparison "did not reach traditional significance" (p=.08), but two of the post hoc comparisons did (uncorrected for multiple comparisons involving 16 groups). The p<.09 bar in the figure is in the wrong place. Below is a table I made using the effect size calculator (ESC) for Cohen's d, compared to what was reported in the paper.

    The key HERA condition (R/L) did not differ from the control condition, so the predicted hand-clenching improvement in memory did not materialize. The superiority of R/L over the other two clenching configurations was due to worse performance in the latter. In other words, if you squeeze a ball with your left hand before encoding, you'll do worse than if you didn't (all statistical objections aside), and left-hand clenching before retrieval didn't help. The authors stated otherwise, however:
    Individuals who encoded language-based information immediately following right hand clenching (left hemisphere activation), and recalled such information immediately following left hand clenching (right hemisphere activation), demonstrated superior episodic memory compared to the other hand clenching conditions. It is noteworthy that this condition was also superior to the no hand clenching control condition, though not significantly so.

    The difference between the two rightmost bars in Fig. 3 above is not terribly close to being significant (as far as I can tell), so the major hypothesis was not supported here.

    Clench Your Fist to Get a Grip on Memory? I don't think so.

    Thanks to blog commenter Lew for first pointing out this study.

    ADDENDUM (May 1, 2013): An Erratum to Figure 3 has been posted in the Comments section of the PLOS ONE article. It might have been in response to my comment in a previous thread, but this comment wasn't addressed directly.


    1 The fist-clenching activation of motor cortex is supposed to spread to dorsolateral prefrontal cortex, or so it goes (Harmon-Jones, 2006).

    2 @neuromusic noted that the authors measured ear temperatures: "Immediately following pre-clenching condition, participants’ ear temperatures were taken (to be reported elsewhere)..." This sounded a little bizarre, but I found this publication by Propper and Brunyé, Lateralized difference in tympanic membrane temperature: emotion and hemispheric activity, which must explain the concept.

    3@rogierK thought the reported effect sizes were way too large.


    Habib R, Nyberg L, Tulving E. (2003). Hemispheric asymmetries of memory: the HERA model revisited. Trends Cogn Sci. 7(6):241-245.

    Harmon-Jones E. (2006), Unilateral right-hand contractions cause contralateral alpha power suppression and approach motivational affective experience. Psychophysiology 43(6):598-603.

    Owen AM. (2003). HERA today, gone tomorrow? Trends Cogn Sci. 7(9):383-384.

    Propper, R., McGraw, S., Brunyé, T., & Weiss, M. (2013). Getting a Grip on Memory: Unilateral Hand Clenching Alters Episodic Recall PLoS ONE, 8 (4) DOI: 10.1371/journal.pone.0062474

    Tulving E, Kapur S, Craik FI, Moscovitch M, & Houle S (1994). Hemispheric encoding/retrieval asymmetry in episodic memory: positron emission tomography findings. Proceedings of the National Academy of Sciences of the United States of America, 91 (6), 2016-20 PMID: 8134342


    Dr. Thomas Insel, director of the National Institute of Mental Health (NIMH) in the U.S., recently announced that NIMH will be re-orienting its research away from DSM categories:
    ...While DSM has been described as a “Bible” for the field, it is, at best, a dictionary, creating a set of labels and defining each. The strength of each of the editions of DSM has been “reliability” – each edition has ensured that clinicians use the same terms in the same ways. The weakness is its lack of validity. Unlike our definitions of ischemic heart disease, lymphoma, or AIDS, the DSM diagnoses are based on a consensus about clusters of clinical symptoms, not any objective laboratory measure. In the rest of medicine, this would be equivalent to creating diagnostic systems based on the nature of chest pain or the quality of fever. Indeed, symptom-based diagnosis, once common in other areas of medicine, has been largely replaced in the past half century as we have understood that symptoms alone rarely indicate the best choice of treatment.

    Patients with mental disorders deserve better. NIMH has launched the Research Domain Criteria (RDoC) project to transform diagnosis by incorporating genetics, imaging, cognitive science, and other levels of information to lay the foundation for a new classification system. Through a series of workshops over the past 18 months, we have tried to define several major categories for a new nosology...

    The dimensional approach to studying mental illness was covered in an excellent new blog post by Dr. Russ Poldrack, who describes ongoing work including the Consortium for Neuropsychiatric Phenomics and the Cognitive Atlas project.

    I first wrote about the Research Domain Criteria for Classifying Mental Disorders in 2010:
    There is no absolute timeline of when these [research] advances might occur. Instead of providing an immediate replacement for DSM and its clinical diagnoses, RDoC is a long-term project to help the research community by defining more biologically based organizational principles for various psychopathologies...

    NIMH has been preparing the RDoC criteria for the past 2-3 years, as you can see in the RDoC Publications and in these Proceedings of RDoC Workshops (scroll down). Requests for applications (RFAs) on Dimensional Approaches to Research Classification in Psychiatric Disorders date back to 2011. The workshops and publications haven't been a secret, they've been available all along.1

    Nonetheless, Insel's announcement was treated as a "bombshell", a "potentially seismic move", and a "humiliating blow to the APA." But as 1 Boring Old Man notes, this is old news.

    One of the more alarmist posts on the topic was by John Horgan:
    Psychiatry in Crisis! Mental Health Director Rejects Psychiatric “Bible” and Replaces With… Nothing

    . . .

    Now, in a move sure to rock psychiatry, psychology and other fields that address mental illness, the director of the National Institutes of Mental Health has announced that the federal agency–which provides grants for research on mental illness–will be “re-orienting its research away from DSM categories.” Thomas Insel’s statement comes just weeks before the scheduled publication of the DSM-V, the fifth edition of the Diagnostic and Statistical Manual.

    Note the foreshadowing here: I do think Dr. Insel's timing in announcing the dimensional RDoC was a deliberate attempt to blunt the media circus that will surround the big DSM-5 release at the APA meeting in 2 weeks.

    However, Horgan thinks the timing was more related to President's Obama's ambitious new BRAIN Initiative when he says:
    NIMH director Insel doesn’t mention it, but I bet his DSM decision is related to the big new Brain Initiative, to which Obama has pledged $100 million next year. Insel, I suspect, is hoping to form an alliance with neuroscience, which now seems to have more political clout than psychiatry.

    This is utterly preposterous, since NIMH has been aligned with "neuroscience" for years (which is apparent when looking at funded projects).2 And by "political clout" I assume he means the Society for Neuroscience has more political clout than the American Psychiatric Association (APA). However, it's not likely that NIMH has abandoned APA.

    1 Boring Old Man goes much further and points out that A Research Agenda for DSM-V (2002) was a collaboration between APA and NIMH:
    Do they really think that we won’t notice that the APA and NIMH are working in tandem – that their efforts are coordinated? Do they think we won’t notice that the "cross cutting" dimensional scheme for the DSM-5 that got dropped is the same idea as the RDoC? The articles that have been popping up all day are playing this as Insel’s NIMH throwing the DSM-5 under the bus. No need. The DSM-5 is already under the bus where it belongs.

    bus image © BrokenSphere / Wikimedia Commons


    1 All this activity might not have been apparent to those outside the U.S. funding system, however. Or to the majority of the planet who haven't read old blog posts on the topic.

    2 On the front page of NIH RePORTER, search 'NIMH' and '1989' (the oldest date available).


    I'm Blogging for Mental Health.

    The month of May is a violent thing
    In the city their hearts start to sing
    Well, some people sing, it sounds like they're screaming
    I used to doubt it, but now I believe it

    Month Of May
       ------The Arcade Fire

    Today is Mental Health Month Blog Day, sponsored by the American Psychological Association (APA). It's designed to:
    ...educate the public about mental health, decrease stigma about mental illness, and discuss strategies for making lasting lifestyle and behavior changes that promote overall health and wellness.

    If the public has been following the recent hullabaloo about how to diagnose mental illnesses, they might be confused about the current and future direction of the field. How did we get here?

    As most of you know, the American Psychiatric Association (the other APA) is about to release its updated Diagnostic and Statistical Manual of Mental Disorders, the much maligned DSM-5. Weeks before the big launch, however, the National Institute of Mental Health (NIMH) stole the show by announcing that it will be re-orienting its research away from DSM categories:
    ...While DSM has been described as a “Bible” for the field, it is, at best, a dictionary, creating a set of labels and defining each. The strength of each of the editions of DSM has been “reliability” – each edition has ensured that clinicians use the same terms in the same ways. The weakness is its lack of validity. Unlike our definitions of ischemic heart disease, lymphoma, or AIDS, the DSM diagnoses are based on a consensus about clusters of clinical symptoms, not any objective laboratory measure.

    Instead, the Research Domain Criteria (RDoC) framework would become the preferred method for organizing biologically-based research on mental illnesses, with the ultimate goal of constructing a new classification scheme.

    This caused quite a commotion, leading many to comment on NIMH's shocking repudiation of DSM-5. However, to long-time observers of RDoC's development, this was not a surprise. And the initial lack of clarity on the distinction between the RDoC Dimensional Approach for Research vs. DSM-5 for Diagnosis didn't help matters, nor did the uncertainty about whether NIMH would fund DSM-based research at all.1

    NIMH issued a press release on May 13 to clarify its position:
    DSM-5 and RDoC: Shared Interests

    Thomas R. Insel, M.D., director, NIMH
    Jeffrey A. Lieberman, M.D., president-elect, APA

    NIMH and APA have a shared interest in ensuring that patients and health providers have the best available tools and information today to identify and treat mental health issues, while we continue to invest in improving and advancing mental disorder diagnostics for the future.

    Today, the APA's Diagnostic and Statistical Manual of Mental Disorders (DSM), along with the International Classification of Diseases (ICD), represents the best information currently available for clinical diagnosis of mental disorders. Patients, families, and insurers can be confident that effective treatments are available and that the DSM is the key resource for delivering the best available care. The NIMH has not changed its position on DSM-5. As NIMH’s Research Domain Criteria (RDoC) project website states, “The diagnostic categories represented in the DSM-IV and the International Classification of Diseases-10 (ICD-10, containing virtually identical disorder codes) remain the contemporary consensus standard for how mental disorders are diagnosed and treated.”

    Yet, what may be realistically feasible today for practitioners is no longer sufficient for researchers. Looking forward, laying the groundwork for a future diagnostic system that more directly reflects modern brain science will require openness to rethinking traditional categories. It is increasingly evident that mental illness will be best understood as disorders of brain structure and function that implicate specific domains of cognition, emotion, and behavior. This is the focus of the NIMH’s Research Domain Criteria (RDoC) project. RDoC is an attempt to create a new kind of taxonomy for mental disorders by bringing the power of modern research approaches in genetics, neuroscience, and behavioral science to the problem of mental illness.

    So what is RDoC, and how might it be applied to new research projects? From the DSM perspective of categorical disorders (e.g., schizophrenia, major depression, and obsessive compulsive disorder), RDoC embraces diagnostic messiness. Patients previously excluded from a study due to comorbidities, or because they don't meet full criteria? Misfits from the "Not Otherwise Specified" (NOS) category? Now they're in. Specifically, the instructions for RFA-MH-14-050 state:
    Priority will be given to applications that have a well-justified plan to include patients from multiple diagnostic groups (including Not Otherwise Specified and forme fruste diagnoses) as appropriate for explicating the dimensions and constructs of interest in the study design. Studies that include patients from a single diagnostic group may also be considered if there is a particularly strong justification for examining constructs of interest within one diagnostic category.  A defensible approach might be to study all patients presenting themselves at a specialty clinic, e.g., mood disorders clinic, anxiety clinic, or psychotic disorders clinic, regardless of whether they meet criteria for a particular DSM diagnosis.

    One potential pitfall of this approach is the money required to enroll huge numbers of patients. If commonalities in cognitive function or brain circuitry or especially genetic risk factors are to emerge from studying all patients with mood disorder-like symptoms, then sample sizes must be very large to overcome potential noise in the system(s).
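To put "very large" in numbers: under the standard Fisher z approximation, the sample needed to reliably detect a correlation grows rapidly as the effect shrinks. A back-of-the-envelope sketch (the specific r values are illustrative, not from any RDoC proposal):

```python
import math
from statistics import NormalDist

def n_for_correlation(r: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate N needed to detect correlation r (two-tailed test),
    using the standard Fisher z approximation."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_b = NormalDist().inv_cdf(power)           # value for desired power
    c = 0.5 * math.log((1 + r) / (1 - r))       # Fisher z transform of r
    return math.ceil(((z_a + z_b) / c) ** 2 + 3)

# A robust effect (r = .30) needs fewer than 100 subjects,
# but a "small" gene-to-symptom correlation of r = .10 needs nearly 800.
print(n_for_correlation(0.10))  # → 783
```

Any genetic risk factor shared across heterogeneous mood-disorder patients is likely to sit at the small end of that range, which is why the enrollment costs flagged above are a real constraint.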

    The applicant would propose to study one or more constructs within the five major domains that have been fleshed out at NIMH Workshops:

    Negative Valence Systems
    Positive Valence Systems
    Cognitive Systems
    Systems for Social Processes
    Arousal/Regulatory Systems

    The possible units of analysis run the gamut from genes to circuits to behavior, and the studies should use specific tasks (paradigms) and self-report measures, as shown in the Negative Valence Systems matrix below.

    Animal models of active threat ("fear") like the startle response are pretty well established and would allow a much more detailed analysis of mechanisms, from genes to behavior. In the realm of human research, one example of a proposed project for RFA-MH-14-050 is:
    • Evaluation of the relationship between measures of exaggerated fear response, reports of overall distress and anxiety, and chronicity of internalizing disorders
    This project could study common and unique aspects of the startle response in patients with phobias, panic disorder, OCD, generalized anxiety, and major depression ("internalizing disorders"), as described in this review article (Vaidyanathan et al., 2011).

    Another recent review tackles the neurobiology of reward, which falls under the rubric of Positive Valence Systems. It's a 43 page tour de force with 756 references (Dichter et al., 2012):
    This review summarizes evidence of dysregulated reward circuitry function in a range of neurodevelopmental and psychiatric disorders and genetic syndromes. First, the contribution of identifying a core mechanistic process across disparate disorders to disease classification is discussed, followed by a review of the neurobiology of reward circuitry. We next consider preclinical animal models and clinical evidence of reward-pathway dysfunction in a range of disorders, including psychiatric disorders (i.e., substance-use disorders, affective disorders, eating disorders, and obsessive compulsive disorders), neurodevelopmental disorders (i.e., schizophrenia, attention-deficit/hyperactivity disorder, autism spectrum disorders, Tourette’s syndrome, conduct disorder/oppositional defiant disorder), and genetic syndromes (i.e., Fragile X syndrome, Prader–Willi syndrome, Williams syndrome, Angelman syndrome, and Rett syndrome). We also provide brief overviews of effective psychopharmacologic agents that have an effect on the dopamine system in these disorders. This review concludes with methodological considerations for future research designed to more clearly probe reward-circuitry dysfunction, with the ultimate goal of improved intervention strategies.

    The idea is to find common neurobiological substrates of altered reward circuitry that cut across DSM-esque categories (e.g., drug and alcohol use disorders, serious gambling problems, mania), where individuals seek out reward without regard to the consequences. At the other end of the spectrum is anhedonia, or the inability to feel pleasure from previously rewarding activities. Anhedonia is often (but not always) seen in major depression and schizophrenia, two disorders usually considered to have little overlap.

    Will RDoC succeed in carving out a new nosology and generating a new guidebook? Will it lead to diagnostic tests that can identify specific cognitive, emotional, motivational, or social weaknesses that can be treated with targeted pharmaceuticals, deep brain stimulation, and/or improved psychotherapies?

    This quote from Chapter 1 of a 2002 white paper collection indicates the DSM-5 revision committee (a joint APA / NIMH production) didn't exactly expect that all of its goals would be reached (PDF):
    "Given the relatively short time frame for generating breakthrough research findings between now [1999] and the probable publication of DSM-V in 2010 [2013], it is anticipated that some of the research agendas suggested in these chapters might not bear fruit until the DSM-VI or even DSM-VII revision processes!"

    So don't hold your breath, unless you want to experience severe anoxia.


    1 See the Appendix for a compendium of quotes.


    Dichter, G., Damiano, C., & Allen, J. (2012). Reward circuitry dysfunction in psychiatric and neurodevelopmental disorders and genetic syndromes: animal models and clinical findings. Journal of Neurodevelopmental Disorders, 4(1). DOI: 10.1186/1866-1955-4-19

    Vaidyanathan, U., Nelson, L., & Patrick, C. (2011). Clarifying domains of internalizing psychopathology using neurophysiology. Psychological Medicine, 42(3), 447-459. DOI: 10.1017/S0033291711001528


    May 6
    Matthew Herper (Forbes reporter covering science and medicine): “I spoke to NIMH. This is a broadening, not an exclusion.” ...

    DSM-5 Task Force member Dr. David J. Kupfer strikes back in the NYT, blaming NIMH and the sluggish rate of scientific progress: “The problem that we’ve had in dealing with the data that we’ve had over the five to 10 years since we began the revision process of D.S.M.-5 is a failure of our neuroscience and biology to give us the level of diagnostic criteria, a level of sensitivity and specificity that we would be able to introduce into the diagnostic manual.”

    May 7
    “Some people have the idea that we’re trying to ditch or diss the DSM and that’s not a fair assessment,” says Insel.

    Dr. Bruce Cuthbert: “The sensationalist headlines out there are entirely misleading, and we will continue to support DSM-based research as we increase our portfolio of RDoC grants. RDoC is intended to inform future versions of the ICD and DSM; we have no intention of coming out with a competing system. The implication of this is that the fruits of RDoC are likely to be taken up into the ICD/DSM piecemeal rather than in one entire set, at such times as the evidence for various aspects becomes strong enough to warrant changes to the nosologies.”

    Dr. Thomas Insel: “We cannot ‘ditch’ or ‘reject’ terms like schizophrenia or bipolar. We just need to view them as constructs, perhaps including many different disorders that require different treatments or obscuring disorders than cut across the current categories. A symptom-only system will not be sufficient for identifying brain disorders—whether the initial label is dementia or schizophrenia.”

    Dr. Cuthbert: “As with most shifts in science, changes in research priorities require a transition. Because almost all clinical researchers today grew up with the DSM system both clinically and in research, it will take some time to get a “feel” for the relationships between DSM disorders and various kinds of RDoC phenomena (both in terms of the types of symptoms, and in overall severity), learn how to write grant applications with the new criteria, and evolve new review criteria. So, there will be a period of some time while these crosswalks are worked out.”

    May 8
    Dr. Cuthbert: “Using DSM diagnoses for research has become a de facto standard ever since the DSM-III came out in 1980. What we are trying to do is to study neural systems directly because they cut across lots of the dsm disorders. ... We are moving in a new direction. That doesn’t mean that next month we’ll stop accepting DSM diagnoses. It rather is a shift in emphasis.”


    Does Smoking Pot Offer Relief to the Lonely?

    A new paper by the original Tylenol and social pain researchers claims that it does (Deckman et al., 2013). Let's take a closer look.
    Comfortably Numb: Marijuana Use Reduces Social Pain, Research Finds

    Marijuana use buffers people from experiencing social pain, according to research published online on May 14 in Social Psychological and Personality Science.

    "Prior work has shown that the analgesic acetaminophen, which acts indirectly through CB1 receptors, reduces the pain of social exclusion," Timothy Deckman of the University of Kentucky and his colleagues wrote in the study. "The current research provides the first evidence that marijuana also dampens the negative emotional consequences of social exclusion on negative emotional outcomes."

    You could be forgiven if you thought, as I initially did, that the University of Kentucky IRB must hold a liberal view on the administration of controlled substances to undergrads participating in psychology experiments. But that's not what happened here... the data are entirely correlational, based on self-report, and largely problematic (in my view).

    Marijuana Lowers Self-Worth and Worsens Mental Health in Those Who Are Not Lonely

    That's my interpretation of the article, which is SO clunky compared to the fun and breezy query, Can Marijuana Reduce Social Pain?2

    The paper begins with the premise that "Social and physical pain share common overlap at linguistic, behavioral, and neural levels" (Deckman et al., 2013). So let's give a pain reliever to reduce the sting of rejection!  A critique of the original work asked why the authors chose Tylenol, as opposed to an NSAID like aspirin, ibuprofen, or naproxen. In the current study they tried to develop a mechanistic account of why acetaminophen might reduce social pain:
    Prior research has shown that acetaminophen—an analgesic medication that acts indirectly through cannabinoid 1 receptors—reduces the social pain associated with exclusion. Yet, no work has examined if other drugs that act on similar receptors, such as marijuana, also reduce social pain.

    The problem is that acetaminophen's mechanism of action is surprisingly unclear (Toussaint et al., 2010). One prominent hypothesis claims that Tylenol might exert its analgesic effects through descending serotonergic pathways at the level of the spinal cord. In fact, the paper that Deckman et al. cited in favor of cannabinoid 1 (CB1) receptors describes a very complex pathway that includes indirect involvement of CB1, with actual pain suppression occurring in the spinal cord.3

    An even more basic question: if acetaminophen acts through CB1 receptors, then why isn't it a potential drug of abuse, or known by experienced pharmanauts for its psychoactive properties?  The drug experience vault Erowid says:
    Acetaminophen is a non-salicylate analgesic and antipyretic (pain killer and fever reducer). It is a common over-the-counter pain medication found in hundreds of products around the world. At higher doses it is known to cause liver-damage and has a low therapeutic index (ratio of effective dose to toxic dose), making it dangerous when included in recreationally used pharmaceuticals [e.g., Tylenol with codeine]. It is not known to be psychoactive.

    On the other hand, we all know that cannabis is psychoactive. The design of the cannabis study included cross-sectional national survey data, a two year longitudinal survey of 400 high school students, and a Mechanical Turk-implemented version of cyberball, an online game to simulate social exclusion. In all cases, participants reported their marijuana use, and this was related to the variables of interest.

    I'll focus on the national survey data in this post, which comprised Study 1 (Marijuana Use Buffers Lonely People From Lower Self-Worth and Self-Rated Mental Health) and Study 2 (Marijuana Use Predicts Fewer Major Depressive Episodes Among the Lonely).

    Study 1 used data from the National Comorbidity Survey: Baseline (NCS-1), 1990-1992 (ICPSR 6693), which you can download for yourself. The survey recruited 8,098 individuals from the ages of 15 to 54 living in the U.S., and included over 4,000 variables. Only four variables were chosen for the present study: self-reported loneliness (1= often, 4 = never), marijuana use (0 = none, 1 = daily, 8 = once or twice a year), self-worth (1 = high, 4 = low), and overall mental health (1 = excellent, 5 = poor).

    Loneliness was used as a proxy for social pain. Contrary to what the headlines suggested, the impact of pot smoking on social pain was not directly examined. Instead, the study assessed the effects of loneliness (high, low), marijuana use (high, low) and their interaction on self-worth and mental health.

    Loneliness and pot smoking interacted to predict feelings of self-worth [B = 0.03, t(5609) = 2.20, p = .03]. Given the huge number of participants, this level of statistical significance is not very impressive.
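For perspective, the reported t and df let anyone back out just how small this effect is. A quick check using only the numbers quoted above (at df = 5609, the normal approximation to the t distribution is essentially exact):

```python
import math
from statistics import NormalDist

t, df = 2.20, 5609  # values reported for the loneliness x marijuana interaction

# Two-tailed p-value via the normal approximation (fine at this df)
p = 2 * (1 - NormalDist().cdf(t))

# Convert t to an effect-size correlation: r = t / sqrt(t^2 + df)
r = t / math.sqrt(t**2 + df)

print(round(p, 3), round(r, 3))  # p matches the reported .03; r is about .03
```

An effect-size r of about .03 means the interaction accounts for well under 0.1% of the variance in self-worth.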

    Fig. 1 (modified from Deckman et al., 2013). Study 1: Marijuana use moderates the relationship between loneliness and self-reported feelings of self-worth. [NOTE: items were reverse-scored for display purposes.]

    For lonely people, the amount of pot smoked didn't make too much of a difference in their self-worth (see red arrow above).  For socially connected people, greater marijuana use resulted in lower self-worth, although it's not clear this was significant (pairwise statistical tests were not reported).

    I also question how the High Marijuana Use and Low Marijuana Use groups were determined, because over 5,000 participants did not smoke pot at all in the last 12 months. Does the heavy use group combine those who smoke 6 joints a year with those who smoke daily?

    Table depicting the mean level of loneliness (1=often to 4=never) for participants at 9 levels of pot smoking (0=none, 1=daily, 8=once or twice a year). Unlike the figure above, the values were not reverse-scored. Data from the National Comorbidity Survey: Baseline (NCS-1), 1990-1992 (ICPSR 6693).

    In the lonely group, the frequency of marijuana use had even less of an impact on self-rated mental health. In contrast, heavy pot use resulted in worse mental health among the socially connected. A modest loneliness by marijuana use interaction was observed for mental health [B = 0.03, t(5609) = 2.07, p = .04], similar to what was seen for self-worth.

    Fig. 2 (modified from Deckman et al., 2013). Study 1: Marijuana use moderates the relationship between loneliness and self-reported mental health. [NOTE: items were reverse-scored for display purposes.]

    Looking at Fig. 2 above, it's clear that marijuana use does not buffer the lonely from the negative consequences of social pain: the black circle and gray square are overlapping. But the authors interpret this result differently:
    Marijuana use buffered the lonely from both negative self-worth and poor mental health. This evidence suggests that at relatively high levels of social pain, marijuana use lessens negative consequences of social pain.

    As part of the six sentence Discussion of Study 1, they point out one weakness to motivate Study 2:
    This study contained some limitations. First, it only assessed self-ratings of both self-worth and mental health. If marijuana use weakens the relationship between social pain and self-reported psychological well-being, then there should also be a lower rate of validated clinical diagnoses of poor psychological well-being.
    . . .
    To address the limitation of Study 1, Study 2 sought to show that marijuana buffered lonely participants from experiencing a standardized diagnosis of poor psychological well-being. Study 2 used a different nationally representative sample from Study 1 to test this hypothesis.

    HOWEVER, the dataset used in Study 1 has extensive information on DSM-III-R diagnoses (including depression) for the majority of participants, so I'm not sure why this wasn't included. Study 2 used data from the National Comorbidity Survey Replication (NCS-R; Kessler & Merikangas, 2004), a different national sample of 10,000 respondents.

    Speaking of replication, Deckman et al. should have been able to completely replicate the pot × loneliness analyses for self-worth, self-rated mental health, and DSM depression in both National Comorbidity Samples. I'm not sure why they didn't.

    For Study 2, non-users were excluded (unlike in Study 1). The final sample included 537 participants with info on loneliness, marijuana use, and whether they experienced a major depressive episode during the past year. Once again, the results demonstrated that if you're lonely, smoking a lot vs. a little pot will not affect whether you'll experience a major depressive event (red arrow below). If you're not lonely, heavy marijuana use increases the risk of major depression.

    Fig. 3 (modified from Deckman et al., 2013). Study 2: Marijuana use moderates the relationship between loneliness and having a DSM-IV major depressive event in the past 12 months.

    Study 3, a survey of 400 high school students, was the most puzzling of all. At Time 1 the students were asked about loneliness, lifetime marijuana use, and depression. Two years later, they were asked again about depression, using the Behavior Assessment System for Children (second edition), but not about marijuana use and loneliness (which could have changed drastically in 2 years).

    At any rate, lonely heavy pot users were the least depressed of all at Time 2. I'm not sure how to interpret this; the pattern differs from what was seen in adults. Maybe the lonely heavy pot users at Time 1 bonded with their peer group over two years and were no longer lonely at Time 2.

    Fig. 4 (modified from Deckman et al., 2013). Study 3: Marijuana use moderates the relationship between loneliness and depression over 2 years in adolescents. 

    Conflicting earlier studies in adolescents have suggested that lonely high school students are more likely (Page, 2000) and less likely (Grunbaum et al., 2000) to use marijuana. A recent study indicated that heavy marijuana users are more likely to engage in self-injury (Giletta et al., 2012), but this was true only for Americans and not for Dutch and Italian students. I imagine there's a huge literature on these issues, but it wasn't addressed at all in the present paper.

    Overall, I don't think the authors have demonstrated that marijuana reduces social pain, at least not in adults. They used a very select set of questions from huge, comprehensive national surveys and then called this a limitation of the study:
    Another potential limitation to some of the above studies lies in how social pain was measured. In Studies 1–3, single-item measures of loneliness were used as a proxy for social pain. These studies use large community sample data sets and thus our ability to include numerous measures was limited.

    There were many other questions that could have assessed social pain in NCS-1 and NCS-R, including a series of questions about friendships, e.g. "How much do your friends really care about you--a lot, some, a little, or not at all?"

    Has this paper advanced the agenda of the social pain/physical pain isomorphists? We already knew that opiates were good at alleviating both types of pain. And it's a truism to say that people turn to alcohol and all sorts of recreational drugs to dull the pain of a lonely existence. For the most part, we assume this isn't a healthy way to cope. Some studies suggest that depression is decreased in heavy marijuana users (Denson & Earleywine, 2006) but others find an increase (Pacek et al., 2013).

    In sum, Deckman et al. (2013) presented evidence that heavy marijuana use is detrimental to the mental health of socially connected individuals and not especially effective in buffering lonely users from social pain.


    1 However, please note that highly reliable source TMZ claims "Akon doesn't drink ... Akon doesn't smoke ... but Akon was pretty damn surprised when he found out his pal Justin Bieber might be doin' both."

    2 University press offices!! I'm sure you'd love to hire me to write your press releases. Price quotes are available upon request, please leave a comment.

    3 You might also want to know something about the distribution of CB1 receptors in the anterior cingulate cortex, the purported locale of physical/social pain overlap.

    4 Survey questions were:

    LONELY - During the past 30 days how often did you feel lonely?
    POT - On the average, how often in the past 12 months have you used marijuana or hashish?
    WORTHY - I feel I am a person of worth, at least equal with others.
    MENTAL HEALTH - How would you rate your overall mental health? Is it excellent, very good, good, fair, or poor?

    Data for these four questions were available from 5,631 participants. Ratings were standardized, reverse-scored, and analyzed using weighted least squares regression.
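That last sentence describes a standard moderated (interaction) regression fit by weighted least squares. A minimal sketch on simulated data (the coefficients, survey weights, and variable names here are hypothetical stand-ins, not the authors' data or code):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5631  # number of respondents with complete data on the four items

# Hypothetical standardized, reverse-scored items standing in for the survey data
lonely = rng.standard_normal(n)
pot = rng.standard_normal(n)
worth = 0.2 * lonely + 0.03 * lonely * pot + rng.standard_normal(n)

# Design matrix: intercept, both main effects, and the loneliness x use interaction
X = np.column_stack([np.ones(n), lonely, pot, lonely * pot])
w = rng.uniform(0.5, 1.5, n)  # hypothetical survey weights

# Weighted least squares: solve (X' W X) b = (X' W y) without building diag(w)
Xw = X * w[:, None]
b = np.linalg.solve(Xw.T @ X, Xw.T @ worth)

print(b[3])  # recovered interaction coefficient, close to the simulated 0.03
```

Note that with this many respondents, even a simulated interaction of 0.03 (comparable to the reported B) comes out "significant" while remaining substantively trivial.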


    Deckman, T., DeWall, C., Way, B., Gilman, R., & Richman, S. (2013). Can Marijuana Reduce Social Pain? Social Psychological and Personality Science. DOI: 10.1177/1948550613488949

    Dewall CN, Macdonald G, Webster GD, Masten CL, Baumeister RF, Powell C, Combs D, Schurtz DR, Stillman TF, Tice DM, Eisenberger NI. (2010). Acetaminophen reduces social pain: behavioral and neural evidence. Psychol Sci. 21:931-7.

    Toussaint K, Yang X, Zielinski M, Reigle K, Sacavage S, Nagar S, Raffa R. (2010). What do we (not) know about how paracetamol (acetaminophen) works? Journal of Clinical Pharmacy and Therapeutics, 35(6), 617-638.


    DISCLAIMER: This is a hypothetical question and not a medical recommendation. But it might be an idea worth investigating in epidemiological studies.

    Everyone knows that pot gives you the munchies. So the paradoxical finding that marijuana use is associated with a lower prevalence of obesity and diabetes came as quite a surprise to me. Now, a new study has concluded that pot smokers also have lower fasting insulin levels and smaller waistlines (Penner et al., 2013).

    I'll let the authors summarize the clinical significance of their study (Penner et al., 2013):
    • Marijuana use is increasingly common, and use of medical marijuana is now legal in 19 states and the District of Columbia.
    • Despite its associations with increased appetite and caloric intake, marijuana use also is associated with lower body mass index and prevalence of diabetes.
    • In a nationally representative survey population, we found current use of marijuana to be associated with lower levels of fasting insulin, lower insulin resistance (homeostasis model assessment of insulin resistance), and smaller waist circumference.
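The "homeostasis model assessment of insulin resistance" (HOMA-IR) in that last bullet is simple arithmetic on the two fasting measures. A minimal helper, assuming US-style units (glucose in mg/dL, insulin in µU/mL):

```python
def homa_ir(glucose_mg_dl: float, insulin_uu_ml: float) -> float:
    """HOMA-IR = fasting glucose [mg/dL] x fasting insulin [uU/mL] / 405.
    (Equivalently, glucose in mmol/L x insulin / 22.5.)
    Higher values indicate greater insulin resistance."""
    return glucose_mg_dl * insulin_uu_ml / 405.0

print(homa_ir(90, 10))  # ~2.22 for a typical fasting glucose and insulin
```

Because the index is just the product of the two measures, the reported drop in fasting insulin among current users translates directly into lower HOMA-IR.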

    More complete coverage of this article is available at Addiction Inbox and Time Healthland.

    Marijuana Use and Mental Illness

    Some other observations that I will attempt to string together:
    I will not address the issue of whether cannabis use is a risk factor for psychosis here.1 In fact, all of my observations will be related to the metabolic effects of marijuana and not to its psychoactive properties and possible detrimental effects on mental health.

    Although cigarette smoking, alcohol use, unhealthy diet, and lack of exercise may contribute to shorter life expectancy in patients with serious mental illnesses (Lawrence et al., 2013), one has to wonder about the effects of atypicals on physical health.2 These drugs can have a very positive effect on mental health, but it comes at a cost.

    • Interestingly, cannabis use is not associated with greater mortality. In fact, the opposite has been reported by Koola et al. (2012), who "observed a lower mortality risk in cannabis-using psychotic disorder patients compared to cannabis non-users despite subjects having similar symptoms and treatments."
    A total of 762 patients with a psychotic disorder were included in that study. All were on atypical antipsychotics, and 39% used marijuana (although this is often under-reported). The authors speculated on the potential health benefits of cannabis, including its anti-inflammatory effects. However, they didn't mention reductions in obesity and diabetes as possible causes of lower mortality in cannabis users. This association bears further investigation, in my view.

    • Nevertheless, eliminating marijuana to counteract the increase in appetite brought on by atypicals seems like common sense. In fact, this has been proposed as a specific behavioral intervention (Werneke et al., 2013).
    Those authors assumed that cannabis contributes to the weight gain caused by the prescription medication, which I also assumed (until reading the new papers cited here). But this relationship hasn't really been studied (Werneke et al., 2013):
    As the endocannabioid system is linked to increased appetite and cannabioid receptor antagonists can induce weight loss [15] cannabis consumption will most likely potentiate antipsychotic-associated weight gain. As the prevalence of cannabis use in people suffering from psychosis is so high, the contribution of cannabis to weight gain in this population is likely to be significant. Surprisingly, this link between cannabis and weight gain remains largely ignored at present.

    • Le Foll and colleagues go even further, proposing THC itself as a candidate weight-loss treatment (Le Foll et al., 2013):
    We recently discovered that the prevalence of obesity is paradoxically much lower in cannabis users as compared to non-users and that this difference is not accounted for by tobacco smoking status and is still present after adjusting for variables such as sex and age. Here, we propose that this effect is directly related to exposure to the Δ9-tetrahydrocannabinol (THC) present in cannabis smoke. We therefore propose the seemingly paradoxical hypothesis that THC or a THC/cannabidiol combination drug may produce weight loss and may be a useful therapeutic for the treatment of obesity and its complications.
    These authors have filed a patent application for 'Use of marihuana and compounds therein for treating obesity' (which they acknowledge in the paper).

    • One of the same authors (Le Foll) has also published on 'Cannabis use and cannabis use disorders among individuals with mental illness' (Lev-Ran et al., 2013), which they found to be particularly high in individuals with Bipolar I disorder (especially in men). 
    Many of these bipolar cannabis users are probably on atypical antipsychotics. This information was not reported in the paper, but it might be available in the National Epidemiologic Survey on Alcohol and Related Conditions (although this is not certain).

    To be completely clear, I am not advocating the use of marijuana by persons with schizophrenia or bipolar disorder. Rather, I am suggesting that the relationship between atypical antipsychotics and variables such as body mass index, waist circumference, insulin, glucose, and diabetes be compared between groups who do use cannabis vs. those who don't. If there is a benefit in the pot smokers, perhaps there could be a psychiatrically safe, cannabis-derived compound for weight loss in the future. Isn't that more likely than the development of 'third generation' antipsychotics that do not cause substantial weight gain?


    1 Interested readers can consult these articles and posts.

    2 See also Rising Mortality Rates for People with Serious Mental Illness and Improving the Physical Health of People With Serious Mental Illness.


    Green B, Young R, Kavanagh D (2005). Cannabis use and misuse prevalence among people with psychosis. Br J Psychiatry 187:306-13.

    Koola M, McMahon R, Wehring H, Liu F, Mackowick K, Warren K, Feldman S, Shim J, Love R, Kelly D (2012). Alcohol and cannabis use and mortality in people with schizophrenia and related psychotic disorders. J Psychiatr Res 46(8):987-993. DOI: 10.1016/j.jpsychires.2012.04.019

    Lawrence D, Hancock KJ, Kisely S (2013). The gap in life expectancy from preventable physical illness in psychiatric patients in Western Australia: retrospective analysis of population based registers. BMJ 346 (published 21 May 2013).

    Le Foll B, Trigo J, Sharkey K, Le Strat Y (2013). Cannabis and Δ9-tetrahydrocannabinol (THC) for weight loss? Med Hypotheses 80(5):564-567. DOI: 10.1016/j.mehy.2013.01.019

    Lev-Ran S, Le Foll B, McKenzie K, George T, Rehm J (2013). Cannabis use and cannabis use disorders among individuals with mental illness. Compr Psychiatry. DOI: 10.1016/j.comppsych.2012.12.021

    Penner E, Buettner H, Mittleman M (2013). The impact of marijuana use on glucose, insulin, and insulin resistance among US adults. Am J Med. DOI: 10.1016/j.amjmed.2013.03.002

    Werneke U, Taylor D, Sanders TA (2013). Behavioral interventions for antipsychotic induced appetite changes. Curr Psychiatry Rep 15(3):347.

  • 06/03/13--02:54: Lybrido for Low Libido?
  • A feature article in last week's New York Times Magazine served as an extended ad for a new book by Daniel Bergner, What Do Women Want? Adventures in the Science of Female Desire. It's filled with post-fashionable pop neuroscience and simplistic neurotransmitter stereotypes that rival those of Naomi Wolf (including her infamous "dopamine is the ultimate feminist chemical in the female brain" quote). The focus of Bergner's article is on pharmaceutical treatments for the controversial diagnosis of Hypoactive Sexual Desire Disorder (HSDD), particularly the subtly named Lybrido (along with its younger sister, Lybridos).

    The heavy-handed branding of Lybrido and Lybridos (both 'working titles') was fascinating to me. While trying to identify the marketing firm behind it, I discovered the trademark was abandoned 6 years ago by Emotional Brain, the Dutch drug company developing them. The active ingredients in Lybrido and Lybridos weren't readily apparent from the Emotional Brain site. Nor were they immediately evident from the NYT article, which even used obfuscatory language:
    “Female Viagra” is the way drugs like Lybrido and Lybridos tend to be discussed. But this is a misconception.

    Actually, this is not a misconception. Both drugs contain a major male sex hormone plus a second ingredient: Lybrido is testosterone + sildenafil (Viagra), while Lybridos is testosterone + buspirone (a serotonin 5-HT1A receptor partial agonist). The two formulations are in clinical trials for variants of HSDD identified by Emotional Brain researchers and described in a three-part series published in the Journal of Sexual Medicine. These pilot studies used the related PDE5 inhibitor, vardenafil (Levitra). PDE5 inhibitors are widely used to treat erectile dysfunction, so claims that Lybrido doesn't affect physical function in women are disingenuous:
    Viagra meddles with the arteries; it causes physical shifts that allow the penis to rise. A female-desire drug would be something else. It would adjust the primal and executive regions of the brain. It would reach into the psyche.

    Do Viagra and testosterone replacement therapy reach into the male psyche? Hmm? I don't think so.

    HSDD is a diagnosis that can be given to women who have a low (or nonexistent) libido and are distressed about it. Dr. Petra Boynton has written extensively about the problematic aspects of the HSDD diagnosis and the screening tools used to assess it, as well as the medicalization of sex for pharmaceutical marketing purposes. An earlier post provided thorough coverage of issues concerning the safety and effectiveness of the Intrinsa testosterone patch, including its rejection by the FDA.

    Nevertheless, plenty of women have voluntarily enrolled in the Lybrido trials. Bergner interviewed some of them to determine the reasons for seeking out an experimental treatment:
    Every woman raised a mix of possible reasons. There were the demands of graduate school, the demands of children, the demands of work, medical issues, men who weren’t always as kind or nearly as engaged as they could be. But at bottom there seemed to be one common cause: they had all grown tired of sex with their long-term partners.

    Why medicalize boredom within marriage?
    …Lori Brotto, a psychologist at the University of British Columbia who has worked clinically with scores of H.S.D.D. patients and who recently led the American Psychiatric Association’s attempt to better delineate the condition in The Diagnostic and Statistical Manual of Mental Disorders. (H.S.D.D. is being reconceived as sexual interest/arousal disorder, S.I.A.D.) “The impact of relationship duration is something that comes up constantly,” she told me about her therapy sessions. “Sometimes I wonder whether it” — H.S.D.D. — “isn’t so much about libido as it is about boredom.”

    Basically, to participate in the trials, a woman has to be in a stable, long-term monogamous relationship. How many female patients have tried couples counseling before turning to drugs? Or did they all take their advice from the Daily Mail?

    'Women have a responsibility to keep their libidos high for their husbands': Could 'female Viagra' save YOUR marriage?

    What efforts have the husbands expended to improve the sexual relationship, what work have they put in to make themselves more desirable to their wives? Are they taking a pill to make them less loutish?
    [Lybrido developer Adriaan] Tuiten didn’t openly acknowledge monogamy as the core of the desire problem, but he knew he couldn’t use single subjects who might well find new lovers during the course of the trials. Their results might have to be tossed out because, with or without chemical aids, new lovers bring surges of lust.
    Did the clinical trials for Viagra require men to be monogamous?

    Dopamine Is Impulse; Serotonin Is Inhibition and Organization

    How do the drugs work to restore female desire? Based on very little evidence, they purportedly restore the balance of dopamine and serotonin, despite taking a sledgehammer approach. Here's where Mr. Bergner devolves into dopamine/serotonin stereotypes that are just as bad as those from Naomi Wolf, but more boring. Divorced from the personal, unable to understand the phenomenology of female desire from the inside, Bergner is left with sterile rehashes of rat lust from Ms. Wolf's guru, Dr. James G. Pfaus. He even resorts to the old 'SSRIs simply cure depression by increasing serotonin' saw:
    ...And then there’s serotonin, dopamine’s foil. It allows the advanced regions of the brain, the domains that lie high and forward, to exert what is termed executive function. Serotonin is a molecule of self-control. It instills calm, stability, coherence (and, too, a sense of well-being, which is why S.S.R.I.'s, by bathing the brain in serotonin, can counter depression). Roughly speaking, dopamine is impulse; serotonin is inhibition and organization. And in sexuality, as in other emotional realms, the two have to work in balance. If dopamine is far too dominant, craving can splinter into attentional chaos. If serotonin overwhelms, the rational can displace the randy.

    I guess he hasn't seen the data on the important role of dopamine in executive function in 'the advanced regions of the brain' (e.g., Prefrontal dopamine and behavioral flexibility: shifting from an “inverted-U” toward a family of functions and Dopamine D₂ receptor modulation of human response inhibition and error awareness).

    Bergner continues:
    To help predict which women will most benefit from which drug, Tuiten has blood drawn from each subject and examines genetic markers related to brain chemistry. Tuiten also asks subjects questions about their comfort with sexual feelings and fantasies. Since our dopamine and serotonin networks are reinforced or attenuated by all we learn, all we think and do, he believes that the answers may provide clues about a given woman’s neurotransmitter systems, which he uses as part of his diagnostic method.
    The three part series in the Journal of Sexual Medicine might be worth a future post to describe the methods Tuiten et al. use to guide treatment and decide who gets which drug.

    Dr. Helen Fisher, a scientific advisor to an online dating site, developed the concept of four neurotransmitter “archetypes” in her quest for a better, more scientific brand of matchmaking.

    Each of these chemistry types is associated with a dominant neurotransmitter or hormone (serotonin, testosterone, dopamine, estrogen). But she knows this is a metaphor and not to be taken literally. "We're a combination of all four systems," Fisher says in a USA Today article.

    Neuroplasticity: It's a Girl Thing

    Let's conclude with the most puzzling brain-based explanation for HSDD: it's neuroplasticity! I couldn't comprehend the logic of this paragraph, no matter how hard I tried. It's one of those sex-and-relationship-type accounts that's seemingly neuro-related but really devoid of actual neuroscientific content:
    This interplay of experience and neural pathways is widely known as neuroplasticity. The brain is ever altering. And it is neuroplasticity that may help explain why hypoactive sexual desire disorder is a mostly female condition, why it seems that women, more than men, lose interest in having sex with their long-term partners. If boys and men tend to take in messages that manhood is defined by sex and power, and those messages encourage them to think about sex often, then those neural networks associated with desire will be regularly activated and will become stronger over time. If women, generally speaking, learn other lessons, that sexual desire and expression are not necessarily positive, and if therefore they don’t think as much about sex, then those same neural networks will be less stimulated and comparatively weak. The more robust the neural pathways of eros, the more prone you are to feel lust at home, even as stimuli dissipate with familiarity and habit.

    The book What Do Women Want? Adventures in the Science of Female Desire will be released tomorrow. I doubt that Ecco will be sending me a review copy.

    Further Reading:

    Media HSDD: "Hyperactive Sexual Disorder Detection"

    Underwear Models and Low Libido

    Feminist Dopamine, Conscious Vaginas, and the Goddess Array

    Of Mice and Women: Animal Models of Desire, Dread, and Despair

  • 06/09/13--23:33: How to Measure Female Desire
  • A Sexual Laboratory of One's Own, 
    aka A Clean Well-Lighted Place for Sex

    Psychophysiologic studies of sexual response should be done in a comfortable, well-designed laboratory to minimize subject anxiety and discomfort (Woodard & Diamond, 2009, Fig. 5).

    How do scientists measure the physiological aspects of sexual arousal in women? A 2009 paper by Woodard and Diamond reviewed 45 years of research using instruments that measure female sexual function. These devices include the vaginal photoplethysmograph, vaginal and labial thermistors, pressure/compliance balloons, clitoral electromyography, and the electrovaginogram. For a full list, see Table 1 at the bottom of this post.

    The authors note that these physiological measures do not correlate very well with subjective ratings of sexual arousal. Furthermore, clinicians who treat women with sexual dysfunctions are of two minds. Some say the distinction between female desire and arousal may be artificial (see DSM-5 changes, p. 13), while others maintain that the merger of female sexual arousal disorder (FSAD) with Hypoactive Sexual Desire Disorder (HSDD) will be disastrous (Clayton et al., 2012).

    The previous post about Lybrido and Lybridos, the drugs in clinical trials for HSDD, talked briefly about Emotional Brain, the Dutch drug company that is developing them. Putting aside the many objections to the HSDD diagnosis for now, and the fact that the trials pathologize sexual boredom within marriage, the company has conducted some interesting studies1 to assess sexual desire.

    Foremost among these is the development of an at-home testing environment, or ambulatory lab, to conduct studies of sexual function (Bloemers et al., 2010).

    Fig. 1 (Bloemers et al., 2010). Schematic overview of the ambulatory measurement setting. (1) Generic laptop, (2) genital probe, (3) wireless sensor system, (4) handheld computer, and (5) secure central database.

    The participants must be so much more comfortable watching hardcore porn and measuring their own vaginal pulse amplitude and clitoral blood volume in the privacy of their homes, without the prying eyes of hordes of scientists in white lab coats (although some people might be into that).

    And that's what was found, for the most part (Bloemers et al., 2010):
    The results of this study support our hypothesis that in healthy controls, clitoral and subjective laboratory measures of sexual arousal show stronger increases to erotic stimuli in the home environment than in the environment of the institutional laboratory. This effect was apparent in response to hardcore stimuli, but not to erotic fantasy. ... To our knowledge, this is the first study that investigates ecological validity of sexual psychophysiological measures by comparing those assessed in the institutional laboratory to those assessed at home with an ambulatory laboratory.


    1 Albeit flawed studies, from a cognitive perspective (especially their implementation of an 'Emotional Stroop' task). I am not particularly qualified to comment on other aspects of this research.


    Bloemers J, Gerritsen J, Bults R, Koppeschaar H, Everaerd W, Olivier B, Tuiten A (2010). Induction of sexual arousal in women under conditions of institutional and ambulatory laboratory circumstances: a comparative study. J Sex Med 7(3):1160-1176. DOI: 10.1111/j.1743-6109.2009.01660.x

    Woodard T, Diamond M (2009). Physiologic measures of sexual function in women: a review. Fertil Steril 92(1):19-34. DOI: 10.1016/j.fertnstert.2008.04.041


    Is a laboratory test or brain scanning method for diagnosing psychiatric disorders right around the corner? How about a test to choose the best method of treatment? Many labs around the world are working to solve these problems, but we don't yet have such diagnostic procedures (despite what some might claim). A new study by McGrath et al. (2013) might be a step in that direction, but the results are very preliminary and await further validation.

    The principal investigator of that study is Dr. Helen Mayberg, a leader in neuroimaging studies of major depression. She and her colleagues have pioneered the use of deep brain stimulation (DBS) as a treatment for severe, intractable depression, which was "the culmination of 15 years of research using brain imaging technology," says Dr. Mayberg.

    Psychotherapy or Drugs?

    The choice of treatment modality in depression, as in other psychiatric disorders, is by trial and error. If one drug doesn't work, switch to another one. If your insurance covers it, a short course of evidence-based psychotherapy1 might be in order.

    The whole concept of a DSM-based classification scheme for mental illnesses has come under fire, especially with the release of the new Diagnostic and Statistical Manual. In the real world, psychiatric disorders don't always show such clear boundaries; overlap and co-morbidity are common. The National Institute of Mental Health has endorsed a new approach, the Research Domain Criteria project, that incorporates dimensions of observable behavior along with neurobiological measures.

    Here's where the new work by McGrath et al. (2013) fits in. Their goal was...
    To identify a candidate neuroimaging “treatment-specific biomarker” that predicts differential outcome to either medication or psychotherapy.

    Fewer than 40% of depressed patients remit with their first course of treatment, so this would be an important advance. A more scientific way of choosing among possible treatment options would benefit patients and society at large.

    The study (registered at ClinicalTrials.gov, NCT00367341) enrolled a total of 82 depressed people. The neuroimaging method might surprise some of you: FDG-PET to measure glucose metabolism -- not the popular and trendy resting state fMRI to examine functional connectivity or any sort of fMRI activation study. However, the authors cite an established literature using this technique in studies of antidepressant treatment response.

    Patients diagnosed with moderate to severe depression (a score of 18 or more on the Hamilton Depression Rating Scale, HDRS) received a PET scan and were randomized to receive 12 weeks of either cognitive behavioral therapy (CBT, n=41) or escitalopram (Lexapro, n=39), an SSRI antidepressant. Sixty-three patients completed this phase and also had a PET scan. The endpoint considered a successful response to treatment was remission (HDRS score of 7 or less), while non-response was a change in HDRS of 30% or less. Partial responders were omitted, leaving the final groups as follows:
    • CBT remission, n=12
    • escitalopram remission, n=11
    • CBT nonresponse, n=9
    • escitalopram nonresponse, n=6
    Right away we see that the number of patients in each group is very small, particularly for a study designed to identify biomarkers that will generalize to a larger population. Let me repeat that: a successful biomarker must generalize to an independent population. We haven't seen that here, so any conclusions drawn from this paper must be considered very preliminary.
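    The outcome criteria described above reduce to a simple decision rule. Here is a minimal sketch in Python (not the authors' code; the function name and example scores are illustrative) of how patients would be sorted into the remitter, nonresponder, and excluded partial-responder groups:

    ```python
    # Hypothetical sketch of the McGrath et al. outcome classification.
    # HDRS = Hamilton Depression Rating Scale (17-item), assessed at
    # baseline and after 12 weeks of CBT or escitalopram.

    def classify_outcome(baseline_hdrs, week12_hdrs):
        """Remission:   final HDRS score of 7 or less.
        Nonresponse:    HDRS improved by 30% or less from baseline.
        Partial:        everything in between (omitted from the analysis)."""
        if week12_hdrs <= 7:
            return "remission"
        improvement = (baseline_hdrs - week12_hdrs) / baseline_hdrs
        if improvement <= 0.30:
            return "nonresponse"
        return "partial"  # partial responders were excluded

    print(classify_outcome(20, 6))   # remission
    print(classify_outcome(20, 15))  # nonresponse (25% improvement)
    print(classify_outcome(20, 10))  # partial (50% improvement, but HDRS > 7)
    ```

    Note that a patient can improve substantially (e.g., from 20 to 10) and still fall outside both endpoint groups, which is one reason the final cell sizes are so small.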

    How was the biomarker identified? The PET images were co-registered with the corresponding structural MRIs. A whole brain analysis identified regions showing a treatment × outcome interaction (at a significance level of p<.001 uncorrected). Six regions met this uncorrected standard: right anterior insula, right inferior temporal cortex, left amygdala,2 left premotor cortex, right motor cortex, and precuneus (medial superior parietal lobe). Most of these are pretty surprising, but even more surprising is that the rostral anterior cingulate (and subgenual cingulate, BA 25) were not involved:
    Contrary to past published studies, the rostral anterior cingulate did not discriminate the outcome subgroups in either the main effect or interaction analyses. A post hoc examination of responder and nonresponder differences within each treatment arm did reveal a nonsignificant rostral cingulate activity difference, with metabolism in responders greater than nonresponders, but solely in the escitalopram group. While consistent with past reports, this finding did not meet the TSB [treatment-specific biomarker] criteria defined for the current study, ie, a region whose activity can differentiate both good and poor outcomes for both treatments.
    - click on image for a larger view -

    Effect sizes are shown in the table above. The brain regions were ranked in order of size of activation (which doesn't make sense for the amygdala), and the right anterior insula was chosen as the best potential biomarker because... it had the largest cluster size? Or because it did marginally better than the other regions in terms of effect size (although this was not demonstrated statistically)? As a hub for interoceptive awareness, attention, and emotion, the anterior insula makes the most sense scientifically (Craig, 2009). Certainly, it would be odd if glucose metabolism in the right motor cortex could predict response to CBT or SSRI...

    At any rate, right insula hypometabolism at baseline was associated with remission to CBT and poor response to SSRI, and vice versa for hypermetabolism. There was overlap between the groups as shown below, but increasing the chances of successful treatment (even with no guarantees) would be better than a completely trial-and-error approach.3

    Figure 3A (modified from McGrath et al., 2013). Right anterior insula as the optimal treatment-specific biomarker candidate.  A. Scatterplot of insular activity from individual subjects in the remitter (REM) and nonresponder (NR) groups. Note: the anterior insula is the only region where the interaction subdivides patients into hypermetabolic (region/whole-brain mean >1.0) and hypometabolic (region/whole-brain mean <1.0) subgroups.
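    The split described in the caption above can be sketched as a two-step rule: normalize regional FDG uptake to the whole-brain mean, then map the resulting label to a predicted treatment. This is an illustrative Python sketch (function names and values are mine, not from the paper):

    ```python
    # Illustrative sketch of the Fig. 3A hyper-/hypometabolic split:
    # insula activity is expressed as a ratio to whole-brain mean metabolism,
    # and the ratio determines which treatment is predicted to yield remission.

    def insula_metabolic_label(region_values, whole_brain_values):
        """Label the right anterior insula relative to the whole-brain mean."""
        region_mean = sum(region_values) / len(region_values)
        brain_mean = sum(whole_brain_values) / len(whole_brain_values)
        return "hypermetabolic" if region_mean / brain_mean > 1.0 else "hypometabolic"

    def predicted_remission_treatment(label):
        """Hypometabolism was associated with remission on CBT and poor SSRI
        response; hypermetabolism, the reverse."""
        return "CBT" if label == "hypometabolic" else "escitalopram"

    label = insula_metabolic_label([0.8, 0.9], [1.0, 1.1])
    print(label, "->", predicted_remission_treatment(label))  # hypometabolic -> CBT
    ```

    Of course, given the scatterplot overlap between groups, any such rule would be probabilistic rather than deterministic, which is exactly why independent validation is needed.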

    A Nature news story says that Brain scan predicts best therapy for depression, but that would be a premature conclusion at best. Although this study might be considered promising, the results must be validated in larger independent samples of patients who are assigned to treatments according to their baseline insula PET scans.

    With the newly prominent nattering nabobs of neuroimaging negativity, it's important to remember that it's not all neuroprattle and bunk. Some of this research is trying to alleviate human suffering.

    Further Reading

    The Sad Cingulate

    Sad Cingulate on 60 Minutes and in Rats

    The Sad Cingulate Before CBT

    Deep Brain Stimulation for Bipolar Depression

    Is CBT Worthless?

    Where Are the Clinical Tests for Psychiatric Disorders?

    The Dark Side of Diagnosis by Brain Scan


    1 But read LawsDystopiaBlog by Professor Keith Laws to see how flimsy the "evidence base" can sometimes be.

    2 An earlier experiment showed that the amygdala might be a region that could help predict CBT response, using fMRI and response to emotional words.

    3 Not to be a pedantic stick in the mud, but the combination of drugs and therapy is often the most successful.


    Craig AD. (2009). How do you feel--now? The anterior insula and human awareness. Nat Rev Neurosci. 10:59-70.

    McGrath CL, Kelley ME, Holtzheimer PE, Dunlop BW, Craighead WE, Franco AR, Craddock RC, Mayberg HS (2013). Toward a neuroimaging treatment selection biomarker for major depressive disorder. JAMA Psychiatry 1-9. PMID: 23760393
