Channel Description:

Deconstructing the most sensationalistic recent findings in Human Brain Imaging, Cognitive Neuroscience, and Psychopharmacology



    Photo illustration by Andrea Levy for The Chronicle Review


    Inflammatory title, isn't it? Puzzled by how it could possibly happen? Then read on!

    A few days ago, The Chronicle of Higher Education published a piece called Neuroscience Is Ruining the Humanities. You can find it in a Google search and at reddit, among other places. The url is http://chronicle.com/article/Neuroscience-Is-Ruining-the/150141/ {notice the “Neuroscience-Is-Ruining” part.}

    Oh wait. Here's a tweet.


    At some point along the way, without explanation, the title of the article was changed to the more mundane The Shrinking World of Ideas. The current take-home bullet points are:
    • We have shifted our focus from the meaning of ideas to the means by which they’re produced.
    • When professors began using critical theory to teach literature they were, in effect, committing suicide by theory.

    The author is essayist Arthur Krystal, whose 4,000+ word piece can be summarized as “postmodernism ruined everything.” In the olden days of the 19th century, ideas mattered. Then along came the language philosophers and some French historians in the 1920s/30s, who opened the door for Andy Warhol and Jacques Derrida and what do you know, ideas didn't matter any more. That's fine, he can express that opinion, and normally I wouldn't care. I'm not going to debate the cultural harms or merits of postmodernism today.

    What did catch my eye was this: “...what the postmodernists indirectly accomplished was to open the humanities to the sciences, particularly neuroscience.”

    My immediate response: “that is the most ironic thing I've ever heard!! there is no truth [scientific or otherwise] in postmodernism!” Meaning: scientific inquiry was either irrelevant to these theorists, or something to be distrusted, if not disdained. So how could they possibly invite Neuroscience into the Humanities Building?

    Let's look at Krystal's extended quote (emphasis mine):
    “...By exposing the ideological codes in language, by revealing the secret grammar of architectural narrative and poetic symmetries, and by identifying the biases that frame "disinterested" judgment, postmodern theorists provided a blueprint of how we necessarily think and express ourselves. In their own way, they mirrored the latest developments in neurology, psychology, and evolutionary biology. [Ed. warning: non sequitur ahead.] To put it in the most basic terms: Our preferences, behaviors, tropes, and thoughts—the very stuff of consciousness—are byproducts of the brain’s activity. And once we map the electrochemical impulses that shoot between our neurons, we should be able to understand—well, everything. So every discipline becomes implicitly a neurodiscipline, including ethics, aesthetics, musicology, theology, literature, whatever.”

    I'm as reductionist as the next neuroscientist, sure, but Krystal's depiction of the field is either quite the caricature, or incredibly naïve. Ultimately, I can't tell if he's actually in favor of "neurohumanities"...
    In other words, there’s a good reason that "neurohumanities" are making headway in the academy. Now that psychoanalytic, Marxist, and literary theory have fallen from grace, neuroscience and evolutionary biology can step up. And what better way for the liberal arts to save themselves than to borrow liberally from science?

    ...or opposed:
    Even more damning are the accusations in Sally Satel and Scott O. Lilienfeld’s Brainwashed: The Seductive Appeal of Mindless Neuroscience, which argues that the insights gathered from neurotechnologies have less to them than meets the eye. The authors seem particularly put out by the real-world applications of neuroscience as doctors, psychologists, and lawyers increasingly rely on its tenuous and unprovable conclusions. Brain scans evidently are "often ambiguous representations of a highly complex system … so seeing one area light up on an MRI in response to a stimulus doesn’t automatically indicate a particular sensation or capture the higher cognitive functions that come from those interactions."1

    Then he links to articles like Adventures in Neurohumanities and Can ‘Neuro Lit Crit’ Save the Humanities? (in a non-critical way)2 before meandering back down memory lane. They sure don't make novelists like they used to!

    So you see, neuroscience hasn't really ruined the humanities.3 Have the humanities ruined neuroscience? Although there has been a disturbing proliferation of neuro- fields, I think we can weather the storm of Jane Austen neuroimaging studies.


    Footnotes

    1 Although I haven't always seen eye to eye with Satel and Lilienfeld, here Krystal clearly overstates the extent of their dismissal of the entire field (which has happened before).

    2 Read Professor of Literary Neuroimaging instead.

    3 The author of the Neurocultures Manifesto may disagree, however.

    link via @vaughanbell

    12/08/14--04:59: Hipster Neuroscience


    According to Urban Dictionary,
    Hipsters are a subculture of men and women typically in their 20's and 30's that value independent thinking, counter-culture, progressive politics, an appreciation of art and indie-rock, creativity, intelligence, and witty banter.  ...  Hipsters reject the culturally-ignorant attitudes of mainstream consumers, and are often be seen wearing vintage and thrift store inspired fashions, tight-fitting jeans, old-school sneakers, and sometimes thick rimmed glasses.

    by Trey Parasuco November 22, 2007 

    Makes them sound so cool. But we all know that everyone loves to complain about hipsters and the endless lifestyle/culture/fashion pieces written about them.





    And they're so conformist in their nonconformity.

    Recently, Jonathan Touboul posted a paper at arXiv to model The hipster effect: When anticonformists all look the same:
    The hipster effect is this non-concerted emergent collective phenomenon of looking alike trying to look different. Uncovering the structures behind this apparent paradox ... can have implications in deciphering collective phenomena in economics and finance, where individuals may find an interest in taking positions in opposition to the majority (for instance, selling stocks when others want to buy). Applications also extend to the case of neuronal networks with inhibition, where neurons tend to fire when others are silent, and reciprocally.

    You can find great write-ups of the paper at Neuroecology and the Washington Post:
    There are two kinds of people in this world: those who like to go with the flow, and those who do the opposite — hipsters, in other words. Over time, people perceive what the mainstream trend is, and either align themselves with it or oppose it.
    ...

    What if this world contained equal numbers of conformists and hipsters? No matter how the population starts out, it will end up in some kind of cycle, as the conformists try to catch up to the hipsters, and the hipsters try to differentiate themselves from the conformists.
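    The conformist/hipster cycle described above can be sketched in a few lines. This is only a toy discrete-time caricature, not Touboul's actual model (which uses delayed interactions and tools from statistical physics); all names and parameters here are my own invention for illustration.

```python
import random

def simulate(n_agents=1000, hipster_frac=0.5, steps=20, seed=1):
    """Toy sketch of the 'hipster effect': at each step, conformists
    copy the previous majority choice (0 or 1), while hipsters
    (anticonformists) pick the opposite.  Returns the fraction of
    agents choosing option 1 at each step."""
    rng = random.Random(seed)
    is_hipster = [rng.random() < hipster_frac for _ in range(n_agents)]
    state = [rng.randint(0, 1) for _ in range(n_agents)]
    history = []
    for _ in range(steps):
        majority = 1 if 2 * sum(state) >= n_agents else 0
        state = [1 - majority if h else majority for h in is_hipster]
        history.append(sum(state) / n_agents)
    return history

# An all-hipster population flips en masse every step -- the
# anticonformists all end up looking (and acting) the same.
flips = simulate(hipster_frac=1.0, steps=6)
```

    With `hipster_frac=1.0`, everyone opposes the last step's majority, so the whole population oscillates between the two options in lockstep; with `hipster_frac=0.0`, everyone locks onto the initial majority and nothing ever changes. Mixed populations cycle as the conformists chase the hipsters.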

    But there aren't equal numbers of conformists and hipsters. And this type of cycle doesn't apply to neuroscience research, which is always moving forward in terms of trends and technical advances (right?).



    It may be the Dream of the 1890s in Portland, but it's BRAIN 2015 all the way (RFA-MH-15-225):

    BRAIN Initiative: Development and Validation of Novel Tools to Analyze Cell-Specific and Circuit-Specific Processes in the Brain (U01)


    Although hipsters are in their 20s and 30s, the august NIH crowd (and its advisors) has set the BRAIN agenda that everyone else has to follow. When the cutting-edge tools (e.g., optogenetics) become commonplace, you have to do amazing things with them like create false memories in mice, or else develop methods like Dreadd2.0: An Enhanced Chemogenetic Toolkit or Ultra-Multiplexed Nanoscale In Situ Proteomics for Understanding Synapse Types.

    The BRAIN Initiative wants to train the hipsters and other "graduate students, medical students, postdoctoral scholars, medical residents, and/or early-career faculty" in Research Tools and Methods and Computational Neuroscience. This will "complement and/or enhance the training of a workforce to meet the nation’s biomedical, behavioral and clinical research needs."

    But this is an era when the average age of first-time R01 Principal Investigators is 42,1 and post-docs face harsh realities:
    Research in 2014 is a brutal business, at least for those who want to pursue academic science as a career. Perhaps the most telling line comes from the UK report: of 100 science PhD graduates, about 30 will go on to postdoc research, but just four will secure permanent academic posts with a significant research component. There are too many scientists chasing too few academic careers.

    How do you respond to these brutal challenges? I don't have an answer.2  But many young neuroscientists may have to start pickling their own vegetables, raising their own chickens, and curing their own meats.



    Footnotes

    1  The average age of first-time Principal Investigators on NIH R01 grants has risen from 36 in 1980 to 42 in 2001, where it remains today (see this PPT). So this has been going on for a while.

    2  Or at least, not an answer that will fit within the scope of this post. Some obvious places to start are to train fewer scientists, enforce a reasonable retirement age, and increase funding somehow. And decide whether all research should be done by 20 megalabs, or else reduce the $$ amount and number of grants awarded to any one investigator.





    Source: Alyssa L. Miller, Flickr.


    For nearly 9 years, this blog has been harping on the blight of overblown press releases, with posts like:

    Irresponsible Press Release Gives False Hope to People With Tourette's, OCD, and Schizophrenia

    Press Release: Press Releases Are Prestidigitation

    New research provides fresh evidence that bogus press releases may depend largely on our biological make-up

    Save Us From Misleading Press Releases

    etc.


    So it was heartening to see a team of UK researchers formally evaluate the content of 462 health-related press releases issued by leading universities in 2011 (Sumner et al., 2014). They classified three types of exaggerated claims and found that 40% of the press releases contained exaggerated health advice, 33% made causal statements based on correlational results, and 36% extrapolated from animal research to humans.

    A fine duo of exaggerated health advice and causal statements based on correlational results recently caught my eye. Here's a press release issued by Springer, the company that publishes Cognitive Therapy and Research:

    Don’t worry, be happy: just go to bed earlier

    When you go to bed, and how long you sleep at a time, might actually make it difficult for you to stop worrying. So say Jacob Nota and Meredith Coles of Binghamton University in the US, who found that people who sleep for shorter periods of time and go to bed very late at night are often overwhelmed with more negative thoughts than those who keep more regular sleeping hours.

    The PR issues health advice (“just go to bed earlier”) based on correlational data: “people who sleep for shorter periods of time and go to bed very late at night are often overwhelmed with more negative thoughts.” But does staying up late cause you to worry, or do worries keep you awake at night? A survey can't distinguish between the two.

    The study by Nota and Coles (2014) recruited 100 teenagers (or near-teenagers, mean age = 19.4 ± 1.9) from the local undergraduate research pool. They filled out a number of self-report questionnaires that assessed negative affect, sleep quality, chronotype (morning person vs. evening person), and aspects of repetitive negative thinking (RNT).

    RNT is a transdiagnostic construct that encompasses symptoms typical of depression (rumination), anxiety (worry), and obsessive-compulsive disorder (obsessions). Thus, the process of RNT is considered similar across the disorders, but the content may differ. The undergraduates were not clinically evaluated so we don't know if any of them actually had the diagnoses of depression, anxiety, and/or OCD. But one can look at whether the types of symptoms that are endorsed (whether clinically relevant or not) are related to sleep duration and timing. Which is what the authors did.

    Shorter sleep duration and a later bedtime were indeed associated with more RNT. However, when accounting for levels of negative affect, the sleep variables no longer showed a significant correlation. Not a completely overwhelming relationship, then.
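    That "no longer significant after accounting for negative affect" step is essentially a partial correlation. A quick sketch with made-up data (not the study's data) shows how a sizeable raw correlation between bedtime and RNT can shrink toward zero once a shared driver is controlled for:

```python
import math
import random

def pearson(x, y):
    """Plain Pearson correlation (no external libraries needed)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def partial_corr(x, y, z):
    """Correlation of x and y after partialling out z."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Synthetic scenario: negative affect drives BOTH late bedtimes and RNT.
rng = random.Random(42)
affect = [rng.gauss(0, 1) for _ in range(100)]
bedtime = [a + rng.gauss(0, 1) for a in affect]
rnt = [a + rng.gauss(0, 1) for a in affect]

raw = pearson(bedtime, rnt)                    # sizeable raw correlation
adjusted = partial_corr(bedtime, rnt, affect)  # shrinks toward zero
```

    In this construction, bedtime and RNT correlate only because negative affect causes both, so the adjusted correlation is much smaller than the raw one. A cross-sectional survey can't tell this scenario apart from one where late bedtimes actually cause worry.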

    But as expected, the night owls reported more RNT than the non-night owls. 

    Here's how the findings were interpreted in the Springer press release and conspicuously, by the authors themselves (the study of Sumner et al., 2014 also observed this pattern). Note the exaggerated health advice and causal statements based on correlational results.

    “Making sure that sleep is obtained during the right time of day may be an inexpensive and easily disseminable intervention for individuals who are bothered by intrusive thoughts,” remarks Nota.

    The findings also suggest that sleep disruption may be linked to the development of repetitive negative thinking. Nota and Coles therefore believe that it might benefit people who are at risk of developing a disorder characterized by such intrusive thoughts to focus on getting enough sleep.

    “If further findings support the relation between sleep timing and repetitive negative thinking, this could one day lead to a new avenue for treatment of individuals with internalizing disorders,” adds Coles. “Studying the relation between reductions in sleep duration and psychopathology has already demonstrated that focusing on sleep in the clinic also leads to reductions in symptoms of psychopathology.”

    As they mentioned, we already know that many psychiatric disorders are associated with problematic sleep, and that improved sleep is helpful in these conditions. Telling people who suffer from debilitating and uncontrollable intrusive thoughts to “just go to bed earlier” isn't particularly helpful. Not only that, such advice can be downright irritating.

    Here's a news story from Yahoo that plays up the “sleep reduces worry” causal relationship even more:
    This Sleep Tweak Could Help You Worry Less

    Can the time you hit the hay actually influence the types of thoughts you have? Science says yes.

    Are you a chronic worrier? The hour you’re going to sleep, and how much sleep you’re getting overall, may exacerbate your anxiety, according to a new study published in the journal Cognitive Therapy and Research.

    The great news here? By tweaking your sleep habits you could actually help yourself worry less. Really.

    Great! So internal monologues of self-loathing (“I'm a complete failure”, “No one likes me”) and deep anxiety about the future (“My career prospects are dismal”, “I worry about my partner's terrible diagnosis”) can be cured by going to bed earlier!

    Even if you could forcibly alter your chronotype (and I don't know if this is possible), what do you do when you wake up in the middle of the night haunted by your repetitive negative thoughts?


    Further Reading


    Alexis Delanoir on the RNT paper and much more in Depression And Stress/Mood Disorders: Causes Of Repetitive Negative Thinking And Ruminations

    Scicurious, with an amusingly titled piece: This study of hype in press releases will change journalism


    Footnotes

    1 Chronotype was dichotomously classified as evening type vs. moderately morning-type / neither type (not a lot of early birds, I guess). And only 75 students completed questionnaires in this part of the study.

    2 It's notable that the significance level for these correlations was not corrected for multiple comparisons in the first place.


    References

    Nota, J., & Coles, M. (2014). Duration and timing of sleep are associated with repetitive negative thinking. Cognitive Therapy and Research. DOI: 10.1007/s10608-014-9651-7

    Sumner, P., Vivian-Griffiths, S., Boivin, J., Williams, A., Venetis, C., Davies, A., Ogden, J., Whelan, L., Hughes, B., Dalton, B., Boy, F., & Chambers, C. (2014). The association between exaggeration in health related science news and academic press releases: retrospective observational study. BMJ, 349, g7015. DOI: 10.1136/bmj.g7015





    Ho ho ho!

    “Laughter consists of both motor and emotional aspects. The emotional component, known as mirth, is usually associated with the motor component, namely, bilateral facial movements.”

    -Yamao et al. (2014)

    The subject of laughter has been under an increasing amount of scientific scrutiny.  A recent review by Dr. Sophie Scott and colleagues (Scott et al., 2014) emphasized that laughter is a social emotion. During conversations, voluntary laughter by the speaker is a communicative act. This contrasts with involuntary laughter, which is elicited by external events like jokes and funny behavior.

    One basic idea about the neural systems involved in the production of laughter relies on this dual process theme:
    The coordination of human laughter involves the periaqueductal grey [PAG] and the reticular formation [RF], with inputs from cortex, the basal ganglia, and the hypothalamus. The hypothalamus is more active during reactive laughter than during voluntary laughter. Motor and premotor cortices are involved in the inhibition of the brainstem laughter centres and are more active when suppressing laughter than when producing it.


    Figure 1 (Scott et al., 2014). Voluntary and involuntary laughter in the brain.


    An earlier paper on laughter and humor focused on neurological conditions such as pathological laughter and gelastic epilepsy (Wild et al., 2003). In gelastic epilepsy, laughter is the major symptom of a seizure. These gelastic (“laughing”) seizures usually originate from the temporal poles, the frontal poles, or from benign tumors in the hypothalamus (Wild et al., 2003). Some patients experience these seizures as pleasant (even mirthful), while others do not:
    During gelastic seizures, some patients report pleasant feelings which include exhilaration or mirth. Other patients experience the attacks of laughter as inappropriate and feel no positive emotions during their laughter. It has been claimed that gelastic seizures originating in the temporal regions involve mirth but that those originating in the hypothalamus do not. This claim has been called into question, however...

    In their extensive review of the literature, Wild et al. (2003) concluded that the “laughter‐coordinating centre” must lie in the dorsal midbrain, with intimate connections to PAG and RF. Together, this system may comprise the “final common pathway” for laughter (i.e., coordinating changes in facial muscles, respiration, and vocalizations). During emotional reactions, prefrontal cortex, basal temporal cortex, the hypothalamus, and the basal ganglia transmit excitatory inputs to PAG and RF, which in turn generate laughter.


    Can direct cortical stimulation produce laughter and mirth?

    It turns out that the basal temporal cortex (wearing a Santa hat above) plays a surprising role in the generation of mirth, at least according to a recent paper by Yamao et al. (2014). Over a period of 13 years, they recorded neural activity from the cortical surface of epilepsy patients undergoing seizure monitoring, with the purpose of localizing the aberrant epileptogenic tissue. They enrolled 13 patients with implanted subdural grids to monitor for left temporal lobe seizures, and identified induced feelings of mirth in two patients (resulting from electrical stimulation in specific regions).

    Obviously, this is not the typical way we feel amusement and utter guffaws of delight, but direct stimulation of the cortical surface goes back to Wilder Penfield as a way for neurosurgeons to map the behavioral functions of the brain. Of particular interest is the localization of language-related cortex that should be spared from surgical removal if at all possible.

    The mirth-inducing region (Yamao et al., 2014) encompasses what is known as the basal temporal language area (BTLA), first identified by Lüders and colleagues in 1986. The region includes the left fusiform gyrus, about 3-7 cm from the tip of the temporal lobe. Stimulation at high intensities produces total speech arrest (inability to speak) and global language comprehension problems. Low stimulation intensity produces severe anomia, an inability to name things (or places or people). Remarkably, however, Lüders et al. (1991) found that “Surgical resection of the basal temporal language area produces no lasting language deficit.”

    With this background in mind, let's look at the results from the mirthful patients. The location of induced-mirth (shown below) is the white circle in Patient 1 and the black circles in Patient 2.  In comparison, the locations of stimulation-induced language impairment are shown in diamonds. Note, however, that mirth was co-localized with language impairment in Patient 2.



    Fig. 1 (modified from Yamao et al., 2014). The results of high-frequency electrical cortical stimulation. “Mirth” (circles) and “language” (diamonds) electrodes are shown in white and black colors for Patients 1 and 2, respectively. Note that mirth was elicited at or adjacent to the electrode associated with language impairment.  R = right side. The view is of the bottom of the brain.


    How do the authors interpret this finding?
    ...the ratio of electrodes eliciting language impairment was higher for the mirth electrodes than in no-mirth electrodes, suggesting an association between mirth and language function. Since the BTLA is actively involved in semantic processing (Shimotake et al., 2014 and Usui et al., 2003), this semantic/language area was likely involved in the semantic aspect of humor detection in our cases.

    Except there was no external humor to detect, as the laughter and feelings of mirth were spontaneous. After high-frequency stimulation, one patient reported, “I do not know why, but something amused me and I laughed.” The other patient said, “A familiar melody that I had heard in a television program in my childhood came to mind; its tune sounded funny and amused me.”

    The latter description sounds like memory-induced nostalgia or reminiscence, which can occur with electrical stimulation of the temporal lobe (or TL seizures). But most of the relevant stimulation sites for those déjà vu-like experiences are not in the fusiform gyrus, which has been mostly linked to higher-level visual processing.

    The authors also found that stimulation of the left hippocampus consistently caused contralateral (right-sided) facial movement that led to laughter.

    I might have missed it, but one thing we don't know is whether stimulation of the right fusiform gyrus would have produced similar effects. Another thing to keep in mind is that these little circles are only one part of a larger system (see Scott et al. figure above). Presumably, the stimulated BTLA sites send excitatory projections to PAG and RF, which initiate laughter. But where is mirth actually represented, if you can feel amused and laugh for no apparent reason? By bypassing higher-order regions1, laughter can be a surprising and puzzling experience.


    Footnote

    1 Like, IDK, maybe ventromedial PFC, other places in both frontal lobes, hypothalamus, basal ganglia, and more "classically" semantic areas in the left temporal lobe...


    link originally via @Neuro_Skeptic



    References

    Lüders, H., Lesser, R., Hahn, J., Dinner, D., Morris, H., Wyllie, E., & Godoy, J. (1991). Basal temporal language area. Brain, 114(2), 743-754. DOI: 10.1093/brain/114.2.743

    Scott, S., Lavan, N., Chen, S., & McGettigan, C. (2014). The social life of laughter. Trends in Cognitive Sciences, 18(12), 618-620. DOI: 10.1016/j.tics.2014.09.002

    Wild, B., et al. (2003). Neural correlates of laughter and humour. Brain, 126(10), 2121-2138. DOI: 10.1093/brain/awg226

    Yamao, Y., Matsumoto, R., Kunieda, T., Shibata, S., Shimotake, A., Kikuchi, T., Satow, T., Mikuni, N., Fukuyama, H., Ikeda, A., & Miyamoto, S. (2014). Neural correlates of mirth and laughter: A direct electrical cortical stimulation study. Cortex. DOI: 10.1016/j.cortex.2014.11.008







    Traumatic Brain Injury (TBI) is a serious public health problem that affects about 1.5 million people per year in the US, with direct and indirect medical costs of over $50 billion. Rapid intervention to reduce the risk of death and disability is crucial. The diagnosis and treatment of TBI is an active area of preclinical and clinical research funded by NIH and other federal agencies.

    But during the White House BRAIN Conference, a leading neurosurgeon painted a pessimistic picture of current treatments for acute TBI. In response to a question about clinical advances based on cellular neurobiology, Dr. Geoffrey Manley noted that the field is on its 32nd or 33rd failed clinical trial. The termination of a very promising trial of progesterone for TBI had just been announced (the ProTECT III, Phase III Clinical Trial “based on 17 years of work with 200 positive papers in preclinical models”), although I couldn't find any notice at the time (Sept 30 2014).

    Now, the results from ProTECT III have been published in the New England Journal of Medicine (Wright et al., 2014). 882 TBI patients from 49 trauma centers were enrolled in the study and randomized to receive progesterone, thought to be a neuroprotective agent, or placebo within 4 hours of major head injury. The severity of TBI fell in the moderate to severe range, as indicated by scores on the Glasgow Coma Scale (which rates the degree of impaired consciousness).

    The primary outcome measure was the Extended Glasgow Outcome Scale (GOS-E) at six months post-injury. The trial was stopped at 882 patients (out of a planned 1140) because there was no way that progesterone would improve outcomes:
    After the second interim analysis, the trial was stopped because of futility. For the primary hypothesis comparing progesterone with placebo, favorable outcomes occurred in 51.0% of patients assigned to progesterone and in 55.5% of those assigned to placebo. 

    Analysis of subgroups by race, ethnicity, and injury severity showed no differences between them, but there was a suggestive (albeit non-significant) sex difference.



    Modified from Fig. 2 (Wright et al., 2014). Adjusted Relative Benefit in Predefined Subgroups. Note the red box around the p value for sex differences.


    Squares to the left of the dotted line indicate that placebo performed better than progesterone in a given patient group, while values to the right favor progesterone. The error bars show confidence intervals, which indicate that nearly all groups overlap with 0 (representing zero benefit for progesterone). The red box indicates a near-significant difference between men and women, with women actually faring worse with progesterone than with placebo. You may quibble about conventional significance, but women on average deteriorated with treatment, while men were largely unaffected.

    This was a highly disappointing outcome for a well-conducted study that built on promising results in smaller Phase II Clinical Trials (which were backed by a boatload of preclinical data). The authors reflect on this gloomy state of affairs:
    The PROTECT III trial joins a growing list of negative or inconclusive trials in the arduous search for a treatment for TBI. To date, more than 30 clinical trials have investigated various compounds for the treatment of acute TBI, yet no treatment has succeeded at the confirmatory trial stage. Many reasons for the disappointing record of translating promising agents from the laboratory to the clinic have been postulated, including limited preclinical development work, poor drug penetration into the brain, delayed initiation of treatment, heterogeneity of injuries, variability in routine patient care across sites, and insensitive outcome measures.

    If that isn't enough, a second failed trial of progesterone was published in the same issue of NEJM (Skolnick et al., 2014). This group reported on negative results from an even larger pharma-funded trial (SyNAPse, which is the tortured acronym for Study of a Neuroprotective Agent, Progesterone, in Severe Traumatic Brain Injury). The SyNAPse trial enrolled the projected number of 1180 patients across 21 countries, all with severe TBI. The percentage of patients with favorable outcomes at six months was 50.4% in the progesterone group and 50.5% in the placebo group.
    The negative result of this study, combined with the results of the PROTECT III trial, should stimulate a rethinking of procedures for drug development and testing in TBI.

    This led Dr. Lee H. Schwamm (2014) to expound on the flawed culture of research in an Editorial, invoking the feared god of false positive findings (Ioannidis, 2005) and his minions: small effect sizes, small n's, too few studies, flexibility of analysis, and bias. Schwamm pointed to problematic aspects of the Phase II Trials that preceded ProTECT III and SyNAPse, including modest effect sizes and better-than expected outcomes in the placebo group.


    Hope for the Future

    “And you have to give them hope.”
    --Harvey Milk


    When the going gets tough in research, who better to rally the troops than your local university press office? The day after Dr. Manley's presentation at the BRAIN conference on Sept. 30, the University of California San Francisco issued this optimistic news release:

    $17M DoD Award Aims to Improve Clinical Trials for Traumatic Brain Injury

    An unprecedented, public-private partnership funded by the Department of Defense (DoD) is being launched to drive the development of better-run clinical trials and may lead to the first successful treatments for traumatic brain injury, a condition affecting not only athletes and members of the military, but also millions among the general public, ranging from youngsters to elders.

    Under the partnership, officially launched Oct. 1 with a $17 million, five-year award from the DoD, the research team, representing many universities, the Food and Drug Administration (FDA), companies and philanthropies, will examine data from thousands of patients in order to identify effective measures of brain injury and recovery, using biomarkers from blood, new imaging equipment and software, and other tools.
    . . .

    “TBI is really a multifaceted condition, not a single event,” said UCSF neurosurgeon Geoffrey T. Manley, MD, PhD, principal investigator for the new award... “TBI lags 40 to 50 years behind heart disease and cancer in terms of progress and understanding of the actual disease process and its potential aftermath. More than 30 clinical trials of potential TBI treatments have failed, and not a single drug has been approved.”

    The TED (TBI Endpoints Development) Award is meant to accelerate research to improve TBI diagnostics, classification, and patient selection for clinical trials. Quite a reversal of fortune in one day.

    Out of the ashes of two failed clinical trials, a phoenix arises. Hope for TBI patients and their families takes wing.


    Further Reading (and viewing)

    White House BRAIN Conference (blog post)

    90 min video of the conference

    Brief Storify (summary of the conference)

    ClinicalTrials.gov listings for SyNAPSe and ProTECT III.


    References

    Schwamm, L. (2014). Progesterone for Traumatic Brain Injury — Resisting the Sirens' Song. New England Journal of Medicine, 371(26), 2522-2523. DOI: 10.1056/NEJMe1412951

    Skolnick, B., Maas, A., Narayan, R., van der Hoop, R., MacAllister, T., Ward, J., Nelson, N., & Stocchetti, N. (2014). A Clinical Trial of Progesterone for Severe Traumatic Brain Injury. New England Journal of Medicine, 371(26), 2467-2476. DOI: 10.1056/NEJMoa1411090

    Wright, D., Yeatts, S., Silbergleit, R., Palesch, Y., Hertzberg, V., Frankel, M., Goldstein, F., Caveney, A., Howlett-Smith, H., Bengelink, E., Manley, G., Merck, L., Janis, L., & Barsan, W. (2014). Very Early Administration of Progesterone for Acute Traumatic Brain Injury. New England Journal of Medicine, 371(26), 2457-2466. DOI: 10.1056/NEJMoa1404304


    The Incredible Grow Your Own Brain (Barron Bob)


    Using super absorbent material from disposable diapers, MIT neuroengineers Ed Boyden, Fei Chen, and Paul Tillberg went well beyond the garden variety novelty store "Grow Brain" to expand real brain slices to nearly five times their normal size.

    Boyden, E., Chen, F. & Tillberg, P. / MIT / Courtesy of NIH

    A slice of a mouse brain (left) was expanded nearly five-fold in each dimension by adding a water-soaking salt. The result — shown at smaller magnification (right) for comparison — has its anatomical structures essentially unchanged. (Nature - E. Callaway)


    As covered by Ewan Callaway in Nature:
    Blown-up brains reveal nanoscale details

    Material used in diaper absorbent can make brain tissue bigger and enable ordinary microscopes to resolve features down to 60 nanometres.

    Microscopes make living cells and tissues appear bigger. But what if we could actually make the things bigger?

    It might sound like the fantasy of a scientist who has read Alice’s Adventures in Wonderland too many times, but the concept is the basis for a new method that could enable biologists to image an entire brain in exquisite molecular detail using an ordinary microscope, and to resolve features that would normally be beyond the limits of optics.

    The technique, called expansion microscopy, involves physically inflating biological tissues using a material more commonly found in baby nappies (diapers).

    . . .

    “What we’ve been trying to do is figure out if we can make everything bigger,” Boyden told the meeting at the NIH in Bethesda, Maryland. To manage this, his team used a chemical called acrylate that has two useful properties: it can form a dense mesh that holds proteins in place, and it swells in the presence of water.

    Sodium polyacrylate (via Leonard Gelfand Center, CMU)


    Acrylate, a type of salt also known as waterlock, is the substance that gives nappies their sponginess. When inflated, Boyden's tissues grow about 4.5 times in each dimension.




    Just add water

    Before swelling, the tissue is treated with a chemical cocktail that makes it transparent, and then with fluorescent molecules that anchor specific proteins to the acrylate, which is then infused into the tissue. Just as with nappies, adding water causes the acrylate polymer to swell. After stretching, the fluorescently tagged molecules move further away from each other; proteins that were previously too close to distinguish with a visible-light microscope come into crisp focus. In his NIH presentation, Boyden suggested that the technique can resolve molecules that had been as close as 60 nm before expansion.
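    As a rough sanity check on the 60 nm figure, physical expansion effectively divides the microscope's diffraction limit by the linear expansion factor. In the sketch below, the ~270 nm diffraction limit is an assumed illustrative value (not stated in the article); only the 4.5× factor comes from Boyden's presentation.

```python
# Back-of-the-envelope sketch: expanding tissue ~4.5x in each dimension
# means features ~4.5x closer than the optical limit become resolvable.
# The 270 nm diffraction limit is an ASSUMED value for a conventional
# visible-light microscope; only the 4.5x factor is from the article.
diffraction_limit_nm = 270.0
expansion_factor = 4.5

effective_resolution_nm = diffraction_limit_nm / expansion_factor
print(f"{effective_resolution_nm:.0f} nm")  # -> 60 nm
```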

    Most scientists thought it was cool, but there were some naysayers: “This is certainly highly ingenious, but how much practical use it will be is less clear,” notes Guy Cox, a microscopy specialist at the University of Sydney, Australia.

    Others saw nothing new with the latest brain-transforming gimmick. Below, Marc Schuster displays his 2011 invention, the inflatable brain.



    “An inflatable brain makes a great prop for your Zombie Prom King costume,” says Schuster, author of The Grievers.


    Link via Roger Highfield.







    The Boston Marathon bombings of April 15, 2013 killed three people and injured hundreds of others near the finish line of the iconic footrace. The oldest and most prominent marathon in the world, Boston attracts over 20,000 runners and 500,000 spectators. The terrorist act shocked and traumatized and unified the city.

    What should the survivors do with their traumatic memories of the event? Many with disabling post-traumatic stress disorder (PTSD) receive therapy to lessen the impact of the trauma. Should they forget completely? Is it possible to selectively “alter” or “remove” a specific memory? Studies in rodents are investigating the use of pharmacological manipulations (Otis et al., 2014) and behavioral interventions (Monfils et al., 2009) to disrupt the reconsolidation of a conditioned fear memory. Translating these interventions into clinically effective treatments in humans is an ongoing challenge.

    The process of reconsolidation may provide a window for altering unwanted memories. When an old memory is retrieved, it enters a transiently labile state, during which it's susceptible to change before becoming consolidated and stored again (Nader & Hardt, 2009). There's some evidence that the autonomic response to a conditioned fear memory can be lessened by an “updating” procedure during the reconsolidation period (Schiller et al., 2010).1 How this might apply to the recollection of personally experienced trauma memories is uncertain.


    Remembering the Boston Bombings

    Can you interfere with recall of a traumatic event by presenting competing information during the so-called reconsolidation window? A new study by Kredlow and Otto (2015) recruited 113 Boston University undergraduates who were in Boston on the day of the bombings. In the first testing session, participants wrote autobiographical essays recounting the details of their experience, prompted by specific questions. In principle, this procedure re-activated the traumatic memory, rendering it vulnerable to updating during the reconsolidation window (~6 hours).

    The allotted time for the autobiographical essay was 4 min. After that, separate groups of subjects read either a neutral story, a negative story, or a positive story (for 5 min). The fourth group did not read a story. Presentation of a story that is not one's own would presumably “update” the personal memory of the bombings.

    A second session occurred one week later. The participants were again asked to write an autobiographical essay for 4 min, under the same conditions as Session #1. They were also asked about their physical proximity to the bombings, whether they watched the marathon in person, feared for anyone's safety, and knew anyone who was injured or killed. Nineteen subjects were excluded for various reasons, leaving the final n=94.

    One notable weakness is that we don't know anything about the mental health of these undergrads, except that they completed the 10-item Positive and Negative Affect Schedule (PANAS-SF) before each session. And they were “provided with mental health resources” after testing (presumably links to resources, since the study was conducted online).

    In terms of proximity, 10% of the participants were within one block of the bombings (“Criterion A” stressor), placing them at risk for developing PTSD. Most (95%) feared for someone's safety and 12% knew someone who was injured or killed (also considered Criterion A). But we don't know if anyone had a current or former PTSD diagnosis.

    The authors predicted that reading the negative stories during the “autobiographical reconsolidation window” would yield the greatest reduction in episodic details recalled from Session #1 (S1) to Session #2 (S2), relative to the No-Story condition. This is because the negative story and the horrific memories are both negative in valence [although I'm not sure of what mechanism would account for this effect].2
    Specifically, we hypothesized that learning a negative affective story during the reconsolidation window compared to no interference would interfere with the reconsolidation of memories of the Boston Marathon bombings. In addition, we expected the neutral and positive stories to result in some interference, but not as much as the negative story.

    The essays were coded for the number of memory details recalled in S1 and S2 (by 3-5 raters3), and the main measure was the number of details recalled in S2 for each of the four conditions. Other factors taken into account were the number of words used in S1, and time between the Boston Marathon and the testing session (both of which influenced the number of details recalled).

    The results are shown in Table 1 below. The authors reported comparisons between Negative Story vs. No Story (p<.05, d = 0.62), Neutral Story vs. No Story (p=.20, d = 0.39), and Positive Story vs. No Story (p=.83, d = 0.06). The effect sizes are “medium-ish” for both the Negative and Neutral comparisons, but only “significant” for Negative.


    I would argue that the comparison between Negative Story and Neutral Story, which was not reported, is the only way to evaluate the valence aspect of the prediction, i.e., whether the reduction in details recalled was specific to reading a negative story or would follow from reading any story at all. I'm also not sure why they didn't run an omnibus ANOVA in the first place.
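    For readers unfamiliar with the effect sizes quoted above: Cohen's d is the group difference scaled by the pooled standard deviation. Here's a minimal sketch; the recall counts below are hypothetical, not taken from the paper.

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d using the pooled (sample) standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Hypothetical counts of details recalled at Session 2
no_story = [10, 12, 14]
negative_story = [8, 10, 12]
print(round(cohens_d(no_story, negative_story), 2))  # -> 1.0
```

With groups of this size (final n = 94 split across four conditions, so roughly 23 per group), a d of 0.39 can easily miss the p < .05 cutoff, which is why the Neutral comparison can be “medium-ish” in effect size yet non-significant.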


    Nonetheless, Kredlow and Otto (2015) suggest that their study...
    ...represent[s] a step toward translating reconsolidation interference work to the clinic, as, to our knowledge, no published studies to date have examined nonpharmacological reconsolidation interference for clinically-relevant negative memories. Additional studies should examine reconsolidation interference paradigms, such as this one, in clinical populations.

    If this work were indeed extended to clinical populations, I would suggest conducting the study under more controlled conditions (in the lab, not online), which would also allow close monitoring of any distress elicited by writing the autobiographical essay (essentially a symptom provocation design). As the authors acknowledge, it would be especially important to evaluate not only the declarative, detail-oriented aspects of the traumatic memories, but also any change in their emotional impact.


    Further Reading

    Brief review of memory reconsolidation

    Media’s role in broadcasting acute stress following the Boston Marathon bombings

    Autobiographical Memory for a Life-Threatening Airline Disaster

    I Forget...


    Footnotes

    1 But this effect hasn't replicated in other studies (e.g., Golkar et al., 2012).

    2 Here, the authors say:
    ...some degree of similarity between the original memory and interference task may be required to achieve interference effects. This is in line with research suggesting that external and internal context is an important factor in extinction learning and may also be relevant to reconsolidation. As such, activating the affective context in which a memory was originally consolidated may facilitate reconsolidation interference.
    This is a very different strategy than the “updating of fear memories” approach, where a safety signal occurs before extinction. But conditioned fear (blue square paired with mild shock) is very different from episodic memories of a bombing scene.

    3 Details of the coding system:
    A group consensus coding system was used to code the memories. S1 and S2 memory descriptions for each participant were compared and coded for recall of memory details. One point was given for each detail from the S1 memory description that was recalled in the S2 memory description. Each memory pair was coded by between three to five raters until a consensus between three raters was reached. Raters were blind to participant randomization, but not to each other's ratings. Consensus was reached in 83% of memory pairs.

    References

    Kredlow MA, & Otto MW (2015). Interference with the reconsolidation of trauma-related memories in adults. Depression and Anxiety 32(1):32-37. PMID: 25585535

    Monfils MH, Cowansage KK, Klann E, LeDoux JE. (2009). Extinction-reconsolidation boundaries: key to persistent attenuation of fear memories. Science 324:951-5.

    Nader K, Hardt O. (2009). A single standard for memory: the case for reconsolidation. Nat Rev Neurosci. 10:224-34.

    Otis JM, Werner CT, Mueller D. (2014). Noradrenergic Regulation of Fear and Drug-Associated Memory Reconsolidation. Neuropsychopharmacology. [Epub ahead of print]

    Schiller D, Monfils MH, Raio CM, Johnson DC, Ledoux JE, & Phelps EA (2010). Preventing the return of fear in humans using reconsolidation update mechanisms. Nature 463: 49-53.




    “It is feasible to recruit and retain a cohort of female participants to perform a functional magnetic resonance imaging [fMRI] task focused on making decisions about sex, on the basis of varying levels of hypothetical sexual risk, and to complete longitudinal prospective diaries following this task. Preliminary evidence suggests that risk level differentially impacts brain activity related to sexual decision making in these women [i.e., girls aged 14-15 yrs], which may be related to past and future sexual behaviors.”

    -Hensel et al. (2015)

    Can the brain activity of adolescents predict whether they are likely to make risky sexual decisions in the future?  I think this is the goal of a new pilot study by researchers at Indiana University and the Kinsey Institute (Hensel et al., 2015). While I have no reason to doubt the good intentions of the project, certain aspects of it make me uncomfortable.

    But first, I have a confession to make. I'm not an expert in adolescent sexual health like first author Dr. Devon Hensel. Nor do I know much about pediatrics, adolescent medicine, health risk behaviors, sexually transmitted diseases, or the epidemiology of risk, like senior author Dr. J. Dennis Fortenberry (who has over 300 publications on these topics).  His papers include titles such as Time from first intercourse to first sexually transmitted infection diagnosis among adolescent women and Sexual learning, sexual experience, and healthy adolescent sex. Clearly, these are very important topics with serious personal and public health implications. But are fMRI studies of a potentially vulnerable population the best way to address these societal problems?

    The study recruited 14 adolescent girls (mean age = 14.7 yrs) from health clinics in lower- to middle-income neighborhoods. Most of the participants (12 of the 14) were African-American, most did not drink or do drugs, and most had not yet engaged in sexual activity.  However, the clinics served areas with “high rates of early childbearing and sexually transmitted infection” so the implication is that these young women are at greater risk of poor outcomes than those who live in different neighborhoods.

    Detailed sexual histories were obtained from the girls upon enrollment (see below). They also kept a diary of sexual thoughts and behaviors for 30 days.




    Given the sensitive nature of the information revealed by minors, it's especially important to outline the informed consent procedures and the precautions taken to protect privacy. Yes, a parent or guardian gave their approval, and the girls completed informed consent documents that were approved by the local IRB. But I wanted to see more about this in the Methods. For example, did the parent or guardian have access to their daughters' answers and/or diaries, or was that private? This could have influenced the willingness of the girls to disclose potentially embarrassing behavior or “verboten” activities (prohibited by parental mores, church teachings, legal age of consent,1 etc.). 

    I don't know, maybe the standard procedures are obvious to those within the field of sexual health behavior, but they weren't to me.

    Turning to more familiar territory, the experimental design for the neuroimaging study involved presentation of four different types of stimuli: (1) faces of adolescent males; (2) alcoholic beverages; (3) restaurant food; (4) household items (e.g., frying pan). My made-up examples of the stimuli are shown below.



    Each picture was presented with information that indicated the item's risk level (“high” or “low”):
    • Adolescent male faces: number of previous sexual partners and typical condom use (yes/no)
    • Alcoholic beverages: number of alcohol units and whether there was a designated driver (yes/no)
    • Food: calorie content and whether the restaurant serving the food had been cited in the past year for health code violations (yes/no)
    • Household items: whether the object could be returned to the store (yes/no)

    For each picture, participants rated how likely they were to: (1) have sex with the male, (2) drink the beverage, (3) eat the food, or (4) purchase the product (1 = very unlikely to 4 = very likely). There were 35 exemplars of each category, and each stimulus was presented in both “high” and “low” risk contexts. So oddly, the pizza was 100 calories and from a clean restaurant on one trial, compared to 1,000 calories and from a roach-infested dump on another trial.

    The faces task was adapted from a study in adult women (Rupp et al., 2009) where the participants gave a mean likelihood rating of 2.45 for sex with low risk men vs. 1.41 for high risk men (significantly less likely for the latter). The teen girls showed the opposite result: 2.85 for low risk teen boys vs. 3.85 for high risk teen boys (significantly more likely). The “bad boy” effect?

    But the actual values were quite confusing. At one point the authors say they omitted the alcohol condition: “The present study focused on the legal behaviors (e.g., sexual behavior, buying item, and eating food) in which adolescents could participate.”

    But in the Fig. 1 legend, they say the opposite (that the alcohol condition was included):
    Panel (A) provides the average likelihood of young women's endorsing low- and high-risk decisions in the boy, alcohol, food, and household item (control) stimulus categories.

    Then they say that the low-risk male faces were rated as the most unlikely (i.e., least preferred) of all stimuli.  But Fig. 1 itself shows that the low-risk food stimuli were rated as the most unlikely...



    Regardless of the precise ratings, the young women were more drawn to all stimuli when they were in the high risk condition. The authors tried to make a case for more "risky" sexual choices among participants with higher levels of overt or covert sexual reporting, but the numbers were either impossibly low (for behavior) or thought-crimes only (for dreams/fantasy). So it's really hard to see how brain activity of any sort could be diagnostic of actual behavior at this point in their lives.

    And the neuroimaging results were confusing as well. First, the less desirable low-risk stimuli elicited greater responses in cognitive and emotional control regions:
    Neural activity in a cognitive-affective network, including prefrontal and anterior cingulate (ACC) regions, was significantly greater during low-risk decisions.

    But then, we see that the more desirable high-risk sexual stimuli elicited greater responses in cognitive/emotional control regions:
    Compared with other decisions, high-risk sexual decisions elicited greater activity in the anterior cingulate, and low-risk sexual decision elicited greater activity in regions of the visual cortex. 

    This pattern went in the opposite direction from what was seen in adult women (Rupp et al., 2009), and it implicated a different region of the ACC. It's difficult to draw comparisons, though, because the adult and adolescent groups diverged in age, demographic characteristics, and sexual experience.


    Figure adapted from Hensel et al., 2015 (left) and Rupp et al., 2009 (right).


    So is it feasible to use fMRI to understand teen girls' sexual decision making? Maybe, from the point of view of logistics and subject compliance, which is no mean feat. But is it necessary, or even informative? Certainly not, in my view. It's not clear what neuroimaging will add to the picture, beyond the participants' fully disclosed sexual histories. Finally, is it ethical to use brain imaging to understand teen girls' sexual decision making? While the future predictive value of the fMRI data is uncertain, linking a biomarker to sensitive sexual information requires extra protection, especially when it comes from a potentially vulnerable adolescent population.


    Footnote

    1 In the state of Indiana, it is illegal for an individual 18 years of age or older to have sex with any of the participants in the present study. So if a young woman engaged in sexual activity with an 18-year-old senior, he could potentially go to jail. Not that this was necessarily the case for anyone here.


    References

    Hensel, D., Hummer, T., Arcurio, L., James, T., & Fortenberry, J. (2015). Feasibility of Functional Neuroimaging to Understand Adolescent Women's Sexual Decision Making. Journal of Adolescent Health. DOI: 10.1016/j.jadohealth.2014.11.004

    Rupp, H., James, T., Ketterson, E., Sengelaub, D., Janssen, E., & Heiman, J. (2009). The role of the anterior cingulate cortex in women's sexual decision making. Neuroscience Letters, 449(1), 42-47. DOI: 10.1016/j.neulet.2008.10.083



    The Neurocritic (the blog) began 9 years ago today.

    I've enjoyed the journey immensely and look forward to the years to come. Music by Nodes of Ranvier (the band — not the myelin sheath gaps).






    Node of Ranvier



    And now a word from our sponsors,  Episode 3979 of Sesame Street...

    The Number 9



    The Letter k



    Thank you for watching! (and reading).



     ...or should I say braindoggle...


    I've been reading The Future of the Brain, a collection of Essays by the World's Leading Neuroscientists edited by Gary Marcus and Jeremy Freeman. Amidst the chapters on jaw-dropping technical developments, Big Factory Science, and Grand Neuroscience Initiatives, one stood out for its contrarian stance (and personally reflective tone). Here's Professor Leah Krubitzer, who heads the Laboratory of Evolutionary Biology at University of California, Davis:

    “From a personal rather than scientific standpoint, the final important thing I've learned is don't be taken in by the boondoggle, don't get caught up in technology, and be very suspicious of "initiatives." Science should be driven by questions that are generated by inquiry and in-depth analysis rather than top-down initiatives that dictate scientific directions. I have also learned to be suspicious of labels declaring this the "decade of" anything: The brain, The mind, Consciousness. There should be no time limit on discovery. Does anyone really believe we will solve these complex, nonlinear phenomena in ten years or even one hundred? Tightly bound temporal mandates can undermine the important, incremental, and seemingly small discoveries scientists make every day doing critical, basic, nonmandated research. These basic scientific discoveries have always been the foundation for clinical translation. By all means funding big questions and developing innovative techniques is worthwhile, but scientists and the science should dictate the process.”

    ...although it should be said that a bunch of scientists did at least contribute to the final direction taken by the BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies℠)...


    An AS @ UVA Project
    by Meagan Hess
    May 2004



    Top image: vintage spoof Monopoly game issued during the 1936 US presidential campaign.






    What do schizophrenia, bipolar disorder, major depression, addiction, obsessive compulsive disorder, and anxiety have in common? A loss of gray matter in the dorsal anterior cingulate cortex (dACC) and bilateral anterior insula, according to a recent review of the structural neuroimaging literature (Goodkind et al., 2015). These two brain regions are important for executive functions, the top-down cognitive processes that allow us to maintain goals and flexibly alter our behavior in response to changing circumstances. The authors modestly concluded they had identified a “Common Neurobiological Substrate for Mental Illness.”

    One problem with this view is that the specific pattern of deficits in executive functions, and their severity, differ across these diverse psychiatric disorders. For instance, students with anxiety perform worse than controls in verbal selection tasks, while those with depression actually perform better (Snyder et al., 2014). Another problem is that gray matter volume in the dorsolateral prefrontal cortex, a key region for working memory (a core impairment in schizophrenia and to a lesser extent, in major depression and non-psychotic bipolar disorder), was oddly unaffected in the meta-analysis.

    The NIMH RDoC movement (Research Domain Criteria) aims to explain the biological basis of psychiatric symptoms that cut across traditional DSM diagnostic categories. But I think some of the recent research that uses this framework may carry the approach too far (Goodkind et al., 2015):
    Our findings ... provide an organizing model that emphasizes the import of shared endophenotypes across psychopathology, which is not currently an explicit component of psychiatric nosology. This transdiagnostic perspective is consistent...with newer dimensional models such as the NIMH’s RDoC Project.

    However, not even the Director of NIMH believes this is true:
    "The idea that these disorders share some common brain architecture and that some functions could be abnormal across so many of them is intriguing," said Thomas Insel, MD...

    [BUT]

    "I wouldn't have expected these results. I've been working under the assumption that we can use neuroimaging to help classify the different forms of mental illness," Insel said. "This makes it harder."

    Anterior Cingulate and Anterior Insula and Everyone We Know

    The dACC and anterior insula are ubiquitously activated1 in human neuroimaging studies (leading Micah Allen to dub it the ‘everything’ network), and comprise either a salience network or task-set network (or even two separate cingulo-opercular systems) in resting state functional connectivity studies. But the changes reported in the newly published work were structural in nature. They were based on a meta-analysis of 193 voxel-based morphometry (VBM) studies that quantified gray matter volume across the entire brain in psychiatric patient groups, and compared this to controls.

    Goodkind et al., (2015) included a handy flow chart for how they selected the papers for their review.



    I could be wrong, but it looks like 34 papers were excluded because they found no differences between patients and controls. This would of course bias the results towards greater differences between patients and controls. And we don't know which of the six psychiatric diagnoses were included in the excluded batch. Was there an over-representation of null results in OCD? Anxiety? Depression?
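    A toy illustration of why this exclusion matters, with entirely invented effect sizes (nothing below comes from the meta-analysis itself): dropping studies that found no patient-control difference inflates the pooled estimate.

```python
from statistics import mean

# Toy file-drawer sketch (ALL effect sizes invented): if null results
# are excluded before pooling, the meta-analytic estimate is biased up.
all_studies = [0.6, 0.4, 0.0, -0.1, 0.5, 0.1, 0.0, 0.3, -0.2, 0.4]
published_only = [d for d in all_studies if d >= 0.3]  # nulls stay in the drawer

print(round(mean(all_studies), 2))     # pooled effect over all studies -> 0.2
print(round(mean(published_only), 2))  # inflated published-only estimate -> 0.44
```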


    What Does VBM Measure, Anyway?

    Typically, VBM measures gray matter volume, which in the cortex is determined by surface area (which can vary due to differences in folding patterns) and by thickness (Kanai & Rees, 2011). These can be differentially related to some ability or characteristic. For example, Song et al. (2015) found that having a larger surface area in early visual cortex (V1 and V2) was correlated with better performance in a perceptual discrimination task, while larger cortical thickness was actually correlated with worse performance. Other investigators warn that volume really isn't the best measure of structural differences between patients and controls, and that cortical thickness is better (Ehrlich et al., 2012):
    Cortical thickness is assumed to reflect the arrangement and density of neuronal and glial cells, synaptic spines, as well as passing axons. Postmortem studies in patients with schizophrenia showed reduced neuronal size and a decrease in interneuronal neuropil, dendritic trees, cortical afferents, and synaptic spines, while no reduction in the number of neurons or signs of gliosis could be demonstrated.
    This leads us to the huge gap between dysfunction in cortical and subcortical microcircuits and gross changes in gray matter volume.
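    The Kanai and Rees decomposition can be made concrete with a toy sketch (all numbers invented): volume, the quantity VBM typically reports, conflates surface area and thickness, two properties that can vary independently and even relate to behavior in opposite directions.

```python
# Toy illustration (numbers invented): gray matter volume conflates
# surface area and cortical thickness (Kanai & Rees, 2011).
def gm_volume(surface_area_mm2, thickness_mm):
    """Approximate gray matter volume as surface area x thickness."""
    return surface_area_mm2 * thickness_mm

# Two hypothetical regions with identical volume but different geometry
region_a = gm_volume(surface_area_mm2=1000.0, thickness_mm=2.5)  # smaller area, thicker
region_b = gm_volume(surface_area_mm2=1250.0, thickness_mm=2.0)  # larger area, thinner
print(region_a, region_b)  # -> 2500.0 2500.0
```

A VBM contrast would treat these two regions as equivalent, even though their underlying geometry (and any behavioral correlates of area vs. thickness) could differ.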


    Psychiatric Disorders Are Circuit Disorders

    This motto tells us that mental illnesses are disorders of neural circuits, in line with the funding priorities of NIMH and the BRAIN Initiative. But structural MRI studies tell us nothing about the types of neurons that are affected. Or how their size, shape, and synaptic connections might be altered. Basically, volume loss in dACC and anterior insula could have any number of causes, arising by different mechanisms across the disorders under consideration. Goodkind et al. (2015) state:
    Our connection of executive functioning to integrity of a well-established brain network that is perturbed across a broad range of psychiatric diagnoses helps ground a transdiagnostic understanding of mental illness in a context suggestive of common neural mechanisms for disease etiology and/or expression.

    But actually, we might find a reduction in the density of von Economo neurons in the dACC of individuals with early-onset schizophrenia (Brüne et al., 2010), but not in persons with other disorders. Or a reduction in the density of GAD67 mRNA-expressing neurons in ACC cortical layer 5 in schizophrenia, but not in bipolar disorder. On the other hand, we could see something like an alteration in the synapses onto parvalbumin inhibitory interneurons (due to stress) that cuts across multiple diagnoses.

    And it's not always the case that bigger is better: smaller cortical volumes can also be associated with better performance (Kanai & Rees, 2011).

    As Kanai and Rees (2011) noted in their review:
    ...a direct link between microstructures and macrostructures has not been established in the human brain. A histological study directly compared whether histopathological measurements of resected temporal lobe tissue correlated with grey matter density as used in typical VBM studies. However, none of the histological measures — including neuronal density — showed a clear relationship with the grey matter volume. 

    So where do we go from here? Bridging the technological gulf between exceptionally invasive methods (like optogenetics and chemogenetics in animals) and non-invasive ones (TMS, MRI in humans) is a minor funding priority of the BRAIN Initiative. Another more manageable strategy for the present would be a comprehensive review of imaging, genetic, and post-mortem neuroanatomical studies of brains from people who lived with schizophrenia, bipolar disorder, major depression, addiction, obsessive compulsive disorder, and anxiety. This has been done most extensively (perhaps) for schizophrenia (e.g., Meyer-Lindenberg, 2010; Arnsten, 2011). Certain types of electrophysiological studies in primate prefrontal cortex may provide another bridge, although this has been disputed.

    Goodkind and colleagues have indeed uncovered some “biological commonalities that may have been underappreciated in prior work,” but it's also clear there are “some fairly obvious distinctions between schizophrenia and bipolar disorder” at a clinical level (to give one example). In the rush to cut up psychiatric nosology along the RDoC dotted lines, let's not forget the limitations of current methods that are designed to do the carving.

    Further Reading

    Other comprehensive reviews:

    Large-scale brain networks and psychopathology: a unifying triple network model

    Does the salience network play a cardinal role in psychosis? An emerging hypothesis of insular dysfunction

    Salience processing and insular cortical function and dysfunction


    Critiques of phrenology-like VBM studies:

    Now Is That Gratitude?

    Should Policy Makers and Financial Institutions Have Access to Billions of Brain Scans?

    Anthropomorphic Neuroscience Driven by Researchers with Large TPJs

    Liberals Are Conflicted and Conservatives Are Afraid


    Great discussion of a failure to replicate VBM studies (at Neuroskeptic):


    Failed Replications: A Reality Check for Neuroscience?


    Footnotes

    1To quote Russ Poldrack:
    In Tal Yarkoni's recent paper in Nature Methods, we found that the anterior insula was one of the most highly activated part of the brain, showing activation in nearly 1/3 of all imaging studies!
    2Links to recent J Neurosci articles via @prerana123 and @MyCousinAmygdala.


    References

Brüne M, Schöbel A, Karau R, Benali A, Faustmann PM, Juckel G, Petrasch-Parwez E. (2010). Von Economo neuron density in the anterior cingulate cortex is reduced in early-onset schizophrenia. Acta Neuropathol. 119(6):771-8.

    Ehrlich S, Brauns S, Yendiki A, Ho BC, Calhoun V, Schulz SC, Gollub RL, Sponheim SR. (2012). Associations of cortical thickness and cognition in patients with schizophrenia and healthy controls. Schizophr Bull. 38(5):1050-62.

    Goodkind, M., Eickhoff, S., Oathes, D., Jiang, Y., Chang, A., Jones-Hagata, L., Ortega, B., Zaiko, Y., Roach, E., Korgaonkar, M., Grieve, S., Galatzer-Levy, I., Fox, P., & Etkin, A. (2015). Identification of a Common Neurobiological Substrate for Mental Illness. JAMA Psychiatry DOI: 10.1001/jamapsychiatry.2014.2206

    Kanai, R., & Rees, G. (2011). The structural basis of inter-individual differences in human behaviour and cognition. Nature Reviews Neuroscience, 12 (4), 231-242. DOI: 10.1038/nrn3000

Song C, Schwarzkopf DS, Kanai R, Rees G. (2015). Neural population tuning links visual cortical anatomy to human visual perception. Neuron 85(3):641-56.

    Snyder HR, Kaiser RH, Whisman MA, Turner AE, Guild RM, Munakata Y. (2014). Opposite effects of anxiety and depressive symptoms on executive function: the case of selecting among competing options. Cogn Emot. 28(5):893-902.


    Fig. 3 (Meyer-Lindenberg, 2010). Schematic summary of putative alterations in dorsolateral prefrontal cortex circuitry in schizophrenia.




Could your chronotype (degree of "morningness" vs. "eveningness") be related to your membership on Team white/gold vs. Team blue/black?

Dreaded by night owls everywhere, Daylight Saving Time forces us to get up an hour earlier. Yes, [my time to blog and] I have been living under a rock, but this evil event and an old tweet by Vaughan Bell piqued my interest in melanopsin and intrinsically photosensitive retinal ganglion cells.


    I thought this was a brilliant idea, perhaps differences in melanopsin genes could contribute to differences in brightness perception. More about that in a moment.


    {Everyone already knows about #thedress from Tumblr and Buzzfeed and Twitter obviously}

In the initial BuzzFeed poll, 75% saw it as white and gold, rather than the actual colors of blue and black. Facebook's more systematic research estimated this number was only 58% (and probably influenced by exposure to articles that used Photoshop). Facebook also reported differences by sex (males more b/b), age (youngsters more b/b), and interface (more b/b on computer vs. iPhone and Android).

Dr. Cedar Riener wrote two informative posts about why people might perceive the colors differently, but Dr. Bell was not satisfied with this and other explanations. Wired consulted two experts in color vision:
    “Our visual system is supposed to throw away information about the illuminant and extract information about the actual reflectance,” says Jay Neitz, a neuroscientist at the University of Washington. “But I’ve studied individual differences in color vision for 30 years, and this is one of the biggest individual differences I’ve ever seen.”
    and
    “What’s happening here is your visual system is looking at this thing, and you’re trying to discount the chromatic bias of the daylight axis,” says Bevil Conway, a neuroscientist who studies color and vision at Wellesley College. “So people either discount the blue side, in which case they end up seeing white and gold, or discount the gold side, in which case they end up with blue and black.”

    Finally, Dr. Conway threw out the chronotype card:
    So when context varies, so will people’s visual perception. “Most people will see the blue on the white background as blue,” Conway says. “But on the black background some might see it as white.” He even speculated, perhaps jokingly, that the white-gold prejudice favors the idea of seeing the dress under strong daylight. “I bet night owls are more likely to see it as blue-black,” Conway says.

    Melanopsin and Intrinsically Photosensitive Retinal Ganglion Cells

    Rods and cones are the primary photoreceptors in the retina that convert light into electrical signals. The role of the third type of photoreceptor is very different. Intrinsically photosensitive retinal ganglion cells (ipRGCs) sense light without vision and:
    • ...contribute to the regulation of pupil size and other behavioral responses to ambient lighting conditions...
    • ...contribute to photic regulation of, and acute photic suppression of, release of the hormone melatonin...

    Recent research suggests that ipRGCs may play more of a role in visual perception than was originally believed. As Vaughan said, melanopsin (the photopigment in ipRGCs) is involved in brightness discrimination and is most sensitive to blue light. Brown et al. (2012) found that melanopsin knockout mice showed a change in spectral sensitivity that affected brightness discrimination; the KO mice needed higher green radiance to perform the task as well as the control mice.

    The figure below shows the spectra of human cone cells most sensitive to Short (S), Medium (M), and Long (L) wavelengths.



    Spectral sensitivities of human cone cells, S, M, and L types. X-axis is in nm.


    The peak spectral sensitivity for melanopsin photoreceptors is in the blue range. How do you isolate the role of melanopsin in humans?  Brown et al. (2012) used metamers, which are...
    ...light stimuli that appear indistinguishable to cones (and therefore have the same color and photopic luminance) despite having different spectral power distributions.  ... to maximize the melanopic excitation achievable with the metamer approach, we aimed to circumvent rod-based responses by working at background light levels sufficiently bright to saturate rods.

They verified their approach in mice, then used a four-LED system to generate stimuli that differed in presumed melanopsin excitation, but not S, M, or L cone excitation. All six of the human participants perceived greater brightness as melanopsin excitation increased (see Fig. 3E below). Also notice the individual differences in test radiance with the fixed 11% melanopic excitation (on the right of the graph).
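The logic of this stimulus design ("silent substitution") can be sketched as a small linear-algebra problem: find a change in the four primaries that shifts melanopsin excitation while leaving S, M, and L cone excitations untouched. Everything below is a toy illustration — the sensitivity matrix is invented, not Brown et al.'s actual calibration.

```python
# Toy "silent substitution" sketch: adjust 4 LED primaries so that
# melanopsin excitation changes while S, M, and L cone excitations do not.
# All sensitivity numbers are invented for illustration.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][-1] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

# Rows: how strongly each photoreceptor class responds to each of the
# 4 primaries (hypothetical values, roughly violet/blue/cyan/green LEDs).
S = [
    [0.90, 0.10, 0.05, 0.02],  # S cones
    [0.10, 0.80, 0.50, 0.20],  # M cones
    [0.05, 0.50, 0.90, 0.30],  # L cones
    [0.30, 0.60, 0.20, 0.05],  # melanopsin
]

# Target: zero change for all three cone classes, +11% melanopic excitation.
target = [0.0, 0.0, 0.0, 0.11]
dw = solve(S, target)  # change in each primary's intensity

# Check that the primary changes are cone-silent but not melanopsin-silent.
for row, want in zip(S, target):
    got = sum(si * wi for si, wi in zip(row, dw))
    print(round(got, 6), want)
```

With four independent photoreceptor classes and four primaries, the system is exactly determined, which is why Brown et al. could hold cone (and, at rod-saturating light levels, rod) excitation fixed while varying the melanopic signal.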


Modified from Fig. 3E (Brown et al., 2012). Across six subjects, there was a strong correlation between the test radiance at equal brightness and the melanopic excitation of the reference stimulus (p < 0.001).1


    Maybe Team white/gold and Team blue/black differ on this dimension? And while we're at it, is variation in melanopsin related to circadian rhythms, chronotype, even seasonal affective disorder (SAD)? 2 There is some evidence in favor of the circadian connections. Variants of the melanopsin (Opn4) gene might be related to chronotype and to SAD, which is much more common in women. Another Opn4 polymorphism may be related to pupillary light responses, which would affect light and dark adaptation. These genetic findings should be interpreted with caution, however, until replicated in larger populations.


    Could This Device Hold the Key to “The Dress”?

ADDENDUM (March 10, 2015): NO, according to Dr. Geoffrey K. Aguirre of U. Penn.: Speaking as a guy with a 56-primary version of This Device to study melanopsin, I think the answer to your question is 'no'…” His PNAS paper, Opponent melanopsin and S-cone signals in the human pupillary light response, is freely available.3


    A recent method developed by Cao, Nicandro and Barrionuevo (2015) increases the precision of isolating ipRGC function in humans. The four-primary photostimulator used by Brown et al. (2012) assumed that the rod cells were saturated at the light levels they used. However, Cao et al. (2015) warn that “a four-primary method is not sufficient when rods are functioning together with melanopsin and cones.” So they:
    ...introduced a new LED-based five-primary photostimulating method that can independently control the excitation of melanopsin-containing ipRGC, rod and cone photoreceptors at constant background photoreceptor excitation levels.

    Fig. 2 (Cao et al., 2015). The optical layout and picture of the five-primary photostimulator.


    Their Journal of Vision article is freely available, so you can read all about the methods and experimental results there (i.e., I'm not even going to try to summarize them here).

    So the question remains: beyond the many perceptual influences that everyone has already discussed at length (e.g., color constancy, Bayesian priors, context, chromatic bias, etc.), could variation in ipRGC responses influence how you see “The Dress”?




    Footnotes

    1Fig 3E (continued). The effect was unrelated to any impact of melanopsin on pupil size. Subjects were asked to judge the relative brightness of three metameric stimuli (melanopic contrast −11%, 0%, and +11%) with respect to test stimuli whose spectral composition was invariant (and equivalent to the melanopsin 0% stimulus) but whose radiance changed between trials.

    2This would test Conway's quip that night owls are more likely to see the dress as blue and black.

    3Aguirre also said that a contribution from melanopsin (to the dress effect) was doubtful, at least from any phasic effect: “It's a slow signal with poor spatial resolution and subtle perceptual effects.” It remains to be seen whether any bias towards discarding blue vs. yellow illuminant information is affected by chronotype.

Interesting result from Spitschan, Jain, Brainard, & Aguirre (2014):
    The opposition of the S cones is revealed in a seemingly paradoxical dilation of the pupil to greater S-cone photon capture. This surprising result is explained by the neurophysiological properties of ipRGCs found in animal studies.

    References

    Brown, T., Tsujimura, S., Allen, A., Wynne, J., Bedford, R., Vickery, G., Vugler, A., & Lucas, R. (2012). Melanopsin-Based Brightness Discrimination in Mice and Humans. Current Biology, 22 (12), 1134-1141 DOI: 10.1016/j.cub.2012.04.039

    Cao, D., Nicandro, N., & Barrionuevo, P. (2015). A five-primary photostimulator suitable for studying intrinsically photosensitive retinal ganglion cell functions in humans. Journal of Vision, 15 (1), 27-27 DOI: 10.1167/15.1.27



    Website for the BROADEN™ study, which was terminated


    In these days of irrational exuberance about neural circuit models, it's wise to remember the limitations of current deep brain stimulation (DBS) methods to treat psychiatric disorders. If you recall (from Dec. 2013), Neurotech Business Report revealed that "St. Jude Medical failed a futility analysis of its BROADEN trial of DBS for treatment of depression..."

    A recent comment on my old post about the BROADEN Trial1 had an even more pessimistic revelation: there was only a 17.2% chance of a successful study outcome:
    Regarding Anonymous' comment on January 30, 2015 11:01 AM, as follows in part:
    "Second, the information that it failed FDA approval or halted by the FDA is prima facie a blatant lie and demonstratively false. St Jude, the company, withdrew the trial."

    Much of this confusion could be cleared up if the study sponsors practiced more transparency.
    A bit of research reveals that St. Judes' BROADEN study was discontinued after the results of a futility analysis predicted the probability of a successful study outcome to be no greater than 17.2%. (According to a letter from St. Jude)

    Medtronic hasn't fared any better. Like the BROADEN study, Medtronics' VC DBS study was discontinued owing to inefficacy based on futility Analysis.

    If the FDA allowed St. Jude to save face with its shareholders and withdraw the trial rather than have the FDA take official action, that's asserting semantics over substance.

    If you would like to read more about the shortcomings of these major studies, please read (at least):
    Deep Brain Stimulation for Treatment-resistant Depression: Systematic Review of Clinical Outcomes,
    Takashi Morishita & Sarah M. Fayad &
    Masa-aki Higuchi & Kelsey A. Nestor & Kelly D. Foote
    The American Society for Experimental NeuroTherapeutics, Inc. 2014
    Neurotherapeutics
    DOI 10.1007/s13311-014-0282-1

    The Anonymous Commenter kindly linked to a review article (Morishita et al., 2014), which indeed stated:
    A multicenter, prospective, randomized trial of SCC DBS for severe, medically refractory MDD (the BROADEN study), sponsored by St. Jude Medical, was recently discontinued after the results of a futility analysis (designed to test the probability of success of the study after 75 patients reached the 6-month postoperative follow-up) statistically predicted the probability of a successful study outcome to be no greater than 17.2 % (letter from St. Jude Medical Clinical Study Management).

    I (and others) had been looking far and wide for an update on the BROADEN Trial, whether in ClinicalTrials.gov or published by the sponsors. Instead, the authors of an outside review article (who seem to be involved in DBS for movement disorders and not depression) had access to a letter from St. Jude Medical Clinical Studies.

    Another large randomized controlled trial that targeted different brain structures (ventral capsule/ventral striatum, VC/VS) also failed a futility analysis (Morishita et al., 2014):
    Despite the very encouraging outcomes reported in the open-label studies described above, a recent multicenter, prospective, randomized trial of VC/VS DBS for MDD sponsored by Medtronic failed to show significant improvement in the stimulation group compared with a sham stimulation group 16 weeks after implantation of the device. This study was discontinued owing to perceived futility, and while investigators remain hopeful that modifications of inclusion criteria and technique might ultimately result in demonstrable clinical benefit in some cohort of severely debilitated, medically refractory patients with MDD, no studies investigating the efficacy of VC/VS DBS for MDD are currently open.
    In this case, however, the results were published (Dougherty et al., 2014):
    There was no significant difference in response rates between the active (3 of 15 subjects; 20%) and control (2 of 14 subjects; 14.3%) treatment arms and no significant difference between change in Montgomery-Åsberg Depression Rating Scale scores as a continuous measure upon completion of the 16-week controlled phase of the trial. The response rates at 12, 18, and 24 months during the open-label continuation phase were 20%, 26.7%, and 23.3%, respectively.
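Those response counts make the null result easy to check by hand. The paper doesn't specify which exact test underlies the response-rate comparison, so purely as an illustration, here is a hand-rolled two-sided Fisher's exact test on 3/15 active vs. 2/14 sham responders:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]."""
    row1, row2, col1 = a + b, c + d, a + c
    total = comb(row1 + row2, col1)
    def p_table(x):
        # Hypergeometric probability of a table with x responders in row 1
        return comb(row1, x) * comb(row2, col1 - x) / total
    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    # Sum probabilities of all tables at least as extreme as the observed one
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Dougherty et al. (2014): 3/15 responders (active) vs. 2/14 (sham)
p = fisher_exact_two_sided(3, 12, 2, 12)
print(round(p, 3))  # 1.0 — no hint of a group difference
```

With these cell counts the observed table is actually the most probable one, so every possible table counts as "at least as extreme" and the two-sided p-value comes out to 1 — about as null as a result can get.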

    Additional studies (with different stimulation parameters, better target localization, more stringent subject selection criteria) are needed, one would say. Self-reported outcomes from the patients themselves range from “...the side effects caused by the device were, at times, worse than the depression itself” to “I feel like I have a second chance at life.”

    So where do we go now?? Here's a tip: all the forward-looking investors are into magnetic nanoparticles these days (see Magnetic 'rust' controls brain activity)...


    Footnote

1 BROADEN is a tortured acronym for BROdmann Area 25 DEep brain Neuromodulation. The target was subgenual cingulate cortex (aka BA 25). The trial was either halted by the FDA or withdrawn by the sponsor.


    References

    Dougherty DD, Rezai AR, Carpenter LL, Howland RH, Bhati MT, O'Reardon JP, Eskandar EN, Baltuch GH, Machado AD, Kondziolka D, Cusin C, Evans KC, Price LH, Jacobs K, Pandya M, Denko T, Tyrka AR, Brelje T, Deckersbach T, Kubu C, Malone DA Jr. (2014). A Randomized Sham-Controlled Trial of Deep Brain Stimulation of the Ventral Capsule/Ventral Striatum for Chronic Treatment-Resistant Depression. Biol Psychiatry Dec 13. [Epub ahead of print].

    Morishita, T., Fayad, S., Higuchi, M., Nestor, K., & Foote, K. (2014). Deep Brain Stimulation for Treatment-resistant Depression: Systematic Review of Clinical Outcomes. Neurotherapeutics, 11 (3), 475-484. DOI: 10.1007/s13311-014-0282-1



    DBS for MDD targets as of November 2013
    (Image credit: P. HUEY/SCIENCE)

03/28/15 -- 12:25: Follow #CNS2015


Whether or not you're in sunny San Francisco for the start of Cognitive Neuroscience Society Meeting today, you can follow Nick Wan's list of conference attendees on Twitter: @nickwan/#CNS2015. There's also the #CNS2015 hashtag, and the official @CogNeuroNews account.

    Nick will also be blogging from the conference at True Brain. You may see a post or two from The Neurocritic, but I'm usually not very prompt about it. Please comment if you'll be blogging too.

    Two of the program highlights are today:

    Keynote Address, Anjan Chatterjee:
    “The neuroscience of aesthetics and art”

    2015 Distinguished Career Contributions Awardee, Marta Kutas:
    “45 years of Cognitive Electrophysiology: neither just psychology nor just the brain but the visible electrical interface between the twain”


    Here are the CNS interviews with Dr. Chatterjee and Dr. Kutas.

    Enjoy the meeting!


    What can we do to solve the mind/body problem once and for all? How do we cure devastating brain diseases like Alzheimer's, Parkinson's, schizophrenia, and depression? I am steadfast in following the course of my 500 year plan that may eventually solve these pressing issues, to the benefit of all Americans!

    There's nothing like attending a conference in the midst of a serious family illness to make one take stock of what's important. My mind/brain has been elsewhere lately, along with my body in a different location. My blogging output has declined while I live in this alternate reality. But aside from the disunion caused by depersonalization/derealization, what is my view of the state of Cognitive Neuroscience in 2015?

    But first, let's examine what we're trying to unify. Studies of mind and studies of brain?  Cognition and neuroscience?  Let's start with “neuroscience”.

    Wikipedia says:
    Neuroscience is the scientific study of the nervous system. Traditionally, neuroscience has been seen as a branch of biology. ... The term neurobiology is usually used interchangeably with the term neuroscience, although the former refers specifically to the biology of the nervous system, whereas the latter refers to the entire science of the nervous system. 

    This reminds me of a recent post by Neuroskeptic, who asked: Is Neuroscience Based On Biology? On the face of it, this seemed like an absurd question to me, because the brain is a biological organ and of course we must know its biology to understand how it works. But what he really meant was, Is Cognitive Science Based On Biology? I say this because he adopted a functionalist view and used the brain-as-computer metaphor:
    Could it be that brains are only accidentally made of cells, just as computers are only accidentally made of semiconductors? If so, neuroscience would not be founded on biology but on something else, something analogous to the mathematical logic that underpins computer science. What could this be?

    See John Searle on Beer Cans & Meat Machines (1984):
    This view [the brain is just a digital computer and the mind is just a computer program] has the consequence that there’s nothing essentially biological about the human mind. The brain just happens to be one of an indefinitely large number of different kinds of hardware computers that could sustain the programs which make up human intelligence. ... So, for example, if you made a computer out of old beer cans powered by windmills, if it had the right program. It would have to have a mind.

    The infamous argument-by-beer-cans. In the end, Neuroskeptic admitted he's not sure he subscribes to this view. But the post sparked an interesting discussion. There were a number of good comments, e.g. Jayarava said: “Neuro-science absolutely needs to be neuron-science, to focus on brains made of cells because that's what we need to understand in the first place.” Indeed, some neuroscientists don't consider “cognitive neuroscience” to be “neuroscience” at all, because the measured units are higher (i.e., less reductionist) than single neurons.1

    A comment by Adam Calhoun gets to the heart of the matter, making a sharp point about the disunity of neuroscience:
    Although we use the term 'neuroscience' as though it refers to one coherent discipline, the problem here is that it does not. If you were to pick a neuroscientist at random and ask: "what does your field study?" you will not get the same answer two times in a row.

    Neural development? Molecular pathways? Cognition? Visual processing? Are these the same field? Or different fields that have been given the same name?

One of the selling points of neuroscience is its interdisciplinary nature, but it's really hard to talk to each other if we don't speak the same language (or work in the same field). Some graduate programs dwell in an idealized world where students can become knowledgeable in molecular, cellular, developmental, systems, and cognitive neuroscience in one year. The reality is that professors in some subfields couldn't pass the exams given in another subfield. And why would they possibly want to do this, given they're way too busy writing grants?

    Sometimes I think cognitive neuroscience is on a completely different planet from the other branches, estranged from even its closest cousin, behavioral neuroscience.2 It's even further away these days from systems neuroscience3 which used to be dominated by the glamour of single unit recordings in monkeys, but now is all about manipulating circuits with opto- and chemogenetics.

    But as the Systems/Circuits techniques get more and more advanced (and invasive and mechanistic), the gulf between animal and human studies grows larger and the prospects for clinical translation fade.  [Until the neuroengineers come in and save the day.]

    I'll end on a more optimistic note, with a quote from a man who wished to bridge the gap between Aplysia californica and Sigmund Freud.





    Footnotes

    1And often not even a direct measure of neural activity at all (e.g. the hemodynamic response in fMRI). The rare exceptions to this are studies in patients with epilepsy, which have revealed the existence of Marilyn Monroe neurons and Halle Berry neurons and (my personal favorite) the rare multimodal Robert Plant neuron in the medial temporal lobe.

    2Though if you look at the mission of the journal called Behavioral Neuroscience, its scope has broadened to include just about anything:
    We seek empirical papers reporting novel results that provide insight into the mechanisms by which nervous systems produce and are affected by behavior. Experimental subjects may include human and non-human animals and may address any phase of the lifespan, from early development to senescence.

    Studies employing brain-imaging techniques in normal and pathological human populations are encouraged, as are studies using non-traditional species (including invertebrates) and employing comparative analyses. Studies using computational approaches to understand behavior and cognition are particularly encouraged.

    In addition to behavior, it is expected that some aspect of nervous system function will be manipulated or observed, ranging across molecular, cellular, neuroanatomical, neuroendocrinological, neuropharmacological, and neurophysiological levels of analysis. Behavioral studies are welcome so long as their implications for our understanding of the nervous system are clearly described in the paper.

    3 Actually, systems neuroscience is mostly about engineering and computational modelling these days.


    Some Final Definitions (for the record)

    The Society for Neuroscience (SfN) explanation of what neuroscientists do:
    Neuroscientists specialize in the study of the brain and the nervous system. They are inspired to try to decipher the brain’s command of all its diverse functions. Over the years, the neuroscience field has made enormous progress. Scientists continue to strive for a deeper understanding of how the brain’s 100 billion nerve cells [NOTE: the number is only 86 billion] are born, grow, and connect. They study how these cells organize themselves into effective, functional circuits that usually remain in working order for life.

    The SfN mission:
    SfN advances the understanding of the brain and the nervous system by bringing together scientists of diverse backgrounds, facilitating the integration of research directed at all levels of biological organization, and encouraging translational research and the application of new scientific knowledge to develop improved disease treatments and cures. 

    The CNS mission:
    The Cognitive Neuroscience Society (CNS) is committed to the development of mind and brain research aimed at investigating the psychological, computational, and neuroscientific bases of cognition.

    The term cognitive neuroscience has now been with us for almost three decades, and identifies an interdisciplinary approach to understanding the nature of thought.

    And according to Wikipedia:
    Cognitive neuroscience is an academic field concerned with the scientific study of biological substrates underlying cognition,[1] with a specific focus on the neural substrates of mental processes. It addresses the questions of how psychological/cognitive functions are produced by neural circuits in the brain. Cognitive neuroscience is a branch of both psychology and neuroscience, overlapping with disciplines such as physiological psychology, cognitive psychology and neuropsychology.[2] Cognitive neuroscience relies upon theories in cognitive science coupled with evidence from neuropsychology and computational modeling.[2]




    Barack Obama, Jan. 24, 2012:
    “…We should all want a smarter, more effective government. And while we may not be able to bridge our biggest philosophical differences this year, we can make real progress. With or without this Congress, I will keep taking actions that help the economy grow. But I can do a whole lot more with your help. Because when we act together, there is nothing the United States of America can’t achieve.”




    What are the Hot Topics in cognitive neuroscience? We could ask these people, or we could take a more populist approach by looking at conference abstracts. I consulted the program for the recent Cognitive Neuroscience Society meeting (CNS 2015) and made a word cloud using Wordle.1 For comparison, we'll examine the program for the most recent Computational and Systems Neuroscience meeting (Cosyne 2015).

    CNS is all about memory, people, and cognitive processing.

    Cosyne is about neurons, models, and neural activity.




    Word cloud for the 2015 CNS Program



    Word cloud for the 2015 Cosyne Program





    Cosyne is also about network dynamics, information, and learning.




    On the other hand, CNS relies heavily on tasks, studies, and results.








Both Wordles were constructed from poster titles and abstracts. The Cosyne cloud incorporated the titles of talks, and the CNS cloud included titles and abstracts for symposia.1
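Wordle does the counting behind the scenes, but the preprocessing described in footnote 1 (lowercasing everything so "Neural" and "neural" collapse, dropping affiliation words like "University") is easy to reproduce. A minimal frequency-count sketch, with an invented stopword list and invented example titles:

```python
from collections import Counter
import re

# Words to drop: a few function words plus affiliation terms like
# "university" (a minimal, made-up list for illustration).
STOPWORDS = {"the", "of", "and", "in", "a", "to", "for", "with", "at", "university"}

def word_freqs(text, n=10):
    # Lowercase so capitalization variants collapse into one entry,
    # then keep only alphabetic tokens not on the stopword list.
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS).most_common(n)

titles = "Neural correlates of memory. Memory and attention at the University of X."
print(word_freqs(titles, 3))
# [('memory', 2), ('neural', 1), ('correlates', 1)]
```

The word-cloud layout is then just a visualization of these counts, with font size proportional to frequency.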

    What if we only used the abstract titles? Would that provide a more accurate view of the Hot Topics? Since I already had access to a file with the 2014 CNS abstract titles, I started with that.


    Word cloud for CNS 2014 (abstract titles only)


    “Memory” is even more dominant now. “Neural” 2 is brought to the fore, accompanied by “processing” and her younger sibling, “correlates”. Those competitors for memory — “attention”, “language”, “emotional”, et al. — are a wee bit more assertive. {The “visual” bully stays boss of the senses, as usual.} And all those “participants” have faded into the background, relegated to the methods.

    So cognitive neuroscience isn't people after all...

    Finally we have the abstract titles from last year's Organization for Human Brain Mapping meeting3 which, despite the society's name, isn't about people either (although humans and their diseases do play a dramatic role). Instead, “connectivity” is king!


    Word cloud for OHBM 2014 (abstract titles only)


In a real stunner for a methods-driven conference on human brain mapping, “brain” and “fmri” were key terms, and functional connectivity the key concept.


    These Wordles were inspired by the tweets of ‏@CousinAmygdala, who made some lovely word clouds after the first NIH BRAIN Initiative Awards were announced (see also Neuroecology).





    Not CNS-friendly, I'm afraid...




    Footnotes

    1 For the Wordle word clouds, I didn't include ads or indices, and edited out common affiliation words like "University". I set all words to appear in lower case to collapse occurrences of words like "Neural" and "neural".

    2 Although what counts as “neural” here differs (for the most part) from what the word means elsewhere (e.g., the BOLD signal vs. direct physiological recordings from neurons).

    3 This required a bit of editing to extract the poster titles. The final document wasn't as pristine as the 2014 CNS text, but major place names (i.e., affiliations) were removed.





The U.S. Food and Drug Administration recently admonished TauMark™, a brain diagnostics company, for advertising brain scans claimed to diagnose chronic traumatic encephalopathy (CTE), Alzheimer's disease, and other types of dementia. The Los Angeles Times reported that the FDA ordered UCLA researcher Dr. Gary Small and his colleague/business partner Dr. Jorge Barrio to remove misleading information from their company website (example shown below).




    CTE has been in the news because the neurodegenerative condition has been linked to a rash of suicides in retired NFL players, based on post-mortem observations. And the TauMark™ group made headlines two years ago with a preliminary study claiming that CTE pathology is detectable in living players (Small et al., 2013).

    The FDA letter stated:
    The website suggests in a promotional context that FDDNP, an investigational new drug, is safe and effective for the purpose for which it is being investigated or otherwise promotes the drug. As a result, FDDNP is misbranded under section 502(f)(1) of the FD&C Act...

    [18F]-FDDNP1 is a molecular imaging probe that crosses the blood brain barrier and binds to several kinds of abnormal proteins in the brain. When tagged with a radioactive tracer, FDDNP can be visualized using PET (positron emission tomography).

Despite what the name of the company implies, FDDNP is not an exclusive tau marker. FDDNP may bind to tau protein [although this is disputed],2 but it also binds to beta-amyloid, found in the clumpy plaques that form in the brains of those with Alzheimer's disease. Tau is found in neurofibrillary tangles, also characteristic of Alzheimer's pathology, and seen in other neurodegenerative tauopathies such as CTE.

    The big deal with this and other radiotracers is that the pathological proteins can now be visualized in living human beings. Previously, an Alzheimer's diagnosis could only be given at autopsy, when the post-mortem brain tissue was processed to reveal plaques and tangles. So PET imaging is a BIG improvement. But still, a scan alone is not completely diagnostic, as noted by the Alzheimer's Association:
    Even though amyloid plaques in the brain are a characteristic feature of Alzheimer's disease, their presence cannot be used to diagnose the disease. Many people have amyloid plaques in the brain but have no symptoms of cognitive decline or Alzheimer's disease. Because amyloid plaques cannot be used to diagnose Alzheimer's disease, amyloid imaging is not recommended for routine use in patients suspected of having Alzheimer's disease.

    from TauMark's old website


    There are currently three FDA-approved molecular tracers that bind to beta-amyloid: florbetapir, flutemetamol, and florbetaben (note that none of these is FDDNP). But the big selling point of TauMark™ is (of course) the tau marker part, which would also label tau in the brains of individuals with CTE and frontotemporal dementia, diseases not characterized by amyloid plaques. But how can you tell the difference, when FDDNP targets plaques and tangles (and prion proteins, for that matter)?

    A new study by the UCLA team demonstrated that the distribution of FDDNP labeling in the brains of Alzheimer's patients differs from that seen in a selected group of former NFL players with cognitive complaints (Barrio et al., 2015). These retired athletes (and others with a history of multiple concussions) are at risk of developing the brain pathology known as chronic traumatic encephalopathy.



    from Fig. 1 (Barrio et al., 2015).  mTBI = mild traumatic brain injury, or concussion. T1 to T4 = progressive FDDNP PET signal patterns.


    It's a well-established fact that brains with Alzheimer's disease, frontotemporal lobar degeneration, or Lou Gehrig's disease (for example) all show different patterns of neurodegeneration, so why not extend this to CTE? This may seem like a reasonable approach, but there are problems with some of the assumptions.




    Perhaps the most deceptive claim is that “TauMark owns the exclusive license of the first and only brain measure of tau protein...” Au contraire! A review of recent developments in tau PET imaging (Zimmer et al., 2014) said that...
    ...six novel tau imaging agents—[18F]THK523, [18F]THK5105, [18F]THK5117, [18F]T807, [18F]T808, and [11C]PBB3—have been described and are considered promising as potential tau radioligands.

    Note that [18F]FDDNP is not among the six.2,3  In fact, Zimmer et al. (2014) mentioned that in brain slices, “[3H]FDDNP failed to demonstrate overt labeling of tau pathology.” 2

    No matter. Former NFL players are clamoring to participate in the TauMark studies.


    So to recap, the FDA considered TauMark marketing to be “concerning from a public health perspective.” Their letter warned:
    Your website describes FDDNP for use in brain PET scans to diagnose traumatic brain injuries, Alzheimer’s disease, and other neurological conditions. These uses are ones for which a prescription would be needed because they require the supervision of a physician and adequate directions for lay use cannot be written.
    (see also Regulatory Focus News and the FDA's own PDF archive of the TauMark site).


    At this point, astute followers of The Neurocritic and Neurobollocks might ask, “Hey, how does Dr. Daniel Amen get away with claiming that his SPECT scans can accurately diagnose different types of dementia, each with different ‘treatment plans’?”




    Hey FDA, what gives?? Dr. Small and Dr. Barrio have at least 37 peer-reviewed publications on their FDDNP methods and imaging results. Meanwhile, Dr. Amen has two non-peer-reviewed poster abstracts on his SPECT results in dementia. With ads like these and appearances on Celebrity Rehab, aren't the Amen Clinics' claims “misbranded” too?



    Further Reading

    Is CTE Detectable in Living NFL Players?

    The Ethics of Public Diagnosis Using an Unvalidated Method

    Uncertain Diagnoses, Research Data Privacy, & Preference Heterogeneity

    Blast Wave Injury and Chronic Traumatic Encephalopathy: What's the Connection?

    Little Evidence for a Direct Link between PTSD and Chronic Traumatic Encephalopathy


    Footnotes

    1 FDDNP is 2-(1-(6-[(2-[(18)F]fluoroethyl)(methyl)amino]-2-naphthyl)ethylidene)malononitrile.

    2 Or what is presumed to be tau. FDDNP is supposedly a tracer for both tau and amyloid, but some experts think it's neither. Zimmer et al. (2014) stated:
    Though ... [18F]FDDNP appeared to bind both amyloid plaques and tau tangles, a subsequent study using [3H]FDDNP autoradiography in sections containing neurofibrillary tangles (NFTs) failed to demonstrate overt labeling of tau pathology because of a low affinity for NFTs.
    Other studies have shown that it binds to a variety of misfolded proteins.

    3 James et al. (2015) were more generous in their review of tau PET imaging, mentioning the existence of seven tau tracers (including FDDNP). But again they noted the lack of specificity.  (Parenthetically speaking, [18F]T807 imaging has been done in a single NFL player, which may be of interest in a future post.)


    References

    Barrio, J., Small, G., Wong, K., Huang, S., Liu, J., Merrill, D., Giza, C., Fitzsimmons, R., Omalu, B., Bailes, J., & Kepe, V. (2015). In vivo characterization of chronic traumatic encephalopathy using [F-18]FDDNP PET brain imaging. Proceedings of the National Academy of Sciences DOI: 10.1073/pnas.1409952112

    Zimmer, E., Leuzy, A., Gauthier, S., & Rosa-Neto, P. (2014). Developments in Tau PET Imaging. The Canadian Journal of Neurological Sciences, 41 (05), 547-553 DOI: 10.1017/cjn.2014.15




    A new study has found that the pain reliever TYLENOL® (acetaminophen) not only dampens negative emotions, it blunts positive emotions too. Or does it?

    Durso and colleagues (2015) reckoned that if acetaminophen can lessen the sting of psychological pain (Dewall et al., 2010; Randles et al., 2013) — which is doubtful in my view — then it might also lessen reactivity to positive stimuli. Evidence in favor of their hypothesis would support differential susceptibility, the notion that the same factors govern reactivity to positive and negative experiences.1 This outcome would also contradict the framework of acetaminophen as an all-purpose treatment for physical and psychological pain.

    The Neurocritic is not keen on TYLENOL® as a remedy for existential dread or social rejection. In high doses acetaminophen isn't great for your liver, either. And a recent meta-analysis even showed that it's ineffective in treating lower back pain (Machado et al., 2015)...

    But I'll try to be less negative than usual. The evidence presented in the main manuscript supported the authors' hypothesis. Participants who took acetaminophen rated positive and negative IAPS pictures as less emotionally arousing compared to a separate group of participants on placebo. The drug group also rated the unpleasant pictures less negatively and the pleasant pictures less positively. “In all, rather than being labeled as merely a pain reliever, acetaminophen might be better described as an all-purpose emotion reliever,” they concluded (Durso et al., 2015).

    Appearing in the prestigious Journal of Psychological Acetaminophen Studies, the paper described two experiments on healthy undergraduates, both of which yielded a raft of null results.

    Wait a minute..... what? How can that be?

    The main manuscript reported the results collapsed across the two studies, and the Supplemental Material presented the results from each experiment separately. Why does this matter?
    Eighty-two participants in Study 1 and 85 participants in Study 2 were recruited to participate in an experiment on “Tylenol and social cognition” in exchange for course credit. Our stopping rule of at least 80 participants per study was based on previously published research on acetaminophen (DeWall et al., 2010; Randles et al., 2013), in which 30 to 50 participants were recruited per condition (i.e., acetaminophen vs. a placebo).  ... The analyses reported here for the combined studies are reported for each study separately in the Supplemental Material available online.

    What this means is that the authors violated their stopping rule, and recruited twice the number of participants as originally planned. Like the other JPAS articles, this was a between-subjects design (unfortunately), and there were over 80 participants in each condition (instead of 30 to 50).

    After running Experiment 1, the authors were faced with results like these:
    As expected, however, a main effect of treatment (though not statistically significant in this study) was obtained, F(1,72) = 2.15, p = .147, ηp2 = .029, as was the predicted interaction (although it was not statistically significant in this study), F(3.3, 240.3) = 1.15, p = .330, ηp2 = .016. Contrast analyses indicated that participants taking acetaminophen were marginally significantly less emotionally aroused by extremely pleasant stimuli (M = 5.01, SD = 1.75) than were participants taking placebo (M = 5.65, SD = 1.55), t(72) = 1.67, p = .099. Similarly, participants receiving acetaminophen were less emotionally aroused by extremely unpleasant stimuli (M = 6.88, SD = 1.25) than were participants assigned the placebo condition (M = 7.23, SD = 1.84), although this difference was not statistically significant in this study, t(72) = 0.96, p = .341. Furthermore, participants taking acetaminophen tended to be less emotionally aroused by moderately pleasant stimuli (M = 2.91, SD = 1.64) than participants taking placebo (M = 3.49, SD = 1.89), t(72) = 1.44, p = .155, and participants taking acetaminophen also tended to be less emotionally aroused by moderately unpleasant stimuli (M = 4.68, SD = 1.42) than participants taking placebo (M = 5.25, SD = 2.02), t(72) = 1.42, p = .161, although these differences were not statistically significant in this study.

    Wow, what a disappointment to get these results. Nothing looks statistically significant!

    Let's look at Experiment 2:
    ...Contrast analyses revealed that participants taking acetaminophen tended to rate extremely unpleasant stimuli (M = -3.39, SD = 1.14) less negatively than participants receiving placebo (M = -3.74, SD = 0.74), t(77) = 1.60, p = .115, though this contrast was not itself statistically significant within this study. Participants taking acetaminophen also rated extremely pleasant stimuli (M = +2.51, SD = 1.07) significantly less positively than participants receiving placebo (M = +3.19, SD = 0.88), t(77) = 3.06, p = .003.

    Participants taking acetaminophen also tended to evaluate moderately pleasant stimuli (M = +1.15, SD = 0.91) less positively than participants receiving placebo (M = +1.42, SD = 0.89), t(77) = 1.30, p = .198, although this difference was not statistically significant in this study. Finally, participants taking acetaminophen tended to rate moderately unpleasant stimuli less negatively (M = -1.84, SD = 0.99) than participants taking placebo (M = -1.93, SD = 0.95), although this difference was not significant in this study, t(77) = 0.42, p = .678. [NOTE: "tended"? really?] Evaluations of neutral stimuli surprisingly differed as a function of treatment, t(77) = 2.94, p = .004, such that participants taking acetaminophen evaluated these stimuli significantly less positively (M = -0.05, SD = 0.42) than did participants taking placebo (M = +0.22, SD = 0.38).

    One of the arguments that acetaminophen affects ratings of emotional stimuli specifically (both positive and negative) is that it does not affect ratings for neutral stimuli. Yet it did here. So in the paragraphs above, extremely pleasant stimuli and neutral stimuli were both rated as less positive by the drug group, but ratings for extremely unpleasant, moderately pleasant, and moderately unpleasant pictures did not differ between drug and placebo groups.

    The subjective emotional arousal ratings fared better than the picture ratings in Experiment 2, but there were still some unexpected and non-significant results. Overall, support for the “acetaminophen as an all-purpose emotion reliever” conclusion was underwhelming when the studies were examined singly (which is how they were run). 2

    [Right about now you're saying, “Hey! I thought you said you'd be less negative here!”]

    Let's accept that the combined results reported in the main manuscript present a challenge to the “acetaminophen as a psychological pain reliever” view, and support the differential susceptibility hypothesis. To convince those of us outside the field of social psychology, it would be beneficial to: (1) design within-subjects experiments, and (2) seriously consider possible mechanisms of action, beyond speculations about serotonin and (gasp!) 5-HTTLPR. For instance, why choose acetaminophen (which may act via the spinal cord) and not aspirin or ibuprofen? 3

    At the risk of sounding overbearing and pedantic, I hereby issue the following friendly suggestions to all TYLENOL® psychology researchers...


    The Proper Pharmacological Study Design Challenge

    (1) Please consider using a double-blind, randomized crossover design, like studies that have examined IAPS picture ratings after acute administration of SSRI antidepressants or placebo in healthy participants (Kemp et al., 2004; van der Veen et al., 2012; Outhred et al., 2014).

    Speaking of SSRIs, did you know that citalopram did not alter IAPS valence or arousal ratings relative to placebo (Kemp et al., 2004)? Or that paroxetine produced only minor effects on valence and arousal ratings for two of the eight conditions (van der Veen et al., 2012)? 4 What are the implications of these findings for your theoretical framework, in which an OTC pain reliever supposedly has a greater impact on emotional processing than a prescription antidepressant? And that before the recent JPAS papers, no one had ever suspected that TYLENOL® affects reactions to emotionally evocative stimuli or David Lynch films?

    (2) Please consider that acetaminophen may act via COX-1, COX-2, COX-3, peroxidase, nitric oxide synthase, cannabinoid receptors, and/or descending serotoninergic projections to the spinal cord (Toussaint et al., 2010) before mentioning the anterior cingulate cortex or the serotonin transporter gene. Just another friendly suggestion.

    I usually give all my ideas away for free, but if you're interested in hiring me as a consultant, please leave a comment.


    ADDENDUM (May 6 2015): A comment by Dr. R (who developed the Replication-Index) said there was nothing wrong with combining the two studies. Study 1 was non-significant but Study 2 was significant, and combined the results were statistically credible (although I'm not exactly sure which of the many tests he checked). Perhaps one source of trouble was that Durso et al.'s estimated number of participants was based on inflated effect sizes in the earlier papers...
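    Both of these points can be illustrated with a back-of-the-envelope calculation. The sketch below is not from the paper — the choice of Stouffer's Z-method, the use of the .099/.003 arousal contrast p-values, and the effect sizes (d = 0.6 as an "inflated" estimate, d = 0.3 as a more modest one) are all my own illustrative assumptions. It shows (a) how one non-significant and one significant study can combine into a statistically credible result, and (b) why a sample size planned from an inflated effect size leaves each individual study underpowered.

```python
from math import sqrt
from statistics import NormalDist  # stdlib, Python 3.8+

nd = NormalDist()

def stouffer(one_tailed_ps):
    """Combine same-direction one-tailed p-values via Stouffer's Z-method."""
    zs = [nd.inv_cdf(1 - p) for p in one_tailed_ps]
    z = sum(zs) / sqrt(len(zs))
    return z, 1 - nd.cdf(z)  # combined z and combined one-tailed p

# Illustrative only: the two-tailed ps reported above for one arousal
# contrast (.099 in Study 1, .003 in Study 2), halved to one-tailed.
z_comb, p_comb = stouffer([0.099 / 2, 0.003 / 2])
# A "marginal" study plus a significant one can combine to p < .01.

def approx_power(d, n_per_group, alpha=0.05):
    """Normal-approximation power of a two-sample t-test (Cohen's d)."""
    z_crit = nd.inv_cdf(1 - alpha / 2)
    return 1 - nd.cdf(z_crit - d * sqrt(n_per_group / 2))

# Hypothetical numbers: if earlier papers suggested d = 0.6, ~40 per
# group looks adequate; at a more realistic d = 0.3, it is not.
power_planned = approx_power(0.6, 40)    # roughly 0.76
power_realistic = approx_power(0.3, 40)  # roughly 0.27
```

    In other words, under these assumed numbers each ~40-per-group study would miss a true d = 0.3 effect about three times out of four, which is consistent with scattered "marginal" contrasts in the individual experiments that only firm up when the two studies are pooled.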


    Footnotes

    1 Turns out differential susceptibility is more or less The Orchid and the Dandelion, or as author David Dobbs puts it, “some of the genes and traits generating our greatest maladies and misdeeds — depression, anxiety, hyper-aggression, a failure to focus — also underlie many of our greatest satisfactions and success." I don't really see how this acetaminophen study informs the differential susceptibility hypothesis, which is based on individual differences (beyond a metaphorical kinship, perhaps).

    2 But then I missed the memo from Psych Sci on “recently recommended approaches to presenting the results of multiple studies through combined analyses.”  [paging @mc_hankins...]

    3 I know the original social rejection study used Tylenol, but why does everyone persist in doing so?? I was pleased to see that in the press release, first author Geoffrey Durso said they're branching out to test ibuprofen and aspirin.  [There, something positive.]

    4 To be precise, participants gave lower arousal ratings to high arousal, low valence pictures and slightly lower valence ratings to high arousal, high valence pictures. The other six cells in the arousal/pleasure ratings of high/low arousal, high/low pleasure were no different on drug vs. placebo.


    References

    Dewall CN, Macdonald G, Webster GD, Masten CL, Baumeister RF, Powell C, Combs D, Schurtz DR, Stillman TF, Tice DM, Eisenberger NI. (2010). Acetaminophen reduces social pain: behavioral and neural evidence. Psychol Sci. 21:931-7.

    Durso, G., Luttrell, A., & Way, B. (2015). Over-the-Counter Relief From Pains and Pleasures Alike: Acetaminophen Blunts Evaluation Sensitivity to Both Negative and Positive Stimuli. Psychological Science DOI: 10.1177/0956797615570366

    Kemp AH, Gray MA, Silberstein RB, Armstrong SM, Nathan PJ. (2004). Augmentation of serotonin enhances pleasant and suppresses unpleasant cortical electrophysiological responses to visual emotional stimuli in humans. Neuroimage 22:1084-96.

    Machado GC, Maher CG, Ferreira PH, Pinheiro MB, Lin CW, Day RO, McLachlan AJ, Ferreira ML. (2015). Efficacy and safety of paracetamol for spinal pain and osteoarthritis: systematic review and meta-analysis of randomised placebo controlled trials. BMJ. 350:h1225.

    Outhred T, Das P, Felmingham KL, Bryant RA, Nathan PJ, Malhi GS, Kemp AH. (2014). Impact of acute administration of escitalopram on the processing of emotional and neutral images: a randomized crossover fMRI study of healthy women. J Psychiatry Neurosci. 39:267-75.

    Randles D, Heine SJ, Santos N. (2013). The common pain of surrealism and death: acetaminophen reduces compensatory affirmation following meaning threats. Psychol Sci. 24:966-73.

    van der Veen FM, Jorritsma J, Krijger C, Vingerhoets AJ. (2012). Paroxetine reduces crying in young women watching emotional movies. Psychopharmacology 220:303-8.


    Better get this woman a damn fine cup of coffee and 1000 mg of TYLENOL®







    I have two heads
    Where's the man, he's late

    --Throwing Muses, Devil's Roof


    Medical journals are enlivened by case reports of bizarre and unusual syndromes. Although somatic delusions are relatively common in schizophrenia, reports of hallucinations and delusions of bicephaly are rare. For a patient to attempt to remove a perceived second head by shooting and to survive the experience for more than two years may well be unique, and merits presentation.

    --David Ames, British Journal of Psychiatry (1984)

    In 1984, Dr. David Ames of Royal Melbourne Hospital published a truly bizarre case report about a 39-year-old man hospitalized with a self-inflicted gunshot wound through the left frontal lobe (Ames, 1984). The man was driven to this desperate act by the delusion of having a second head on his shoulder. The interloping head belonged to his wife's gynecologist.




    In an even more macabre twist, his wife had died in a car accident two years earlier..... and the poor man had been driving at the time!

    Surprisingly, the man survived a bullet through his skull (in true Phineas Gage fashion). After waking from surgery to remove the bullet fragments, the patient was interviewed:
    He described a second head on his shoulder. He believed that the head belonged to his wife's gynaecologist, and described previously having felt that his wife was having an affair with this gynaecologist, prior to her death. He described being able to see the second head when he went to bed at night, and stated that it had been trying to dominate his normal head. He also stated that he was hearing voices, including the voice of his wife's gynaecologist from the second head, as well as the voices of Jesus and Abraham around him, conversing with each other. All the voices were confirming that he had two heads...

    I'm two headed one free one sticky
    --Throwing Muses, Devil's Roof

    “The other head kept trying to dominate my normal head, and I would not let it. It kept trying to say to me I would lose, and I said bull-shit ... and decided to shoot my other head off.”

    A gun was not his first choice, however... he originally wanted to use an ax.




    He stated that he fired six shots, the first at the second head, which he then decided was hanging by a thread, and then another one through the roof of his mouth. He then fired four more shots, one of which appeared to have gone through the roof of his mouth and three of which missed. He said that he felt good at that stage, and that the other head was not felt any more. Then he passed out. Prior to shooting himself, he had considered using an axe to remove the phantom head.

    Not surprisingly, the patient was diagnosed with schizophrenia and given antipsychotics.
    He was seen regularly in psychiatric out-patients following this operation and by March, stated that the second head was dead, that he was taking his chlorpromazine regularly, and that he had no worries.  [This was Australia, after all.]

    Unfortunately, the man died two years later from a Streptococcus pneumoniae infection in his brain.  Ames (1984) concluded his lively and bizarre case report by naming the singular syndrome “perceptual delusional bicephaly”:
    This case illustrates an interesting phenomenon of perceptual delusional bicephaly; the delusion caused the patient to attempt to remove the second head by shooting. It is notable that following his head injury and treatment with chlorpromazine, the initial symptoms resolved, although he was left with the problems of social disinhibition and poor volition, typical of patients with frontal lobe injuries.

    As far as I know, this specific delusion has not yet been depicted in a horror film (or in an episode of Perception or Black Box).


    Reference

    Ames, D. (1984). Self shooting of a phantom head. The British Journal of Psychiatry, 145(2), 193-194. DOI: 10.1192/bjp.145.2.193








    Capgras syndrome is the delusion that a familiar person has been replaced by a nearly identical duplicate. The imposter is usually a loved one or a person otherwise close to the patient.

    Originally thought to be a manifestation of schizophrenia and other psychotic illnesses, the syndrome is most often seen in individuals with dementia (Josephs, 2007). It can also result from acquired damage to a secondary (dorsal) face recognition system important for connecting the received images with an affective tone (Ellis & Young, 1990).1 Because of this, the delusion crosses the border between psychiatry and neurology.

    The porous etiology of Capgras syndrome raises the question of how phenomenologically similar delusional belief systems can be constructed from such different underlying neural malfunctions. This is not a problem for Freudian types, who promote psychodynamic explanations (e.g., psychic conflict, regression, etc.). For example, Koritar and Steiner (1988) maintain that “Capgras' Syndrome represents a nonspecific symptom of regression to an early developmental stage characterized by archaic modes of thought, resulting from a relative activation of primitive brain centres.”

    The psychodynamic view was nicely dismissed by de Pauw (1994), who states:
    While often ill-founded and convoluted, these formulations have, until recently, dominated many theoretical approaches to the phenomenon. Generally post hoc and teleological in nature, they postulate motives that are not introspectable and defence mechanisms that cannot be observed, measured or refuted. While psychosocial factors can and often do play a part in the development, content and course of the Capgras delusion in individual patients it remains to be proven that such factors are necessary and sufficient to account for delusional misidentification in general and the Capgras delusion in particular.

    Canary Capgras

    Although psychodynamic explanations were sometimes applied 2 to cases of Capgras syndrome for animals,3 other clinicians report that the delusional misidentification of pets can be ameliorated by pharmacological treatment of the underlying psychotic disorder. Rösler et al. (2001) presented the case of “a socially isolated woman who felt her canary was replaced by a duplicate”:
    Mrs. G., a 67-year-old woman, was admitted for the first time to a psychiatric hospital for late paraphrenia. ... She had been a widow for 11 years, had no children, and lived on her own with very few social contacts. Furthermore, she suffered from concerns that her canary was alone at home. She was delighted with the suggestion that the bird be transferred to the ward. However, during the first two days she repeatedly asserted that the canary in the cage was not her canary and reported that the bird looked exactly like her canary, but was in fact a duplicate. There were otherwise no misidentifications of persons or objects.

    Earlier, Somerfield (1999) had reported a case of parrot Capgras, also in an elderly woman with a late-onset delusional disorder:
    I would like to report an unusual case of a 91-year-old woman with a 10-year history of late paraphrenia (LP) and episodes of Capgras syndrome involving her parrot. She was a widow of 22 years, nulliparous, with profound deafness and a fiercely independent character.  The psychotic symptoms were usually well controlled by haloperidol 0.5 mg orally. However, she was periodically non-compliant with medication, resulting in deterioration of her mental state, refusal of food and her barricading herself in her room to stop her parrot being stolen. At times she accused others of “swapping” the parrot and said the bird was an identical imposter. There was no misidentifcation of people or objects. Her symptoms would attenuate rapidly with reinstatement of haloperidol.

    Both of these patients believed their beloved pet birds had been replaced by impostors, but neither of them misidentified any human beings. Clearly, this form of Capgras syndrome is different from what can happen after acquired damage to the affective face identification system (Ellis & Young, 1990). Is there an isolated case of sudden onset Capgras for animals that does not encompass person identification as well? I couldn't find one.


    A Common Explanation?

    Despite these differences, Ellis and Lewis (2001) suggested that “It seems parsimonious to seek a common explanation for the delusion, regardless of its aetiology.” I'm not so sure. If that's true, then haloperidol should effectively treat all instances of Capgras syndrome, including those that arise after a stroke. And there's evidence suggesting that antipsychotics would be ineffective in such patients.

    Are there systematic differences in the symptoms shown by Capgras patients with varying etiologies? Josephs (2007) reviewed 47 patient records and found no major differences between the delusions in patients with neurodegenerative vs. non-neurodegenerative disorders. In all 47 cases, the delusion involved a spouse, child, or other relative. {There were no cases involving animals or objects.}



    The factors that did differ were age of onset (older in dementia patients) and other reported symptoms (e.g., visual hallucinations 4 in all patients with Lewy body dementia, LBD). In this series, 81% of patients had a neurodegenerative disease, and only 4% had schizophrenia [perhaps the Capgras delusion was under-reported in the context of wide-ranging delusions?]. Other cases were due to methamphetamine abuse (4%) or sudden onset brain injury, e.g. hemorrhage (11%).

    Interestingly, Josephs puts forth dopamine dysfunction as a unifying theme, in line with Ellis and Lewis's general suggestion of a common explanation. The pathology in dementia with Lewy bodies includes degeneration of neurons containing dopamine and acetylcholine. The cognitive/behavioral symptoms of LBD overlap with those seen in Parkinson's dementia, which also involves degeneration of dopaminergic neurons. But dopamine-blocking antipsychotics like haloperidol should not be used in treating LBD. So from a circuit perspective, using “dopamine dysregulation” as a parsimonious explanation isn't really an explanation. And this conception doesn't fit with the neuropsychological model (shown at the bottom of the page).

    I'm not a fan of parsimony in matters of brain function and dysfunction. We don't know why one person thinks her canary has been replaced by an impostor, another thinks her husband has been replaced by a woman, while a third is convinced there are six copies of his wife floating around.5 I don't expect there to be a unifying explanation. The BRAIN Initiative and the Human Brain Project will teach us absolutely nothing about the content of delusions. Ultimately, the study of Capgras and other delusional misidentification syndromes presents a challenging puzzle for those of us seeking neural explanations of thought and behavior.


    Footnotes

    1 From Ellis and Young (1990). Also see figure below.
    Bauer (1984, 1986) advanced the view that there are two routes to facial recognition. The main route runs from visual cortex to temporal lobes via the inferior longitudinal fasciculus....the 'ventral route' corresponds to the system responsible for overt or conscious recognition, and it is the route which typically is damaged in cases of prosopagnosia. The other, described as the 'dorsal route', runs between the visual cortex and the limbic system, via the inferior parietal lobule, and is sometimes intact in prosopagnosic patients. It is this latter route which ... gives the face its emotional significance and hence, when the ventral route is selectively damaged, can give rise to covert recognition (i.e. recognition at an unconscious level).

    2Canine Capgras:
    Reports 2 separate cases (a 76-yr-old woman and a 57-yr-old woman) in which the S believed that her pet dog had been replaced by an identical double. The psychodynamic issues that these cases raise are discussed. In the Capgras delusion the double is usually a key figure in the life of the patient. [NOTE: I don't have access to this article, sorry I can't say more.]

    3 Capgras for animals was dubbed zoocentric Capgras syndrome by Ehrt (1999). He presented the “case of a 23-year old women who had the delusional belief that her cat had been replaced by the cat of her former boy-friend.”

    4 There are a number of interesting hypotheses on why visual hallucinations are so common in Lewy body dementias.

    5 Unless he's a character in Orphan Black... But really, why six copies instead of three? What I mean here is an explanation beyond the trivial: one person lives alone with a canary, while the other two live with a spouse.


    References

    de Pauw KW. (1994). Psychodynamic approaches to the Capgras delusion: a critical historical review. Psychopathology 27(3-5):154-60.

    Ellis HD, Lewis MB. (2001). Capgras delusion: a window on face recognition. Trends Cogn Sci. 5(4):149-156.

    Ellis, H., & Young, A. (1990). Accounting for delusional misidentifications. The British Journal of Psychiatry, 157 (2), 239-248 DOI: 10.1192/bjp.157.2.239

    Josephs, K. (2007). Capgras Syndrome and Its Relationship to Neurodegenerative Disease. Archives of Neurology, 64 (12) DOI: 10.1001/archneur.64.12.1762

    Koritar E, Steiner W. (1988). Capgras' syndrome: a synthesis of various viewpoints. Can J Psychiatry 33(1):62-6.

    Rösler, A., Holder, G., & Seifritz, E. (2001). Canary Capgras. The Journal of Neuropsychiatry and Clinical Neurosciences, 13 (3), 429-429 DOI: 10.1176/jnp.13.3.429

    Somerfield D. (1999). Capgras syndrome and animals. Int J Geriatr Psychiatry 14(10):893-4.



