Channel: The Neurocritic

Use of Anti-Inflammatories Associated with Threefold Increase in Homicides

Scene from Elephant, a fictional film by Gus Van Sant


Regular use of over-the-counter pain relievers like aspirin, ibuprofen, naproxen, and acetaminophen was associated with three times the risk of committing a homicide in a new Finnish study (Tiihonen et al., 2015). The association between NSAID use and murderous acts was far greater than the risk posed by antidepressants.

Clearly, drug companies are pushing dangerous, toxic chemicals and we should ban the substances that are causing school massacres. Advil and Aleve and Tylenol are evil!!

Wait..... what?


Tiihonen and colleagues wanted to test the hypothesis that antidepressant treatment is associated with an increased risk of committing a homicide. Because, you know, the Scientology-backed Citizens Commission on Human Rights of Colorado thinks so (and their blog is cited in the paper!!):
After a high-profile homicide case, there is often discussion in the media on whether or not the killing was caused or facilitated by a psychotropic medication. Antidepressants have especially been blamed by non-scientific organizations for a large number of senseless acts of violence, e.g., 13 school shootings in the last decade in the U.S. and Finland [1].

The authors reviewed a database of all homicides investigated by the police in Finland between 2003 and 2011. A total of 959 offenders were included in the analysis. Each offender was matched to 10 controls selected from the Population Information System. Then the authors checked purchases in the Finnish Prescription Register. A participant was considered a "user" if they had a current purchase in the system.1

The main drug classes examined were antidepressants, benzodiazepines, and antipsychotics. The primary outcome measure was risk of offending for current use vs. no use of those drugs (with significance set to p<0.016 to correct for multiple comparisons). Seven other drug classes were examined as secondary outcome measures (with α adjusted to .005): opioid analgesics, non-opioid analgesics (e.g., NSAIDs), antiepileptics, lithium, stimulants, meds for addictive disorders, and non-benzo anxiolytics.
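Those thresholds are consistent with a plain Bonferroni adjustment, i.e. dividing α = 0.05 by the number of comparisons (3 primary classes, 10 classes in total). A minimal sketch of that arithmetic — my assumption about how the alphas were derived, not code from the paper:

```python
# Hedged sketch (not from the paper): Bonferroni-style per-test thresholds,
# assuming alpha = 0.05 was divided by the number of comparisons.
def bonferroni_threshold(alpha: float, m: int) -> float:
    """Per-comparison significance threshold for m tests."""
    return alpha / m

primary = bonferroni_threshold(0.05, 3)     # 3 primary drug classes
secondary = bonferroni_threshold(0.05, 10)  # 10 drug classes in total

print(round(primary, 3))   # 0.017 (the paper states it as p < 0.016)
print(secondary)           # 0.005
```

Under this reading, the antidepressant result (p = 0.022) misses the primary cutoff of ~0.0167, while the benzodiazepine result (p < .001) clears it comfortably.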

Lo and behold, current use of antidepressants in the adult offender population was associated with a 31% greater risk of committing a homicide, but this did not reach significance (p=0.022). On the other hand, benzodiazepine use was associated with a 45% greater risk (p<.001), while antipsychotics were not associated with greater risk of offending (p=0.54).

Most dangerous of all were pain relievers. Current use of opioid analgesics (like OxyContin and Vicodin) was associated with 92% greater risk. Non-opioid analgesics were even worse: individuals taking these meds were at 206% greater risk of offending, a threefold increase.2 Taken in the context of this surprising result, the anti-psych-med faction doth complain too much about antidepressants.

Furthermore, analysis of young offenders (25 yrs or less) revealed that none of the medications were associated with greater risk of committing a homicide (benzos and opioids were p=.07 and .04 respectively). To repeat: In Finland at least, there was no association between antidepressant use and the risk of becoming a school shooter.

What are we to make of the provocative NSAIDs? More study is needed:
The surprisingly high risk associated with opioid and non-opioid analgesics deserves further attention in the treatment of pain among individuals with criminal history.

Drug-related murders in oxycodone abusers don't come as a great surprise, but aspirin-related violence is hard to explain...3


Footnotes

1 Having a purchase doesn't mean the individual was actually taking the drug before/during the time of the offense, however.

2 RR = 3.06; 95% CI: 1.78-5.24, p<0.001 for Advil, Tylenol, and the like. And the population-adjusted odds ratios (OR) weren't substantially different, although this wasn't reported for NSAIDs:
The analysis based on case-control design showed an adjusted OR of 1.30 (95% CI: 0.97-1.75) as the risk of homicide for the current use of an antidepressant, 2.52 (95% CI: 1.90-3.35) for benzodiazepines, 0.62 (95% CI: 0.41-0.93) for antipsychotics, and 2.16 (95% CI: 1.41-3.30) for opioid analgesics.
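For readers unfamiliar with these statistics, here is how a relative risk and its 95% CI (log-scale Wald interval) are computed from cohort-style counts. The cell counts below are invented purely so the ratio comes out near the reported 3.06; they are not the study's actual numbers:

```python
import math

# Hedged sketch with INVENTED counts: relative risk for exposed
# (a events out of n1) vs. unexposed (b events out of n2), with a
# log-scale Wald 95% confidence interval.
def relative_risk(a, n1, b, n2, z=1.96):
    """Return (RR, CI lower bound, CI upper bound)."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)  # SE of log(RR)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Made-up counts: 30 of 959 offenders vs. 100 of 9590 matched controls.
rr, lo, hi = relative_risk(30, 959, 100, 9590)
print(round(rr, 2))  # 3.0
```

The asymmetry of the reported interval (1.78-5.24 around 3.06) is the signature of a CI computed on the log scale and exponentiated back, as above.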

3 P.S. Just to be clear here, correlation ≠ causation. Disregarding the anomalous nature of the finding in the first place, it could be that murderers have more headaches and muscle pain, so they take more anti-inflammatories (rather than ibuprofen "causing" violence). But if the anti-med faction uses these results to argue that "antidepressants cause school shootings" then explain how ibuprofen raises the risk threefold...


Reference

Tiihonen, J., Lehti, M., Aaltonen, M., Kivivuori, J., Kautiainen, H., Virta, L. J., Hoti, F., Tanskanen, A., & Korhonen, P. (2015). Psychotropic drugs and homicide: A prospective cohort study from Finland. World Psychiatry, 14(2), 245-247. DOI: 10.1002/wps.20220

8 1/2 Reward Prediction Errors: #MovieDirectorNeuroscientistMashup

Fellini/Schultz: 8½ Reward Prediction Errors



On Twitter, movie/brain buff My Cousin Amygdala issued the #MovieDirectorNeuroscientistMashup challenge using the following selections:


I made a few movie posters to go along with my suggestions...


Kurosawa/Tonegawa: Rashomon and the Memory Engram



David Lynch/Eric Kandel: Blue Velvet Aplysia


Write-in nominations were allowed, too.



How about Scorsese / Lynch / Maguire: Mulholland Taxi Driver's Hippocampus  {that one was a bit too involved for a poster}


Finally, I'll write in one by David Cronenberg and....


Cronenberg/Friston: Statistical Parametric Mapping to the Stars


The Future of Depression Treatment



2014

Jessica is depressed again. After six straight weeks of overtime, her boss blandly praised her teamwork at the product launch party. And the following week she was passed over for a promotion in favor of Jason, her junior co-worker. "It's always that way, I'll never get ahead..."

She arrives at her therapist's office late, looking stressed, disheveled, and dejected. The same old feelings of worthlessness and despair prompted her to resume her medication and CBT routine.

"You deserve to be recognized for your work," said Dr. Harrison. "The things you're telling yourself right now are cognitive distortions: the black and white thinking, the overgeneralization, the self-blame, jumping to conclusions... " 

"I guess so," muttered Jessica, looking down.

"And you need a vacation!"
. . .


A brilliant suggestion, Dr. Harrison. As we all know, taking time off to relax and recharge after a stressful time will do wonders for our mental health. And building up a reserve of happy memories to draw upon during darker times is a cornerstone of positive psychology.

Jessica and her husband Michael take a week-long vacation in Hawaii, creating new episodic memories that involve snorkeling, parasailing, luaus, and mai tais on the beach. Jessica ultimately decides to quit her job and sell jewelry on Etsy.


2015

Michael is depressed after losing his job. His self-esteem has plummeted, and he feels useless. But he's too proud to ask for help. "Depression is something that happens to other people (like my wife), but not to me." He grows increasingly angry and starts drinking too much.

Jessica finally convinces him to see Dr. Harrison's colleague. Dr. Roberts is a psychiatrist with a Ph.D. in neuroscience. She's adopted a translational approach and tries to incorporate the latest preclinical research into her practice. She's intrigued by the latest finding from Tonegawa's lab, which suggests that the reactivation of a happy memory is more effective in alleviating depression than experiencing a similar event in the present.

Recalling happier memories can reverse depression, said the MIT press release. 

So instead of telling Michael to take time off and travel and practice mindfulness and live in the present, she tells him to recall his fondest memory from last year's vacation in Hawaii.  

It doesn't work.

Michael goes to see Dr. Harrison, who prescribes bupropion and venlafaxine. Four weeks later, he feels much better, and starts a popular website that repudiates positive psychology. Seligman and Zimbardo are secretly chagrined. 

. . .


Happy Hippocampus
photo credit: S. Ramirez


Artificially reactivating positive [sexual] memories [in male mice] could offer an alternative to traditional antidepressants (read: makes them struggle more when you hold them by the tail after 10 days of confinement).1

Not as upbeat as the press release, eh?
The findings ... offer a possible explanation for the success of psychotherapies in which depression patients are encouraged to recall pleasant experiences. They also suggest new ways to treat depression by manipulating the brain cells where memories are stored...

“Once you identify specific sites in the memory circuit which are not functioning well, or whose boosting will bring a beneficial consequence, there is a possibility of inventing new medical technology where the improvement will be targeted to the specific part of the circuit, rather than administering a drug and letting that drug function everywhere in the brain,” says Susumu Tonegawa, ... senior author of the paper.

Although this type of intervention is not yet possible in humans, “This type of analysis gives information as to where to target specific disorders,” Tonegawa adds.

Before considering what the mice might actually experience when their happy memory cells are activated with light, let's all marvel at what was accomplished here.

Ramirez et al. (2015) studied mice that were genetically engineered to allow blue light to activate a specific set of granule cells in the dentate gyrus subfield of the hippocampus. These neurons are critical for the formation of new memories and are considered “engram cells” that undergo physical changes and store discrete memories (Liu et al., 2014). When a cue reactivates the same set of neurons, the episodic memory is retrieved. In this study, the engram cells were part of a larger circuit that included the amygdala and the nucleus accumbens, regions important for processing emotion, motivation, and reward.

Ramirez, Liu, Tonegawa and colleagues have repeatedly demonstrated their masterful manipulation of mouse memories: activating fear memories, implanting false memories, and changing the valence of memories. These experiments are technically challenging and far outside my areas of expertise (greater detail in the Appendix below). In brief, the authors were able to label discrete sets of dentate gyrus cells while they were naturally activated during an interval of positive, neutral, or negative treatment. Then some groups of animals were stressed for 10 days, and others remained in their home cages.


The stressed mice exhibited signs of “depression-like” and “anxiety-like” behaviors.2  I'll spare you the long digression about whether the tail suspension test successfully models the anguished human experience of abject states, but you can read my earlier musings on the topic.


The most astounding part of the experiment is that optical stimulation of positive-memory engram cells in stressed mice induced a reversal of “depressive” behaviors (but not “anxious” behaviors; see Appendix). Curiously, re-exposing the stressed male mice to an actual female did not have this positive benefit. So mediated experience (artificial reactivation of the engram) is even better than the real thing.

The first author, graduate student Steve Ramirez, offered a post hoc explanation:
“People who suffer from depression have those positive experiences in the brain, but the brain pieces necessary to recall them are broken. What we’re doing, in mice, is bypassing that circuitry and forcing it to be jump-started,” Ramirez says. “We’re harnessing the brain’s power from within itself and forcing the activation of that positive memory, whereas if you give a natural positive memory to the person or the animal, the depression that they have prevents them from finding that experience rewarding.”

In other words, “We'll force you to be happy [i.e., possibly remember a positive experience], whether you like it or not.” And since the authors discussed therapeutic implications in the paper, they have to deal with the problem of phenomenology, whether they like it or not. What do the mice actually remember? Generic sexual experiences, a feeling of reward? An episodic-like memory, e.g. a specific act and all its spatiotemporal contextual information? Even if we allow mice to have “episodic-like” memories, the latter seems unlikely given the highly artificial and non-physiological method of neural stimulation that bypasses the precisely timed patterns of activity thought to “represent” past experience. These memory manipulation studies seem very futuristic and scary but Inception they are not.

Our memories are plastic and malleable, and their physical instantiation changes each time we recall them. Which version of the Hawaii trip shall we target? What other memories show the greatest overlap with the happy one? Has the problem of hippocampal pattern separation been solved already?? Garden-variety deep brain stimulation seems easy in comparison (and we know how well that's gone in humans, so far). But: “In rodents, optogenetic stimulation of mPFC neurons, mPFC to raphe projections, and ventral tegmental dopaminergic neurons achieved a rapid reversal of stress-induced maladaptive behaviours” (Ramirez et al., 2015).

Why can't we just appreciate the basic knowledge gained from these experiments? But no. There has to be a human application right around the corner.
That link between the neural circuit manipulations in mice and therapies now used in humans makes the findings particularly exciting, says Tom Insel, director of the National Institute of Mental Health.

“This is a big step toward helping to understand not only the underlying circuits for a really serious illness like depression, but also the circuits that underlie treatment,” says Insel...

Was that actually an endorsement of mediated experience? If we go down that road, we must acknowledge that an artificially created reality, albeit one that originates within a being's own brain, is superior to real life. This is the most profound implication of activating positive memory engrams.


When Mediated Experience Replaces a Medicated Existence
Mediated experiences increasingly dominate our lives. Movies and television already confuse the real and the mediated. New technology is blurring the line further. Video games and virtual reality are becoming increasingly realistic. “Augmented reality” technology is on its way to the public. Wearable computers will allow people to enter a news story and see and feel the events the way the journalist who was there did, and no doubt eventually we’ll be able to experience the events live. As the line between real and mediated gets harder to see, presence increases. An important and overlooked consequence of this trend is an increasing confusion from the other direction, in which “real life” seems to be mediated. People will have more and more trouble distinguishing reality, and some may not even appreciate that there is a difference. It will get harder for people to trust their own senses and judgment and it will be more difficult to impress people with non-mediated experiences.

Timmins & Lombard (2005), When “Real” Seems Mediated: Inverse Presence.

Heavy social media users already accept a reality filtered through Instagram and Facebook. As the interest in personal biometrics and the Quantified Self movement rises, so too will tolerance of increasingly invasive performance enhancing and “lifestyle” brain stimulation methods (see DIY tDCS). No one has said that optogenetic-type treatments are (or will be) possible in humans (OK, almost no one; see Albert, 2014). Others are more modest, and see the translational potential in non-invasive transcranial magnetic stimulation (Deisseroth et al., 2015).

. . .


2035

DARPA has mandated that all depressed Americans must be implanted with its CyberNeuroTron WritBit device, which cost $100 billion to develop. CNTWB is a closed-loop DBS system that automatically adjusts the stimulation parameters at 12 different customized target locations. It uses state-of-the-art syringe-injectable mesh electronics, incorporating silicon nanowires and microvoltammetry. Electrical and chemical signals are continuously recorded and uploaded to a centralized data center, where machine learning algorithms determine with high accuracy whether a given pattern of activity signals a significant change in mood.

The data are compiled, analyzed, and stored by the global search engine conglomerate BlueBook, which in 2032 swallowed up Google, Facebook, Apple, and every other internet data mining company.



. . .


2055

Sophia, the daughter of Jessica and Michael, is depressed again. The Ramirez et al. (2050) protocol for Positive Memory Engram Activation is in widespread use. Sophia searches for her dentate gyrus recordings from a vacation in Hawaii five months earlier. Then she selects the specific memory she wants to be artificially reactivated: watching the sunset on the beach with her partner, drinking mai tais and eating taro chips.



"We had a great time on that trip, didn't we Lucas?" 

Lucas the intelligent AI nods in agreement. "It's true," he thought. "Humans can no longer distinguish between virtual reality and the real thing."

This has been especially useful for the Ramirez protocol, since most Pacific Island nations have been underwater since 2047.



Footnotes

1 As an aside, I wonder what the female mice think of all this. What would be an equivalently positive experience? Is sex as rewarding for them? Will there be a new animal model of shopping at Nordstrom? Fortunately, this work was funded by RIKEN Brain Science Institute and Howard Hughes Medical Institute, so the authors don't have to follow the pesky impending NIH guidelines to include females in animal research.

2 “Depression-related” behaviors were assessed using the Tail Suspension Test (TST) and the Sucrose Preference Test (SPT), which are supposed to mimic giving up hope and loss of pleasure, respectively. Different tests were used to measure “anxiety-related” behaviors. Interestingly, none of the happy engram manipulations improved anxiety-like behavior in the mice. Not a very good model of anxious depression, then.


References

Albert PR. (2014). Light up your life: optogenetics for depression? J Psychiatry Neurosci. 39(1):3-5.

Deisseroth K, Etkin A, Malenka RC. (2015). Optogenetics and the circuit dynamics of psychiatric disease. JAMA 313(20):2019-20.

Liu, X., Ramirez, S., Redondo, R., & Tonegawa, S. (2014). Identification and Manipulation of Memory Engram Cells. Cold Spring Harbor Symposia on Quantitative Biology, 79, 59-65. DOI: 10.1101/sqb.2014.79.024901

Ramirez, S., Liu, X., MacDonald, C., Moffa, A., Zhou, J., Redondo, R., & Tonegawa, S. (2015). Activating positive memory engrams suppresses depression-like behaviour. Nature, 522 (7556), 335-339. DOI: 10.1038/nature14514

Timmins, L., & Lombard, M. (2005). When “Real” Seems Mediated: Inverse Presence. Presence: Teleoperators and Virtual Environments, 14 (4), 492-500. DOI: 10.1162/105474605774785307


Appendix

These experiments are indeed difficult, but if you successfully execute them, a publication in Nature is nearly guaranteed. A review by Liu et al. (2014) explained their general protocol in an easier-to-understand fashion:
...we combined activity-dependent, drug-regulatable expression system with optogenetics (Liu et al. 2012). We used a transgenic mouse model where the artificial tetracycline transactivator (tTA), which can be blocked by doxycycline (Dox), is driven by the promoter of immediate early gene (IEG) c-fos (Reijmers et al. 2007). The activity dependency of c-fos promoter poses a natural spatial constrain on the identities of the neurons that can be labeled, reflecting the normal biological selection process of the brain during memory formation, whereas the Dox-dependency of the system poses an artificial temporal constrain as to when these neurons can be labeled, which can be controlled by the experimenters. With these two constraints, the down-stream effector of tTA can express selectively in neurons that are active during a particular behavior episode, only if the animals are off Dox diet. Using this system, we expressed channelrhodopsin-2 (ChR2) delivered by a viral vector AAV-TRE-ChR2-EYFP targeting the dentate gyrus (DG) of the hippocampus and implanted optical fibers right above the infected areas. 
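The two constraints in that description — spatial (only cells active during the episode, via the c-fos promoter) and temporal (only while the animal is off Dox) — can be captured in a toy model. This is purely a conceptual sketch of the gating logic; the cell names and functions are invented, not the authors' code:

```python
# Conceptual sketch of the c-fos/tTA/Dox labeling logic described above.
# A cell ends up expressing ChR2 only if it was active during the episode
# (spatial constraint) AND the animal was off doxycycline at the time
# (temporal constraint). Blue light then drives only the labeled cells.

def label_cells(active_during_episode, on_dox):
    """Return the set of cells tagged with ChR2-EYFP (hypothetical model)."""
    if on_dox:  # Dox blocks tTA, so no labeling occurs
        return set()
    return set(active_during_episode)

def reactivate(labeled, light_on):
    """Blue light elicits spiking only in ChR2-expressing cells."""
    return labeled if light_on else set()

# Off Dox during a positive experience: those dentate gyrus cells get tagged...
tagged = label_cells({"dg_cell_3", "dg_cell_7"}, on_dox=False)
# ...and days later, light through the implanted fiber drives exactly that set.
assert reactivate(tagged, light_on=True) == {"dg_cell_3", "dg_cell_7"}
# On Dox, nothing is tagged, no matter how active the cells were.
assert label_cells({"dg_cell_3"}, on_dox=True) == set()
```

The experimenters' only lever is the Dox diet; the brain itself chooses which cells get labeled, which is what makes the tagged ensemble a plausible "engram."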

One of the major treatment protocols is shown below (adapted from Fig. 1A).



There were a number of control conditions too. Reactivation of neutral or negative engram neurons didn't change depression-like behaviors on the TST and SPT.  Reactivation of positive engram neurons in non-stressed mice didn't alter behavior, either.



A very impressive body of work, with a special dedication by the authors: "We dedicate this study to the memory of Xu Liu, who made major contributions to memory engram research."

Xu Liu in memoriam.

Who Will Pay for All the New DBS Implants?



Recently, Science and Nature had news features on big BRAIN funding for the development of deep brain stimulation technologies. The ultimate aim of this research is to treat and correct malfunctioning neural circuits in psychiatric and neurological disorders. Both pieces raised ethical issues, focusing on device manufacturers and potential military applications, respectively.

A different ethical concern, not mentioned in either article, is who will have access to these new devices, and who is going to pay the medical costs once they hit the market. DBS for movement disorders is a test case, because Medicare (U.S.) approved coverage for Parkinson's disease (PD) and essential tremor in 2003. Which is good, given that unilateral surgery costs about $50,000.

Willis et al. (2014) examined Medicare records for 657,000 PD patients and found striking racial disparities. The odds of receiving DBS in white PD patients were five times higher than for African Americans, and 1.8 times higher than for Asians. And living in a neighborhood with high socioeconomic status was associated with 1.4-fold higher odds of receiving DBS. Out-of-pocket costs for Medicare patients receiving DBS are over $2,000 per year, which is quite a lot of money for low-income senior citizens.
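For readers unfamiliar with how such disparities are quantified: the "odds of receiving DBS" comparison is an odds ratio from a 2×2 table. A minimal sketch; the counts below are invented purely so the ratio comes out to the reported 5-fold figure, and are not from Willis et al.:

```python
# Hedged sketch: odds ratio from a 2x2 table (received DBS: yes/no, by group).
# The counts are INVENTED for illustration; they are not the study's data.
def odds_ratio(a, b, c, d):
    """OR = (a/b) / (c/d): odds in group 1 (a yes, b no) vs. group 2 (c yes, d no)."""
    return (a / b) / (c / d)

# e.g., 50 recipients vs. 950 non-recipients in one group,
# against 10 vs. 950 in the comparison group:
print(odds_ratio(50, 950, 10, 950))  # 5.0
```

Note that an odds ratio of 5 does not mean five times as many white patients received DBS in absolute terms; it compares the odds after adjustment for the covariates in the model.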

Aaron Saenz raised a similar issue regarding the cost of the DEKA prosthetic arm (aka "Luke"):
But if you're not a veteran, neither DARPA project may really help you much. The Luke Arm is slated to cost $100,000+.... That's well beyond the means of most amputees if they do not have the insurance coverage provided by the Veteran's Administration. ... As most amputees are not veterans, I think that the Luke Arm has a good chance of being priced out of a large market share.

The availability of qualified neurosurgeons, even in affluent areas, will be another problem once future indications are FDA-approved (or even trialed).

The situation in one Canadian province (British Columbia, with a population of 4.6 million) is instructive. An article in the Vancouver Sun noted that in March 2013, only one neurosurgeon was qualified to perform DBS surgeries for Parkinson's disease (or for dystonia). This resulted in a three-year waiting list. Imagine: all these eligible patients with Parkinson's have to endure their current condition (and worse) for years longer, instead of having a vastly improved quality of life.
Funding, doctors needed if brain stimulation surgery to expand in B.C.:

... “But here’s the problem: We already have a waiting list of almost three years, from the time family doctors first put in the referral to the DBS clinic. And I’m the only one in B.C. doing this. So we really aren’t able to do more than 40 cases a year,” [Dr. Christopher Honey] said.
. . .
...The health authority allocates funding of $1.1 million annually, which includes the cost of the $20,000 devices, and $14,000 for each battery replacement. On average, batteries need to be replaced every three years.
. . .
To reduce wait times, the budget would have to increase and a Honey clone would have to be trained and hired.

Back in the U.S., Rossi et al. (2014) called out Medicare for curbing medical progress:
Devices for DBS have been approved by the FDA for use in treating Parkinson disease, essential tremor, obsessive-compulsive disorder, and dystonia,2 but expanding DBS use to include new indications has proven difficult—specifically because of the high cost of DBS devices and generally because of disincentives for device manufacturers to sponsor studies when disease populations are small and the potential for a return on investment is not clear. In many of these cases, Medicare coverage will determine whether a study will proceed. ... Ultimately, uncertain Medicare coverage coupled with the lack of economic incentives for industry sponsorship could limit investigators’ freedom of inquiry and ability to conduct clinical trials for new uses of DBS therapy.

But the question remains, where is all this health care money supposed to come from?

The device manufacturers aren't off the hook, either, but BRAIN is trying to reel them in. NIH recently sponsored a two-day workshop, BRAIN Initiative Program for Industry Partnerships to Facilitate Early Access Neuromodulation and Recording Devices for Human Clinical Studies [agenda PDF]. The purpose was to:
  • Bring together stakeholders and interested parties to disseminate information on opportunities for research using latest-generation devices for CNS neuromodulation and interfacing with the brain in humans.
  • Describe the proposed NIH framework for facilitating and lowering the cost of new studies using these devices.
  • Discuss regulatory and intellectual property considerations.
  • Solicit recommendations for data coordination and access.

The Program Goals [PDF]:
...we hope to spur human research bridging the “valley of death” that has been a barrier to translating pre-clinical research into therapeutic outcomes. We expect the new framework will allow academic researchers to test innovative ideas for new therapies, or to address scientific unknowns regarding mechanisms of disease or device action, which will facilitate the creation of solid business cases by industry and venture capital for the larger clinical trials required to take these ideas to market.

To advance these goals, NIH is pursuing general agreements (Memoranda of Understanding, MOUs) with device manufacturers to set up a framework for this funding program. In the MOUs, we expect each company to specify the capabilities of their devices, along with information, support and any other concessions they are willing to provide to researchers.

In other words, it's a public/private partnership to advance the goal of having all depressed Americans implanted with the CyberNeuroTron WritBit device by 2035 (just kidding!!).

But seriously... before touting the impending clinical relevance of a study in rodents, basic scientists and bureaucrats alike should listen to patients with the current generation of DBS devices. Participants in the halted BROADEN Trial for refractory depression reported outcomes ranging from “...the side effects caused by the device were, at times, worse than the depression itself” to “I feel like I have a second chance at life.”

What do you do with a medical device that causes great physical harm to one person but is a godsend for another? What are the factors involved? Sloppy patient selection criteria? Surgeon ineptitude? Anatomical variation? All of the above and more are likely to contribute to the wildly divergent outcomes.

One anonymous commenter on a previous post recently said that the study sponsor had abandoned them:
The BROADEN study isn't continuing the 4 year follow-up study. I'm in it and just got a phone call. They'll put in a rechargeable device for those of us enrolled and will not follow up with us. The FDA approved it just for us who had the surgery. It looks like St. Judes isn't going foe FDA approval anymore. I have no public reference for this but it was what I was just told over the phone. It has helped me and I don't know what I'm going to do about follow-up care except with my psychiatrist who doesn't have DBS experience. Scary.

Why isn't the manufacturer providing medical care for the study participants? Because they don't have to! In her Science piece, Emily Underwood reported:
Recent failures of several large clinical trials of deep brain stimulation for depression loomed large over the meeting. In the United States, companies or institutions sponsoring research are rarely, if ever, required to pay medical costs that trial subjects incur as a result of their participation, [Hank] Greely points out. “Many people who work in research ethics, including me, think this is wrong,” he says. 

Hopefully the workshop attendees considered not only how to lower the cost of new DBS studies, but also how to provide equitable circuit-based health care in the future.


Further Reading (and viewing)

Watch the NIH videocast: Day 1 and Day 2.

BROADEN Trial of DBS for Treatment-Resistant Depression

Update on the BROADEN Trial of DBS for Treatment-Resistant Depression


References

Rossi, P., Machado, A., & Okun, M. (2014). Medicare Coverage of Investigational Devices. JAMA Neurology, 71 (5) DOI: 10.1001/jamaneurol.2013.6042

Willis, A., Schootman, M., Kung, N., Wang, X., Perlmutter, J., & Racette, B. (2014). Disparities in deep brain stimulation surgery among insured elders with Parkinson disease. Neurology, 82 (2), 163-171 DOI: 10.1212/WNL.0000000000000017

Can Tetris Reduce Intrusive Memories of a Trauma Film?



For some inexplicable reason, you watched the torture-gore horror film Hostel over the weekend. On Monday, you're having trouble concentrating at work. Images of severed limbs and bludgeoned heads keep intruding on your attempts to code or write a paper. So you decide to read about the making of Hostel. You end up seeing pictures of the most horrifying scenes from the movie. It's all way too much to simply shake off, so you decide to play Tetris.

But a funny thing happens. The unwelcome images start to become less frequent. By Friday, the gory mental snapshots are no longer forcing their way into your mind's eye. The ugly flashbacks are gone.

Meanwhile, your partner in crime is having similar images of eye gouging pop into his head. Except he didn't review the torturous highlights on Monday, and he didn't play Tetris. He continues to have involuntary intrusions of Hostel images once or twice a day for the rest of the week.

This is basically the premise (and outcome) of a new paper in Psychological Science by Ella James and colleagues at Cambridge and Oxford. It builds on earlier work suggesting that healthy participants who play Tetris shortly after watching a “trauma” film will have fewer intrusive memories (Holmes et al., 2009, 2010). This is based on the idea that involuntary “flashbacks” in real post-traumatic stress disorder (PTSD) are visual in nature, and require visuospatial processing resources to generate and maintain. Playing Tetris will interfere with consolidation and subsequent intrusion of the images, at least in an experimental setting (Holmes et al., 2009):
...Trauma flashbacks are sensory-perceptual, visuospatial mental images. Visuospatial cognitive tasks selectively compete for resources required to generate mental images. Thus, a visuospatial computer game (e.g. "Tetris") will interfere with flashbacks. Visuospatial tasks post-trauma, performed within the time window for memory consolidation [6 hrs], will reduce subsequent flashbacks. We predicted that playing "Tetris" half an hour after viewing trauma would reduce flashback frequency over 1-week.

The timing is key here. In the earlier experiments, Tetris play commenced 30 min after the trauma film experience, during the 6 hour window when memories for the event are stabilized and consolidated. Newly formed memories are thought to be malleable during this time.

However, if one wants to extrapolate directly to clinical application in cases of real life trauma exposure (and this is problematic, as we'll see later), it's pretty impractical to play Tetris right after an earthquake, auto accident, mortar attack, or sexual assault. So the new paper relies on the process of reconsolidation, when an act of remembering will place the memory in a labile state once again, so it can be modified (James et al., 2015).




The procedure was as follows: 52 participants came into the lab on Day 0 and completed questionnaires about depression, anxiety, and previous trauma exposure. Then they watched a 12 min trauma film that included 11 scenes of actual death (or threatened death) or serious injury (James et al., 2015):
...the film functioned as an experimental analogue of viewing a traumatic event in real life. Scenes contained different types of context; examples include a young girl hit by a car with blood dripping out of her ear, a man drowning in the sea, and a van hitting a teenage boy while he was using his mobile phone crossing the road. This film footage has been used in previous studies to evoke intrusive memories...

After the film, they rated “how sad, hopeless, depressed, fearful, horrified, and anxious they felt right at this very moment” and “how distressing did you find the film you just watched?” They were instructed to keep a diary of intrusive images and come back to the lab 24 hours later.

On Day 1, participants were randomized to either the experimental group (memory reactivation + Tetris) or the control group (neither manipulation). The experimental group viewed 11 still images from the film that served as reminder cues to initiate reconsolidation. This was followed by a 10 min filler task and then 12 min of playing Tetris (the Marathon mode shown above). The game instructions aimed to maximize the amount of mental rotation the subjects would use. The controls did the filler task and then sat quietly for 12 min.

Both groups kept a diary of intrusions for the next week, and then returned on Day 7. All participants performed the Intrusion Provocation Task (IPT). Eleven blurred pictures from the film were shown, and subjects indicated when any intrusive mental images were provoked. Finally, the participants completed a few more questionnaires, as well as a recognition task that tested their verbal (T/F written statements) and visual (Y/N for scenes) memories of the film.1

The results indicated that the Reactivation + Tetris manipulation was successful in decreasing the number of visual memory intrusions in both the 7-day diary and the IPT (as shown below).


modified from Fig. 1 (James et al., 2015). Asterisks indicate a significant difference between groups (**p < .001). Error bars represent +1 SEM.


Cool little snowman plots (actually frequency scatter plots) illustrate the time course of intrusive memories in the two groups.


modified from Fig. 2 (James et al., 2015). Frequency scatter plots showing the time course of intrusive memories reported in the diary daily from Day 0 (prior to intervention) to Day 7. The intervention was on Day 1, and the red arrow is 24 hrs later (when the intervention starts working). The solid lines are the results of a generalized additive model. The size of the bubbles represents the number of participants who reported the indicated number of intrusive memories on that particular day.


But now, you might be asking yourself if the critical element was Tetris or the reconsolidation update procedure (or both), since the control group did neither. Not to worry. Experiment 2 tried to disentangle this by recruiting four groups of participants (n=18 in each): the original two groups plus two new ones, Reactivation only and Tetris only.

And the results from Exp. 2 demonstrated that both were needed.


modified from Fig. 4 (James et al., 2015). Asterisks indicate that results for the Reactivation + Tetris group were significantly different from results for the other three groups (*p < .01). Error bars represent +1 SEM. The No-Task Control and Tetris Only groups did not differ for diary intrusions (n.s.).


The authors' interpretation:
Overall, the results of the present experiments indicate that the frequency of intrusive memories induced by experimental trauma can be reduced by disrupting reconsolidation via a competing cognitive-task procedure, even for established memories (here, events viewed 24 hours previously). ... Critically, neither playing Tetris alone (a nonreactivation control condition) nor the control of memory reactivation alone was sufficient to reduce intrusions... Rather, their combination is required, which supports a reconsolidation-theory account. We suggest that intrusive-memory reduction is due to engaging in a visuospatial task within the window of memory reconsolidation, which interferes with intrusive image reconsolidation (via competition for shared resources).

Surprisingly (perhaps), I don't have anything negative to say about the study. It was carefully conducted and interpreted with restraint. They don't overextrapolate to PTSD. They don't use the word “flashback” to describe the memory phenomenon. And they repeatedly point out that it's “experimental trauma.” I actually considered reviving The Neurocomplimenter for this post, but that would be going too far...

Compare this flattering post with one I wrote in 2010, about a related study by the same authors (Holmes et al., 2010). That paper certainly had a modest title: Key Steps in Developing a Cognitive Vaccine against Traumatic Flashbacks: Visuospatial Tetris versus Verbal Pub Quiz.

Cognitive vaccine. Traumatic. Flashbacks. Twelve mentions of PTSD. This led to ridiculous headlines like Doctors Prescribing 'Tetris Therapy'.

Here, let me fix that for you:

Tetris Helps Prevent Unpleasant Memories of Gory Film in Happy People

My problem wasn't with the actual study, but with the way the authors hyped the results and exaggerated their clinical significance. So I'm pleased to see a more restrained approach here.


The media coverage for the new paper was generally more accurate too:

Can playing Tetris reduce intrusive memories? (Medical News Today)

Moving tiles as an unintrusive way to handle flashbacks (Medical Express)

Intrusiveness of Old Emotional Memories Can Be Reduced by Computer Game Play Procedure (APS)

But we can always count on the Daily Mail for a good time: Could playing TETRIS banish bad memories? Retro Nintendo game 'reduces the risk of post-traumatic stress disorder'2

Gizmodo is a bit hyperbolic as well: Tetris Blocks Flashbacks of Traumatic Events Lodged in the Brain [“lodged in the brain” for all of 24 hrs]


Questions for Now and the Future

Is there really nothing wrong with this study?? Being The Neurocritic, I always have to find something to criticize... and here I had to dig through the Supplemental Material to find issues that may affect the translational potential of Tetris-based interventions.

  • The Intrusion subscale of the Impact of Event Scale (IES-R) was used as an exploratory measure, and subject ratings were between 0 and 1.
The Intrusion subscale consists of 8 questions like “I found myself acting or feeling like I was back at that time” and “I had dreams about it” that are rated from 0 (not at all) to 4 (extremely). The IES-R is given to people after distressing, traumatic life events. These individuals may have actual PTSD symptoms like flashbacks and nightmares.

In Exp. 1, the Reactivation + Tetris group (M = .68) had significantly lower scores (p = .016) on Day 7 than the control group (M = 1.01). BUT this is not terribly meaningful, due to a floor effect. And in Exp. 2 there was no difference between the four groups, with scores ranging from 0.61 to 0.81.3

As an overall comment, watching a film of a girl getting hit by a car is not the same as witnessing it in person (obviously). But this real-life scenario may be the most amenable to Tetris, because the witness was not in the accident themselves and did not know the girl (both of which would heighten the emotional intensity and vividness of the trauma, elements that transcend visual imagery).

It's true that in PTSD, the involuntary intrusion of trauma memories (i.e., flashbacks) has a distinctly sensory quality (Ehlers et al., 2004). Visual images are most common, but bodily sensations, sounds, and smells can be incorporated into a multimodal flashback, or can occur on their own.

  • The effectiveness of the Tetris intervention was related to game score and self-rated task difficulty.
This means that people who were better at playing Tetris showed a greater decrease in intrusive memories. This result wasn't covered in the main paper, but it makes you wonder about cause and effect. Is it because the game was more enjoyable for them? Or could it be that their superior visual-spatial abilities (or greater game experience) resulted in greater interference, perhaps by using up more processing resources? That's always a dicey argument, as you could also predict that better, more efficient game play uses fewer visual-spatial resources.

An interesting recent paper found that individuals with PTSD (who presumably experience intrusive visual memories) have worse allocentric spatial processing abilities than controls (Smith et al., 2015). This means they have problems representing the locations of environmental features relative to each other (instead of relative to the self). So are weak spatial processing and spatial memory abilities caused by the trauma, or are weak spatial abilities a vulnerability factor for developing PTSD?

  • As noted by the authors, the modality-specificity of the intervention needs to be assessed.
Their previous paper showed that the effect was indeed specific to Tetris. A verbally based video game (Pub Quiz) actually increased the frequency of intrusive images (Holmes et al., 2010).

It would be interesting to disentangle the interfering elements of Tetris even further. Would any old mental rotation task do the trick? How about passive viewing of Tetris blocks, or is active game play necessary? Would a visuospatial n-back working memory task work? It wouldn't be as fun, but it obviously uses up visual working memory processing resources. What about Asteroids or Pac-Man or...? 4

This body of work raises a number of interesting questions about the nature of intrusive visual memories, traumatic and non-traumatic alike. Do avid players of action video games (or Tetris) have fewer intrusive memories of past trauma or trauma-analogues in everyday life? I'm not sure this is likely, but you could find out pretty quickly on Amazon Mechanical Turk or one of its alternatives.

There are also many hurdles to surmount before Doctors Prescribe 'Tetris Therapy'. For instance, what does it mean to have the number of weekly Hostel intrusions drop from five to two? How would that scale to an actual trauma flashback, which may involve a fear or panic response?

The authors conclude the paper by briefly addressing these points:
A critical next step is to investigate whether findings extend to reducing the psychological impact of real-world emotional events and media. Conversely, could computer gaming be affecting intrusions of everyday events?

A number of different research avenues await these investigators (and other interested parties). And — wait for it — a clinical trial of Tetris for flashback reduction has already been completed by the investigators at Oxford and Cambridge!

A Simple Cognitive Task to Reduce the Build-Up of Flashbacks After a Road Traffic Accident (SCARTA)

Holmes and colleagues took the consolidation window very seriously: participants played Tetris in the emergency room within 6 hours of experiencing or witnessing an accident. I'll be very curious to see how this turns out...


Footnotes

1 Interestingly, voluntary retrieval of visual and verbal memories was not affected by the manipulation, highlighting the uniqueness of flashback-like phenomena.

2 It does no such thing. But they did embed a video of Dr. Tom Stafford explaining why Tetris is so compelling...

3 The maximum total score on the IES-R is 32. The mean total score in a group of car accident survivors was 17; in Croatian war veterans it was 25. At first I assumed the authors reported the total score out of 32, rather than the mean score per item. I could be very wrong, however. By way of comparison, the mean item score in female survivors of intimate partner violence was 2.26. Either way, the impact of the trauma film was pretty low in this study, as you might expect.

4 OK, now I'm getting ridiculous. I'm also leaving aside modern first-person shooter games as potentially too traumatic and triggering.


References

Ehlers A, Hackmann A, Michael T. (2004). Intrusive re-experiencing in post-traumatic stress disorder: phenomenology, theory, and therapy. Memory 12(4):403-15.

Holmes EA, James EL, Coode-Bate T, Deeprose C. (2009). Can playing the computer game "Tetris" reduce the build-up of flashbacks for trauma? A proposal from cognitive science. PLoS One 4(1):e4153.

Holmes, E., James, E., Kilford, E., & Deeprose, C. (2010). Key Steps in Developing a Cognitive Vaccine against Traumatic Flashbacks: Visuospatial Tetris versus Verbal Pub Quiz. PLoS ONE, 5 (11) DOI: 10.1371/journal.pone.0013706

James, E., Bonsall, M., Hoppitt, L., Tunbridge, E., Geddes, J., Milton, A., & Holmes, E. (2015). Computer Game Play Reduces Intrusive Memories of Experimental Trauma via Reconsolidation-Update Mechanisms. Psychological Science DOI: 10.1177/0956797615583071

Smith KV, Burgess N, Brewin CR, King JA. (2015). Impaired allocentric spatial processing in posttraumatic stress disorder. Neurobiol Learn Mem. 119:69-76.

Scary Brains and the Garden of Earthly Deep Dreams


In case you've been living under a rock the past few weeks, Google's foray into artificial neural networks has yielded hundreds of thousands of phantasmagoric images. The company has an obvious interest in image classification, and here's how they explain the DeepDream process in their Research Blog:
Inceptionism: Going Deeper into Neural Networks

. . .
We train an artificial neural network by showing it millions of training examples [of dogs and eyes and pagodas, let's say] and gradually adjusting the network parameters until it gives the classifications we want. The network typically consists of 10-30 stacked layers of artificial neurons. Each image is fed into the input layer, which then talks to the next layer, until eventually the “output” layer is reached. The network’s “answer” comes from this final output layer.

. . .
One way to visualize what goes on is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation. Say you want to know what sort of image would result in “Banana.” Start with an image full of random noise, then gradually tweak the image towards what the neural net considers a banana... By itself, that doesn’t work very well, but it does if we impose a prior constraint that the image should have similar statistics to natural images, such as neighboring pixels needing to be correlated.
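The “turn the network upside down” trick is just gradient ascent on the input rather than the weights. Here is a toy sketch of that idea, with a single linear “classifier” standing in for Google's deep convnet; the vector `w` and the unit-norm step are illustrative stand-ins, not their actual code:

```python
import numpy as np

def dream_input(w, steps=100, lr=0.1, seed=0):
    """Toy activation maximization: hold the 'classifier' w fixed and
    nudge the *input* x so the class score w.dot(x) grows -- the reverse
    of training, where x is fixed and w is adjusted."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=w.shape)       # start from random noise
    for _ in range(steps):
        grad = w                       # gradient of w.dot(x) w.r.t. x
        x = x + lr * grad              # tweak the input toward the class
        x = x / np.linalg.norm(x)      # crude stand-in for the image prior
    return x

w = np.array([1.0, -2.0, 0.5])         # hypothetical 'banana' direction
x = dream_input(w)                     # x converges toward w / ||w||
```

In the real thing the gradient comes from backpropagating a chosen layer's activation through the whole network, and the natural-image prior is richer than unit-norm scaling, but the loop has the same shape.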

After Google released the deepdream code on GitHub, Psychic VR Lab set up a Deep Dream web interface, which currently has over 300,000 groovy and scary images.

I've taken an interest in the hallucinogenic and distorted brain images, including the one above. I can't properly credit the human input interface (which wasn't me), but I found it after submitting a file of my own in the early stages of http://psychic-vr-lab.com/deepdream/. I can't find the url hosting my image, but I came across the frightening brain here, along with the original.





I've included a few more for your viewing pleasure. Brain Decoder posted a dreamy mouse hippocampus Brainbow.




Here's one by HofmannsBicycle.



And a fun fave courtesy of @rogierK and @katestorrs. This one is cartoonish instead of menacing.



Rogier said: "According to #deepdream the homunculus in our brains is a terrifying bird-dog hybrid."

Aw, I thought it was kind of cute. More small birds, fewer staring judgmental eyeballs.


And the grand finale isn't a brain at all. But who doesn't want to see the dreamified version of The Garden of Earthly Delights, by Hieronymus Bosch? Here it is, via @aut0mata. Click on image for a larger view.
 




When nothing's right, just close your eyes
Close your eyes and you're gone

-Beck, Dreams




ADDENDUM (July 21 2015): It's worth reading Deepdream: Avoiding Kitsch by Josh Nimoy, which confirms the training set was filled with dogs, birds, and pagodas. Nimoy also shows deepdream images done with neural networks trained on other datasets.

For example, the image below was generated by a neural network trained to do gender classification.




Gendernet deepdreaming Cindy Sherman Untitled B (1975)

The Idiosyncratic Side of Diagnosis by Brain Scan and Machine Learning


R2D3 recently had a fantastic Visual Introduction to Machine Learning, using the classification of homes in San Francisco vs. New York as their example. As they explain quite simply:
In machine learning, computers apply statistical learning techniques to automatically identify patterns in data. These techniques can be used to make highly accurate predictions.
You should really head over there right now to view it, because it's very impressive.


Computational neuroscience types are using machine learning algorithms to classify all sorts of brain states, and diagnose brain disorders, in humans. How accurate are these classifications? Do the studies all use separate training sets and test sets, as shown in the example above?

Let's say your fMRI measure is able to differentiate individuals with panic disorder (n=33) from those with panic disorder + depression (n=26) with 79% accuracy.1 Or with structural MRI scans you can distinguish 20 participants with treatment-refractory depression from 21 never-depressed individuals with 85% accuracy.2 Besides the issues outlined in the footnotes, the reality check is that the model must be able to predict group membership for a new (untrained) data set. And most studies don't seem to do this.

I was originally drawn to the topic by a 3 page article entitled, Machine learning algorithm accurately detects fMRI signature of vulnerability to major depression (Sato et al., 2015). Wow! Really? How accurate? Which fMRI signature? Let's take a look.
  • machine learning algorithm = Maximum Entropy Linear Discriminant Analysis (MLDA)
  • accurately predicts = 78.3% (72.0% sensitivity and 85.7% specificity)
  • fMRI signature = guilt-selective anterior temporal functional connectivity changes (seems a bit overly specific and esoteric, no?)
  • vulnerability to major depression = 25 participants with remitted depression vs. 21 never-depressed participants
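Those headline numbers are easy to sanity-check. With 25 remitted and 21 never-depressed participants, the reported sensitivity and specificity imply 18/25 and 18/21 correct classifications (counts back-calculated from the percentages, not stated in the paper), which also reproduces the 78.3% overall accuracy:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """True-positive rate (catching remitted cases) and
    true-negative rate (clearing never-depressed controls)."""
    return tp / (tp + fn), tn / (tn + fp)

# 18 of 25 remitted detected (7 missed -- the 28% without the
# 'signature'), 18 of 21 controls correctly classified:
sens, spec = sensitivity_specificity(tp=18, fn=7, tn=18, fp=3)
accuracy = (18 + 18) / (25 + 21)   # 36/46, i.e. 78.3%
```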
The authors used a standard leave-one-subject-out procedure, in which the classification is cross-validated iteratively using a model fit to the sample after excluding one subject, which is then used to independently predict that subject's group membership. But they did not test their fMRI signature in completely independent groups of participants.
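For the unfamiliar, here is a minimal sketch of the leave-one-subject-out loop. A nearest-centroid rule stands in for the paper's MLDA classifier; the point is the structure, in which each subject is scored by a model fit on everyone else:

```python
import numpy as np

def loocv_accuracy(X, y):
    """Leave-one-subject-out cross-validation with a nearest-class-centroid
    classifier (a stand-in -- the study's MLDA is not reproduced here)."""
    correct = 0
    for i in range(len(y)):
        train = np.arange(len(y)) != i            # hold out subject i
        centroids = {c: X[train & (y == c)].mean(axis=0)
                     for c in np.unique(y[train])}
        pred = min(centroids,
                   key=lambda c: np.linalg.norm(X[i] - centroids[c]))
        correct += (pred == y[i])
    return correct / len(y)

# two well-separated toy 'groups' of simulated subjects
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (10, 2)), rng.normal(5, 1, (10, 2))])
y = np.array([0] * 10 + [1] * 10)
acc = loocv_accuracy(X, y)   # near-perfect on this easy toy problem
```

Even so, every fold draws on the same single sample, which is why this is no substitute for testing the signature on an independent validation cohort.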

Nor did they try to compare individuals who are currently depressed to those who are currently remitted. That didn't matter, apparently, because the authors suggest the fMRI signature is a trait marker of vulnerability, not a state marker of current mood. But the classifier missed 28% of the remitted group, who did not have the “guilt-selective anterior temporal functional connectivity changes.”

What is that, you ask? This is a set of mini-regions (i.e., not too many voxels in each) functionally connected to a right superior anterior temporal lobe seed region of interest during a contrast of guilt vs. anger feelings (selected from a number of other possible emotions) for self or best friend, based on written imaginary scenarios like “Angela [self] does act stingily towards Rachel [friend]” and “Rachel does act stingily towards Angela” conducted outside the scanner (after the fMRI session is over). Got that?

You really need to read a bunch of other articles to understand what that means, because the current paper is less than 3 pages long. Did I say that already?


modified from Fig 1B (Sato et al., 2015). Weight vector maps highlighting voxels among the 1% most discriminative for remitted major depression vs. controls, including the subgenual cingulate cortex, both hippocampi, the right thalamus and the anterior insulae.


The patients were previously diagnosed according to DSM-IV-TR (which was current at the time), and in remission for at least 12 months. The study was conducted by investigators from Brazil and the UK, so they didn't have to worry about RDoC, i.e. “new ways of classifying mental disorders based on behavioral dimensions and neurobiological measures” (instead of DSM-5 criteria). A “guilt-proneness” behavioral construct, along with the “guilt-selective” network of idiosyncratic brain regions, might be more in line with RDoC than past major depression diagnosis.

Could these results possibly generalize to other populations of remitted and never-depressed individuals? Well, the fMRI signature seems a bit specialized (and convoluted). And overfitting is another likely problem here...

In their next post, R2D3 will discuss overfitting:
Ideally, the [decision] tree should perform similarly on both known and unknown data.

So this one is less than ideal. [NOTE: the one that's 90% in the top figure]

These errors are due to overfitting. Our model has learned to treat every detail in the training data as important, even details that turned out to be irrelevant.
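That failure mode is easy to reproduce in a few lines. A 1-nearest-neighbour classifier (a hypothetical stand-in for R2D3's overgrown decision tree) memorizes every training point, label noise included, so it is perfect on known data and markedly worse on unknown data:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_labels(x, rng):
    # true rule: label 1 when x > 0, but 20% of labels are flipped noise
    y = (x > 0).astype(int)
    flip = rng.random(len(x)) < 0.2
    return np.where(flip, 1 - y, y)

x_train = rng.normal(size=200); y_train = noisy_labels(x_train, rng)
x_test = rng.normal(size=200);  y_test = noisy_labels(x_test, rng)

def nn1_predict(x):
    # 1-nearest-neighbour: every training point, noise included, is 'important'
    idx = np.abs(x[:, None] - x_train[None, :]).argmin(axis=1)
    return y_train[idx]

train_acc = (nn1_predict(x_train) == y_train).mean()  # perfect by construction
test_acc = (nn1_predict(x_test) == y_test).mean()     # much worse on new data
```

The gap between the two accuracies is the overfitting: the memorized noise helps on the training set and hurts everywhere else.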

In my next post, I'll present an unsystematic review of machine learning as applied to the classification of major depression. It's notable that Sato et al. (2015) used the word “classification” instead of “diagnosis.”3


ADDENDUM (Aug 3 2015): In the comments, I've presented more specific critiques of: (1) the leave-one-out procedure and (2) how the biomarker is temporally disconnected from when the participants identify their feeling as 'guilt' or 'anger' or etc. (and why shame is more closely related to depression than guilt).


Footnotes

1 The sensitivity (true positive rate) was 73% and the specificity (true negative rate) was 85%. After correcting for confounding variables, these numbers were 77% and 70%, respectively.

2 The abstract concludes this is a “high degree of accuracy.” Not to pick on these particular authors (this is a typical study), but Dr. Dorothy Bishop explains why this is not very helpful for screening or diagnostic purposes. And what you'd really want to do here is to discriminate between treatment-resistant vs. treatment-responsive depression. If an individual does not respond to standard treatments, it would be highly beneficial to avoid a long futile period of medication trials.

3 In case you're wondering, the title of this post was based on The Dark Side of Diagnosis by Brain Scan, which is about Dr  Daniel Amen. The work of the investigators discussed here is in no way, shape, or form related to any of the issues discussed in that post.


Reference

Sato, J., Moll, J., Green, S., Deakin, J., Thomaz, C., & Zahn, R. (2015). Machine learning algorithm accurately detects fMRI signature of vulnerability to major depression. Psychiatry Research: Neuroimaging. DOI: 10.1016/j.pscychresns.2015.07.001

Will machine learning create new diagnostic categories, or just refine the ones we already have?


How do we classify and diagnose mental disorders?

In the coming era of Precision Medicine, we'll all want customized treatments that “take into account individual differences in people’s genes, environments, and lifestyles.” To do this, we'll need precise diagnostic tools to identify the specific disease process in each individual. Although focused on cancer in the near-term, the longer-term goal of the White House initiative is to apply Precision Medicine to all areas of health. This presumably includes psychiatry, but the links between Precision Medicine, the BRAIN initiative, and RDoC seem a bit murky at present.1

But there's nothing a good infographic can't fix. Science recently published a Perspective piece by the NIMH Director and the chief architect of the Research Domain Criteria (RDoC) initiative (Insel & Cuthbert, 2015). There's Deconstruction involved, so what's not to like? 2


ILLUSTRATION: V. Altounian and C. Smith / SCIENCE


In this massively ambitious future scenario, the totality of one's genetic risk factors, brain activity, physiology, immune function, behavioral symptom profile, and life experience (social, cultural, environmental) will be deconstructed and stratified and recompiled into a neat little cohort. 3

The new categories will be data driven. The project might start by collecting colossal quantities of expensive data from millions of people, and continue by running classifiers on exceptionally powerful computers (powered by exceptionally bright scientists/engineers/coders) to extract meaningful patterns that can categorize the data with high levels of sensitivity and specificity. Perhaps I am filled with pathologically high levels of negative affect (Loss? Frustrative Nonreward?), but I find it hard to be optimistic about progress in the immediate future. You know, for a Precision Medicine treatment for me (and my pessimism)...

But seriously.

Yes, RDoC is ambitious (and has its share of naysayers). But what you may not know is that it's also trendy! Just the other day, an article in The Atlantic explained Why Depression Needs A New Definition (yes, RDoC) and even cited papers like Depression: The Shroud of Heterogeneity. 4

But let's just focus on the brain for now. For a long time, most neuroscientists have viewed mental disorders as brain disorders. [But that's not to say that environment, culture, experience, etc. play no role! cf. Footnote 3]. So our opening question becomes, How do we classify and diagnose brain disorders (er, neural circuit disorders) in a fashion consistent with RDoC principles? Is there really One Brain Network for All Mental Illness, for instance? (I didn't think so.)

Our colleagues in Asia and Australia and Europe and Canada may not have gotten the funding memo, however, and continue to run classifiers based on DSM categories. 5 In my previous post, I promised an unsystematic review of machine learning as applied to the classification of major depression. You can skip directly to the Appendix to see that.

Regardless of whether we use DSM-5 categories or RDoC matrix constructs, what we need are robust and reproducible biomarkers (see Table 1 above). A brief but excellent primer by Woo and Wager (2015) outlined the characteristics of a useful neuroimaging biomarker:
1. Criterion 1: diagnosticity

Good biomarkers should produce high diagnostic performance in classification or prediction. Diagnostic performance can be evaluated by sensitivity and specificity. Sensitivity concerns whether a model can correctly detect signal when signal exists. Effect size is a closely related concept; larger effect sizes are related to higher sensitivity. Specificity concerns whether the model produces negative results when there is no signal. Specificity can be evaluated relative to a range of specific alternative conditions that may be confusable with the condition of interest.

2. Criterion 2: interpretability

Brain-based biomarkers should be meaningful and interpretable in terms of neuroscience, including previous neuroimaging studies and converging evidence from multiple sources (eg, animal models, lesion studies, etc). One potential pitfall in developing neuroimaging biomarkers is that classification or prediction models can capitalize on confounding variables that are not neuroscientifically meaningful or interesting at all (eg, in-scanner head movement). Therefore, neuroimaging biomarkers should be evaluated and interpreted in the light of existing neuroscientific findings.

3. Criterion 3: deployability

Once the classification or outcome-prediction model has been developed as a neuroimaging biomarker, the model and the testing procedure should be precisely defined so that it can be prospectively applied to new data. Any flexibility in the testing procedures could introduce potential overoptimistic biases into test results, rendering them useless and potentially misleading. For example, “amygdala activity” cannot be a good neuroimaging biomarker without a precise definition of which “voxels” in the amygdala should be activated and the relative expected intensity of activity across each voxel. A well-defined model and standardized testing procedure are crucial aspects of turning neuroimaging results into a “research product,” a biomarker that can be shared and tested across laboratories.

4. Criterion 4: generalizability

Clinically useful neuroimaging biomarkers aim to provide predictions about new individuals. Therefore, they should be validated through prospective testing to prove that their performance is generalizable across different laboratories, different scanners or scanning procedures, different populations, and variants of testing conditions (eg, other types of chronic pain). Generalizability tests inherently require multistudy and multisite efforts. With a precisely defined model and standardized testing procedure (criterion 3), we can easily test the generalizability of biomarkers and define the boundary conditions under which they are valid and useful.
[Then the authors evaluated the performance of a structural MRI signature for IBS presented in an accompanying paper.]

Should we try to improve on a neuroimaging biomarker (or “neural signature”) for classic disorders in which “Neuroanatomical diagnosis was correct in 80% and 72% of patients with major depression and schizophrenia, respectively...” (Koutsouleris et al., 2015)? That study used large cohorts and evaluated the trained biomarker against an independent validation database (i.e., it was more thorough than many other investigations). Or is the field better served by classifying when loss and agency and auditory perception go awry? What would individualized treatments for these constructs look like? Presumably, the goal is to develop better treatments, and to predict who will respond to a specific treatment(s).

OR should we adopt the surprisingly cynical view of some prominent investigators, who say:
...identifying a genuine neural signature would necessitate the discovery of a specific pattern of brain responses that possesses nearly perfect sensitivity and specificity for a given condition or other phenotype. At the present time, neuroscientists are not remotely close to pinpointing such a signature for any psychological disorder or trait...

If that's true, then we'll have an awfully hard time with our resting state fMRI classifier for neuro-nihilism.


Footnotes

1 Although NIMH Mad Libs does a bang up job...

2 Derrida's Deconstruction and RDoC are diametrically opposed, as irony would have it.

3 Or maybe an n of 1...  I'm especially curious about how life experience will be incorporated into the mix. Perhaps the patient of the future will upload all the data recorded by their memory implants, as in The Entire History of You (an episode of Black Mirror).

4 The word “shroud” always makes everything sound so dire and deathly important... especially when used as a noun.

5 As do many research groups in the US. This is meant to be snarky, but not condescending to anyone who follows DSM-5 in their research.


References

Insel, T., & Cuthbert, B. (2015). Brain disorders? Precisely. Science, 348 (6234), 499-500. DOI: 10.1126/science.aab2358

Woo, C., & Wager, T. (2015). Neuroimaging-based biomarker discovery and validation. PAIN, 156 (8), 1379-1381. DOI: 10.1097/j.pain.0000000000000223



Appendix

Below are 34 references on MRI/fMRI applications of machine learning used to classify individuals with major depression (I excluded EEG/MEG for this particular unsystematic review). The search terms were combinations of "major depression", "machine learning", "support vector", and "classifier".

Here's a very rough summary of methods:

Structural MRI: 1, 14, 22, 29, 31, 32

DTI: 6, 12, 18, 19

Resting State fMRI: 3, 5, 8, 9, 11, 16, 17, 21, 28, 33

fMRI while viewing different facial expressions: 2, 7, 10, 24, 26, 27, 34

comorbid panic: 13

verbal working memory: 25

guilt: 15 (see The Idiosyncratic Side of Diagnosis by Brain Scan and Machine Learning)

Schizophrenia vs. Bipolar vs. Schizoaffective: 16

Psychotic Major Depression vs. Bipolar Disorder: 20

Schizophrenia vs. Major Depression: 23, 31

Unipolar vs. Bipolar Depression: 24, 32, 34

This last one is especially important, since an accurate diagnosis can avoid the potentially disastrous prescribing of antidepressants in bipolar depression.

Idea that may already be implemented somewhere: Individual labs or research groups could perhaps contribute to a support vector machine clearinghouse (e.g., at NITRC or OpenfMRI or GitHub) where everyone can upload the code for data processing streams and various learning/classification algorithms to try out on each others' data.
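To make the clearinghouse idea concrete, here is a minimal, hypothetical sketch of the kind of pipeline such a repository might hold: a cross-validated linear SVM on connectivity-style features. Everything here is synthetic and illustrative (the subject counts, feature counts, and effect size are my own stand-ins, not taken from any of the papers below).

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for resting-state connectivity features:
# 60 subjects (30 controls, 30 patients) x 100 edge strengths.
X = rng.normal(size=(60, 100))
y = np.repeat([0, 1], 30)      # 0 = control, 1 = major depression
X[y == 1, :10] += 0.8          # build in a weak group difference

# Standardize features, then fit a linear support vector classifier;
# 5-fold cross-validation gives a (rough) out-of-sample accuracy.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
mean_accuracy = scores.mean()
```

The point of sharing such code alongside the data would be that a second lab could re-run the exact same cross-validation scheme, which is where many published classification accuracies quietly fall apart.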

1.
Brain. 2012 May;135(Pt 5):1508-21. doi: 10.1093/brain/aws084.
Multi-centre diagnostic classification of individual structural neuroimaging scans from patients with major depressive disorder.
Mwangi B, Ebmeier KP, Matthews K, Steele JD.

2.
Bipolar Disord. 2012 Jun;14(4):451-60. doi: 10.1111/j.1399-5618.2012.01019.x.
Pattern recognition analyses of brain activation elicited by happy and neutral faces in unipolar and bipolar depression.
Mourão-Miranda J, Almeida JR, Hassel S, de Oliveira L, Versace A, Marquand AF, Sato JR, Brammer M, Phillips ML.

3.
PLoS One. 2012;7(8):e41282. doi: 10.1371/journal.pone.0041282. Epub 2012 Aug 20.
Changes in community structure of resting state functional connectivity in unipolar depression.
Lord A, Horn D, Breakspear M, Walter M.

5.
Neuroreport. 2012 Dec 5;23(17):1006-11. doi: 10.1097/WNR.0b013e32835a650c.
Machine learning classifier using abnormal brain network topological metrics in major depressive disorder.
Guo H, Cao X, Liu Z, Li H, Chen J, Zhang K.

6.
PLoS One. 2012;7(9):e45972. doi: 10.1371/journal.pone.0045972. Epub 2012 Sep 26.
Increased cortical-limbic anatomical network connectivity in major depression revealed by diffusion tensor imaging.
Fang P, Zeng LL, Shen H, Wang L, Li B, Liu L, Hu D.

7.
PLoS One. 2013;8(4):e60121. doi: 10.1371/journal.pone.0060121. Epub 2013 Apr 1.
What does brain response to neutral faces tell us about major depression? Evidence from machine learning and fMRI.
Oliveira L, Ladouceur CD, Phillips ML, Brammer M, Mourao-Miranda J.

8.
Hum Brain Mapp. 2014 Apr;35(4):1630-41. doi: 10.1002/hbm.22278. Epub 2013 Apr 24.
Unsupervised classification of major depression using functional connectivity MRI.
Zeng LL, Shen H, Liu L, Hu D.

9.
Psychiatry Clin Neurosci. 2014 Feb;68(2):110-9. doi: 10.1111/pcn.12106. Epub 2013 Oct 31.
Aberrant functional connectivity for diagnosis of major depressive disorder: a discriminant analysis.

10.
Neuroimage. 2015 Jan 15;105:493-506. doi: 10.1016/j.neuroimage.2014.11.021. Epub 2014 Nov 15.
Sparse network-based models for patient classification using fMRI.
Rosa MJ, Portugal L, Hahn T, Fallgatter AJ, Garrido MI, Shawe-Taylor J, Mourao-Miranda J.

11.
Proc IEEE Int Symp Biomed Imaging. 2014 Apr;2014:246-249.
ELUCIDATING BRAIN CONNECTIVITY NETWORKS IN MAJOR DEPRESSIVE DISORDER USING CLASSIFICATION-BASED SCORING.
Sacchet MD, Prasad G, Foland-Ross LC, Thompson PM, Gotlib IH.

12.
Front Psychiatry. 2015 Feb 18;6:21. doi: 10.3389/fpsyt.2015.00021. eCollection 2015.
Support vector machine classification of major depressive disorder using diffusion-weighted neuroimaging and graph theory.
Sacchet MD, Prasad G, Foland-Ross LC, Thompson PM, Gotlib IH.

13.
J Affect Disord. 2015 Sep 15;184:182-92. doi: 10.1016/j.jad.2015.05.052. Epub 2015 Jun 6.
Separating depressive comorbidity from panic disorder: A combined functional magnetic resonance imaging and machine learning approach.
Lueken U, Straube B, Yang Y, Hahn T, Beesdo-Baum K, Wittchen HU, Konrad C, Ströhle A, Wittmann A, Gerlach AL, Pfleiderer B, Arolt V, Kircher T.

14.
PLoS One. 2015 Jul 17;10(7):e0132958. doi: 10.1371/journal.pone.0132958. eCollection 2015.
Structural MRI-Based Predictions in Patients with Treatment-Refractory Depression (TRD).
Johnston BA, Steele JD, Tolomeo S, Christmas D, Matthews K.

15.
Psychiatry Res. 2015 Jul 5. pii: S0925-4927(15)30025-1. doi: 10.1016/j.pscychresns.2015.07.001. [Epub ahead of print]
Machine learning algorithm accurately detects fMRI signature of vulnerability to major depression.
Sato JR, Moll J, Green S, Deakin JF, Thomaz CE, Zahn R.

16.
Neuroimage. 2015 Jul 24. pii: S1053-8119(15)00674-6. doi: 10.1016/j.neuroimage.2015.07.054. [Epub ahead of print]
A group ICA based framework for evaluating resting fMRI markers when disease categories are unclear: Application to schizophrenia, bipolar, and schizoaffective disorders.
Du Y, Pearlson GD, Liu J, Sui J, Yu Q, He H, Castro E, Calhoun VD.

17.
Neuroreport. 2015 Aug 19;26(12):675-80. doi: 10.1097/WNR.0000000000000407.
Predicting clinical responses in major depression using intrinsic functional connectivity.
Qin J, Shen H, Zeng LL, Jiang W, Liu L, Hu D.

18.
J Affect Disord. 2015 Jul 15;180:129-37. doi: 10.1016/j.jad.2015.03.059. Epub 2015 Apr 4.
Altered anatomical patterns of depression in relation to antidepressant treatment: Evidence from a pattern recognition analysis on the topological organization of brain networks.
Qin J, Wei M, Liu H, Chen J, Yan R, Yao Z, Lu Q.

19.
Magn Reson Imaging. 2014 Dec;32(10):1314-20. doi: 10.1016/j.mri.2014.08.037. Epub 2014 Aug 29.
Abnormal hubs of white matter networks in the frontal-parieto circuit contribute to depression discrimination via pattern classification.
Qin J, Wei M, Liu H, Chen J, Yan R, Hua L, Zhao K, Yao Z, Lu Q.

20.
Biomed Res Int. 2014;2014:706157. doi: 10.1155/2014/706157. Epub 2014 Jan 19.
Neuroanatomical classification in a population-based sample of psychotic major depression and bipolar I disorder with 1 year of diagnostic stability.
Serpa MH, Ou Y, Schaufelberger MS, Doshi J, Ferreira LK, Machado-Vieira R, Menezes PR, Scazufca M, Davatzikos C, Busatto GF, Zanetti MV.

21.
Psychiatry Res. 2013 Dec 30;214(3):306-12. doi: 10.1016/j.pscychresns.2013.09.008. Epub 2013 Oct 7.
Identifying major depressive disorder using Hurst exponent of resting-state brain networks.
Wei M, Qin J, Yan R, Li H, Yao Z, Lu Q.

22.
J Psychiatry Neurosci. 2014 Mar;39(2):78-86.
Characterization of major depressive disorder using a multiparametric classification approach based on high resolution structural images.
Qiu L, Huang X, Zhang J, Wang Y, Kuang W, Li J, Wang X, Wang L, Yang X, Lui S, Mechelli A, Gong Q.

23.
PLoS One. 2013 Jul 2;8(7):e68250. doi: 10.1371/journal.pone.0068250. Print 2013.
Convergent and divergent functional connectivity patterns in schizophrenia and depression.
Yu Y, Shen H, Zeng LL, Ma Q, Hu D.

24.
Eur Arch Psychiatry Clin Neurosci. 2013 Mar;263(2):119-31. doi: 10.1007/s00406-012-0329-4. Epub 2012 May 26.
Discriminating unipolar and bipolar depression by means of fMRI and pattern classification: a pilot study.
Grotegerd D, Suslow T, Bauer J, Ohrmann P, Arolt V, Stuhrmann A, Heindel W, Kugel H, Dannlowski U.

25.
Neuroreport. 2008 Oct 8;19(15):1507-11. doi: 10.1097/WNR.0b013e328310425e.
Neuroanatomy of verbal working memory as a diagnostic biomarker for depression.
Marquand AF, Mourão-Miranda J, Brammer MJ, Cleare AJ, Fu CH.

26.
Biol Psychiatry. 2008 Apr 1;63(7):656-62. Epub 2007 Oct 22.
Pattern classification of sad facial processing: toward the development of neurobiological markers in depression.
Fu CH, Mourao-Miranda J, Costafreda SG, Khanna A, Marquand AF, Williams SC, Brammer MJ.

27.
Neuroreport. 2009 May 6;20(7):637-41. doi: 10.1097/WNR.0b013e3283294159.
Neural correlates of sad faces predict clinical remission to cognitive behavioural therapy in depression.
Costafreda SG, Khanna A, Mourao-Miranda J, Fu CH.

28.
Magn Reson Med. 2009 Dec;62(6):1619-28. doi: 10.1002/mrm.22159.
Disease state prediction from resting state functional connectivity.
Craddock RC, Holtzheimer PE 3rd, Hu XP, Mayberg HS.

29.
Neuroimage. 2011 Apr 15;55(4):1497-503. doi: 10.1016/j.neuroimage.2010.11.079. Epub 2010 Dec 3.
Prognostic prediction of therapeutic response in depression using high-field MR imaging.
Gong Q, Wu Q, Scarpazza C, Lui S, Jia Z, Marquand A, Huang X, McGuire P, Mechelli A.

30.
Neuroimage. 2012 Jun;61(2):457-63. doi: 10.1016/j.neuroimage.2011.11.002. Epub 2011 Nov 7.
Diagnostic neuroimaging across diseases.
Klöppel S, Abdulkadir A, Jack CR Jr, Koutsouleris N, Mourão-Miranda J, Vemuri P.

31.
Brain. 2015 Jul;138(Pt 7):2059-73. doi: 10.1093/brain/awv111. Epub 2015 May 1.
Individualized differential diagnosis of schizophrenia and mood disorders using neuroanatomical biomarkers.
Koutsouleris N, Meisenzahl EM, Borgwardt S, Riecher-Rössler A, Frodl T, Kambeitz J, Köhler Y, Falkai P, Möller HJ, Reiser M, Davatzikos C.

32.
JAMA Psychiatry. 2014 Nov;71(11):1222-30. doi: 10.1001/jamapsychiatry.2014.1100.
Brain morphometric biomarkers distinguishing unipolar and bipolar depression. A voxel-based morphometry-pattern classification approach.
Redlich R, Almeida JJ, Grotegerd D, Opel N, Kugel H, Heindel W, Arolt V, Phillips ML, Dannlowski U.

33.
Brain Behav. 2013 Nov;3(6):637-48. doi: 10.1002/brb3.173. Epub 2013 Sep 22.
A reversal coarse-grained analysis with application to an altered functional circuit in depression.
Guo S, Yu Y, Zhang J, Feng J.

34.
Hum Brain Mapp. 2014 Jul;35(7):2995-3007. doi: 10.1002/hbm.22380. Epub 2013 Sep 13.
Amygdala excitability to subliminally presented emotional faces distinguishes unipolar and bipolar depression: an fMRI and pattern classification study.
Grotegerd D, Stuhrmann A, Kugel H, Schmidt S, Redlich R, Zwanzger P, Rauch AV, Heindel W, Zwitserlood P, Arolt V, Suslow T, Dannlowski U.


Cats on Treadmills (and the plasticity of biological motion perception)


Cats on a treadmill. From Treadmill Kittens.


It's been an eventful week. The 10th Anniversary of Hurricane Katrina. The 10th Anniversary of Optogenetics (with commentary from the neuroscience community and from the inventors). The Reproducibility Project's efforts to replicate 100 studies in cognitive and social psychology (published in Science). And the passing of the great writer and neurologist, Oliver Sacks. Oh, and Wes Craven just died too...

I'm not blogging about any of these events. Many, many others have already written about them (see selected reading list below). And The Neurocritic has been feeling tapped out lately.

Hence the cats on treadmills. They're here to introduce a new study which demonstrated that early visual experience is not necessary for the perception of biological motion (Bottari et al., 2015). Biological motion perception involves the ability to understand and visually track the movement of a living being. This phenomenon is often studied using point light displays, as shown below in a demo from the BioMotion Lab. You should really check out their flash animation that allows you to view human, feline, and pigeon walkers moving from right to left, scrambled and unscrambled, masked and unmasked, inverted and right side up.






Biological Motion Perception Is Spared After Early Visual Deprivation

People born with dense, bilateral cataracts that are surgically removed at a later date show deficits in higher visual processing, including the perception of global motion, global form, faces, and illusory contours. Proper neural development during the critical (or sensitive) period early in life is dependent on experience, in this case visual input. However, it seems that the perception of biological motion (BM) does not require early visual experience (Bottari et al., 2015).

Participants in the study were 12 individuals with congenital cataracts that were removed at a mean age of 7.8 years (range 4 months to 16 yrs). Mean age at testing was 17.8 years (range 10-35 yrs). The study assessed their biological motion thresholds (extracting BM from noise) and recorded their EEG to point light displays of a walking man and to scrambled versions of the walking man (see demo).





Behavioral performance on the BM threshold task didn't differ much between the congenital cataract (cc) and matched control (mc) groups (i.e., there was a lot of overlap between the filled diamonds and the open triangles below).

Modified from Fig. 1 (Bottari et al., 2015).


The event-related potentials (ERPs) averaged to presentations of the walking man vs. scrambled man showed the same pattern in cc and mc groups as well: larger to walking man (BM) than scrambled man (SBM).

Modified from Fig. 1 (Bottari et al., 2015).


The N1 component (the peak at about 0.25 sec post-stimulus) seems a little smaller in cc but that wasn't significant. On the other hand, the earlier P1 was significantly reduced in the cc group. Interestingly, the duration of visual deprivation, amount of visual experience, and post-surgical visual acuity did not correlate with the size of the N1.

The authors discuss three possible explanations for these results:
(1) The neural circuitries associated with the processing of BM can specialize in late childhood or adulthood. That is, as soon as visual input becomes available, [it] initiates the functional maturation of the BM system. Alternatively the neural systems for BM might mature independently of vision. (2) Either they are shaped cross-modally or (3) they mature independent of experience.

They ultimately favor the third explanation, that "the neural systems for BM specialize independently of visual experience." They also point out that the ERPs to faces vs. scrambled faces in the cc group do not show the characteristic difference between these stimulus types. What's so special about biological motion, then? Here the authors wave their hands and arms a bit:
We can only speculate why these different developmental trajectories for faces and BM emerge: BM is characteristic for any type of living being and the major properties are shared across species. ... By contrast, faces are highly specific for a species and biases for the processing of faces from our own ethnicity and age have been shown.

It's more important to see if a bear is running towards you than it is to recognize faces, as anyone with congenital prosopagnosia ("face blindness") might tell you...


Footnote

1 Troje & Westhoff (2006):
"The third sequence showed a walking cat. The data are based on a high-speed (200 fps) video sequence showing a cat walking on a treadmill. Fourteen feature points were manually sampled from single frames. As with the pigeon sequence, data were approximated with a third-order Fourier series to obtain a generic walking cycle."


Reference

Bottari, D., Troje, N., Ley, P., Hense, M., Kekunnaya, R., & Röder, B. (2015). The neural development of the biological motion processing system does not rely on early visual input. Cortex, 71, 359-367. DOI: 10.1016/j.cortex.2015.07.029






Links to Pieces About Momentous Events

Remembering Katrina in the #BlackLivesMatter Movement by Tracey Ross

Hurricane Katrina Proved That If Black Lives Matter, So Must Climate Justice by Elizabeth Yeampierre

Project Katrina: A Decade of Resilience in New Orleans by Steven Gray

Hurricane Katrina, 10 Years Later, Buzzfeed's Katrina issue

ChR2: Anniversary: Optogenetics, special issue of Nature Neuroscience

ChR2 coming of age, editorial in Nature Neuroscience

Optogenetics and the future of neuroscience by Ed Boyden

Optogenetics: 10 years of microbial opsins in neuroscience by Karl Deisseroth

Optogenetics: 10 years after ChR2 in neurons—views from the community in Nature Neuroscience

10 years of neural opsins by Adam Calhoun

Estimating the reproducibility of psychological science in Science

Reproducibility Project: Psychology on Open Science Framework

How Reliable Are Psychology Studies? by Ed Yong

The Bayesian Reproducibility Project by Alexander Etz

A Life Well Lived, by those who maintain the Oliver Sacks, M.D. website.

Oliver Sacks, Neurologist Who Wrote About the Brain’s Quirks, Dies at 82, NY Times obituary

Oliver Sacks has left the building by Vaughan Bell

My Own Life, Oliver Sacks on Learning He Has Terminal Cancer


Mind Reading in the Red Room of "Listening"


"How am I supposed to work knowing that guy is listening to every thought that's going through my head? This is insane..."


David Thorogood and Ryan Cates are poor but brilliant Caltech grad students in Listening, a new neuro science fiction film by writer-director Khalil Sullins. Their secret garage lab invention of direct brain-to-brain communication has been hijacked by the CIA, who put it to nefarious use.





I'll take a closer look at the neuroscience (good and bad) in the next post.


Excessive use of filters? Perhaps...

Neurohackers Gone Wild!

Scene from Listening, a new neuro science fiction film by writer-director Khalil Sullins.


What are some of the goals of research in human neuroscience?
  • To explain how the mind works.
  • To unravel the mysteries of consciousness and free will.
  • To develop better treatments for mental and neurological illnesses.
  • To allow paralyzed individuals to walk again.

Brain decoding experiments that use fMRI or ECoG (direct recordings of the brain in epilepsy patients) to deduce what a person is looking at or saying or thinking have become increasingly popular as well.

They're still quite limited in scope, but any study that can invoke “mind reading” or “brain-to-brain” scenarios will attract the press like moths to a flame....

For example, here's how NeuroNews site Brain Decoder covered the latest “brain-to-brain communication” stunt and the requisite sci fi predictions:
Scientists Connect 2 Brains to Play “20 Questions”

Human brains can now be linked well enough for two people to play guessing games without speaking to each other, scientists report. The researchers hooked up several pairs of people to machines that connected their brains, allowing one to deduce what was on the other's mind.
. . .

This brain-to-brain interface technology could one day allow people to empathize or see each other's perspectives more easily by sending others concepts too difficult to explain in words, [author Andrea Stocco] said.

Mind reading! Yay! But this isn't what happened. No thoughts were decoded in the making of this paper (Stocco et al., 2015).

Instead, stimulation of visual cortex did all the “talking.” Player One looked at an LED that indicated “yes” (13 Hz flashes) or “no” (12 Hz flashes). Steady-state visual evoked potentials (a type of EEG signal very common in BCI research) varied according to flicker rate, and this binary code was transmitted to a second computer, which triggered a magnetic pulse delivered to the visual cortex of Player Two if the answer was yes. The TMS pulse in turn elicited a phosphene (a brief visual percept) that indicated yes (no phosphene indicated a “no” answer).
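As a toy illustration of that binary scheme, here is a sketch of how a single "yes/no" bit can be read out from a steady-state response by comparing spectral power at the two flicker rates. The simulation parameters (sample rate, duration, noise level) are my own assumptions; real SSVEP decoding works on far noisier multichannel EEG.

```python
import numpy as np

FS = 250      # sample rate in Hz (assumed, not from the paper)
DUR = 2.0     # seconds of steady-state EEG per answer

def simulate_ssvep(freq_hz, noise=1.0, seed=0):
    """Toy steady-state response: a sinusoid at the flicker rate plus noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(0, DUR, 1 / FS)
    return np.sin(2 * np.pi * freq_hz * t) + noise * rng.normal(size=t.size)

def decode_answer(eeg):
    """Compare spectral magnitude at 13 Hz ('yes') vs 12 Hz ('no')."""
    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(eeg.size, 1 / FS)
    p13 = spectrum[np.argmin(np.abs(freqs - 13))]
    p12 = spectrum[np.argmin(np.abs(freqs - 12))]
    return "yes" if p13 > p12 else "no"

# Player One attends the 13 Hz ("yes") LED; the decoded bit would then
# gate a TMS pulse over Player Two's visual cortex (phosphene = "yes").
answer = decode_answer(simulate_ssvep(13))
```

The "communication" here is exactly one bit per trial, which is why calling it mind reading is such a stretch.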

Eventually, we see some backpedalling in the Brain Decoder article:
Ideally, brain-to-brain interfaces would one day allow one person to think about an object, say a hammer, and another to know this, along with the hammer's shape and what the first person wanted to use it for. "That would be the ideal type of complexity of information we want to achieve," Stocco said. "We don't know whether that future is possible." 

Well, um, we already have the first half of the equation to some small degree (Naselaris et al. 2015 decoded mental images of remembered scenes)...


But the Big Prize goes to.... the decoders of covert speech, or inner thoughts!! (Martin et al. 2014)

Scientists develop a brain decoder that can hear your inner thoughts

Brain decoder can eavesdrop on your inner voice



Listening to Your Thoughts

The new film Listening starts off with a riff on this work and spins into a dark and dangerous place where no thought is private. Given the preponderance of “hearing” metaphors above, it's fitting that the title is Listening, where fiction (in this case near-future science fiction) is stranger than truth. The hazard of watching a movie that depicts your field of expertise is that you nitpick every little thing (like the scalp EEG sensors that record from individual neurons). This impulse was exacerbated by a setting which is so near-future that it's present day.


From Marilyn Monroe Neurons to Carbon Nanotubes

But there were many things I did like about Listening.1  In particular, I enjoyed the way the plot developed in the second half of the film, especially in the last 30 minutes. On the lighter side was this amusing scene of a pompous professor lecturing on the real-life finding of Marilyn Monroe neurons (Quian Quiroga et al., 2005, 2009).




Caltech Professor: “For example, the subject is asked to think about Marilyn Monroe. My study suggests not only conscious control in the hippocampus and parahippocampal cortex, when the neuron....”

Conversation between two grad students in the back of class: “Hey, you hear about the new bioengineering transfer?” ...

Caltech Professor: “Mr. Thorogood, perhaps you can enlighten us all with Ryan's gossip? Or tell us what else we can conclude from this study?”

Ryan the douchy hardware guy: “We can conclude that all neurosurgeons are in love with Marilyn Monroe.”

David the thoughtful software guy: “A single neuron has not only the ability to carry complex code and abstract form but is also able to override sensory input through cognitive effort. It suggests thought is a stronger reality than the world around us.”

Caltech Professor: “Unfortunately, I think you're both correct.”


Ryan and David are grad students with Big Plans. They've set up a garage lab (with stolen computer equipment) to work on their secret EEG decoding project. Ryan the douche lets Jordan the hot bioengineering transfer into their boys' club, much to David's dismay.

Ryan: “She's assigned to Professor Hamomoto's experiment with ATP-powered cell-binding nanotube devices.” [maybe these?]

So she gets to stay in the garage. For the demonstration, Ryan sports an EEG net that looks remarkably like the ones made by EGI (shown below on the right).




Ryan reckons they'll put cell phone companies out of business with their mind reading invention, but David realizes they have a long way to go...




Jordan the hot bioengineering transfer: “Your mind can have a dozen thoughts in a millisecond 2 [really? how can you possibly assert this?] but it takes you five seconds to say 'hi sexy'?”

Ryan the douchy hardware guy: “It's not perfect.”

Jordan: “It's crap.”

.....

Jordan points out the decoding algorithm's response time is way too slow to be useful, and that recording from “a thousand neurons” 3 isn't enough... “you have to open the books.” David points out they're not neurosurgeons (who would implant intracranial electrodes for ECoG).

Jordan: “You don't need surgery... you need nanotubes.”

...and this leads to the most ridiculous scenario: intrathecal administration of said nanotubes [along with microscopic transistors to form molecular electrodes] via lumbar puncture (spinal injections) performed by complete novices wielding foot-long needles. [Direct administration into the cerebrospinal fluid bypasses difficulties with the impermeable blood-brain barrier.] But if you can get through that, and the heavy-handed use of color filters...




...you will be transported to the Red Room, where scary bald men “listen” to every thought [the direct brain-to-brain communication is one way only to avoid that nasty "circular feedback loop"].




Then more THINGS happen. It's not perfect. But it's not crap. I thought Listening was worth $4.99.

Available on Amazon and Vimeo.

Sometimes even The Neurocritic is willing to suspend disbelief...


Further Reading

Brain decoding: Reading minds: 2013 Nature News story by Kerri Smith.
“By scanning blobs of brain activity, scientists may be able to decode people's thoughts, their dreams and even their intentions.”

Neuroscience: ‘I built a brain decoder’: BBC Future
“What are you looking at? Scientist Jack Gallant can find out by decoding your thoughts, as Rose Eveleth discovers.”

Brain Decoding Project: mouse hippocampus
---A BRAIN Project: Brain Activity Mapping of Neural Codes for memory

One more step along the long road towards brain-to-brain interfaces: Nice blog coverage of the 20 Questions study by Pierre Mégevand.

Meet the Hackers Who Are Decrypting Your Brainwaves: Oh no they're not. But an interesting piece on the DIY EEG movement.


Footnotes

1 Some of the dialogue and the interpersonal relationships? Not as much.

2 Dozens of thoughts in 1/1000 of a second?? Perhaps she's being hyperbolic here... Well, popular lore says we have 70,000 thoughts per day, which comes out to only 0.8101851851851852 thoughts per second. But this is also absurd, since we haven't yet defined what a “thought” even is. Interesting factoid: the Laboratory of Neuroimaging (LONI) at UCLA has taken credit for this number. But they did offer some caveats:
*This is still an open question (how many thoughts does the average human brain processes in 1 day). LONI faculty have done some very preliminary studies using undergraduate student volunteers and have estimated that one may expect around 60-70K thoughts per day. These results are not peer-reviewed/published. There is no generally accepted definition of what "thought" is or how it is created. In our study, we had assumed that a "thought" is a sporadic single-idea cognitive concept resulting from the act of thinking, or produced by spontaneous systems-level cognitive brain activations.
theoracleofdelphi-ga had some interesting thoughts on the matter:
So there's the heart of the problem: No one really knows what the biological basis for a 'thought' is, so we can't compute how fast a brain can produce them. Once you figure out the biological basis for a thought (and return from the Nobel ceremony) you can ask the question again and expect a reasonable scientific answer.

In the mean time, you could probably get a bunch of psychologists to argue about the definition of a thought for a while, and get a varying set of answers that depend highly on the definitions.
Oh, I think they also said 30 thoughts per second at another point in the movie...

3 Yeah, here's the “one electrode, one neuron” fallacy. The reality is that a single EEG electrode records summed, synchronous activity from thousands of neurons, at the very least.


References

Herff C, Heger D, de Pesters A, Telaar D, Brunner P, Schalk G, Schultz T. (2015). Brain-to-text: decoding spoken phrases from phone representations in the brain. Front Neurosci. 9:217.

Liu H, Agam Y, Madsen JR, Kreiman G. (2009). Timing, timing, timing: fast decoding of object information from intracranial field potentials in human visual cortex. Neuron 62(2):281-90.

Martin S, Brunner P, Holdgraf C, Heinze HJ, Crone NE, Rieger J, Schalk G, Knight RT, Pasley BN. (2014). Decoding spectrotemporal features of overt and covert speech from the human cortex. Front Neuroeng. 7:14.

Naselaris T, Olman CA, Stansbury DE, Ugurbil K, Gallant JL. (2015). A voxel-wise encoding model for early visual areas decodes mental images of remembered scenes. Neuroimage 105:215-28.

Pasley BN, David SV, Mesgarani N, Flinker A, Shamma SA, Crone NE, Knight RT, Chang EF. (2012). Reconstructing speech from human auditory cortex. PLoS Biol. 10(1):e1001251.

Quian Quiroga R, Kraskov A, Koch C, Fried I. (2009). Explicit encoding of multimodal percepts by single neurons in the human brain. Curr Biol. 19(15):1308-13.

Quiroga RQ, Reddy L, Kreiman G, Koch C, Fried I. (2005). Invariant visual representation by single neurons in the human brain. Nature 435(7045):1102-7.

Stocco, A., Prat, C., Losey, D., Cronin, J., Wu, J., Abernethy, J., & Rao, R. (2015). Playing 20 Questions with the Mind: Collaborative Problem Solving by Humans Using a Brain-to-Brain Interface. PLOS ONE, 10 (9). DOI: 10.1371/journal.pone.0137303




Good Brain / Bad Brain

'Wiring diagrams' link lifestyle to brain function

Human Connectome Project finds surprising correlations between brain architecture and behavioural or demographic influences.

The brain’s wiring patterns can shed light on a person’s positive and negative traits, researchers report in Nature Neuroscience. The finding, published on 28 September, is the first from the Human Connectome Project (HCP), an international effort to map active connections between neurons in different parts of the brain.



What are some of these surprising conclusions about the living human brain?





Good Brain / Bad Brain



Smith et al. (2015):
“We identified one strong mode of population co-variation: subjects were predominantly spread along a single 'positive-negative' axis linking lifestyle, demographic and psychometric measures to each other and to a specific pattern of brain connectivity.”

Well. This sounds an awful lot like the Hegemony of the Western Binary as applied to resting state functional connectivity to me...

And hey, looks like IQ, years of education, socioeconomic status, the ability to delay reward, and life satisfaction give you a good brain.

“You can distinguish people with successful traits and successful lives versus those who are not so successful,” [Marcus Raichle] says.


The authors used canonical correlation analysis (CCA) to estimate how 280 demographic and behavioral subject measures and patterns of brain connectivity co-varied in a similar way across subjects (Smith et al., 2015):
“This analysis revealed a single highly significant CCA mode that relates functional connectomes to subject measures (r = 0.87, P < 10−5 corrected for multiple comparisons across all modes estimated).”

And who is not so “successful” (at least according to their chaotic and disconnected brains)?



Regular pot smokers:  “...one of the negative traits that pulled a brain farthest down the negative axis was marijuana use in recent weeks.”  Cue up additional funding for NIDA:  “...the finding emphasizes the importance of projects such as one launched by the US National Institute on Drug Abuse last week, which will follow 10,000 adolescents for 10 years to determine how marijuana and other drugs affect their brains.”

But what about wine coolers??




Why am I asking this?? Because in the subject measures, it was a little obvious that malt liquor was considered separately from beer/wine coolers. {Who drinks wine coolers? Who drinks malt liquor?}



In terms of alcohol content, the distinction is silly these days, since you can buy craft beers like Boatswain Double IPA (8.4% alcohol) for $2.29 at Trader Joe's. Unless those questions were retained as a code for race and socioeconomic status...

Hmm... 






I'm getting way off track here. My point is that presenting correlational HCP data in a binary manner without any sort of social context isn't a very flattering thing to do.

“I am my connectome,” says Sebastian Seung. What about the 460 participants in the study? What about you?


References

Reardon S (2015). 'Wiring diagrams' link lifestyle to brain function. Nature News. doi:10.1038/nature.2015.18442

Smith SM, Nichols TE, Vidaurre D, Winkler AM, Behrens TE, Glasser MF, Ugurbil K, Barch DM, Van Essen DC, Miller KL. (2015). A positive-negative mode of population covariation links brain connectivity, demographics and behavior. Nat Neurosci. 2015 Sep 28. doi: 10.1038/nn.4125.



“As a black woman interested in feminist movement, I am often asked whether being black is more important than being a woman; whether feminist struggle to end sexist oppression is more important than the struggle to end racism or vice versa. All such questions are rooted in competitive either/or thinking, the belief that the self is formed in opposition to an other...Most people are socialized to think in terms of opposition rather than compatibility. Rather than seeing anti-racist work as totally compatible with working to end sexist oppression, they often see them as two movements competing for first place.”

bell hooks, Feminist Theory: From Margin to Center

A few more words about good brains and bad brains



My previous Good Brain / Bad Brain post may have been a little out there, so here are four brief comments.

(1) HCP database.  The entire Human Connectome Project database (ConnectomeDB) is an amazing resource that's freely available (more details in Van Essen et al., 2013, 2015).


(2) Good reporting / bad reporting.  Smith et al. (2015) are to be commended for such an impressive body of work.1  But I still think it was remiss to report a population along a judgmental good/bad binary axis in a cursory manner. The correlation/causation conundrum needs more of a caveat than:
These analyses were driven by and report only correlations; inferring and interpreting the (presumably complex and diverse) causalities remains a challenging issue for the future.
...or else you're confronted with press coverage like this:
Are some brains wired for a lifestyle that includes education and high levels of satisfaction, while others are wired for anger, rule-breaking, and substance use?

“Wired” implies born that way, with no effects of living in poverty in a shitty neighborhood.

Oh, and my flippant observation about the wine cooler/malt liquor axis wasn't actually a major player in the canonical correlation analysis. But race and ethnicity information was indeed collected (but not used: “partly because the race measure is not quantitative, but consists of several distinct categories”).


(3) Ethics!  This brings up the larger issue of ethics. A whole host of personal participant information (e.g., genomics from everyone, including hundreds of identical twins) is included in the package. From Van Essen et al. (2013):
The released HCP data are not considered de-identified, insofar as certain combinations of HCP Restricted Data (available through a separate process) might allow identification of individuals as discussed below. It is accordingly important that all investigators who agree to Open Access Data Use Terms consult with their local IRB or Ethics Committee to determine whether the research needs to be approved or declared exempt. If needed and upon request, the HCP will provide a certificate stating that an investigator has accepted the HCP Open Access Data Use Terms. Because HCP participants come from families with twins and non-twin siblings, there is a risk that combinations of information about an individual (e.g., age by year; body weight and height; handedness) might lead to inadvertent identification, particularly by other family members, if these combinations were publicly released.

Oops.


Important Notice to Recipients and System Administrators of HCP Connectome In A Box Hard Drives

Thank you for acquiring a Connectome-in-a-Box that contains HCP image data.  This provides an easy and efficient way to transfer large HCP datasets to other labs and institutions wanting to process lots of data, especially when multiple investigators are involved. With it comes a need to insure compliance with HCP’s Data Use Terms as well as any institutional requirements.


And any participant in the study can look at the results and infer, because of their regular cannabis use and their father's history of heavy drinking, that they must have a “bad brain.” Do the investigators have an obligation to counsel them on what this might mean (and what they should do)? Yeah, stop smoking cigarettes and pot, but there's not much they can do about their father's substance abuse or their fluid intelligence.


(4) Biology.  Finally, I'm not sure what the finding means biologically. Across a population, there's a general mode of functional connectivity while participants lie in a scanner with nothing to do. That falls along an axis of “positive” and “negative” traits. And this pattern of correlated hemodynamic activity across 30 node-pair edges means....... what, exactly?

Every person's connectome is unique (“I am my connectome” for the thousandth time).2  But this mantra more commonly refers to the fine-grained structural connectome. You know, the kind that will live forever and be uploaded to a computer (see Amy Harmon's article on The Neuroscience of Immortality, which caused quite a splash).

What is the relationship between resting state functional connectivity and the implementation of thought and behavior via neural codes? This mapping must be highly individual. We know this because even in lowly organisms like flies, neurons in an olfactory region called the mushroom bodies show a striking degree of individuality in neural coding across animals.3 
At the single-cell level, we show that uniquely identifiable MBONs [mushroom body output neurons, n=34] display profoundly different tuning across different animals, but that tuning of the same neuron across the two hemispheres of an individual fly was nearly identical.

In other words, a fly's unique olfactory experience shapes the response properties of a tiny set of neurons, even for animals reared under the same conditions. “In several cases, we even recorded on the same day from progeny of the same cross, raised in the same food vial” (Hige et al., 2015).



I never know what to do with information like this, especially in the context of human brains, good and bad.....  Maybe: Are some fly MBONs wired for a wild lifestyle of apple cider vinegar?





Footnotes

1 Or maybe the result was a massive case of confirmation bias, as suggested in a private comment to me.

2 See this book review for an opposing view.

3 Fly paper via @fly_papers (also @debivort and @neuroecology).

4 Also see Neurocriminology in prohibition-era New York.

On the Long Way Down: The Neurophenomenology of Ketamine



Is ketamine a destructive club drug that damages the brain and bladder? With psychosis-like effects widely used as a model of schizophrenia? Or is ketamine an exciting new antidepressant, the “most important discovery in half a century”?

For years, I've been utterly fascinated by these separate strands of research that rarely (if ever) intersect. Why is that? Because there's no such thing as “one receptor, one behavior.” And because like most scientific endeavors, neuro-pharmacology/psychiatry research is highly specialized, with experts in one microfield ignoring the literature produced by another (though there are some exceptions).1

Ketamine is a dissociative anesthetic and PCP derivative that can produce hallucinations and feelings of detachment in non-clinical populations. Pharmacologically, it's an NMDA receptor antagonist that also acts on other systems (e.g., opioid). Today I'll focus on a recent neuroimaging study that looked at the downsides of ketamine: anhedonia, cognitive disorganization, and perceptual distortions (Pollak et al., 2015).




Imaging Phenomenologically Distinct Effects of Ketamine

In this study, 23 healthy male participants underwent arterial spin labeling (ASL) fMRI scanning while they were infused with either a high dose (0.26 mg/kg bolus + slow infusion) or a low dose (0.13 mg/kg bolus + slow infusion) of ketamine 2 (Pollak et al., 2015). For comparison, the typical dose used in depression studies is 0.5 mg/kg (Wan et al., 2015). Keep in mind that the number of participants in each condition was low, n=12 (after one was dropped) and n=10 respectively, so the results are quite preliminary.
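To put those mg/kg figures in absolute terms, here's a quick back-of-envelope comparison. The 70 kg body weight is my assumption for illustration, not a figure from the paper:

```python
# Back-of-envelope arithmetic converting mg/kg bolus doses to absolute mg
# for a hypothetical 70 kg participant (assumed weight, not from the paper).
weight_kg = 70

doses_mg_per_kg = {
    "low-dose bolus": 0.13,
    "high-dose bolus": 0.26,
    "typical depression-trial dose": 0.5,
}

absolute_mg = {name: dose * weight_kg for name, dose in doses_mg_per_kg.items()}

for name, mg in absolute_mg.items():
    print(f"{name}: {mg:.1f} mg")
```

So even the "high-dose" bolus here (about 18 mg for a 70 kg adult) is roughly half the bolus typically given in the antidepressant trials — which is what makes the negative effects reported below notable.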

ASL is a post-PET and BOLD-less technique for measuring cerebral blood flow (CBF) without the use of a radioactive tracer (Petcharunpaisan et al., 2010). Instead, water in arterial blood serves as a contrast agent, after being magnetically labeled by applying a 180 degree radiofrequency inversion pulse. Basically, it's a good method for monitoring CBF over a number of minutes.




ASL sequences were obtained before and 10 min after the start of ketamine infusion. Before and after the scan, participants rated their subjective symptoms of delusional thinking, perceptual distortion, cognitive disorganization, anhedonia, mania, and paranoia on the Psychotomimetic States Inventory (PSI). The study was completely open label, so it's not like they didn't know they were getting a mind-altering drug.

Behavioral ratings were quite variable (note the large error bars below), but generally the effects were larger in the high-dose group, as one might expect.


The changes in Perceptual Distortion and Cognitive Disorganization scores were significant for the low-dose group, with the addition of Delusional Thinking, Anhedonia, and Mania in the high-dose group. But again, it's important to remember there was no placebo condition, the significance levels were not all that impressive, and the n's were low.

The CBF results (below) show increases in anterior and subgenual cingulate cortex and decreases in superior and medial temporal cortex, similar to previous studies using PET.



Fig 2a (Pollak et al., 2015). Changes in CBF with ketamine in the low- and high-dose groups overlaid on a high-resolution T1-weighted image.


Did I say the n's were low? The Fig. 2b maps (not shown here) illustrated significant correlations with the Anhedonia and Cognitive Disorganization subscales, but these were based on 10 and 12 data points, when outliers can drive phenomenally large effects. One might like to say...
For [the high-dose] group, ketamine-induced anhedonia inversely related to orbitofrontal cortex CBF changes and cognitive disorganisation was positively correlated with CBF changes in posterior thalamus and the left inferior and middle temporal gyrus. Perceptual distortion was correlated with different regional CBF changes in the low- and high-dose groups.
  ...but this clearly requires replication studies with placebo comparisons and larger subject groups.
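To see how fragile correlations are at these sample sizes, here is a toy demonstration (made-up numbers, not the study's data) of how a single extreme subject can manufacture a sizable correlation out of pure noise:

```python
import numpy as np

rng = np.random.default_rng(1)

# Nine "subjects" with completely unrelated scores on two measures
x = rng.standard_normal(9)
y = rng.standard_normal(9)
r_without = np.corrcoef(x, y)[0, 1]

# Add one extreme subject who scores high on both measures
x10 = np.append(x, 6.0)
y10 = np.append(y, 6.0)
r_with = np.corrcoef(x10, y10)[0, 1]

print(f"r without the outlier: {r_without:+.2f}")
print(f"r with the outlier:    {r_with:+.2f}")
```

One data point out of ten can dominate the covariance, which is why correlation maps built on n=10 or n=12 deserve so much skepticism.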

Nonetheless, the fact remains that ketamine administration in healthy participants caused negative effects like anhedonia and cognitive disorganization at doses lower than those used in studies of treatment-resistant depression (many of which were also open label). Now you can say, “well, controls are not the same as patients with refractory depression” and you'd be right (see Footnote 1). “Glutamatergic signaling profiles” and symptom reports could show a variable relationship, with severe depression at the low end and schizophrenia at the high end (with controls somewhere in the middle).

A recent review of seven placebo-controlled, double-blind, randomized clinical trials of ketamine and other NMDA antagonists concluded (Newport et al., 2015):
The antidepressant efficacy of ketamine ... holds promise for future glutamate-modulating strategies; however, the ineffectiveness of other NMDA antagonists suggests that any forthcoming advances will depend on improving our understanding of ketamine’s mechanism of action. The fleeting nature of ketamine’s therapeutic benefit, coupled with its potential for abuse and neurotoxicity, suggest that its use in the clinical setting warrants caution.

The mysterious and paradoxical ways of ketamine continue...

So take it in don't hold your breath
The bottom's all I've found
We can't get higher than we get
On the Long Way Down





Further Reading

Ketamine for Depression: Yay or Neigh?

Warning about Ketamine in the American Journal of Psychiatry

Chronic Ketamine for Depression: An Unethical Case Study?

still more on ketamine for depression

Update on Ketamine in Palliative Care Settings

Ketamine - Magic Antidepressant, or Expensive Illusion? - by Neuroskeptic

Fighting Depression with Special K - by Scicurious


Footnotes

1 One exception is the present study, which discussed the divergent anhedonia results (compared to  previous findings of reduced anhedonia in depression). Another example is the work of Dr. John H. Krystal, which includes papers in both the schizophrenia and the treatment-resistant depression realms. However, most of the papers discuss only one and not the other. One notable exception (schizophrenia-related) said this:
...it is important to note that studies examining its effects on glutamateric pathways in the context of mood symptoms (178) may be highly informative for developing our understanding of its relevance to schizophrenia (111). Briefly, emerging models in this area postulate that ketamine may act as anti-depressant by promoting synaptic plasticity via intra-cellular signaling pathways, ultimately promoting brain-derived neurotrophic factor expression via synaptic potentiation (179) and in turns synaptic growth (178). In that sense, acute NMDAR antagonism may promote synaptic plasticity along specific pathways impacted in mood disorders, such as ventral medial PFC (180, 181, p. 916). Conversely, when administered to patients diagnosed with schizophrenia, NMDAR antagonists seem to worsen their symptom profile (182), perhaps by “pushing” an already aberrantly elevated glutamatergic signaling profile upward. Collectively such dissociable effects of ketamine may imply that along distinct circuits there may be an inverted-U relationship between ketamine’s effects and symptoms: depressed patients may be positioned on the low end of the inverted-U (178) and schizophrenia patents may be positioned on the higher end (183). Both task-based and resting-state functional connectivity techniques are well positioned to interrogate such system-level effects of NMDAR antagonists in humans.

2 Low-dose ketamine: target plasma level of 50–75 ng/mL was specified (in practice this approximated a rapid bolus of an average of 0.12 mg/kg over 20 s followed by a slow infusion of 0.31 mg/kg/h).

High-dose ketamine: target plasma level of 150 ng/mL was specified (in practice this approximated a rapid bolus of 0.26 mg/kg over 20 s followed by a slow infusion of 0.42 mg/kg/h).


References

Petcharunpaisan S, Ramalho J, Castillo M. (2010). Arterial spin labeling in neuroimaging. World J Radiol. 2(10):384-98.

Pollak, T., De Simoni, S., Barimani, B., Zelaya, F., Stone, J., & Mehta, M. (2015). Phenomenologically distinct psychotomimetic effects of ketamine are associated with cerebral blood flow changes in functionally relevant cerebral foci: a continuous arterial spin labelling study. Psychopharmacology. DOI: 10.1007/s00213-015-4078-8

Wan LB, Levitch CF, Perez AM, Brallier JW, Iosifescu DV, Chang LC, Foulkes A, Mathew SJ, Charney DS, Murrough JW. (2015). Ketamine safety and tolerability in clinical trials for treatment-resistant depression. J Clin Psychiatry 76(3):247-52.





Ophidianthropy: The Delusion of Being Transformed into a Snake


Scene from Sssssss (1973).

“When Dr. Stoner needs a new research assistant for his herpetological research, he recruits David Blake from the local college.  Oh, and he turns him into a snake for sh*ts and giggles.”

Movie Review by Jason Grey

Horror movies where people turn into snakes are relatively common (30 by one count), but clinical reports of delusional transmogrification into snakes are quite rare. This is in contrast to clinical lycanthropy, the delusion of turning into a wolf.

What follows are two frightening tales of unresolved mental illness, minimal followup, and oversharing (plus mistaking an April Fool's joke for a real finding).

THERE ARE NO ACTUAL PICTURES OF SNAKES in this post [an important note for snake phobics].

The first case of ophidianthropy was described by Kattimani et al. (2010):
A 24 year young girl presented to us with complaints that she had died 15 days before and that in her stead she had been turned into a live snake. At times she would try to bite others claiming that she was a snake. ... We showed her photos of snakes and when she was made to face the large mirror she failed to identify herself as her real human self and described herself as snake. She described having snake skin covering her and that her entire body was that of snake except for her spirit inside.  ...  She was distressed that others did not understand or share her conviction. She felt hopeless that nothing could make her turn into real self. She made suicidal gestures and attempted to hang herself twice on the ward...

The initial diagnosis was severe depressive disorder with psychotic features. A series of drug trials was unsuccessful (Prozac and four different antipsychotics), and a course of 10 ECT sessions had no lasting effect on her delusions. The authors couldn't decide whether the patient should be formally diagnosed with schizophrenia or a more general psychotic illness. Her most recent treatment regimen (escitalopram plus quetiapine) was also a failure because the snake delusion persisted.

“Our next plan is to employ supportive psychotherapy in combination with pharmacotherapy,” said the authors (but we never find out what happened to her). Not a positive outcome...



Scene from Sssssss (1973).


Ophidianthropy with paranoid schizophrenia, cannabis use, bestiality, and history of epilepsy

The second case is even more bizarre, with a laundry list of delusions and syndromes (Mondal, 2014):
A 23 year old, married, Hindu male, with past history of  ... seizures..., personal history of non pathological consumption of bhang and alcohol for the last nine years and one incident of illicit sexual intercourse with a buffalo at the age of 18 years presented ... with the chief complains of muttering, fearfulness, wandering tendency ... and hearing of voices inaudible to others for the last one month. ... he sat cross legged with hands folded in a typical posture resembling the hood of a snake. ... The patient said that he inhaled the breath of a snake passing by him following which he changed into a snake. Though he had a human figure, he could feel himself poisonous inside and to have grown a fang on the lower set of his teeth. He also had the urge to bite others but somehow controlled the desire. He said that he was not comfortable with humans then but would be happy on seeing a snake, identifying it belonging to his species. ... He says that he was converted back to a human being by the help of a parrot, which took away his snake fangs by inhaling his breath and by a cat who ate up his snake flesh once when he was lying on the ground. ...  the patient also had thought alienation phenomena in the form of thought blocking, thought withdrawal and thought broadcasting, delusion of persecution, delusion of reference, delusion of infidelity [Othello syndrome], the Fregoli delusion, bizarre delusion, nihilistic delusion [Cotard's syndrome], somatic passivity, somatic hallucinations, made act [?], third person auditory hallucinations, derealization and depersonalisation. He was diagnosed as a case of paranoid schizophrenia as per ICD 10.

Wow.

He was given the antipsychotic haloperidol while being treated as an inpatient for 10 days. Some of his symptoms improved but others did not. “Long term follow up is not available.”

The discussion of this case is a bit... terrifying:
Lycanthropy encompasses two aspects, the first one consisting of primary lupine delusions and associated behavioural deviations termed as lycomania, and the second aspect being a psychosomatic problem called as lycosomatization (Kydd et al., 1991).
Kydd, O.U., Major, A., Minor, C (1991). A really neat, squeaky-clean isolation and characterization of two lycanthropogens from nearly subhuman populations of Homo sapiens. J. Ultratough Molec. Biochem. 101: 3521-3532.  [this is obviously a fake citation]
Endogenous lycanthropogens responsible for lycomania are lupinone and buldogone which differ by only one carbon atom in their ring structure; their plasma level having a lunar periodicity with peak level during the week of full moon. Lycosomatization likely depends on the simultaneous secretion of suprathreshold levels of both lupinone and the peptide lycanthrokinin, a second mediator, reported to be secreted by the pineal gland, that “initiates and maintains the lycanthropic process” (Davis et al., 1992). Thus, secretion of lupinone without lycanthrokinin results in only lycomania. In our patient these molecular changes were not investigated.

oh my god, the paper by Davis et al. on the Psychopharmacology of Lycanthropy (and "endogenous lycanthropogens") was published in the April 1, 1992 issue of the Canadian Medical Association Journal. There is no such thing as lupinone and buldogone.



Fig. 1 (Davis et al., 1992): Structural formulas of endogenous lycanthropogens.


I know the authors are non-native English speakers, but where was the peer review for the Asian Journal of Psychiatry??  We might as well return to the review for Sssssss, which was more thorough.




 THE MORGUE:
   David Blake -
Our hapless victim.  David is a college student who gets recruited by Dr. Stoner to help out at his farm, and be his latest test subject.  He's a nice guy, and there really is not much to say about him, as he's pretty bland until he starts growing scales.

   Dr. Carl Stoner - The villain of our piece.  He's a snake researcher looking for new grant money, and a new test subject.  He actually means well enough, and is looking to advance humanity, but in classic horror movie fashion, he plays God and things go too far.

   Kristine Stoner - The doctor's daughter, who is also interested in snakes.  Especially David's.  She's smart, and kind, and again a bit of a blank slate beyond those traits.  Loyal to a fault with her father.

   Dr. Daniels - A minor character, but Stoner's chief rival, and the man who holds the purse strings.  The two doctors have an antagonistic relationship, but there seems to be an undercurrent of past friendship as well, overshadowed by Daniels' position.  Or I'm reading too much into things.

Sssssss has a score of 13% on Rotten Tomatoes. We don't have a similar rating system for journal articles, but there's always PubMed Commons and PubPeer...


Further Reading

People Who Change into Snakes in Movies - from California Herps

Snake me up before you go-go: An unusual case of ophidianthropy - by Dr Mark Griffiths

Psychopharmacology of Lycanthropy

Werewolves of London, Ontario


References

Davis WM, Wellwuff HG, Garew L, Kydd OU. (1992). Psychopharmacology of lycanthropy. CMAJ. Apr 1;146(7):1191-7.

Kattimani S, Menon V, Srivastava MK, Mukharjee A. (2010). Ophidianthropy: the case of a woman who ‘Turned into a Snake’. Psychiatry On-Line.

Mondal, G., Nizamie, S., Mukherjee, N., Tikka, S., & Jaiswal, B. (2014). The ‘snake’ man: Ophidianthropy in a case of schizophrenia, along with literature review, Asian Journal of Psychiatry, 12, 148-149 DOI: 10.1016/j.ajp.2014.10.002


Buried Alive! The Immersive Experience


Ryan Reynolds in Buried (2010)


The pathological fear of being buried alive is called taphophobia.1  This seems like a perfectly rational fear to me, especially if one is claustrophobic and enjoys horror movies and Edgar Allan Poe short stories. Within a modern medical context, however, it is simply not possible for a person to be buried while still alive.

But this wasn't always the case. In the 19th century, true stories of premature burial were common, appearing in newspapers and medical journals of the day. Tebb and Vollum (1896) published a 400 page tome (Premature burial and how it may be prevented: with special reference to trance, catalepsy, and other forms of suspended animation) that was full of such examples:

The British Medical Journal, December 8, 1877,
p. 819, inserts the following : —

"BURIED ALIVE.

"A correspondent at Naples states that the Appeal Court has had before it a case not likely to inspire confidence in the minds of those who look forward with horror to the possibility of being buried alive. It appeared from the evidence that some time ago a woman was interred with all the usual formalities, it being believed that she was dead, while she was only in a trance. Some days afterwards, the grave in which she had been placed being opened for the reception of another body, it was found that the clothes which covered the unfortunate woman were torn to pieces, and that she had even broken her limbs in attempting to extricate herself from the living tomb. The Court, after hearing the case, sentenced the doctor who had signed the certificate of decease, and the mayor who had authorised the interment, each to three months' imprisonment for involuntary manslaughter."

To avoid this fate worse than death, contraptions known as “safety coffins” were popular, with air tubes, bells, flags, and/or burning lamps (Dossey, 2007). Some taphophobes went to great lengths to outline specific instructions for handling their corpse, to prevent such an ante-mortem horror from happening to them. Some might even say these directives were a form of “overkill”...

From the Lancet, August 20, 1864, p. 219.

"PREMATURE INTERMENT.

"Amongst the papers left by the great Meyerbeer, were some which showed that he had a profound dread of premature interment. He directed, it is stated, that his body should be left for ten days undisturbed, with the face uncovered, and watched night and day. Bells were to be fastened to his feet. And at the end of the second day veins were to be opened in the arm and leg. This is the gossip of the capital in which he died. The first impression is that such a fear is morbid. No doubt fewer precautions would suffice, but now and again cases occur which seem to warrant such a feeling, and to show that want of caution may lead to premature interment in cases unknown. An instance is mentioned by the Ost. Deutsche Post of Vienna. A few days since, runs the story, in the establishment of the Brothers of Charity in that capital, the bell of the dead-room was heard to ring violently, and on one of the attendants proceeding to the place to ascertain the cause, he was surprised at seeing one of the supposed dead men pulling the bell-rope. He was removed immediately to another room, and hopes are entertained of his recovery."

Here's a particularly gruesome one:

From the Daily Telegraph, January 18, 1889.

"A gendarme was buried alive the other day in a village near Grenoble. The man had become intoxicated on potato brandy, and fell into a profound sleep. After twenty hours passed in slumber, his friends considered him to be dead, particularly as his body assumed the usual rigidity of a corpse. When the sexton, however, was lowering the remains of the ill-fated gendarme into the grave, he heard moans and knocks proceeding from the interior of the 'four-boards.' He immediately bored holes in the sides of the coffin, to let in air, and then knocked off the lid. The gendarme had, however, ceased to live, having horribly mutilated his head in his frantic but futile efforts to burst his coffin open."

Doesn't that sound like fun? Wouldn't you like to experience this yourself? Now you can!



Taphobos, an immersive coffin experience (by James Brown)


How does it work?

The game uses a real life coffin, an Oculus Rift, a PC and some microphones. One player gets in the coffin with the Rift on, together with a headset + microphone. The other player plays on a PC again with mic + headset, this player will play a first person game where they must work with the buried player to uncover where the coffin is and rescue the trapped player before their oxygen runs out. This is all powered by the Unity engine.

But why?? (Brown, 2015):

This work is intended to explore “uncomfortable experiences and interactions” as part of academic research in the Human Computer Interaction field (HCI) from an MSc by Research in Computer Science student, James Brown. The player inside the coffin will experience various emotions as they are put in and then try to get out of the confined space. Claustrophobia as well as the fear of being buried alive “taphophobia” may well affect players of the game and they must cope with these emotions as they play.





Further Reading

taphobos.com

Buried Alive! (October 31, 2011)


Footnote

1 Also spelled taphephobia. From the Greek taphos, or grave.


References

Brown J. (2015). Taphobos: An Immersive Coffin Experience. British HCI 2015, July 13-17, 2015, Lincoln, United Kingdom.

Dossey L. (2007). The undead: botched burials, safety coffins, and the fear of the grave. Explore (NY). 3:347-54.

Tebb W, Vollum EP. (1896). Premature burial and how it may be prevented: with special reference to trance, catalepsy, and other forms of suspended animation. SWAN SONNENSCHEIN & CO., LIM.: London.  {archive.org}

Obesity Is Not Like Being "Addicted to Food"

Credit: Image courtesy of Aalto University


Is it possible to be “addicted” to food, much like an addiction to substances (e.g., alcohol, cocaine, opiates) or behaviors (gambling, shopping, Facebook)? An extensive and growing literature uses this terminology in the context of the “obesity epidemic”, and looks for the root genetic and neurobiological causes (Carlier et al., 2015; Volkow & Bailer, 2015).


Fig. 1 (Meule, 2015). Number of scientific publications on food addiction (1990-2014). Web of Science search term “food addiction”.


Figure 1 might lead you to believe that the term “food addiction” was invented in the late 2000s by NIDA. But this term is not new at all, as Adrian Meule (2015) explained in his historical overview, Back by Popular Demand: A Narrative Review on the History of Food Addiction Research. Dr. Theron G. Randolph wrote about food addiction in 1956 (he also wrote about food allergies).

Fig. 2 (Meule, 2015). History of food addiction research.


Thus, the concept of food addiction predates the documented rise in obesity in the US, which really took off in the late 80s to late 90s (as shown below).1

Prevalence of Obesity in the United States, 1960-2012

Years        Prevalence
1960-62      12.8%
1971-74      14.1%
1976-80      14.5%
1988-89      22.5%
1999-2000    30.5%
2007-08      33.8%
2011-12      34.9%

Sources: Flegal et al. 1998, 2002, 2010; Ogden et al. 2014


One problem with the “food addiction” construct is that you can live without alcohol and gambling, but you'll die if you don't eat. Complete abstinence is not an option.2

Another problem is that most obese people simply don't show signs of addiction (Hebebrand, 2015):
...irrespective of whether scientific evidence will justify use of the term food and/or eating addiction, most obese individuals have neither a food nor an eating addiction.3 Obesity frequently develops slowly over many years; only a slight energy surplus is required to in the longer term develop overweight. Genetic, neuroendocrine, physiological and environmental research has taught us that obesity is a complex disorder with many risk factors, each of which have small individual effects and interact in a complex manner. The notion of addiction as a major cause of obesity potentially entails endless and fruitless debates, when it is clearly not relevant to the great majority of cases of overweight and obesity.

Still not convinced? Surely, differences in the brains of obese individuals point to an addiction. The dopamine system is altered, right, so this must mean they're addicted to food? Well, think again, because the evidence for this is inconsistent (Volkow et al., 2013; Ziauddeen & Fletcher, 2013).

An important new paper by a Finnish research group has shown that D2 dopamine receptor binding in obese women is not different from that in lean participants (Karlsson et al., 2015). Conversely, μ-opioid receptor (MOR) binding is reduced, consistent with lowered hedonic processing. After the women had bariatric surgery (resulting in mean weight loss of 26.1 kg, or 57.5 lbs), MOR returned to control values, while the unaltered D2 receptors stayed the same.

In the study, 16 obese women (mean BMI=40.4, age 42.8) had PET scans before and six months after undergoing the standard Gastric Bypass procedure (Roux-en-Y Gastric Bypass) or the Sleeve Gastrectomy. A comparison group of non-obese women (BMI=22.7, age 44.9) was also scanned. The radiotracer [11C]carfentanil measured MOR availability and [11C]raclopride measured D2R availability in two separate sessions. The opioid and dopamine systems are famous for their roles in neural circuits for “liking” (pleasurable consumption) and “wanting” (incentive/motivation), respectively (Castro & Berridge, 2014).

The pre-operative PET scans in the obese women showed that MOR binding was significantly lower in a number of reward-related regions, including ventral striatum, dorsal caudate, putamen, insula, amygdala, thalamus, orbitofrontal cortex and posterior cingulate cortex. Six months after surgery, there was an overall 23% increase in MOR availability, which was no longer different from controls.



Fig. 1 (modified from Karlsson et al., 2015). Top: μ-opioid receptors are reduced in obese participants pre-operatively (middle), but after bariatric surgery (right) they recover to control levels (left). Bottom: D2 receptors are unaffected in the obese participants.


Karlsson et al. (2015) suggest that:
The MOR system promotes hedonic [pleasurable] aspects of feeding, and this can make obese individuals susceptible to overeating in order to gain the desired hedonic response from food consumption, which may further promote pathological eating. We propose that at the initial stages of weight gain, excessive eating may cause perpetual overstimulation of the MOR system, leading to subsequent MOR downregulation.  ...  However, bariatric surgery-induced weight loss and decreased food intake may reverse this process.

The unchanging striatal dopamine D2 receptor densities in the obese participants are in stark contrast to what is seen in individuals who are addicted to stimulant drugs, such as cocaine and methamphetamine (Volkow et al., 2001). Drugs of abuse are consistently associated with decreases in D2 receptors.



Fig. 1 (modified from Volkow et al., 2001). Ratio of the Distribution Volume of [11C]Raclopride in the Striatum (Normalized to the Distribution Volume in the Cerebellum) in a Non-Drug-Abusing Comparison Subject and a Methamphetamine Abuser.


So the next time you see a stupid ass headline like, “Cheese really is crack. Study reveals cheese is as addictive as drugs”, you'll know the writer is on crack.


Further Reading - The Scicurious Collection on Obesity

Overeating and Obesity: Should we really call it food addiction?

No, cheese is not just like crack

Dopamine and Obesity: The D2 Receptor

Dopamine and Obesity: The Food Addiction?

Cheesecake-eating rats and food addiction, a commentary


Footnotes

1Not surprisingly, papers on the so-called obesity epidemic lagged behind the late 80s-mid 90s rise in prevalence.

Number of papers on "obesity epidemic" in PubMed (1996-2015)


2Notice in Fig. 2 that anorexia is considered the opposite: an addiction to starving.

3Binge eating disorder (BED) might be another story, and I'll refer you to an informative post by Scicurious for discussion of that issue. You do not have to be obese (or even overweight) to have BED.


References

Carlier N, Marshe VS, Cmorejova J, Davis C, Müller DJ. (2015). Genetic Similarities between Compulsive Overeating and Addiction Phenotypes: A Case for "Food Addiction"? Curr Psychiatry Rep. 17(12):96.

Castro, D., & Berridge, K. (2014). Advances in the neurobiological bases for food ‘liking’ versus ‘wanting’. Physiology & Behavior, 136, 22-30. DOI: 10.1016/j.physbeh.2014.05.022

Karlsson, H., Tuulari, J., Tuominen, L., Hirvonen, J., Honka, H., Parkkola, R., Helin, S., Salminen, P., Nuutila, P., & Nummenmaa, L. (2015). Weight loss after bariatric surgery normalizes brain opioid receptors in morbid obesity. Molecular Psychiatry. DOI: 10.1038/mp.2015.153

Meule A (2015). Back by Popular Demand: A Narrative Review on the History of Food Addiction Research. The Yale journal of biology and medicine, 88 (3), 295-302 PMID: 26339213

Volkow ND, Baler RD. (2015). NOW vs LATER brain circuits: implications for obesity and addiction. Trends Neurosci. 38(6):345-52.

Volkow ND, Wang GJ, Tomasi D, Baler RD. (2013). Obesity and addiction: neurobiological overlaps. Obes Rev. 14(1):2-18.

Ziauddeen H, Fletcher PC. (2013). Is food addiction a valid and useful concept? Obes Rev. 14(1):19-28.

The Neuroscience of Social Media: An Unofficial History


There's a new article in Trends in Cognitive Sciences about how neuroscientists can incorporate social media into their research on the neural correlates of social cognition (Meshi et al., 2015). The authors outlined the sorts of social behaviors that can be studied via participants' use of Twitter, Facebook, Instagram, etc.: (1) broadcasting information; (2) receiving feedback; (3) observing others' broadcasts; (4) providing feedback; (5) comparing self to others.


Meshi, Tamir, and Heekeren / Trends in Cognitive Sciences (2015)


More broadly, these activities tap into processes and constructs like emotional state, personality, social conformity, and how people manage their self-presentation and social connections. You know, things that exist IRL (this is an important point to keep in mind for later).

The neural systems that mediate these phenomena, as studied by social cognitive neuroscience types, are the Mentalizing Network (in blue below), the Self-Referential Network (red), and the Reward Network (green).


Fig. 2 (Meshi et al., 2015). Proposed Brain Networks Involved in Social Media Use. (i) mentalizing network: dorsomedial prefrontal cortex (DMPFC), temporoparietal junction (TPJ), anterior temporal lobe (ATL), inferior frontal gyrus (IFG), posterior cingulate cortex/precuneus (PCC). (ii) self-referential network: medial prefrontal cortex (MPFC) and PCC. (iii) reward network: ventromedial prefrontal cortex (VMPFC), ventral striatum (VS), ventral tegmental area (VTA).


The article's publication was announced on social media:


I anticipated this day in 2009, when I wrote several satirical articles about the neurology of Twitter.  I proposed that someone should do a study to examine the neural correlates of Twitter use:
It was bound to happen. Some neuroimaging lab will conduct an actual fMRI experiment to examine the so-called "Neural Correlates of Twitter" -- so why not write a preemptive blog post to report on the predicted results from such a study, before anyone can publish the actual findings?

Here are the conditions I proposed, and the predicted results (a portion of the original post is reproduced below).




A low-level baseline condition (viewing "+") and an active baseline condition (reading the public timeline [public timeline no longer exists] of random tweets from strangers) will be compared to three active conditions:

(1) Celebrity Fluff

(2) Social Media Marketing Drivel

(3) Friends on your Following List

... The hemodynamic response function to the active control condition will be compared to those from Conditions 1-3 above. Contrasts between each of these conditions and the low-level baseline will also be performed.

The major predicted results are as follows:
Fig. 2A. (Mitchell et al., 2006). A region of ventral mPFC showed greater activation during judgments of the target to whom participants considered themselves to be more similar.

  • Reading the stream of Celebrity Fluff will activate the frontal eye fields to a much greater extent than the control condition, as the participants will be engaged in rolling their eyes in response to the inane banter.
Figure from Paul Pietsch, Ph.D. The frontal eye fields are in a stamp-sized zone at the posterior end of the middle frontal gyri.

  • Reading the stream of Social Media Marketing Drivel will tax the neural circuits involved in generating a feeling of disgust, including the anterior insula, ventrolateral prefrontal cortex-temporal pole, and putamen-globus pallidus (Mataix-Cols et al., 2008).
Fig. 1A (Jabbi et al., 2008). Coronal slice (y = 18) showing the location of the ROI (white) previously shown to be involved in the experience and observation of disgust.

In conclusion, we predict that the observed patterns of brain activity will be dependent on the nature of the Twitter material being read. These distinct neural networks are expected to reflect the cognitive, emotional, and visceral processes underlying the rapidly changing content of digital media, which ultimately results in "rewiring" of the brain.




Back to the present post...

Not too far off, eh?

Although the TICS piece mentioned that seven social media neuroscience articles have been published to date1 (none quite like that one), it didn't review them. Bloggers have covered some of these (e.g., The Facebook Brain and More Friends on Facebook Does NOT Equal a Larger Amygdala) and related topics like social media use and personality, Facebook neuromarketing, metaphorical Facebook cells, Twitter psychosis (interview), “internet addiction”, textmania, and the lack of evidence that social network sites “damage social relationships” or cause depression.

After discussing the many ways in which social media data can be used as a proxy for real-world behavior, Meshi et al. mentioned some conspicuous differences between online and offline behavior (e.g., online disinhibition as illustrated by trolls, overly disclosive trainwreck LiveJournals, and TMI). This brings us to the “What the Internet is doing to our brains” brigade of unsupported scaremongering:

Social networking websites are causing alarming changes in the brains of young users, an eminent scientist has warned.

Sites such as Facebook, Twitter and Bebo are said to shorten attention spans, encourage instant gratification and make young people more self-centred.

The claims from neuroscientist Susan Greenfield will make disturbing reading for the millions whose social lives depend on logging on to their favourite websites each day.

Susan Greenfield, Susan Greenfield

No history of social media neuroscience is complete without the unsubstantiated claims of Baroness Susan Greenfield, an extremely prominent British neuroscientist, author, and broadcaster: 'My fear is that these technologies are infantilising the brain into the state of small children who are attracted by buzzing noises and bright lights, who have a small attention span and who live for the moment.' Although she declares the dangers of digital Mind Change far and wide, such statements are not backed by careful peer-reviewed studies.

Susan Greenfield: I am not some greedy harridan

She is concerned that those who live only in the present, online, don’t allow their malleable brains to develop properly. “It’s not going to destroy the planet but is it going to be a planet worth living in if you have a load of breezy people who go around saying yaka-wow. Is that the society we want?”

A team of British psychologists, neuroscientists, bloggers, and science writers has been trying for ages to rebut the Baroness, asking her to produce reliable evidence for her dire assertions (see Appendix).

The neuroscience of social media isn't just emerging. It's been with us for over ten years.


Footnote

1One of these seven references is not a peer-reviewed paper; it's an abstract for a conference that's starting in a few days. I found it here: Facebook Network Structure and Brain Reactivity to Social Exclusion.

And there are actually more publications than that. One was covered in a post by Mo Costandi, Shared brain activity predicts audience preferences. There was a review article on Social Rewards and Social Networks in the Human Brain. There's a very recent paper on cortisol and Facebook behaviors in teens (likely that Meshi et al. hadn't seen it). But oddly, the 2012 TICS commentary by Stafford and Bell (Brain network: social media and the cognitive scientist) wasn't cited.


Reference

Meshi D, Tamir DI, Heekeren HR (2015). The Emerging Neuroscience of Social Media. Trends in Cognitive Sciences. DOI: 10.1016/j.tics.2015.09.004


Appendix: The “Rational UK Neuroscientists and Writers vs. Susan Greenfield” Collection

Breezy People Mind Hacks (Vaughan Bell) remember #yakawow?

Does the internet rewire your brain? Mind Hacks/BBC Future (Tom Stafford)

The brain melting internet Mind Hacks (Vaughan Bell)

The elusive hypothesis of Baroness Greenfield The Lay Scientist (Martin Robbins)

Mind Change: Susan Greenfield has a big idea, but what is it? The Lay Scientist (Martin Robbins)

Twitter Vs Dr. Susan Greenfield Neurobonkers (Simon Oxenham)

A little more conversation Speaking Out (Sophie Scott)

Susan Greenfield's Dopamine Disaster Neuroskeptic

Is the internet changing our brains? BPS Research Digest (Christian Jarrett)

Chilling warning to parents from top neuroscientist Bad Science (Ben Goldacre)

Digital tech, the BMJ, and The Baroness Mind Hacks (Vaughan Bell)

An open letter to Baroness Susan Greenfield BishopBlog (Dorothy Bishop)

On Greenfield Counterbalanced (Pete Etchells)

Facebook will destroy your children's brains The Lay Scientist (Martin Robbins)
Social media sites like Facebook and Twitter have left a generation of young adults vulnerable to degeneration of the brain, we can exclusively reveal for aboutthefifthtime. Symptoms include self-obsession, short attention spans and a childlike desire for constant feedback, according to a 'top scientist' with no record of published research on the issue.
. . . 

The scientist believes that use of the internet – and computer games – could 'rewire' the brain, causing neurons to establish new connections and pathways. "Rewiring itself is something that the brain does naturally all the time," the professor said, "but the phrase 'rewiring the brain' sounds really dramatic and chilling, so I like to use it to make it seem like I'm talking about a profound and unnatural change, even though it isn't."


Happiness Is a Large Precuneus


What is happiness, and how do we find it? There are 93,290 books on happiness at Amazon.com. Happiness is Life's Most Important Skill, an Advantage and a Project and a Hypothesis that we can Stumble On and Hard-Wire in 21 Days.

The Pursuit of Happiness is an Unalienable Right granted to all human beings, but it also generates billions of dollars for the self-help industry.

And now the search for happiness is over! Scientists have determined that happiness is located in a small region of your right medial parietal lobe. Positive psychology gurus will have to adapt to the changing landscape or lose their market edge. “My seven practical, actionable principles are guaranteed to increase the size of your precuneus or your money back.”




The structural neural substrate of subjective happiness is the precuneus.

A new paper has reported that happiness is related to the volume of gray matter in a 222.8 mm3 cluster of the right precuneus (Sato et al., 2015). What does this mean? Taking the finding at face value, there was a correlation (not a causal relationship) between precuneus gray matter volume and scores on the Japanese version of the Subjective Happiness Scale.1



Fig. 1 (modified from Sato et al., 2015). Left: Statistical parametric map (p < 0.001, peak-level uncorrected for display purposes). The blue cross indicates the location of the peak voxel. Right: Scatter plot of the adjusted gray matter volume as a function of the subjective happiness score at the peak voxel. [NOTE: Haven't we agreed to not show regression lines through scatter plots based on the single voxel where the effect is the largest??]
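The bracketed complaint is about non-independence: if you search thousands of voxels for the strongest brain-behavior correlation and then plot the scatter at that same peak voxel, the plotted correlation is inflated by the selection step itself. A hypothetical simulation (pure noise; n = 51 to match the study's sample size, while the voxel count is made up) shows how large a "peak" correlation comes for free:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 51, 5000  # n matches Sato et al.; voxel count is arbitrary

happiness = rng.standard_normal(n_subjects)       # fake happiness scores
gm = rng.standard_normal((n_subjects, n_voxels))  # fake gray matter, pure noise

# Correlate the score with every voxel, then select the peak voxel --
# exactly the step that makes a peak-voxel scatter plot circular.
r = np.array([np.corrcoef(happiness, gm[:, v])[0, 1] for v in range(n_voxels)])
peak = np.abs(r).argmax()
print(f"peak-voxel |r| = {abs(r[peak]):.2f}")  # sizable despite zero true effect
```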


“The search for happiness: Using MRI to find where happiness happens,” said one deceptive headline. Should we accept the claim that one small region of the brain is entirely responsible for generating and maintaining this complex and desirable state of being?

NO. Of course not. And the experimental subjects were not actively involved in any sort of task at all. The study used a static measure of gray matter volume in four brain Regions of Interest (ROIs): left anterior cingulate gyrus, left posterior cingulate gyrus, right precuneus, and left amygdala. These ROIs were based on an fMRI activation study in 26 German men (mean age 33 yrs) who underwent a mood induction procedure (Habel et al., 2005). The German participants viewed pictures of faces with happy expressions and were told to “Look at each face and use it to help you to feel happy.” The brain activity elicited by happy faces was compared to activity elicited by a non-emotional control condition. Eight regions were reported in their Table 1.


Table 1 (modified from Habel et al., 2005).


Only four of those regions were selected as ROIs by Sato et al. (2015). One of these was a tiny 12-voxel region in the paracentral lobule, which Sato et al. (2015) called precuneus.



Image: John A Beal, PhD. Dept. of Cellular Biology & Anatomy, Louisiana State University Health Sciences Center Shreveport.


Before you say I'm being overly pedantic, we can agree that the selected coordinates are at the border of the precuneus and the paracentral lobule. The more interesting fact is that the sadness induction of Habel et al. (2005) implicated a very large region of the posterior precuneus and surrounding regions (1562 voxels). An area over 100 times larger than the Happy Precuneus.

Oops. But the precuneus contains multitudes, so maybe it's not so tragic. The precuneus is potentially involved in very lofty functions like consciousness and self-awareness and the recollection of autobiographical memories. It's also a functional core of the default-mode network (Utevsky et al., 2014), which is active during daydreaming and mind wandering and unconstrained thinking.

But it seems a bit problematic to use hand-picked ROIs from a study of transient and mild “happy” states (in a population of German males) to predict a stable trait of subjective happiness in a culturally distinct group of younger Japanese college students (26 women, 25 men).


Cross-Cultural Notions of Happiness

Isn't “happiness” a social construct (largely defined by Western thought) that varies across cultures?




Should we expect “the neural correlates of happiness” (or well-being) to be the same in Japanese and Chinese and British college students? In the Chinese study, life satisfaction was positively correlated with gray matter volume in the right parahippocampal gyrus but negatively correlated with gray matter volume in the left precuneus... So the participants with the largest precuneus volumes in that study had the lowest well-being.

What does a bigger (or smaller) size even mean for actual neural processing? Does a larger gray matter volume in the precuneus allow for a higher computational capacity that can generate greater happiness?? We have absolutely no idea: “...there is no clear evidence of correlation between GM volume measured by VBM and any histological measure, including neuronal density” (Gilaie-Dotan et al., 2014).

Sato et al. (2015) concluded that their results have important practical implications: Are you happy? We don't have to take your word for it any more!
In terms of public policy, subjective happiness is thought to be a better indicator of happiness than economic success. However, the subjective measures of happiness have inherent limitations, such as the imprecise nature of comparing data across different cultures and the difficulties associated with the applications of these measures to specific populations, including the intellectually disabled. Our results show that structural neuroimaging may serve as a complementary objective measure of subjective happiness.

Finally, they issued the self-help throw down: “...our results suggest that psychological training that effectively increases gray matter volume in the precuneus may enhance subjective happiness.”


Resting-state functional connectivity of the default mode network associated with happiness is so last month...

adapted from Luo et al. (2015)


Further Reading

Are You Conscious of Your Precuneus?

Be nice to your Precuneus – it might be your real self…

Your Precuneus May Be the Root of Happiness and Satisfaction

The Precuneus and Recovery From a Minimally Conscious State


Footnote

1 The Subjective Happiness Scale is a 4-item measure of global subjective happiness (Lyubomirsky & Lepper, 1999).




References

Habel, U., Klein, M., Kellermann, T., Shah, N., & Schneider, F. (2005). Same or different? Neural correlates of happy and sad mood in healthy males. NeuroImage, 26 (1), 206-214. DOI: 10.1016/j.neuroimage.2005.01.014

Sato, W., Kochiyama, T., Uono, S., Kubota, Y., Sawada, R., Yoshimura, S., & Toichi, M. (2015). The structural neural substrate of subjective happiness. Scientific Reports, 5. DOI: 10.1038/srep16891



Carving Up Brain Disorders



Neurology and Psychiatry are two distinct specialties within medicine, both of which treat disorders of the brain. It's completely uncontroversial to say that neurologists treat patients with brain disorders like Alzheimer's disease and Parkinson's disease. These two diseases produce distinct patterns of neurodegeneration that are visible on brain scans. For example, Parkinson's disease (PD) is a movement disorder caused by the loss of dopamine neurons in the midbrain.



Fig. 3 (modified from Goldstein et al., 2007). Brain PET scans superimposed on MRI scans. Note decreased dopamine signal in the putamen and substantia nigra (S.N.) bilaterally in the patient.


It's also uncontroversial to say that drugs like L-DOPA and invasive neurosurgical interventions like deep brain stimulation (DBS) are used to treat PD.

On the other hand, some people will balk when you say that psychiatric illnesses like bipolar disorder and depression are brain disorders, and that drugs and DBS (in severe intractable cases) may be used to treat them. You can't always point to clear cut differences in the MRI or PET scans of psychiatric patients, as you can with PD (which is a particularly obvious example).

The diagnostic methods used in neurology and psychiatry are quite different as well. The standard neurological exam assesses sensory and motor responses (e.g., reflexes) and basic mental status. PD has sharply defined motor symptoms including tremor, rigidity, impaired balance, and slowness of movement. There are definitely cases where the symptoms of PD should be attributed to another disease (most notably Lewy body dementia)1, and other examples where neurological diagnosis is not immediately possible. But by and large, no one questions the existence of a brain disorder.

Things are different in psychiatry. Diagnosis is not based on a physical exam. Psychiatrists and psychologists give clinical interviews based on the Diagnostic and Statistical Manual (DSM-5), a handbook of mental disorders defined by a panel of experts with opinions that are not universally accepted. The update from DSM-IV to DSM-5 was highly controversial (and widely discussed).

The causes of mental disorders are not only biological, but often include important social and interpersonal factors. And their manifestations can vary across cultures.

Shortly before the release of DSM-5, the former director of NIMH (Dr. Tom Insel) famously dissed the new manual:
The strength of each of the editions of DSM has been “reliability” – each edition has ensured that clinicians use the same terms in the same ways. The weakness is its lack of validity. Unlike our definitions of ischemic heart disease, lymphoma, or AIDS, the DSM diagnoses are based on a consensus about clusters of clinical symptoms, not any objective laboratory measure.

In other words, where are the clinical tests for psychiatric disorders?

For years, NIMH has been working on an alternate classification scheme, the Research Domain Criteria (RDoC) project, which treats mental illnesses as brain disorders that should be studied according to domains of functioning (e.g., negative valence). Dimensional constructs such as acute threat (“fear”) are key, rather than categorical DSM diagnoses. RDoC has been widely discussed on this blog and elsewhere: it's the best thing since sliced bread, it's necessary but very oversold, or it's ill-advised.

What does this have to do with neurology, you might ask? In 2007, Insel called for the merger of neurology and psychiatry:
Just as research during the Decade of the Brain (1990-2000) forged the bridge between the mind and the brain, research in the current decade is helping us to understand mental illnesses as brain disorders. As a result, the distinction between disorders of neurology (e.g., Parkinson's and Alzheimer's diseases) and disorders of psychiatry (e.g., schizophrenia and depression) may turn out to be increasingly subtle. That is, the former may result from focal lesions in the brain, whereas the latter arise from abnormal activity in specific brain circuits in the absence of a detectable lesion. As we become more adept at detecting lesions that lead to abnormal function, it is even possible that the distinction between neurological and psychiatric disorders will vanish, leading to a combined discipline of clinical neuroscience.

Actually, Insel's view dates back to 2005 (Insel & Quirion, 2005)....2
Future training might begin with two post-graduate years of clinical neuroscience shared by the disciplines we now call neurology and psychiatry, followed by two or three years of specialty training in one of several sub-disciplines (ranging from peripheral neuropathies to public sector and transcultural psychiatry). This model recognizes that the clinical neurosciences have matured sufficiently to resemble internal medicine, with core training required prior to specializing.

...and was expressed earlier by Dr. Joseph P. Martin, Dean of Harvard Medical School (Martin, 2002):
Neurology and psychiatry have, for much of the past century, been separated by an artificial wall created by the divergence of their philosophical approaches and research and treatment methods. Scientific advances in recent decades have made it clear that this separation is arbitrary and counterproductive. .... Further progress in understanding brain diseases and behavior demands fuller collaboration and integration of these fields. Leaders in academic medicine and science must work to break down the barriers between disciplines.

Contemporary leaders and observers of academic medicine are not all equally ecstatic about this prospect, however. Taylor et al. (2015) are enthusiastic advocates of a move beyond “Neural Cubism”, to increased integration of neurology and psychiatry. Dr. Sheldon Benjamin agrees that greater cross-discipline training is needed, but wants the two fields to remain separate. But Dr. Jose de Leon thinks the psychiatry/neurology integration is a big mistake that revives early 20th century debates (see table below, in the footnotes).3

I think a distinction can (and should) be made between the research agenda of neuroscience and the current practice of psychiatry. Neuroscientists who work on such questions assume that mental illnesses are brain disorders and act accordingly, by studying the brain. They study animal models and brain slices and genes and humans with implanted or attached electrodes and humans in scanners. And they study the holy grail of neural circuits using DREADDs and optogenetics. This doesn't invalidate the existence of social, cultural, and interpersonal factors that affect the development and manifestation of mental illnesses. As a non-clinician, I have less to say about medical practice. I'm not grandiose enough to claim that neuroscience research (or RDoC, for that matter) will transform the practice of psychiatry (or neurology) in the near future. [Though you might think differently if you read Public Health Relevance Statements or articles in high profile journals.]

Basic researchers may not even think about the distinction between neurology and psychiatry. Is the abnormal deposition of amyloid-β peptide in Alzheimer's disease (AD) an appropriate target for treatment? Are metabotropic glutamate receptors an appropriate target in schizophrenia? These are similar questions, despite the fact that one disease is neurological and the other psychiatric. There are defined behavioral endpoints that mark treatment-related improvements in either case. It's very useful to measure a change in amyloid burden4 using florbetapir PET imaging in AD [there's nothing similar in schizophrenia], but the most important measure is cognitive improvement (or a flattening of cognitive decline).


Does Location Matter?

In response to the pro-merger cavalcade, a recent meta-analysis asked whether the entire category of neurological disorders affects different brain regions than the entire category of psychiatric disorders (Crossley et al., 2015). The answer was why yes, the two categories affect different brain areas, and for this reason neurology and psychiatry should remain separate.


I thought this was an odd question to begin with, and an even odder conclusion. It's not surprising that disorders of movement, for example, involve different brain regions than disorders of mood or disorders of thought. From my perspective, it's more interesting to look at where the two categories overlap, with an eye to specific comparisons (not global lumping). For instance, are compulsive and repetitive behaviors in OCD associated with alterations in some of the subcortical circuits implicated in movement disorders? Why yes.

But let's take a closer look at the technical details of the study.



Crossley et al. (2015) searched for structural MRI articles that observed decreases in gray matter in patients compared to controls. The papers used voxel-based morphometry (VBM) to quantify regional gray matter volumes across the entire brain. For inclusion, disorders needed to have at least seven published studies to be entered into the analysis. A weighted method was used to control for number of published studies (e.g., AD and schizophrenia were way over-represented in their respective categories), and 7 papers were chosen at random for each disorder. The papers were either in the brainmap.org VBM database or found via electronic searches. The x y z peak coordinates were extracted from each paper and entered into the GingerALE program, which performed a meta-analysis via the activation likelihood estimation (ALE) method (see these references: [pdf], [pdf], [pdf] ).
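As a rough sketch of what the ALE step does (a simplified toy, not the GingerALE implementation): each study's reported peaks are blurred into a Gaussian "modeled activation" map, and the per-study maps are combined across studies as independent probabilities, so voxels where many studies report nearby peaks get high ALE scores. A 1-D version with made-up coordinates:

```python
import numpy as np

def modeled_activation(peaks, grid, sigma=8.0, height=0.9):
    """Per-study 'modeled activation' map: each reported peak becomes a
    Gaussian bump (height < 1, so no voxel is treated as certain); the
    study's map takes the max over its own peaks."""
    d = grid[None, :] - np.asarray(peaks, dtype=float)[:, None]
    return (height * np.exp(-d**2 / (2 * sigma**2))).max(axis=0)

def ale(studies, grid):
    """Combine studies as independent probabilities: ALE = 1 - prod(1 - MA)."""
    ma = np.array([modeled_activation(p, grid) for p in studies])
    return 1 - np.prod(1 - ma, axis=0)

grid = np.arange(0.0, 100.0)          # hypothetical 1-D 'brain'
studies = [[30, 70], [32], [29, 68]]  # made-up peak coordinates from 3 papers
scores = ale(studies, grid)
print(grid[scores.argmax()])          # voxel of greatest cross-study convergence
```

In the real method the combined maps are then tested against a null distribution of randomly placed peaks; the weighted sampling described above (seven randomly chosen papers per disorder) keeps over-studied disorders like AD and schizophrenia from dominating their category's map.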

They found that the basal ganglia, insula, lateral and medial temporal cortex, and sensorimotor areas were affected to a greater extent in neurological disorders. Meanwhile, anterior and posterior cingulate, medial frontal cortex, superior frontal gyrus, and occipital cortex were more affected in psychiatric disorders.


The authors also looked at network differences, with networks based on previous resting state fMRI studies. Some of these results were uninformative. For example, psychiatric disorders affect visual networks more than neurological disorders do. That was because neurological disorders affect visual regions much less than expected (based on the total number of affected voxels).

Another finding was that abnormalities in the cerebellum occurred less often than expected in neurological disorders. But this is obviously not the case in cerebellar ataxia, which affects (you guessed it) THE CEREBELLUM. So I'm not sure how useful it is to make global statements about cerebellar involvement in neurological disorders.


ALE map (FDR pN < 0.05) from 16 VBM studies of ataxia.


The ALE map above was based on 16 papers in the BrainMap database (from a search including 'Ataxia', 'Friedreich ataxia', or 'Spinocerebellar Ataxia'). Gray matter decreases are seen in the cerebellum.


It was sort of interesting to see all the neurological disorders lumped together and compared to all the psychiatric disorders (the coarsest carving imaginable), but I guess I'm more of a splitter, albeit an integrative one who also looks for commonalities and overlap. The intersection of neurology and psychiatry is a fascinating topic that could fill many future blog posts.


Footnotes

1 Comedian Robin Williams, who died by suicide, was initially thought to have depression and/or PD. However, an autopsy ultimately diagnosed Lewy body dementia (‘diffuse Lewy body disease’). PD isn't purely a motor disorder, either. Symptoms can include cognitive changes, depression, and dementia.

2 That's a fascinating history that may be covered at another time. For now, here's the table from de Leon (2015).


3 It's interesting to see the prediction for 2015: we should be in the age of diagnostic biomarkers by now...

4 That article came to a surprising conclusion:
If these data support a regional association between amyloid plaque burden and metabolism, it is for the somewhat heretical inversion of the amyloid hypothesis. That is, regional amyloid plaque deposition is protective, possibly by pulling the more toxic amyloid oligomers out of circulation and binding them up in inert plaques, or via other mechanisms...


References

Benjamin S. (2015). Neuropsychiatry and neural cubism. Acad Med. 90(5):556-8.

Crossley N, Scott J, Ellison-Wright I, Mechelli A. (2015). Neuroimaging distinction between neurological and psychiatric disorders. Br J Psychiatry 207(5):429-434. DOI: 10.1192/bjp.bp.114.154393

David A, Nicholson T. (2015). Are neurological and psychiatric disorders different? Br J Psychiatry 207(5):373-374. DOI: 10.1192/bjp.bp.114.158550

de Leon J. (2015) Is psychiatry only neurology? Or only abnormal psychology? Déjà vu after 100 years. Acta Neuropsychiatr. 27(2):69-81.

Insel TR, Quirion R. (2005). Psychiatry as a clinical neuroscience discipline. JAMA 294(17):2221-4. PMID: 16264165

Martin JB. (2002). The integration of neurology, psychiatry, and neuroscience in the 21st century. Am J Psychiatry 159(5):695-704.

Taylor JJ, Williams NR, George MS. (2015). Beyond neural cubism: promoting a multidimensional view of brain disorders by enhancing the integration of neurology and psychiatry in education. Acad Med. 90(5):581-6.
