Channel: The Neurocritic

Can a Computer Algorithm Identify Suicidal People from Brain Scans? The Answer Won't Surprise You



Death by suicide is a preventable tragedy if the suicidal individual is identified and receives appropriate treatment. Unfortunately, some suicidal individuals do not signal their intent, and others do not receive essential assistance. Youths with severe suicidal ideation are not taken seriously in many cases, and thus are not admitted to emergency rooms. A common scenario is that resources are scarce, the ER is backed up, and a cursory clinical assessment will determine who is admitted and who will be triaged. From a practical standpoint, using fMRI to determine suicide risk is a non-starter.

Yet here we are, with media coverage blaring that an Algorithm can identify suicidal people using brain scans and Brain Patterns May Predict People At Risk Of Suicide. These media pieces herald a new study claiming that fMRI can predict suicidal ideation with 91% accuracy (Just et al. 2017). The authors applied a complex algorithm (machine learning) to analyze brain scans obtained using a highly specialized protocol to examine semantic and emotional responses to life and death concepts.

Let me unpack that a bit. The scans of 17 young adults with suicidal ideation (thoughts about suicide) were compared to those from another 17 participants without suicidal ideation. A computer algorithm (Gaussian Naive Bayes) was trained on the neural responses to death-related and suicide-related words, and correctly classified 15 out of 17 suicidal ideators (88% sensitivity) and 16 out of 17 controls (94% specificity). Are these results too good to be true? Yes, probably. And yet they're not good enough, because two at-risk individuals were not picked up.
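For the record, those headline percentages are simple ratios from the confusion matrix; the counts come from the paper, and the code below is just the arithmetic:

```python
# Sensitivity, specificity, and overall accuracy from the reported
# classification counts (Just et al., 2017).

def sensitivity(true_pos, false_neg):
    """Proportion of actual positives correctly identified."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of actual negatives correctly identified."""
    return true_neg / (true_neg + false_pos)

sens = sensitivity(true_pos=15, false_neg=2)   # 15 of 17 ideators
spec = specificity(true_neg=16, false_pos=1)   # 16 of 17 controls
acc = (15 + 16) / 34                           # 31 of 34 overall

print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}, accuracy = {acc:.0%}")
# → sensitivity = 88%, specificity = 94%, accuracy = 91%
```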




The computational methods used to classify the suicidal vs. control groups are suspect, according to many machine learning experts on social media. One problem is known as “overfitting”: using too many parameters estimated from a small sample produces a model that may not generalize to new samples. The key metric is whether the algorithm can classify individuals from independent, out-of-sample populations, and we don't know that for sure. Another problem is that the leave-one-out cross-validation procedure is itself problematic. I'm not an expert here, so the Twitter threads that start below (and here) are your best bet.
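To make the overfitting worry concrete, here is a generic demonstration of the pitfall (not the authors' actual pipeline): if you select the most discriminating features using all subjects, and only then run leave-one-out cross-validation on the classifier, you can get impressive "accuracy" on pure noise. The group sizes and feature count below are loosely modeled on the study; everything else is made up.

```python
# Illustrative only: feature selection performed on ALL subjects before
# leave-one-out cross-validation can yield high "accuracy" on pure noise.
# This is NOT the authors' exact pipeline, just the generic pitfall.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(34, 5000))        # 34 "subjects", 5000 noise voxels
y = np.array([0] * 17 + [1] * 17)      # two arbitrary groups

# WRONG: pick the 25 most discriminating voxels using every subject...
X_sel = SelectKBest(f_classif, k=25).fit_transform(X, y)
# ...then cross-validate only the classifier on those selected voxels.
biased = cross_val_score(GaussianNB(), X_sel, y, cv=LeaveOneOut()).mean()
print(f"LOOCV accuracy on pure noise with leaky feature selection: {biased:.0%}")
```

The fix, of course, is to repeat the feature selection inside every cross-validation fold, so the held-out subject never influences which voxels are chosen.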


For the rest of this post, I'll raise other issues about this study that concerned me.


Why use an expensive technology in the first place?

The rationale for this included some questionable statements.
  • ...predictions by both clinicians and patients of future suicide risk have been shown to be relatively poor predictors of future suicide attempt2,3.
One of the papers cited as a poor predictor (Nock et al., 2010) was actually touted as a breakthrough when it was published: Implicit Cognition Predicts Suicidal Behavior. [n.b. Nock is an author on the Just et al. paper that trashes his earlier work.] Anyway, Nock et al. (2010) developed the death/suicide Implicit Association Test (IAT),1 which was able to identify ER patients at greatest risk for another suicide attempt in the future:
...the implicit association of death/suicide with self was associated with an approximately 6-fold increase in the odds of making a suicide attempt in the next 6 months, exceeding the predictive validity of known risk factors (e.g., depression, suicide-attempt history) and both patients’ and clinicians’ predictions.
But let's go ahead with an fMRI study that will be far more accurate than a short and easy-to-administer computerized test!
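For scale, the “6-fold increase in the odds” quoted above is an ordinary odds ratio from a 2×2 table. A minimal sketch of the arithmetic, with invented counts chosen only for illustration (these are not Nock et al.'s data):

```python
# Odds ratio from a 2x2 outcome table. The counts below are invented
# purely to illustrate the arithmetic, NOT taken from Nock et al. (2010).

def odds_ratio(exposed_event, exposed_no_event,
               unexposed_event, unexposed_no_event):
    """(a/b) / (c/d): odds of the event in the exposed vs. unexposed group."""
    return ((exposed_event / exposed_no_event) /
            (unexposed_event / unexposed_no_event))

# e.g. 12/38 attempters among IAT-positive patients vs. 5/95 among
# IAT-negative patients gives odds of 0.316 vs. 0.053:
print(odds_ratio(12, 38, 5, 95))  # → 6.0
```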

  • Nearly 80% of patients who die by suicide deny suicidal ideation in their last contact with a mental healthcare professional4.
This 2003 study was based on psychiatric inpatients who died by suicide while in hospital (5-6% of all suicides) or else shortly thereafter, and may not be representative of the entire at-risk population. Nonetheless, other research shows that current risk scales are indeed of limited use and may even waste valuable clinical resources. The scales “may be missing important aspects relevant to repeat suicidal behaviour (for example social, cultural, economic or psychological processes).” But a focus on brain scans would also miss social, cultural, and economic factors.


How do you measure the neural correlates of suicidal thoughts?

This is a tough one, but the authors propose to uncover the neural signatures of specific concepts, as well as the emotions they evoke:
...the neural signature of the test concepts was treated as a decomposable biomarker of thought processes that can be used to pinpoint particular components of the alteration [in participants with suicidal ideation]. This decomposition attempts to specify a particular component of the neural signature that is altered, namely, the emotional component...

How do you choose which concepts and emotions to measure?

The “concepts” were words from three different categories (although the designation of Suicide vs. Negative seems arbitrary for some of the stimuli). The set of 30 words was presented six times, with each word shown for three seconds followed by a four second blank screen. Subjects were “asked to actively think about the concepts ... while they were displayed, thinking about their main properties (and filling in details that come to mind) and attempting consistency across presentations.”




The “emotion signatures” were derived from a prior study (Kassam et al., 2013) that asked method actors to self-induce nine emotional states (anger, disgust, envy, fear, happiness, lust, pride, sadness, and shame). The emotional states selected for the present study were anger, pride, sadness, and shame (all chosen post hoc). Should we expect emotion signatures that are self-induced by actors to be the same as emotion signatures that are evoked by words? Should we expect a universal emotional response to Comfort or Evil or Apathy?

Six words (death, carefree, good, cruelty, praise, and trouble in descending order) and five brain regions (left superior medial frontal, medial frontal/anterior cingulate, right middle temporal, left inferior parietal, and left inferior frontal) from a whole-brain analysis (that excluded bilateral occipital lobes for some reason) provided the most accurate discrimination between the two groups. Why these specific words and voxels? Twenty-five voxels, specifically. It doesn't matter.
The neural representation of each concept, as used by the classifier, consisted of the mean activation level of the five most stable voxels in each of the five most discriminating locations.
...and...
All of these regions, especially the left superior medial frontal area and medial frontal/anterior cingulate, have repeatedly been strongly associated with self-referential thought...
...and...
...the concept of ‘death’ evoked more shame, whereas the concept of ‘trouble’ evoked more sadness in the suicidal ideator group. ‘Trouble’ also evoked less anger in the suicidal ideator group than in the control group. The positive concept ‘carefree’ evoked less pride in the suicidal ideator group. This pattern of differences in emotional response suggests that the altered perspective in suicidal ideation may reflect a resigned acceptance of a current or future negative state of affairs, manifested by listlessness, defeat and a degree of anhedonia (less pride evoked in the concept of ‘carefree’) [why not less pride to 'praise' or 'superior'? who knows...]

Not that this involves circularity or reverse inference or HARKing or anything...


How can a method that excludes data from 55% of the target participants be useful??

This one seems like a showstopper. A total of 38 suicidal participants were scanned, but those who did not show the desired semantic effects were excluded due to “poor data quality”:
The neurosemantic analyses ... are based on 34 participants, 17 participants per group whose fMRI data quality was sufficient for accurate (normalized rank accuracy > 0.6) identification of the 30 individual concepts from their fMRI signatures. The selection of participants included in the primary analyses was based only on the technical quality of the fMRI data. The data quality was assessed in terms of the ability of a classifier to identify which of the 30 individual concepts they were thinking about with a rank accuracy of at least 0.6, based on the neural signatures evoked by the concepts. The participants who met this criterion also showed less head motion (t(77) = 2.73, P < 0.01). The criterion was not based on group discriminability.

This logic seems circular to me, despite the claim that inclusion wasn't based on group classification accuracy. Seriously, if you throw out over half of your subjects, how can your method ever be useful? Nonetheless, the 21 “poor data quality” ideators with excessive head motion and bad semantic signatures were used in an out-of-sample analysis that also revealed relatively high classification accuracy (87%) compared to the data from the same 17 “good” controls (the data from 24 “bad” controls were excluded, apparently).
We attribute the suboptimal fMRI data quality (inaccurate concept identification from its neural signature) of the excluded participants to some combination of excessive head motion and an inability to sustain attention to the task of repeatedly thinking about each stimulus concept for 3 s over a 30-min testing period.
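For reference, the inclusion criterion (normalized rank accuracy > 0.6) is straightforward to compute: rank all 30 candidate concepts by how well each matches the evoked neural signature, then score where the correct concept lands. Here is a sketch of the standard metric, in my own formulation rather than the authors' code:

```python
# Normalized rank accuracy: 1.0 means the correct concept was always
# ranked first; 0.5 is chance. My own formulation of the standard metric,
# not the authors' code.
import numpy as np

def normalized_rank_accuracy(similarity, true_labels):
    """similarity: (n_trials, n_concepts) match scores, higher = better.
    true_labels: index of the correct concept for each trial."""
    n_trials, n_concepts = similarity.shape
    accs = []
    for i in range(n_trials):
        order = np.argsort(-similarity[i])              # best match first
        rank = int(np.where(order == true_labels[i])[0][0])  # 0 = ranked first
        accs.append(1.0 - rank / (n_concepts - 1))
    return float(np.mean(accs))

# Perfect identification scores 1.0:
sim = np.eye(30)
print(normalized_rank_accuracy(sim, np.arange(30)))  # → 1.0
```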

Furthermore, another classifier was even more accurate (94%) in discriminating between suicidal ideators who had made a suicide attempt (n=9) from those who had not (n=8), although the out-of-sample accuracy for the excluded 21 was only 61%. Perhaps I'm misunderstanding something here, but I'm puzzled...

I commend the authors for studying a neglected clinical group, but wish they were more rigorous, didn't overinterpret their results, and didn't overhype the miracle of machine learning.


Crisis Text Line [741741 in the US] uses machine learning to prioritize their call load based on word usage and emojis. A great variety of intersectional risk factors may lead someone to death by suicide. At present, no method can capture the full diversity of those who will cross that line.

If you are feeling suicidal or know someone who might be, here is a link to a directory of online and mobile suicide help services.



Footnote

1I won't discuss the problematic nature of the IAT here.


References

Just MA, Pan L, Cherkassky VL, McMakin DL, Cha C, Nock MK, Brent D (2017). Machine learning of neural representations of suicide and emotion concepts identifies suicidal youth. Nature Human Behaviour. Published online 30 October 2017.

Kassam KS, Markey AR, Cherkassky VL, Loewenstein G, Just MA. (2013). Identifying Emotions on the Basis of Neural Activation. PLoS One. 8(6):e66032.

Nock MK, Park JM, Finn CT, Deliberto TL, Dour HJ, Banaji MR. (2010). Measuring the suicidal mind: implicit cognition predicts suicidal behavior. Psychol Sci. 21(4):511-7.

Brief Guide to the CTE Brains in the News. Part 1: Aaron Hernandez

Chronic traumatic encephalopathy (CTE) is the neurodegenerative disease of the moment, made famous by the violent and untimely deaths of many retired professional athletes. Repeated blows to the head sustained in contact sports such as boxing and American football can result in abnormal accumulations of tau protein (usually many years later). The autopsied brains from two of these individuals are shown below.



Left: courtesy of Dr. Ann McKee in NYT.  Right: courtesy of Dr. Bennett Omalu in CNN. These are coronal sections1 from the autopsied brains of: (L) Aaron Hernandez, aged 27; and (R) Fred McNeill, aged 63.


Both men played professional football in the NFL. Both came upon some troubled times after leaving the game. And although the CTE pathology in their brains has been attributed directly to football — repeated concussive and sub-concussive events — other potential factors have been mostly ignored. Below I'll discuss these events and phenomena, and whether they could have contributed to the condition of the post-mortem brains.


Aaron Hernandez


Illustration by Sean McCabe for Rolling Stone


Talented ex-NFL football star, PCP addict, convicted murderer, and suicide by hanging. The Rolling Stone ran two riveting articles that detailed the life (and death) of Mr. Hernandez. Despite a difficult upbringing surrounded by violence and tragedy, he was a serious and stellar athlete at Bristol High School. The tragic death of his father from a medical accident led Aaron to hang out with a less savory crowd. He fortunately ended up at the University of Florida for college football. There he failed several drug tests, but the administration mostly looked the other way. He was on a national championship team, named an all-American, and involved in a shooting where he was not charged.

Most NFL teams took a pass because of his use of recreational drugs and reputation as a hot-head:
After seeing his pre-draft psychological report, where he received the lowest possible score, one out of 10, in the category of “social maturity” and which also noted that he enjoyed “living on the edge of acceptable behavior,” a handful of teams pulled him off their boards, and 25 others let him sink like a stone on draft day.

But he ended up signing with the New England Patriots in a $40 million deal. He smoked pot constantly and avoided hanging out with the other players. “Instead of teammates, Hernandez built a cohort of thugs, bringing stone-cold gangsters over to the house to play pool, smoke chronic and carouse.” Things spiraled downwards, in terms of thug life, use of PCP (angel dust), and ultimately the murder of a friend that ended in a life sentence without parole.

He was also tried and acquitted of a separate double homicide, but his days were numbered: two days after the acquittal, he hanged himself with a bedsheet in his jail cell. He was rumored to have smoked K2 (a nasty synthetic cannabis) just before his death, but this was ultimately unsubstantiated.

These complicating factors (a lengthy history of drug abuse, death by asphyxiation) must have had some effect on his brain, I mused in another post.




Meanwhile, the New York Times had a splashy piece about how the pristine brain of Aaron Hernandez presented an opportunity to study a case of “pure” CTE:
What made the brain extraordinary, for the purpose of science, was not just the extent of the damage, but its singular cause. Most brains with that kind of damage have sustained a lifetime of other problems, too, from strokes to other diseases, like Alzheimer’s. Their samples are muddled, and not everything found can be connected to one particular disease.

This was a startling statement, as I said in my secondary blog:
I’ve been struggling to write a post that highlights the misleading nature of this claim. How much of that was [the writer's] own hyperbole? Or was he merely paraphrasing the famous neuropathologists who presented their results to the media, not to peer reviewers? Is it my job to find autopsied brains from PCP abusers and suicides by hanging? Searching for the latter, by the way, will turn up some very unsavory material in forensic journals and elsewhere. At any rate, I think much of this literature glosses over any complicating elements, and neglects to mention all of the cognitively intact former football players whose brains haven’t been autopsied.

In the next post, I'll discuss the case of Fred McNeill.


Footnote

1Illustration of the coronal plane of section.





Further Reading 
I've written about CTE a lot; you can read more below.

FDA says no to marketing FDDNP for CTE

Is CTE Detectable in Living NFL Players?

The Ethics of Public Diagnosis Using an Unvalidated Method

The Truth About Cognitive Impairment in Retired NFL Players

Lou Gehrig Probably Died of Lou Gehrig's Disease

Blast Wave Injury and Chronic Traumatic Encephalopathy: What's the Connection?

Little Evidence for a Direct Link between PTSD and Chronic Traumatic Encephalopathy




New York Times: A neuropathologist and her associate examined slices of the brain of a 27-year-old man. Credit: Boston University.

Brief Guide to the CTE Brains in the News. Part 2: Fred McNeill

Chronic traumatic encephalopathy (CTE) is the neurodegenerative disease of the moment, made famous by the violent and untimely deaths of many retired professional athletes. Repeated blows to the head sustained in contact sports such as boxing and American football can result in abnormal accumulations of tau protein (usually many years later). The autopsied brains from two of these individuals are shown below.



Left: courtesy of Dr. Ann McKee in NYT.  Right: courtesy of Dr. Bennett Omalu in CNN. These are coronal sections1 from the autopsied brains of: (L) Aaron Hernandez, aged 27; and (R) Fred McNeill, aged 63.


Part 1 of this series looked at complicating factors in the life of Aaron Hernandez (PCP abuse, death by asphyxiation) that presumably had some impact on his brain beyond the effects of concussions in football.

Part 2 will discuss the tragic case of Fred McNeill, former star linebacker for the Minnesota Vikings. He died in 2015 from complications of Amyotrophic Lateral Sclerosis (ALS), suggesting that his was not a “pure” case of CTE, either.


Fred McNeill


McNeill in 1974 (Mike Zerby / Minneapolis Star Tribune).

Obituary: Standout of the 1970s and 1980s was suffering from dementia and died from complications from ALS, according to Matt Blair [close friend and former teammate]

ALS is a motor neuron disease that causes progressive wasting and death of neurons that control voluntary muscles of the limbs and ultimately the muscles that control breathing and swallowing. Around 30-50% of individuals with ALS show cognitive and behavioral impairments.

According to a recent review (Hobson and McDermott, 2016):
Overlap between ALS and other neurodegenerative diseases, in particular frontotemporal dementia (FTD) and parkinsonism, is increasingly recognized. ...

Approximately 10–15% of patients with ALS show signs of FTD ... typically behavioural variant of FTD. A further 50% experience mild cognitive or behavioural changes. Patients with executive dysfunction have a worse prognosis, and behavioural changes have a negative impact on carer quality of life.

This raises the issue that repetitive head trauma can result in multiple neurodegenerative diseases, not only CTE. In fact, this has been recognized by other researchers who studied 14 retired soccer players who were experts at heading the ball (Ling et al., 2017). Only four had pathologically confirmed CTE:
...concomitant pathologies included Alzheimer's disease (N = 6), TDP-43 (N = 6), cerebral amyloid angiopathy (N = 5), hippocampal sclerosis (N = 2), corticobasal degeneration (N = 1), dementia with Lewy bodies (N = 1), and vascular pathology (N = 1); and all would have contributed synergistically to the clinical manifestations. ...   Alzheimer's disease and TDP-43 pathologies are common concomitant findings in CTE, both of which are increasingly considered as part of the CTE pathological entity in older individuals.

So the blanket term of “CTE” can include build-up of not only tau, but other abnormal proteins typically seen in Alzheimer's disease (Aβ) and the ALS-FTD spectrum (TDP-43). This lowers the utility of an in vivo marker specific to tau in diagnosing CTE in living individuals, an important enterprise because definitive diagnosis is only obtained post-mortem.

This brings us to the problematic report on Mr. McNeill's brain and the news coverage surrounding it.


CTE confirmed for 1st time in live person, according to exam of ex-NFL player

In the recent study, Omalu and colleagues (2017) reported on a PET scan of Mr. McNeill obtained almost 4.5 years before he died, before any motor signs of ALS had appeared. Clearly, 4.5 years is a very long time in the course of a progressive neurodegenerative disease, so right off the bat a comparison of his PET scan and post-mortem pathology is highly problematic.


Former Vikings linebacker Fred McNeill identified as subject of breakthrough CTE study

Another reason this study was not the “breakthrough” of news headlines is because the type of pathology plainly visible on MRI, and the type of cognitive deficits shown on neuropsychological tests, were quite typical of Alzheimer's disease and perhaps also vascular dementia. The MRI scan taken at the time of PET “showed mild, global brain atrophy with enlarged ventricles, moderate bilateral hippocampal atrophy, and diffuse white matter hyperintensities.”

Among his worst cognitive deficits at the time of testing were memory and picture naming, which is characteristic of Alzheimer's disease (AD). Likewise, the behavioral deficits reported by his wife are typically seen in AD.




Two years after the PET scan, he developed motor symptoms of ALS. His wife noted he could no longer tie his shoes or button his shirts. He developed muscle twitching in his arms and showed decreased muscle mass in his arms and shoulders. He was diagnosed with ALS 17 months prior to death, which was in addition to his presumed diagnosis of CTE.




FDA says no to marketing FDDNP for CTE

Finally, the molecular imaging probe used to identify abnormal tau protein in the living brain, [18F]-FDDNP, is not specific for tau. It also binds to beta-amyloid and a variety of other misfolded proteins. Or maybe not!

As I've written before, the brain diagnostics company TauMark™ was admonished by the FDA for making false claims. Six authors on the current paper hold a financial interest in the company. Most other research groups use more specific tau imaging tracers such as [18F]T807 (aka [18F]AV-1451 or Flortaucipir).

I certainly acknowledge that these types of pre- and post-mortem studies are very difficult to conduct, and although the n=1 is a known weakness, you have to start somewhere. Nonetheless, the stats relating FDDNP binding to tau pathology were very thin and not all that believable. The paragraph below presents the results in their entirety. Note that p=.0202 was considered “highly correlated” while p=.1066 was not significant.
Correlation analysis was performed to investigate whether the in vivo regional [F-18]FDDNP binding level agreed with the density of tau pathology based on autopsy findings. Spearman rank-order correlation coefficient (rs) was calculated for the regional [F-18]FDDNP DVRs (Figure 1) and the density of tau pathology, as well as for amyloid and TDP-43 substrates (Table 5). Our results showed that the tau regional findings and densities obtained from antemortem [F-18]FDDNP-PET imaging and postmortem autopsy were highly correlated (rs = 0.592, P = .0202). However, no statistical correlation was found with the presence of amyloid deposition (rs = -0.481; P = .0695) or of TDP-43 (rs = 0.433; P = .1066).
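With so few paired observations, Spearman's p-value is exquisitely sensitive to small changes in the coefficient, which is one reason the .0202 vs. .1066 contrast shouldn't be over-interpreted. A quick way to see how these numbers behave, using illustrative data rather than the paper's regional values:

```python
# Spearman rank correlation on a small sample, using scipy. The data
# below are simulated for illustration, NOT the paper's regional values.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
x = rng.normal(size=15)             # e.g. FDDNP DVR per brain region
y = 0.6 * x + rng.normal(size=15)   # loosely related "pathology density"

rho, p = spearmanr(x, y)
print(f"rho = {rho:.3f}, p = {p:.4f}")
```

Rerunning this with different random seeds shows how easily a correlation of this size drifts across the p = .05 line when n is small.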

Also, FDDNP-PET showed that in cortical regions, the medial temporal lobes showed the highest distribution volume ratio (DVR), along with anterior and posterior cingulate cortices. Isn't this typical of the Aβ distribution in AD?

I'm not denying the existence of CTE as a complex clinical entity, or saying that multiple concussions don't harm your brain. Along with others (e.g., Iverson et al., 2018), I'm merely suggesting that the clinical, cognitive, behavioral, and pathological sequelae of repeated head trauma should be carefully studied, and not presented in a sensationalistic manner.


Footnotes

1Illustration of the coronal plane of section.



2 Note that most cases of ALS and FTD are not caused by concussions.



Read Part 1 of the series:

Brief Guide to the CTE Brains in the News. Part 1: Aaron Hernandez


References

Hobson EV, McDermott CJ. (2016). Supportive and symptomatic management of amyotrophic lateral sclerosis. Nat Rev Neurol. 12(9):526-38.

Iverson GL, Keene CD, Perry G, Castellani RJ. (2018). The Need to Separate Chronic Traumatic Encephalopathy Neuropathology from Clinical Features. J Alzheimers Dis. 61(1):17-28.

Ling H, Morris HR, Neal JW, Lees AJ, Hardy J, Holton JL, Revesz T, Williams DD. (2017). Mixed pathologies including chronic traumatic encephalopathy account for dementia in retired association football (soccer) players. Acta Neuropathol. 133(3):337-352.

Omalu B, Small GW, Bailes J, Ercoli LM, Merrill DA, Wong KP, Huang SC, Satyamurthy N, Hammers JL, Lee J, Fitzsimmons RP. (2017). Postmortem Autopsy-Confirmation of Antemortem [F-18] FDDNP-PET Scans in a Football Player With Chronic Traumatic Encephalopathy. Neurosurgery. 2017 Nov 10.


Further Reading
I've written about CTE a lot; you can read more below.

FDA says no to marketing FDDNP for CTE

Is CTE Detectable in Living NFL Players?

The Ethics of Public Diagnosis Using an Unvalidated Method

The Truth About Cognitive Impairment in Retired NFL Players

Lou Gehrig Probably Died of Lou Gehrig's Disease

Blast Wave Injury and Chronic Traumatic Encephalopathy: What's the Connection?

Little Evidence for a Direct Link between PTSD and Chronic Traumatic Encephalopathy

Amygdala Stimulation in the Absence of Emotional Experience Enhances Memory for Neutral Objects



The amygdala is a small structure located within the medial temporal lobes (MTL), consisting of a discrete set of nuclei. It has a reputation as the “fear center” or “emotion center” of the brain, although it performs multiple functions. One well-known activity of the amygdala, via its connections with other MTL areas, involves an enhancement of memories that are emotional in nature (compared to neutral). Humans and rodents with damaged or inactivated amygdalae fail to show this emotion-related enhancement, although memory for neutral items is relatively preserved (Adolphs et al., 1997; Phelps & Anderson, 1997; McGaugh, 2013).

A new brain stimulation study (Inman et al., 2017) raises interesting questions about the necessity of subjective emotional experience in the memory enhancement effect. A group of 14 refractory epilepsy patients underwent surgery to implant electrodes in the left or right amygdala (and elsewhere) for the sole purpose of monitoring the source of their seizures. In a boon for affiliated research programs everywhere, patients are able to participate in experiments while waiting around for seizures to occur.

The stimulating electrodes were located in or near the basolateral complex of the amygdala (BLA), shown below. The stimulation protocol was developed from similar studies in rats, which demonstrated that direct electrical stimulation of BLA can improve memory for non-emotional events when tested on subsequent days (Bass et al., 2012; 2014; 2015).



Fig. 1A and B (modified from Inman et al., 2017). 
(A) A representative postoperative coronal MRI showing electrode contacts in the amygdala (white square). (B) Illustration of left amygdala with black circles indicating estimated centroids of bipolar stimulation in or near the BLA in all 14 patients. White borders denote right-sided stimulation.


The direct translation from animals to humans is a clear strength of the paper (Inman et al., 2017):
...direct activation of the BLA modulated neuronal activity and markers of synaptic plasticity in the hippocampus and perirhinal cortex, two structures important for declarative memory that are directly innervated by the BLA.  ... These and other studies [in animals] have led to the view that an emotional experience engages the amygdala, which in turn enhances memory for that experience through modulation of synaptic plasticity-related processes underlying memory consolidation in other brain regions. This model predicts that direct stimulation of the human amygdala could enhance memory in a manner analogous to emotion’s enhancing effects on long-term memory.

The experimental task was a test of object recognition memory. Pictures of 160 neutral objects were presented on Day 1 while the participants made “indoor” or “outdoor” decisions (which were quite ambiguous in many cases). The purpose of this task was to engage a deep level of semantic encoding of each object, which was presented for 3 seconds. Immediately after stimulus offset for half the items (n=80), a train of electrical stimulation pulses was presented for 1 second (each pulse = 500 μs biphasic square wave; pulse frequency = 50 Hz; train frequency = 8 Hz). For the other half (n=80), no stimulation was presented. Each trial was separated by a 5 second interval.


Fig. 1D (modified from Inman et al., 2017).


An immediate recognition memory test was presented after completion of the study phase. Yes/no decisions were made on 40 old objects with post-stimulation, 40 old objects with no stimulation, and 40 new objects (“foils”). Then 24 hours later, a similar yes/no recognition test was presented, but this time with the other set of items not tested previously, along with a new set of foils. The prediction was that electrical stimulation of the amygdala would act as an artificial “boost” of performance on the 24 hour test, after memory consolidation had occurred.

This prediction was (mostly) supported, as shown below, with one caveat I'll explain shortly. In Panel A, a commonly used measure of discrimination performance (d′) is shown for the Immediate and One-Day tests, with red dots indicating stimulation and blue dots no stimulation (one dot per patient). Most participants performed better on stimulated items on both the Immediate and One-Day tests, although variability was higher on the Immediate test. Panel B shows a summary of the performance difference for stimulation vs. no-stimulation trials. Paired-samples t-tests (two-sided) were conducted for each recognition-memory interval. The result for One-Day was significant (p = .003), but the result for Immediate was not (p = .30). This would seem to be convincing evidence that amygdala stimulation during encoding selectively enhanced delayed recognition memory.
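For readers unfamiliar with d′: it combines the hit rate and false-alarm rate into a single sensitivity index, z(hits) minus z(false alarms). A minimal sketch of the standard signal-detection computation (the trial counts below are invented, not the authors' data):

```python
# d-prime: z(hit rate) - z(false-alarm rate). Standard signal-detection
# formula; not the authors' analysis code. Rates of exactly 0 or 1 need
# a correction that is not handled here.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# e.g. 30/40 old items recognized, 8/40 foils falsely endorsed:
print(round(d_prime(30, 10, 8, 32), 2))  # → 1.52
```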



Fig. 2A and B (modified from Inman et al., 2017).


HOWEVER, from the statistics presented thus far, we don't know whether the memory enhancement effect was statistically larger for the One-Day test. My guess is not, because an ANOVA showed a main effect of test day (p < 0.001) and a main effect of stimulation (p = 0.03). But no interaction between these variables was reported.

Nonetheless, the study was fascinating because the patients were unable to say whether or not stimulation was delivered in a subsequent test of awareness (10 trials of each condition):
All 14 patients denied subjective awareness of the amygdala stimulation on every trial. In addition, no patient reported emotional responses associated with amygdala stimulation during the stimulation awareness test or during recognition-memory testing. Moreover, similar amygdala-stimulation parameters caused no detectable autonomic changes in patients (n = 7) undergoing stimulation parameter screening.

The take-home message is that subjective and objective indicators of emotion were not necessary for amygdala stimulation during encoding to enhance subsequent recognition of neutral material. “This memory enhancement was accompanied by neuronal oscillations during retrieval that reflected increased interactions between the amygdala, hippocampus, and perirhinal cortex”1 (as had been shown previously in animals).2

So it seems that subjective emotional experience may be an unnecessary epiphenomenon for the boosting effect of emotion in the formation of declarative memories. Or at least in this limited (albeit impressive) laboratory setting. And here I will step aside from being overly critical. Anyone who wants to slam the reproducibility of an n=14 rare patient sample size should be prepared to run the same study with 42 individuals with amygdala depth electrodes.


Footnotes

1Inman et al., 2017:
For [n = 5 patients] with electrodes localized concurrently in the amygdala, hippocampus, and perirhinal cortex, local field potentials (LFPs) from each region were recorded simultaneously during the immediate and one-day recognition-memory tests... LFP oscillations were apparent in the theta (here 5–7 Hz) and gamma (30–55 Hz) ranges... Recognition during the one-day test but not during the immediate test exhibited increased power in perirhinal cortex in the gamma frequency range for remembered objects previously followed by stimulation compared with remembered objects without stimulation. Furthermore, LFPs during the one-day test, but not during the immediate test, revealed increased coherence of hippocampal–perirhinal oscillations in the theta frequency range for remembered objects previously followed by stimulation compared with remembered objects without stimulation.

2 If you think the 14 patients with epilepsy were variable, wait until you see the [overly honest] results from even smaller studies with rats.


Fig. S7 (Inman et al., 2017).

Conveniently, Professor Dorothy Bishop has a new blog post on Using simulations to understand the importance of sample size. So yes, sample size matters...


References

Adolphs R, Cahill L, Schul R, Babinsky R. (1997). Impaired declarative memory for emotional material following bilateral amygdala damage in humans. Learn Mem. 4(3):291-300.

Bass DI, Manns JR. (2015). Memory-enhancing amygdala stimulation elicits gamma synchrony in the hippocampus. Behav Neurosci. 129(3):244-56.

Bass DI, Nizam ZG, Partain KN, Wang A, Manns JR. (2014). Amygdala-mediated enhancement of memory for specific events depends on the hippocampus. Neurobiol Learn Mem. 107:37-41.

Bass DI, Partain KN, Manns JR. (2012). Event-specific enhancement of memory via brief electrical stimulation to the basolateral complex of the amygdala in rats. Behav Neurosci. 126(1):204-8.

Ikegaya Y, Saito H, Abe K. (1996). The basomedial and basolateral amygdaloid nuclei contribute to the induction of long-term potentiation in the dentate gyrus in vivo. Eur J Neurosci. 8(9):1833-9.

Inman CS, Manns JR, Bijanki KR, Bass DI, Hamann S, Drane DL, Fasano RE, Kovach CK, Gross RE, Willie JT. (2017). Direct electrical stimulation of the amygdala enhances declarative memory in humans. Proc Natl Acad Sci.  Dec 18. [Epub ahead of print]

McGaugh JL. (2013). Making lasting memories: remembering the significant. Proc Natl Acad Sci 110 Suppl 2:10402-7.

Phelps EA, Anderson AK. (1997). Emotional memory: what does the amygdala do? Curr Biol. 7(5):R311-4.

Least Popular Posts of 2017



2017 was a really bad year. The U.S. is more divided than ever, the truth is meaningless, well-researched journalism is called FAKE NEWS, the President lies once every minute, white supremacist rallies have been normalized, some tech companies1 continue to invade our privacy/extract personal data, exploit the middle and lower classes,2 and displace long-time residents from urban areas. And who knows what health care and Alaska will look like in 2018.

Yes, this is classic Neurocritic pessimism.3

While everyone else rings in the New Year by commemorating the best and brightest of 2017 in formulaic Top Whatever lists, The Neurocritic has decided to wallow in shame. To mark this Celebration of Failure, I have compiled a Bottom Five list,4 the year's least popular posts as measured by Google Analytics. The last time I compiled a “Worst of” list was in 2012.

Methods: The number of pageviews per post was copied and pasted into an Excel file and sorted by date. Then the total pageviews for each post were prorated by the age of the post, to give an estimate of daily views.5
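The prorating step is simple arithmetic; a minimal sketch (hypothetical pageview counts, and assuming a year-end cutoff date):

```python
from datetime import date

def daily_views(total_views, posted_on, as_of=date(2017, 12, 31)):
    """Prorate a post's total pageviews by its age in days."""
    age_days = (as_of - posted_on).days
    return total_views / max(age_days, 1)

# Hypothetical examples: an older post with more total views can still
# rank below a recent post once prorated.
print(daily_views(3000, date(2017, 3, 1)))   # older post, ~9.8 views/day
print(daily_views(300, date(2017, 12, 11)))  # recent post, 15.0 views/day
```

This is also why footnote 5 matters: a December post has had far fewer days to accumulate views, so even prorating only approximates true "staying power."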

Results: The posts are listed in inverse order, starting with #5 and ending with #1 (least popular).


5 Most Unpopular Posts of 2017

5. Terrorism and the Implicit Association Test – I actually worked pretty hard on this one. It's about the stereotyping of Muslims, the importance of language (e.g., Theresa May: “the single, evil ideology of Islamist extremism that preaches hatred, sows division, and promotes sectarianism”), a demonstration that semantics derived automatically from language corpora contain human-like biases, the Arab-Muslim IAT (which found little to no bias against Muslims), and some general problems with the IAT.

4. Smell as a Weapon, and Odor as Entertainment – This was from my two-part olfactory series, which covered the interesting history of Olfactory Warfare (e.g., stink bombs, stealth camouflage) and the use of smell in cinematic and VR contexts. {Or at least, it was interesting to me.}

3. The Big Bad Brain – This featured a fun and catchy music video (High) by Sir Sly, which was an earworm for me. But it was too esoteric and had little staying power.

2. What's Popular at #CNS2017? – This falls under the perennially unpopular category of “yearly conference announcements”, which is only relevant around the time of the meeting.

1. Olfactory Deterrence – This was about the prospect of nuclear war and how putrid smells might deter the use of nuclear weapons, along with eradicating cavalier attitudes about them.


Discussion: We can easily see some themes emerging: the IAT, olfaction, music videos, and the Cognitive Neuroscience Society meeting.

Conclusion: People are sick of the IAT, aren't thrilled about the sense of smell (especially in relation to nuclear war), and do not like music videos or CNS Meeting announcements. However, they do like meeting recaps, as shown by the popularity of What are the Big Ideas in Cognitive Neuroscience? and The Big Ideas in Cognitive Neuroscience, Explained.


Footnotes

1Uber deserves special mention.

2This one is from 2016, but it's a real eye-opener: The Not-So-Wholesome Reality Behind The Making of Your Meal Kit.

3This has been the worst-ever year for me personally as well, so I see no reason to be optimistic.

4Actually, #5 is Survival and Grief. I cannot bear to feature this one, so the closely ranked #6 is a stand-in.

5 The post with the absolute lowest number of views (Brief Guide to the CTE Brains in the News. Part 2: Fred McNeill) was written on 12/11/2017. For a true reading of yearly “staying power” we'd need to follow all posts for 365 days.



Sexual Violence is Horrible, But First Look at Causes Outside the Brain


"At the brain level, empathy for social exclusion of personalized women recruited areas coding the affective component of pain (i.e., anterior insula and cingulate cortex), the somatosensory components of pain (i.e., posterior insula and secondary somatosensory cortex) together with the mentalizing network (i.e., middle frontal cortex) to a greater extent than for the sexually objectified women. This diminished empathy is discussed in light of the gender-based violence that is afflicting the modern society" (Cogoni et al., 2018).

A new brain imaging paper on Cyberball, social exclusion, objectification, and empathy went WAY out on a limb and linked the results to sexual violence, despite the lack of differences between male and female participants. It's quite a leap from watching a video of women in differing attire, comparing levels of empathy when “objectified” vs. “personalized” women are excluded from the game, and actually perpetrating violence against women in the real world.



modified from Fig. 1 (Cogoni et al., 2018). (A) Objectified women in little black dresses; (B) personalized women in pants and t-shirt. Note: the black bar didn't appear in the actual videos.


I'm not a social psychologist (so I've always been a bit skeptical), but Cyberball is a virtual game designed as a model for social rejection and ostracism (Williams et al., 2000). The participant is led to believe they are playing an online ball-tossing game with other people, who then proceed to exclude them from the game. It's been widely used to study exclusion, social pain, and empathy for another person's pain.
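Mechanically, the manipulation is just a toss-allocation schedule; a minimal sketch of the logic (illustrative parameters, not Williams et al.'s actual ones):

```python
import random

def cyberball_trial(n_tosses=30, condition="exclusion", seed=0):
    """Simulate who receives each toss in a 3-player Cyberball game.
    In inclusion, the participant gets a fair share (~1/3 of tosses);
    in exclusion, the virtual players stop throwing to them after a
    few opening tosses. Parameters are illustrative only."""
    rng = random.Random(seed)
    recipients = []
    for t in range(n_tosses):
        if condition == "exclusion" and t >= 3:
            # Virtual confederates keep the ball to themselves.
            recipients.append(rng.choice(["player_A", "player_B"]))
        else:
            recipients.append(rng.choice(["player_A", "player_B", "participant"]))
    return recipients

included = cyberball_trial(condition="inclusion").count("participant")
excluded = cyberball_trial(condition="exclusion").count("participant")
print(included, excluded)  # participant receives far fewer tosses when excluded
```

The point of the sketch is how little it takes: the entire "ostracism" manipulation is whether the participant's name stays in the choice set after the opening tosses.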


The present version went beyond this simple animation and used 15–21 second videos (see still image in Fig. 1), with the “self” condition represented by a pair of hands. More important, though, was a comparison of the two “other person” conditions.



“Each video displayed either a ‘social inclusion’ or a ‘social exclusion’ trial. ... At the end of each trial, the participant was asked to rate the valence of the emotion felt by themselves (self condition), or by the other person (other conditions), during the game on a Likert-type rating scale going from −10 = ‘very negative’ over 0 to +10 = ‘very positive’.”

The participants were 19 women and 17 men, who showed no differences in their emotion ratings. Curiously, the negative emotion ratings on exclusion trials did not differ between the Self, Objectified, and Personalized conditions. So there appears to be no empathy gap for objectified women who were excluded from Cyberball. The difference was on the inclusion trials, when the subjects didn't feel as positively towards women in little black dresses when they were included in the game (in comparison to when women in pants were included, or when they themselves were included).


Fig. 3 (Cogoni et al., 2018).


At this point, I won't delve deeper into the neuroimaging results, because the differences shown at the top of the post were for the exclusion condition, when behavioral ratings were all the same. And any potential sex differences in the imaging data weren't reported.1 Or else I'm confused. At any rate, perhaps an fMRI study of perpetrators would be more informative in the future. But ultimately, culture and social conditions and power differentials (all outside the brain) are the major determinants of violence against women.





When discussing the objectification of women in the present era, it's hard to escape the Harvey Weinstein scandal. One of the main purposes of Miramax2 was to turn young women into sex objects. Powerful essays by Lupita Nyong’o, Salma Hayek, and Brit Marling (to name just a few) describe the indignities, sexual harassment, and outright assault they endured from this highly influential career-maker or breaker. Further, they describe the identical circumstances, the lingering doubt, the self-blame, and the commodification of themselves. Here's Marling:
Hollywood was, of course, a rude awakening to that kind of idealism. I quickly realized that a large portion of the town functioned inside a soft and sometimes literal trafficking or prostitution of young women (a commodity with an endless supply and an endless demand). The storytellers—the people with economic and artistic power—are, by and large, straight, white men. As of 2017, women make up only 23 percent of the Directors Guild of America and only 11 percent are people of color.
. . .

Once, when I was standing in line for some open-call audition for a horror film, I remember catching my reflection in the mirror and realizing that I was dressed like a sex object. Every woman in line to audition for “Nurse” was, it seemed. We had all internalized on some level the idea that if we were going to be cast we’d better sell what was desired—not our artistry, not our imaginations—but our bodies.

Dacher Keltner wrote about empathy deficits of the rich and famous in Sex, Power, and the Systems That Enable Men Like Harvey Weinstein. But he emphasized the abuse of power: “The challenge, then, is to change social systems in which the abuses of power arise and continue unchecked.” 


Footnotes

1 Although they listed a variety of reasons, the authors didn't do themselves any favors with this explanation for the lack of sex differences:
“Although this issue is still debated, in this study we refer to gender violence as a phenomenon that mainly entails not only active participation, but also passive acceptance or compliance and therefore involving both men and women’ behaviors.”

2 And Hollywood in general...


References

Cogoni C, Carnaghi A, Silani G. (2018). Reduced empathic responses for sexually objectified women: an fMRI investigation. Cortex 99: 258–272. {PDF}

Williams KD, Cheung CK, Choi W. (2000). Cyberostracism: effects of being ignored over the Internet. J Pers Soc Psychol. 79:748-62.


Further Reading: The Cyberball Collection (by The Neurocritic)

Suffering from the pain of social rejection? Feel better with TYLENOL®

Vicodin for Social Exclusion

Existential Dread of Absurd Social Psychology Studies

The Mental Health of Lonely Marijuana Users

Acetaminophen Probably Isn't an "Empathy Killer"

Advil Increases Social Pain (if you're male)

Oh, and... Spanner or Sex Object?



I should have done this by now...



Today marks 12 years of blogging. Twelve years! During this time, I've managed to remain a mysterious pseudonym to almost everyone. Very few people know who I am.

But a lot has changed since then. The Open Science movement, the rise of multiple platforms for critique, the Replication Crisis in social psychology, the emergence of methodological terrorists, data police, and destructo-critics. Assertive psychologists and statisticians with large social media presences have openly criticized flawed studies using much harsher language than I do. Using their own names. It's hard to stay relevant...

Having a pseudonym now seems quaint.


The most famous neuro-pseudonym of all, Neuroskeptic, interviewed me 2 years ago in a post on Pseudonyms in Science. He asked:

What led you to choose to blog under a pseudonym?

My answer:
It was for exactly the same reason that reviewers of papers and grants are anonymous: it gives you the ability to provide an honest critique without fear of retaliation. If peer review ever becomes completely open and transparent, then I’d have no need for a pseudonym any more.

In an ideal world, reviewers should be identified and held accountable for what they write. Then shoddy reviews and nasty comments would (presumably) become less common. We’ve all seen anonymous reviews that are incredibly insulting, mean, and unprofessional. So it’s hypocritical to say that bloggers are cowardly for hiding under pseudonyms, while staunchly upholding the institution of anonymous peer review. ...

Neuroskeptic also interviewed Neurobonkers (who went public) and Dr. Primestein (who has not).


Have you ever been tempted to drop the pseudonym and use your real name? What do you think would happen (positive and negative) if you did?

My answer:
. . .

If I were to drop the pseudonym, it might be good (and bad) for my career as a neuroscientist. I could finally take credit for my writing, but then I’d have to take all the blame too! But overall, it’s likely that less would happen than I currently imagine.

{At this point, most people probably don't care who I am.}


So what has changed? Have I left the field? No. But some serious and tragic life events have rendered my anonymity irrelevant. I just don't care any more.

In September, my closest childhood friend died from cancer (see Survival and Grief).



I'm on the right.



Then a month later, my wife was diagnosed with stage 4 cancer. My sadness and depression and anxiety over this is beyond words.

I don't want to go into any more detail right now, but I'd like to show you who we are. We met via our blogs in 2006.



Snowshoeing on Mt. Seymour, December 2016
I'm on the left.


So yeah, think of this as my “coming out”. Sorry if I've offended anyone with my ability to blend into male-dominated settings.

Thank you for reading, and for your continued support during this difficult time.

Head Impact and Hyperphosphorylated Tau in Teens



We all agree that repeated blows to the head are bad for the brain. What we don't yet know is:
  • who will show lasting cognitive and behavioral impairments
  • who will show only transient sequelae (and for how long)
  • who will manifest long-term neurodegeneration
  • ...and by which specific cellular mechanism(s)

Adding to the confusion is the unclear terminology used to describe impact-related head injuries. Is a concussion the same as a mild traumatic brain injury (TBI)? Sharp and Jenkins say absolutely not, and contend that Concussion is confusing us all:
It is time to stop using the term concussion as it has no clear definition and no pathological meaning. This confusion is increasingly problematic as the management of ‘concussed’ individuals is a pressing concern. Historically, it has been used to describe patients briefly disabled following a head injury, with the assumption that this was due to a transient disorder of brain function without long-term sequelae. However, the symptoms of concussion are highly variable in duration, and can persist for many years with no reliable early predictors of outcome. Using vague terminology for post-traumatic problems leads to misconceptions and biases in the diagnostic process, producing uninterpretable science, poor clinical guidelines and confused policy. We propose that the term concussion should be avoided. Instead neurologists and other healthcare professionals should classify the severity of traumatic brain injury and then attempt to precisely diagnose the underlying cause of post-traumatic symptoms.

In an interview about the impressive mega-paper by Tagge, Fisher, Minaeva, et al. (2018), co-senior author Dr. Lee Goldstein also said no, but had a different interpretation:
When it comes to head injuries and CTE, Goldstein spoke of three categories that are being jumbled: concussions, TBI and CTE. Concussion, he says, is a syndrome defined “by consensus really every couple of years, based on the signs and symptoms of neurological syndrome, what happens after you get hit in the head. It’s nothing more than that, a syndrome...

A TBI is different. “It is an injury, an event,” he said. “It’s not a syndrome. It’s an event and it involves damage to tissue. If you don’t have a concussion, you can absolutely have brain injury and the converse is true.”
. . .

“So concussion may or may not be a TBI and equally important not having a concussion may or may not be associated with a TBI. A concussion doesn’t tell you anything about a TBI. Nor does it tell you anything about CTE.”

I think I'm even more confused now... you can have concussion (the syndrome) without an injury or an event?

But I'm really here to tell you about 8 post-mortem brains from teenage males who had engaged in contact sports. These were from Dr. Ann McKee's brain bank at BU, and were included in the paper along with extensive data from a mouse model (Tagge, Fisher, Minaeva, et al., 2018). Four brains were in the acute-subacute phase after mild closed-head impact injury and had previous diagnoses of concussion. The other 4 brains were control cases, including individuals who also had previous diagnoses of concussion. Let me repeat that. The controls had ALSO suffered head impact injuries at unknown (“not recent”) pre-mortem dates (>7 years prior in one case).

This amazing and important work was made possible by magnanimous donations from grieving parents. I am very sorry for the losses they have suffered.

Below is a summary of the cases.


Case 1
  • 18-year-old multisport athlete: American football (9 yrs), baseball, basketball, weight-lifting
  • history of 10 sports concussions
  • died by suicide (hanging) 4.2 months after a snowboarding accident with head injury
  • evidence of hyperphosphorylated tau protein 


    Fig. 1 (Tagge, Fisher, Minaeva, et al., 2018). Case 1. (C) and (D) Hemosiderin-laden macrophages indicated by arrows, consistent with subacute head injury. (E) Microhemorrhage surrounded by neurites immunoreactive for phosphorylated tau protein (asterisks).


    Case 2
    • 18-year-old multisport athlete: American football (3 yrs), rugby, soccer, hockey
    • history of 4 concussions
    • one “severe concussion” 1 month before death, followed by “a second rugby-related head injury that resulted in sideline collapse and a 2-day hospitalization”
    • died a week later after weightlifting 
    • neuropathology not shown

    Case 3
    • 17-year-old multisport athlete: American football, lacrosse
    • history of 2 concussions, the second resulting in confusion and memory loss
    • small anterior cavum septum pellucidum (associated with CTE in other studies)
    • died by suicide (hanging) 2 days after second concussion


    Fig. 1 (Tagge, Fisher, Minaeva, et al., 2018). Case 3. (F)–(H) Amyloid precursor protein (APP) immunostaining in the corpus callosum (arrows).


    Case 4
    • 17-year-old American football player
    • history of 3 concussions (26 days, 2 days, 1 day before death)
    • final head injury was fatal, due to swelling and brain herniation
    • evidence of hyperphosphorylated tau protein
    • diagnosed with early-stage CTE


    Fig. 1 (Tagge, Fisher, Minaeva, et al., 2018). Case 4. (O) Phosphorylated tau protein-containing neurofibrillary tangles, pretangles, and neurites in the sulcal depths of the cerebral cortex consistent with neuropathological diagnosis of early-stage CTE.



    CONTROLS: none showed evidence of microvascular or axonal injury, astrocytosis, microgliosis, or phosphorylated tauopathy indicative of CTE or other neurodegenerative disease

    Case 5
    • 19-year-old American football player
    • history of concussion not reported (but can assume possible “blows to the head”)
    • died from multiple organ failure and cardiac arrest

    Case 6
    • 19-year-old hockey player
    • history of 6 concussions (time pre-mortem unknown)
    • died from cardiac arrhythmia

    Case 7
    • 17-year-old American football player
    • history of concussion not reported (but can assume “blows to the head”)
    •  0.3-cm cavum septum pellucidum (consistent with impact injury)
    • died from oxycodone overdose (a factor neglected in previous studies)

    Case 8
    • 22-year-old former American football player
    • history of 3 concussions (one with loss of consciousness) at least 7 years before death
    • history of bipolar disorder and 2 prior suicide attempts
    • died by suicide of unknown mechanism (also neglected in previous studies, but we don't know if asphyxiation was involved)


    Fig. 1 (Tagge, Fisher, Minaeva, et al., 2018). Case 8. (K) Minimal GFAP-immunoreactive astrocytosis in white matter. (N) Few activated microglia in brainstem white matter [NOTE: not an acute-subacute case].


    The goal of this study was to look at pathology after acute-subacute head injury (e.g., astrocytosis, macrophages, and activated microglia). Only 2 of the cases showed hyperphosphorylated tau protein, which is characteristic of CTE. But in the media (e.g., It's not concussions that cause CTE. It's repeated hits), all of these changes have been conflated with CTE, a neurodegenerative condition that presumably develops over a longer time scale. Overall, the argument for a neat and tidy causal cascade is inconclusive in humans (in my view), because hyperphosphorylated tau was not observed in any of the controls, including those with significant histories of concussion. Nor in Cases 2 and 3. Are we to assume, then, that concussions do not produce tauopathy in all cases? Is there a specific “dose” of head impact required? The mouse model is more precise in this realm, and those results seemed to drive the credulous headlines.

    Importantly, the authors admit that “Clearly, not every individual who sustains a head injury, even if repeated, will develop CTE brain pathology.” Conversely, CTE pathology can occur without having suffered a single blow to the head (Gao et al., 2017).

    Clearly, there's still a lot to learn.


    References

    Gao AF, Ramsay D, Twose R, Rogaeva E, Tator C, Hazrati LN. (2017). Chronic traumatic encephalopathy-like neuropathological findings without a history of trauma. Int J Pathol Clin Res. 3:050.

    Sharp DJ, Jenkins PO. (2015). Concussion is confusing us all. Practical neurology 15(3):172-86.

    Tagge CA, Fisher AM, Minaeva OV, Gaudreau-Balderrama A, Moncaster JA, Zhang XL, Wojnarowicz MW, Casey N, Lu H, Kokiko-Cochran ON, Saman S, Ericsson M, Onos KD, Veksler R, Senatorov VV Jr, Kondo A, Zhou XZ, Miry O, Vose LR, Gopaul KR, Upreti C, Nowinski CJ, Cantu RC, Alvarez VE, Hildebrandt AM, Franz ES, Konrad J, Hamilton JA, Hua N, Tripodis Y, Anderson AT, Howell GR, Kaufer D, Hall GF, Lu KP, Ransohoff RM, Cleveland RO, Kowall NW, Stein TD, Lamb BT, Huber BR, Moss WC, Friedman A, Stanton PK, McKee AC, Goldstein LE. (2018). Concussion, microvascular injury, and early tauopathy in young athletes after impact head injury and an impact concussion mouse model. Brain 141: 422-458.


    Super Bowl Confetti Made Entirely From
    Shredded Concussion Studies

     
    A gift from The Onion


    Policy Insights from The Neurocritic: Alarm Over Acetaminophen, Ibuprofen Blocking Emotion Is Overblown


    Just in time for Valentine's Day, in floats a raft of misleading headlines:

    Scientists have found the cure for a broken heart

    Painkillers may also mend a broken heart

    Taking painkillers could ease heartaches - as well as headaches

    Paracetamol and ibuprofen could ease heartaches - as well as headaches


    If Tylenol and Advil were so effective in “mending broken hearts”, “easing heartaches”, and providing a “cure for a broken heart”, we would be a society of perpetually happy automatons, wiping away the suffering of breakup and divorce with a mere dose of acetaminophen. We'd have Tylenol epidemics and Advil epidemics to rival the scourge of the present Opioid Epidemic.

    Really, people,1 words have meanings. If you exaggerate, readers will believe statements that are blown way out of proportion. And they may even start taking doses of drugs that can harm their kidneys and livers.


    These media pieces also have distressing subtitles:

    Common painkillers that kill empathy
    ... some popular painkillers like ibuprofen and acetaminophen have been found to reduce people’s empathy, dull their emotions and change how people process information.

    A new scientific review of studies suggests over-the-counter pain medication could be having all sorts of psychological effects that consumers do not expect.

    Not only do they block people’s physical pain, they also block emotions.

    The authors of the study, published in the journal Policy Insights from the Behavioral and Brain Sciences, write: “In many ways, the reviewed findings are alarming. Consumers assume that when they take an over-the-counter pain medication, it will relieve their physical symptoms, but they do not anticipate broader psychological effects.”

    Cheap painkillers affect how people respond to hurt feelings, 'alarming' review reveals
    Taking painkillers could ease the pain of hurt feelings as well as headaches, new research has discovered.

    The review of studies by the University of California found that women taking drugs such as ibuprofen and paracetamol reported less heartache from emotionally painful experiences, compared with those taking a placebo.

    However, the same could not be said for men as the study found their emotions appeared to be heightened by taking the pills.

    Researchers said the findings of the review were 'in many ways...alarming'.

    I'm here to tell you these worries are greatly exaggerated. Just like there's a Trump tweet for every occasion, there's a Neurocritic post for most of these studies (see below).

    A new review in Policy Insights from the Behavioral and Brain Sciences has prompted the recent flurry of headlines. Ratner et al. (2018) reviewed the literature on OTC pain medications.
    . . . This work suggests that drugs like acetaminophen and ibuprofen might influence how people experience emotional distress, process cognitive discrepancies, and evaluate stimuli in their environment. These studies have the potential to change our understanding of how popular pain medications influence the millions of people who take them. However, this research is still in its infancy. Further studies are necessary to address the robustness of reported findings and fully characterize the psychological effects of these drugs.

    The studies are potentially transformative, yet the research is still in its infancy. The press didn't read the “further studies are necessary” caveat. But I did find one article that took a more modest stance:

    Do OTC Pain Relievers Have Psychological Effects?
    Ratner wrote that the findings are “in many ways alarming,” but he told MD Magazine that his goal is not so much to raise alarm as it is to prompt additional research. “Something that I want to strongly emphasize is that there are really only a handful of studies that have looked at the psychological effects of these drugs,” he said.

    Ratner said a number of questions still need to be answered. For one, there is not enough evidence out there to know to what extent these psychological effects are merely the result of people being in better moods once their pain is gone.

    . . .

    Ratner also noted that the participants in the studies were not taking the medications because of physical pain, and so the psychological effects might be a difference in cases where the person experienced physical pain and then relief.

    For now, Ratner is urging caution and nuanced interpretation of the data. He said stoking fears of these drugs could have negative consequences, as could a full embrace of the pills as mood-altering therapies.

    Ha! Not so alarming after all, we see on a blog with 5,732 Twitter followers (as opposed to 2.4 million and 2.9 million for the most popular news pieces). I took 800 mg of ibuprofen before writing this post, and I do not feel any less anxious or disturbed about events in my life. Or even about feeling the need to write this post, with my newly “out” status and all.


    There's a Neurocritic post for every occasion...

    As a preface to my blog oeuvre, these are topics I care about deeply. I'm someone who has suffered heartache and emotional pain (as most of us have), as well as chronic pain conditions, four invasive surgeries, tremendous loss, depression, anxiety, insomnia, etc.... My criticism does not come lightly.

    I'm not entirely on board with studies showing that one dose (or 3 weeks) of Tylenol MAY {or may not} modestly reduce social pain or “existential distress” or empathy as sufficient models of human suffering and its alleviation by OTC drugs. In fact, I have questions about all of these studies.

    Suffering from the pain of social rejection? Feel better with TYLENOL® – My first question has always been, why acetaminophen and not aspirin or Advil? Was there a specific mechanism in mind?

    Existential Dread of Absurd Social Psychology Studies – Does a short clip of Rabbits (by David Lynch) really produce existential angst and thoughts of death? [DISCLAIMER: I'm a David Lynch fan.]

    Tylenol Doesn't Really Blunt Your Emotions – Why did ratings of neutral stimuli differ as a function of treatment (in one condition)?

    Does Tylenol Exert its Analgesic Effects via the Spinal Cord? – and perhaps brainstem

    Acetaminophen Probably Isn't an "Empathy Killer" – How do very slight variations in personal distress ratings translate to real world empathy?

    Advil Increases Social Pain (if you're male) – Reduced hurt from Cyberball exclusion in women, but a disinhibition effect in men (blunting their tendency to suppress their emotional pain)?

    ...and just for fun:

    Vicodin for Social Exclusion – not really – but social pain and physical pain are not interchangeable

    Use of Anti-Inflammatories Associated with Threefold Increase in Homicides – cause/effect issue, of course



    Scene from Rabbits by David Lynch


    Footnote

    1And by “people” I mean scientists and journalists alike. Read this tweetstorm from Chris Chambers, including:





    Reference

    Ratner KG, Kaczmarek AR, Hong Y. (2018). Can Over-the-Counter Pain Medications Influence Our Thoughts and Emotions? Policy Insights from the Behavioral and Brain Sciences. Feb 6:2372732217748965.

    Universal Linguistic Decoders are Everywhere

    Pereira et al. (2018) - click image to enlarge


    No, they're not. They're really not. They're “everywhere” to me, because I've been listening to Black Celebration. How did I go from “death is everywhere” to “universal linguistic decoders are everywhere”? I don't imagine this particular semantic leap has occurred to anyone before. Actually, the association travelled in the opposite direction, because the original title of this piece was Decoders Are Everywhere.1 {I was listening to the record weeks ago, the silly title of the post reminded me of this, and the semantic association was remote.}

    This is linguistic meaning in all its idiosyncratic glory, a space for infinite semantic vectors that are unexpected and novel. My rambling is also an excuse to not start out by saying, oh my god, what were you thinking with a title like, Toward a universal decoder of linguistic meaning from brain activation (Pereira et al., 2018). Does the word “toward” absolve you from what such a sage, all-knowing clustering algorithm would actually entail? And of course, “universal” implies applicability to every human language, not just English. How about, Toward a better clustering algorithm (using GloVe vectors) for inferring meaning from the distribution of voxels, as determined by an n=16 database of brain activation elicited by reading English sentences?

    But it's unfair (and inaccurate) to suggest that the linguistic decoder can decipher a meandering train of thought when given a specific neural activity pattern. Therefore, I do not want to take anything away from what Pereira et al. (2018) have achieved in this paper. They say:
    • “Our work goes substantially beyond prior work in three key ways. First, we develop a novel sampling procedure for selecting the training stimuli so as to cover the entire semantic space. This comprehensive sampling of possible meanings in training the decoder maximizes generalizability to potentially any new meaning.”
    • “Second, we show that although our decoder is trained on a limited set of individual word meanings, it can robustly decode meanings of sentences represented as a simple average of the meanings of the content words. ... To our knowledge, this is the first demonstration of generalization from single-word meanings to meanings of sentences.”
    • “Third, we test our decoder on two independent imaging datasets, in line with current emphasis in the field on robust and replicable science. The materials (constructed fully independently of each other and of the materials used in the training experiment) consist of sentences about a wide variety of topics—including abstract ones—that go well beyond those encountered in training.”
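Their second point, representing a sentence as a simple average of its content-word vectors, is easy to sketch. The toy embeddings below are invented stand-ins (real GloVe vectors are 300-dimensional and trained on large corpora), and the cosine scorer is just one reasonable way to compare a decoded vector to candidates:

```python
import numpy as np

# Toy stand-ins for GloVe embeddings: invented 4-d vectors for
# illustration; real GloVe vectors are 300-d and corpus-trained.
glove = {
    "dog":    np.array([0.8, 0.1, 0.0, 0.3]),
    "barks":  np.array([0.7, 0.2, 0.1, 0.4]),
    "loudly": np.array([0.5, 0.3, 0.2, 0.1]),
}

def sentence_vector(content_words, embeddings):
    """A sentence meaning as the average of its content-word vectors."""
    return np.mean([embeddings[w] for w in content_words], axis=0)

def cosine(u, v):
    """Similarity score for comparing a decoded vector to candidates."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

s = sentence_vector(["dog", "barks", "loudly"], glove)
print(cosine(s, glove["dog"]))
```

A decoder trained only on single-word semantic vectors can then be scored on whether its output for a sentence lands closer to that sentence's averaged vector than to the vectors of other candidate sentences.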

    Unfortunately, it would take me days to adequately pore over the methods, and even then my understanding would be only cursory. The heavy lifting would need to be done by experts in linguistics, unsupervised learning, and neural decoding models. But until then...


    Death is everywhere
    There are flies on the windscreen
     For a start
     Reminding us
     We could be torn apart
    Tonight

    ---Depeche Mode, Fly on the Windscreen


    Footnote

    1 Well, they are super popular right now.


    Reference

    Pereira F, Lou B, Pritchett B, Ritter S, Gershman SJ, Kanwisher N, Botvinick M, Fedorenko E. (2018). Toward a universal decoder of linguistic meaning from brain activation. Nat Commun. 9(1):963.





    Come here
    Kiss me
    Now
    Come here
    Kiss me
    Now

    ---ibid


    25 Years of Cognitive Neuroscience in Boston



    The 25th Annual Meeting of the Cognitive Neuroscience Society starts off with a big bang on Saturday afternoon with the Big Theory versus Big Data Debate, moderated by David Poeppel.1


    Big Theory versus Big Data: What Will Solve the Big Problems in Cognitive Neuroscience?


    My non-committal answers are:

    (1) Both.

    (2) It depends on what you want to do: predict behavior2 (or some mental state), explain behavior, control behavior, etc.

    Abstract: All areas of the sciences are excited about the innovative new ways in which data can be acquired and analyzed. In the neurosciences, there exists a veritable orgy of data – but is that what we need? Will the colossal datasets we now enjoy solve the questions we seek to answer, or do we need more ‘big theory’ to provide the necessary intellectual infrastructure? Four leading researchers, with expertise in neurophysiology, neuroimaging, artificial intelligence, language, and computation will debate these big questions, arguing for what steps are most likely to pay off and yield substantive new explanatory insight.


    Talk 1: Eve Marder – The Importance of the Small for Understanding the Big

    Talk 2: Jack Gallant – Which Presents the Biggest Obstacle to Advances in Cognitive Neuroscience Today: Lack of Theory or Lack of Data?

    Talk 3: Alona Fyshe – Data Driven Everything

    Talk 4: Gary Marcus – Neuroscience, Deep Learning, and the Urgent Need for an Enriched Set of Computational Primitives


    Levels of analysis! Marr! [Poeppel is the moderator] New new new! Transformative techniques, game-changing paradigms, groundbreaking schools of thought, and multiple theories for myriad neural circuits. There is no single computational system that can possibly explain brain function at all levels of analysis (gasp! not even the Free Energy Principle).3

    A Q&A or panel discussion would be nice... (although not on the schedule)


    This Special Symposium will be preceded by the ever-exciting Data Blitz (a series of 5 minute talks) and followed by a Keynote Address by the Godfather of Cognitive Neuroscience:

    Michael Gazzaniga

    The Consciousness Instinct

    How do neurons turn into minds? How does physical “stuff”—atoms, molecules, chemicals, and cells—create the vivid and various alive worlds inside our heads? This problem has gnawed at us for millennia. In the last century there have been massive breakthroughs that have rewritten the science of the brain, and yet the puzzles faced by the ancient Greeks are still present. In this lecture I review the history of human thinking about the mind/brain problem, giving a big-picture view of what science has revealed. Understanding how consciousness could emanate from a confederation of independent brain modules working together will help define the future of brain science and artificial intelligence, and close the gap between brain and mind.


    Plus there is a jam-packed schedule of posters, talks, and prestigious award recipients/presenters on Sunday through Tuesday. Another highlight:

    Symposium 3: The Next 25 Years of Cognitive Neuroscience: Opportunities and Challenges (Brad Postle, Chair)4


    I belong to the school of slow blogging, so I probably won't have immediate recaps. Follow #CNS2018 and enjoy the conference!



    Footnotes

    1 The passenger next to me was watching the Big Bang Theory, so yay for repetition priming.

    2 At multiple levels of analysis, e.g. from molecular processes to motor output and all in between. Not daunting or anything. Perhaps not even possible...

    3 Although Poster A87 suggests otherwise. {I think}

    4 Unfortunately, this conflicts with Symposium 1, Memory Modulation via Direct Brain Stimulation in Humans, which I really want to attend as well.

    Automatically-Triggered Brain Stimulation during Encoding Improves Verbal Recall

    Fig. 4 (modified from Ezzyat et al., 2018). Stimulation targets showing numerical increase/decrease in free recall performance are shown in red/blue. Memory-enhancing sites clustered in the middle portion of the left middle temporal gyrus.


    Everyone forgets. As we grow older or have a brain injury or a stroke or develop a neurodegenerative disease, we forget much more often. Is there a technological intervention that can help us remember? That is the $50 million question funded by DARPA's Restoring Active Memory (RAM) Program, which has focused on intracranial electrodes implanted in epilepsy patients to monitor seizure activity.

    Led by Michael Kahana's group at the University of Pennsylvania and including nine other universities, agencies, and companies, this Big Science project is trying to establish a “closed-loop” system that records brain activity and stimulates appropriate regions when a state indicative of poor memory function is detected (Ezzyat et al., 2018).

    Initial “open-loop” efforts targeting medial temporal lobe memory structures (entorhinal cortex, hippocampus) were unsuccessful (Jacobs et al., 2016). In fact, direct electrical stimulation of these regions during encoding of spatial and verbal information actually impaired memory performance, unlike an initial smaller study (Suthana et al., 2012).1

    {See Bad news for DARPA's RAM program: Electrical Stimulation of Entorhinal Region Impairs Memory}


    However, during the recent CNS symposium on Memory Modulation via Direct Brain Stimulation in Humans, Dr. Suthana suggested that “Stimulation of entorhinal white matter and not nearby gray matter was effective in improving hippocampal-dependent memory...” 2

    {see this ScienceNews story}


    Enter the Lateral Temporal Cortex

    Meanwhile, the Penn group and their collaborators moved to a different target region, which was also discussed in the CNS 2018 symposium: “Closed-loop stimulation of temporal cortex rescues functional networks and improves memory” (based on Ezzyat et al., 2018).


    Fig. 4 (modified from Ezzyat et al., 2018). Horizontal section. Stimulation targets showing numerical increase/decrease in free recall performance are shown in red/blue. Memory-enhancing sites clustered in the middle portion of the left middle temporal gyrus.


    Twenty-five patients performed a memory task in which they were shown a list of 12 nouns, followed by a distractor task, and finally a free recall phase, where they were asked to remember as many of the words as they could. The participants went through a total of 25 rounds of this study-test procedure.


    Meanwhile, the first three rounds were “record-only” sessions, in which the investigators developed a classifier: a pattern of brain activity that predicted whether or not the patient would recall the word at better than chance (AUC = 0.61, where chance = .50).3 The classifier relied on activity across all electrodes that were placed in an individual patient.
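As a rough sketch (not the authors' actual pipeline), a record-only training phase of this kind can be mimicked with simulated features and an off-the-shelf classifier. Every number below is invented; in the real study the features were spectral power values at each implanted electrode:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Simulated "record-only" data: one feature row per studied word,
# labeled by whether the word was later recalled. All values invented.
n_words, n_features = 300, 40
X = rng.normal(size=(n_words, n_features))
true_w = rng.normal(size=n_features)
p_recall = 1 / (1 + np.exp(-(0.1 * X @ true_w - 0.8)))  # weak signal
y = rng.binomial(1, p_recall)

# Train a classifier to predict later recall from brain activity,
# then score it with AUC (0.5 = chance, 1.0 = perfect).
clf = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])
print(f"training AUC = {auc:.2f}")
```

In practice the reported AUC would come from held-out data, not training data as in this toy version.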


    Memory blocks #4-25 alternated between Stimulation (Stim) and No Stimulation (NoStim) lists. In Stim blocks, 0.5-2.25 mA stimulation was delivered for 500 ms when the classifier predicted a probability of recall below 0.5 during word presentation. In NoStim lists, stimulation was not delivered on analogous trials, and the comparison between those two conditions comprised the main contrast shown below.
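The trigger logic itself is simple. This toy function is my own paraphrase of the rule described above, not code from the study:

```python
def should_stimulate(p_recall, list_type, threshold=0.5):
    """Closed-loop rule: during Stim lists, trigger stimulation whenever
    the classifier's predicted probability of later recall falls below
    threshold; NoStim lists log the same trigger events without
    delivering current, providing the matched comparison trials."""
    return list_type == "Stim" and p_recall < threshold

print(should_stimulate(0.3, "Stim"))    # poor predicted encoding: stimulate
print(should_stimulate(0.3, "NoStim"))  # matched comparison trial: no current
```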


    Fig. 3a (modified from Ezzyat et al., 2018). Stimulation delivered to lateral temporal cortex targets increased the probability of recall compared to matched unstimulated words in the same subject (P < 0.05) and stimulation delivered to Non-lateral temporal targets in an independent group (P < 0.01).


    The authors found that lateral temporal cortex stimulation increased the relative probability of item recall by 15% (using a log-binomial model to estimate the relative change in recall probability). {But if you want to see all of the data, peruse the Appendix below. Overall recall isn't that great...}

    Lateral temporal cortex (n=18) meant MTG, STG, and IFG (mostly on the left). Non-lateral temporal cortex (n=11) meant elsewhere (see Appendix below). The improvements were greatest with stimulation in the middle portion of the left middle temporal gyrus. There are many reasons for poor encoding; one could be that subjects were not paying enough attention, but the authors didn't have the electrode coverage to test that explicitly. The location of the effective sites leads me to believe that electrical stimulation was enhancing the semantic encoding of the words: the MTG is thought to be critical for semantic representations and language comprehension in general (Turken & Dronkers, 2011).

    Thus, my interpretation of the results is that stimulation may have boosted semantic encoding of the words, given the nature of the stimuli (words, obviously), the left lateralization with a focus in MTG, and the lack of an encoding task. The verbal memory literature clearly demonstrates that when subjects have a deep semantic encoding task (e.g., living/non-living decision), compared to shallow orthographic (are there letters that extend above/below?) or phonological tasks, recall and recognition are improved. Which led me to ask some questions, and one of the authors kindly replied (Dan Rizzuto, personal communication). 4

    1. Did you ever have conditions that contrasted different encoding tasks? Here I meant to ask about semantic vs. orthographic encoding (because the instructions were always to “remember the words” with no specific encoding task).

    • We studied three verbal learning tasks (uncategorized free recall, categorized free recall, paired associates learning) and one spatial navigation task during the DARPA RAM project. We were able to successfully decode recalled / non-recalled words using the same classifier across the three different verbal memory tasks, but we never got sufficient paired associates data to determine whether we could reliably increase memory performance on this task.

    2. Did you ever test nonverbal stimuli (not nameable pictures, which have a verbal code), but visual-spatial stimuli? Here I was trying to assess the lexical-semantic nature of the effect.

    • With regard to the spatial navigation task, we did observe a few individual patients with LTC stimulation-related enhancement, but we haven't yet replicated the effect across the population.

    Although this method may have therapeutic implications in the future, at present it is too impractical, and the gains were quite small. Nonetheless, it is an accomplished piece of work to demonstrate closed-loop memory enhancement in humans.


    Footnotes

    1 Since that time, however, the UCLA group has reported that theta-burst microstimulation of....
    ....the right entorhinal area during learning significantly improved subsequent memory specificity for novel portraits; participants were able both to recognize previously-viewed photos and reject similar lures. These results suggest that microstimulation with physiologic level currents—a radical departure from commonly used deep brain stimulation protocols—is sufficient to modulate human behavior and provides an avenue for refined interrogation of the circuits involved in human memory.

    2 Unfortunately, I was running between two sessions and missed that particular talk.

    3 This level of prediction is more like a proof of concept and would not be clinically acceptable at this point.

    4 Thanks also to Youssef Ezzyat and Cory Inman, whom I met at the symposium.


    References

    Ezzyat Y, Wanda PA, Levy DF, Kadel A, Aka A, Pedisich I, Sperling MR, Sharan AD, Lega BC, Burks A, Gross RE, Inman CS, Jobst BC, Gorenstein MA, Davis KA, Worrell GA, Kucewicz MT, Stein JM, Gorniak R, Das SR, Rizzuto DS, Kahana MJ. (2018). Closed-loop stimulation of temporal cortex rescues functional networks and improves memory. Nat Commun. 9(1): 365.

    Jacobs, J., Miller, J., Lee, S., Coffey, T., Watrous, A., Sperling, M., Sharan, A., Worrell, G., Berry, B., Lega, B., Jobst, B., Davis, K., Gross, R., Sheth, S., Ezzyat, Y., Das, S., Stein, J., Gorniak, R., Kahana, M., & Rizzuto, D. (2016). Direct Electrical Stimulation of the Human Entorhinal Region and Hippocampus Impairs Memory. Neuron 92(5): 983-990.

    Suthana, N., Haneef, Z., Stern, J., Mukamel, R., Behnke, E., Knowlton, B., & Fried, I. (2012). Memory Enhancement and Deep-Brain Stimulation of the Entorhinal Area. New England Journal of Medicine 366(6): 502-510.

    Titiz AS, Hill MRH, Mankin EA, M Aghajan Z, Eliashiv D, Tchemodanov N, Maoz U, Stern J, Tran ME, Schuette P, Behnke E, Suthana NA, Fried I. (2017). Theta-burst microstimulation in the human entorhinal area improves memory specificity. Elife Oct 24;6.

    Turken AU, Dronkers NF. (2011). The neural architecture of the language comprehension network: converging evidence from lesion and connectivity analyses. Front Syst Neurosci. Feb 10;5:1.


    Appendix (modified from Supplementary Table 1) 

    - click on image for a larger view - 



    In the table above, Stim and NoStim recall percentages are for ALL words in the blocks. But:
    • Only half of the words in each Stim list were stimulated, however, so this comparison is conservative. The numbers improve slightly if you compare just the stimulated words with the matched non-stimulated words. Not all subjects exhibited a significant within-subject effect, but the effect is reliable across the population (Figure 3a).

    Big Theory, Big Data, and Big Worries in Cognitive Neuroscience


    Eve Marder, Alona Fyshe, Jack Gallant, David Poeppel, Gary Marcus
    image by @jonasobleser


    What Will Solve the Big Problems in Cognitive Neuroscience?

    That was the question posed in the Special Symposium moderated by David Poeppel at the Boston Sheraton (co-sponsored by the Cognitive Neuroscience Society and the Max-Planck-Society). The format was four talks by prominent experts in (1) the complexity of neural circuits and neuromodulation in invertebrates; (2) computational linguistics and machine learning; (3) human neuroimaging/the next wave in cognitive and computational neuroscience; and (4) language learning/AI contrarianism. These were followed by a lively panel discussion and a Q&A session with the audience. What a great format!


    We already knew the general answer before anyone started speaking.


    But I believe that Dr. Eve Marder, the first speaker, posed the greatest challenges to the field of cognitive neuroscience, objections that went mostly unaddressed by the other speakers. Her talk was a treasure trove of quotable witticisms (paraphrased):
    • How much ambiguity can you live with in your attempt to understand the brain? For me I get uncomfortable with anything more than 100 neurons
    • If you're looking for optimization (in [biological] neural networks), YOU ARE DELUSIONAL!
    • Degenerate mechanisms produce the same changes in behavior, even in a 5 neuron network...
    • ..so Cognitive Neuroscientists should be VERY WORRIED


    Dr. Marder started her talk by expressing puzzlement about why she would be asked to speak on such a panel, but she gamely agreed. She initially expressed some ideas that almost everyone endorses:
    • Good connectivity data is essential
    • Simultaneous recordings from many neurons is a good idea [but how many is enough?]
    But then she turned to the nightmare of trying to understand large-scale brain networks, as is the fashion these days in human fMRI and connectivity studies.
    • It's not clear what changes when circuits get big
    • Assuming a “return to baseline” is always hiding a change that can be cryptic
    • On the optimization issue... nervous systems can't optimize for one situation if it makes them unable to deal with other [unexpected] situations.
    • How does degeneracy relieve the tyranny?
    No one knows...

    Dr. Marder was also a speaker at the Canonical Computation in Brains and Machines meeting in mid-March (h/t @neuroecology), and her talk from that conference is available online.

    I believe the talks from the present symposium will be on the CNS YouTube channel as well, and I'll update the post if/when that happens.

    Speaking of canonical computation, now I know why Gary Marcus was apoplectic at the thought of “one canonical cortical circuit to rule them all.” More on that in a moment...


    The next speaker was Dr. Alona Fyshe, who spoke about computational vision. MLE, MAP, ImageNet, CNNs. I'm afraid I can't enlighten you here. Like everyone else, she thought theory vs. data is a false dichotomy. Her memorable tag line was “Kill Your Darlings.” At first I thought this meant delete your best line [of code? of your paper?], but in reality “our theories need to be flexible enough to adapt to data” (always follow @vukovicnikola #cns2018 for the best real-time conference coverage).


    Next up was Dr. Gary Marcus, who started out endorsing the famous Jonas and Kording (2017) paper Could a Neuroscientist Understand a Microprocessor? which suggested that current data analysis methods in neuroscience are inadequate for producing a true understanding of the brain. Later, during the discussion, Dr. Jack Gallant quipped that the title of that paper should have been “Neuroscience is Hard” (on Twitter, @KordingLab thought this was unfair). For that matter, Gallant told Marcus, “I think you just don't like the brain.” [Gallant is big on data, but not mindlessly]



    image via @vukovicnikola


    This sparked a lively debate during the panel discussion and the Q&A.


    Anyway, back to Marcus. “Parsimony is a false god,” he said. I've long agreed with this sentiment, especially when it comes to the brain: the simplest explanation isn't always true. Marcus is pessimistic that deep learning will lead to great advances in explaining neural systems (or AI). It's that pesky canonical computation again. The cerebral cortex (and the computations it performs) isn't uniform across regions (Marcus et al., 2014).

    This is not a new idea. In my ancient dissertation, I cited Swindale (1990) and said:
    Swindale (1990) argues that the idea of mini-columns and macro-columns was drawn on insufficient data. Instead, the diversity of cell types in different cortical areas may result in more varied and complex organization schemes which would adequately reflect the different types of information stored there [updated version would be “types of computations performed there”].1

    Finally, Dr. Jack Gallant came out of the gate saying the entire debate is silly, and that we need both theory and data. But he also thinks it's silly to believe we'll get there with theory alone. We need to build better measurement tools, stop faulty analysis practices, and develop improved experimental paradigms. He clearly favors the collection of more data, but in a refined way. For the moment, collect large rich naturalistic data sets using existing technology.

    And remember, kids, “the brain is a horror show of maps.”



     image via @vukovicnikola



    Big Data AND Big Theory: Everyone Agrees (sorta)

    Eve Marder– The Importance of the Small for Understanding the Big

    Alona Fyshe– Data Driven Everything

    Gary Marcus– Neuroscience, Deep Learning, and the Urgent Need for an Enriched Set of Computational Primitives

    Jack Gallant– Which Presents the Biggest Obstacle to Advances in Cognitive Neuroscience Today: Lack of Theory or Lack of Data?



    Gary Marcus talking over Jack Gallant. Eve Marder is out of the frame.
    image by @CogNeuroNews


    Footnote

    1Another quote from the young Neurocritic:
    As finer analyses are applied to both local circuitry and network properties, our theoretical understanding of neocortical operation may require further revision, if not total replacement with other metaphors. At our current state of knowledge, a number of different conceptual frameworks can be overlaid on the existing data to derive an order that may not be there. Or conversely, the data can be made to fit into one's larger theoretical view.

    The Fractionation of Auditory Semantic Knowledge: Agnosia for Bird Calls




    How is semantic knowledge represented and stored in the brain? A classic way of addressing this question is via single-case studies of patients with brain lesions that lead to a unique pattern of deficits. Agnosia is the inability to recognize some class (or classes) of entities such as objects or persons. Agnosia in the visual modality is most widely studied, but agnosias in the auditory and olfactory modalities have been reported as well. A key element is that basic sensory processing is intact, but higher-order recognition of complex entities is impaired.

    Agnosias that are specific for items in a particular category (e.g., animals, fruits/vegetables, tools, etc.) are sometimes observed. An ongoing debate concerns whether some category-specific dissociations fall out along sensory/functional lines (the Warrington view), or along domain-specific lines (the Caramazza view).1 The former suggests that knowledge of living things is more reliant on vision (you don't pick up and use an alligator), while knowledge of tools is more reliant on how you use them. The latter hypothesis suggests that evolutionary pressures led to distinct neural systems for processing different categories of objects.2

    Much less work has examined how nonverbal auditory knowledge is represented in the brain. A new paper reports on a novel category-specific deficit in an expert bird-watcher who developed semantic dementia (Muhammed et al., 2018). Patient BA lost the ability to identify birds by their songs, but not by their appearance. As explained by the authors:
    BA is a dedicated amateur birder with some 30 years’ experience, including around 10 weeks each spring spent in birdwatching expeditions and over the years had also regularly attended courses in bird call recognition, visual identification and bird behaviour. He had extensive exposure to a range of bird species representing all major regions and habitats of the British Isles. He had noted waning of his ability to name birds or identify them from their calls over a similar timeframe to his evolving difficulty with general vocabulary. At the time of assessment, he was also becoming less competent at identifying birds visually but he continued to enjoy recognising and feeding the birds that visited his garden. There had been no suggestion of any difficulty recognising familiar faces or household items nor any difficulty recognising the voices of telephone callers or everyday noises. There had been no evident change in BA's appreciation of music.

    BA's brain showed a pattern of degeneration characteristic of semantic dementia, with asymmetric atrophy affecting the anterior, medial, and inferior temporal lobes, to a greater extent in the left hemisphere.



    Fig. 1 (modified from Muhammed et al., 2018). Note that L side of brain is shown on R side of scan. Coronal sections of BA's T1-weighted volumetric brain MRI through (A) temporal poles; (B) mid-anterior temporal lobes; and (C) temporo-parietal junctional zones. There is more severe involvement of the left temporal lobe.



    The authors developed a specialized test of bird knowledge in the auditory, visual, and verbal modalities. The performance of BA was compared to that of three birders similar in age and experience.


    Results indicated that “BA performed below the control range for bird knowledge derived from calls and names but within the control range for knowledge derived from appearance.” There was a complicated pattern of results for his knowledge of specific semantic characteristics in the different modalities, but the basic finding suggested an agnosia for bird calls. Interestingly, he performed as well as controls on tests of famous voices and famous face pictures.

    Thus, the findings suggest separate auditory and visual routes to avian conceptual knowledge, at least in this expert birder. Also fascinating was the preservation of famous person identification via voice and image. The authors conclude with a ringing endorsement of single case studies in neuropsychology:
    This analysis transcends the effects of acquired expertise and illustrates how single case experiments that address apparently idiosyncratic phenomena can illuminate neuropsychological processes of more general relevance.

    link via @utafrith


    References

    Caramazza A, Mahon BZ. (2003). The organization of conceptual knowledge: the evidence from category-specific semantic deficits. Trends Cogn Sci. 7(8):354-361.

    Muhammed L, Hardy CJD, Russell LL, Marshall CR, Clark CN, Bond RL, Warrington EK, Warren JD. (2018). Agnosia for bird calls. Neuropsychologia 113:61-67.

    Warrington EK, McCarthy RA. (1994). Multiple meaning systems in the brain: a case for visual semantics. Neuropsychologia 32(12):1465-73.

    Warrington EK, Shallice T. (1984). Category specific semantic impairments. Brain 107(Pt 3):829-54.


    Footnotes

    1 I'm using this nomenclature as a shorthand, obviously, as many more researchers have been involved in these studies. And this is an oversimplification based on the origins of the debate.

    2 In fact, the always-argumentative Prof. Caramazza gave a lecture on The Representation of Objects in the Brain: Nature or Nurture for winning the Fred Kavli Distinguished Career Contributions in Cognitive Neuroscience Award (#CNS2018). Expert live-tweeter @vukovicnikola captured the following series of slides, which summarizes the debate as resolved in Caramazza's favor (to no one's surprise).







    “My family say they grieve for the old me” – profound personality changes after deep brain stimulation



    Deep brain stimulation (DBS) of the subthalamic nucleus in Parkinson's disease (PD) has been highly successful in controlling the motor symptoms of this disorder, which include tremor, slowed movement (akinesia), and muscle stiffness or rigidity. The figure above shows the electrode implantation procedure for PD, where a stimulating electrode is placed in either the subthalamic nucleus (STN), a tiny collection of neurons within the basal ganglia circuit, or in the internal segment of the globus pallidus, another structure in the basal ganglia (Okun, 2012). DBS of the STN is more common, and more often a source of disturbing non-motor side effects.

    In brief, DBS of the STN alters neural activity patterns in complex cortico-basal-ganglia-thalamo-cortical networks (McIntyre & Hahn, 2010).

    DBS surgery may be recommended for some patients in whom dopamine (DA) replacement therapy has become ineffective, usually after a few years. DA medications include the classic DA precursor L-DOPA, followed by DA agonists such as pramipexole, ropinirole, and bromocriptine. But unfortunately, impulse control disorders (ICDs, e.g., compulsive shopping, excessive gambling, binge eating, and compulsive sexual behavior) occur in about 17% of PD patients on DA agonists (Voon et al., 2017).

    There are many first-person accounts from PD patients who describe uncharacteristic and embarrassing behavior after taking DA agonists, like this grandpa who started seeing prostitutes for the first time in his life:
    'I have become an embarrassment'

    For most of his life John Smithers was a respected family man who ran a successful business. Then he started paying for sex. Now, in his 70s, he explains how his behaviour has left him broke, alone and tormented

    I am 70 years old and used to be respectable. I was a magistrate for 25 years, and worked hard to feed my children and build up the family business. I was not the most faithful of husbands, but I tried to be discreet about my affairs.1 Now I seem to be a liability. Over the last two decades I have spent a fortune on prostitutes and lost two wives. I have made irrational business decisions that took me to the point of bankruptcy. I have become an embarrassment to my nearest and dearest.

    Also reports like: Drug 'led patients to gamble'.


    New-onset ICDs can also occur in patients receiving STN DBS, but the effects are mixed across the entire population: ICD symptoms can also improve or remain unchanged. Why this is so is a vexing question, with contributing factors that include premorbid personality, genetics, family history, past and present addictions, and demographics (Weintraub & Claassen, 2017).





    Neuroethicists are weighing in on the potential side effects of DBS that may alter a patient's perception of identity and self. A recent paper included a first-person account of altered personality and a sense of self-estrangement in a 46-year-old woman undergoing STN DBS for PD (Gilbert & Viaña, 2018):
    The patient reported a persistent state of self-perceived changes following implantation. More than one year after surgery, her narratives explicitly refer to a persistent perception of strangeness and alteration of her concept of self. For instance, she reported:
    "can't be the real me anymore—I can't pretend . . . I think that I felt that the person that I have been [since the intervention] was somehow observing somebody else, but it wasn't me. . . . I feel like I am who I am now. But it's not the me that went into the surgery that time. . . . My family say they grieve for the old [me]. . . ."

    Many of her quotes are striking in their similarity to behaviors that occur in the manic phase of bipolar disorder {loss of control, grandiosity}:
    The patient also reported developing severe postoperative impulsivity: "I cannot control the impulse to go off if I'm angry." In parallel, while describing a sense of loss of control over some impulsions, she has also recognized that DBS gave her increased feelings of strength: "I never had felt this lack of power or this giving of power—until I had deep brain stimulation."

    {also uncharacteristic sexual urges and hypersexuality; excessively energetic; compulsive shopping}:
    ...she experienced radically enhanced capacities, in the form of increased uncontrollable sexual urges:
    "I know this is a bit embarrassing. But I had 35 staples in my head, and we made love in the hospital bathroom and that wasn't just me. It was just I had felt more sexual with the surgery than without."
    And greater physical energy:
    "I remember about a week after the surgery, I still had the 35 staples in my head and I was just starting to enter the cooler months of winter but my kids had got me winter clothes so I had nothing to wear to the follow up appointment and when I went back there of the morning, I thought "I can walk into the doctor's" even though it was 5 kilometers into town. It's like the psychologist said: "For a woman who had a very invasive brain surgery 9 days ago and you've just almost walked 10 kilometers." And on the way, I stopped and bought a very uncharacteristic dress, backless—completely different to what I usually do."

    Examining the DSM-5 criteria for bipolar mania, it seems clear (to me, at least) that the patient is indeed having a prolonged manic episode induced by STN DBS.
    In order for a manic episode to be diagnosed, three (3) or more of the following symptoms must be present:
    • Inflated self-esteem or grandiosity
    • Decreased need for sleep (e.g., one feels rested after only 3 hours of sleep)
    • More talkative than usual or pressure to keep talking
    • Flight of ideas or subjective experience that thoughts are racing
    • Attention is easily drawn to unimportant or irrelevant items
    • Increase in goal-directed activity (either socially, at work or school; or sexually) or psychomotor agitation
    • Excessive involvement in pleasurable activities that have a high potential for painful consequences (e.g., engaging in unrestrained buying sprees, sexual indiscretions, or foolish business investments)

    It's also notable that she divorced her husband, moved to another state, ruptured the therapeutic relationship with her neurologist and surgical team, and made a suicide attempt. She also took up painting and perceived the world in a more vibrant, colorful way {which resembles narratives of persons experiencing manic episodes}:
    "I don't know, all the senses came alive. I wanted to listen to Paul Kelly and all of my favorite music really loud in the toilet. And you know, also everything was colourful. . . . Well, since brain surgery I can. I didn't bother before. I can see the light . . . the light that is underlying every masterpiece in photography. . . . I've seen it like I've never seen it before . . . I am a totally different person. I like it that I love photography and music and colourful clothes, but where is the old me now?"

    However, she appears to display more insight into her altered behavior than {most} people in the midst of bipolar mania. Perhaps her reality monitoring abilities are more intact? Or perhaps it's because her symptoms wax and wane.2 But like many manic individuals, she did not want this feeling to stop:
    "I went to the psychiatrist, and he said, 'Right, well, this is bordering on mania [NOTE: that is an understatement], you need to turn the settings right down to manage it.' I said to him, 'Please don't, this is not over the top—this is just joy.'"

    I think this line of research (studying individuals with Parkinson's who develop impulse control disorders after DA replacement therapy or DBS) can provide insight into bipolar mania. Certainly, drugs that act as antagonists at multiple DA receptor subtypes (typical and atypical antipsychotics) are used in the management of bipolar disorder.

    Patient narratives are also informative in this regard, and provide critical information for individuals considering various types of therapies for PD. In this paper, the patient was not informed by the medical team that there could be undesirable psychiatric side effects. She has taken legal action against the lead neurosurgeon, and the proceedings were ongoing when the article was written.


    Footnote

    1One might wonder whether Mr. Smithers' premorbid propensity for affairs made him more vulnerable to compulsive sexual activity after DA agonists. And that is one consideration displayed in the box-and-circle diagram above.

    2 She did experience bouts of depression as well as mania, perhaps related to the stimulation parameters and precise location. And bipolar individuals also gain insight once the manic episode subsides.


    References

    Gilbert F, Viaña JN. (2018). A Personal Narrative on Living and Dealing with Psychiatric Symptoms after DBS Surgery. Narrat Inq Bioeth. 8(1):67-77.

    McIntyre CC, Hahn PJ. (2010). Network perspectives on the mechanisms of deep brain stimulation. Neurobiol Dis. 38(3):329-37.

    Voon V, Napier TC, Frank MJ, Sgambato-Faure V, Grace AA, Rodriguez-Oroz M, Obeso J, Bezard E, Fernagut PO. (2017). Impulse control disorders and levodopa-induced dyskinesias in Parkinson's disease: an update. Lancet Neurol. 16(3):238-250.

    Weintraub D, Claassen DO. (2017). Impulse Control and Related Disorders in Parkinson's Disease. Int Rev Neurobiol. 133:679-717.


    What counts as "memory" and who gets to define it?


    Do Plants Have “Memory”?


    A new paper by Bédécarrats et al. (2018) is the latest entry into the iconoclastic hullabaloo claiming a non-synaptic basis for learning and memory. In short, “RNA extracted from the central nervous system of Aplysia given long-term sensitization training induced sensitization when injected into untrained animals...” The results support the minority view that long-term memory is not encoded by synaptic strength, according to the authors, but instead by molecules inside cells (à la Randy Gallistel).

    Adam Calhoun has a nice summary of the paper at Neuroecology:
    ...there is a particular reflex1 (memory) that changes when they [Aplysia] have experienced a lot of shocks. How memory is encoded is a bit debated but one strongly-supported mechanism (especially in these snails) is that there are changes in the amount of particular proteins that are expressed in some neurons. These proteins might make more of one channel or receptor that makes it more or less likely to respond to signals from other neurons. So for instance, when a snail receives its first shock a neuron responds and it withdraws its gills. Over time, each shock builds up more proteins that make the neuron respond more and more. These proteins are built up by the amount of RNA (the “blueprint” for the proteins, if you will) that are located in the vicinity of the neuron that can receive this information.  ...

    This new paper shows that in these snails, you can just dump the RNA on these neurons from someone else and the RNA has already encoded something about the type of protein it will produce.

    Neuroskeptic has a more contentious take on the study, casting doubt on the notion that sensitization of a simple reflex to any noxious stimulus (a form of non-associative “learning”) produces “memories” as we typically think of them. But senior author Dr. David Glanzman tolerated none of this, and expressed strong disagreement in the comments:
    “I’m afraid you have a fundamental misconception of what memory is. We claim that our experiments demonstrate transfer of the memory—or essential components of the memory—for sensitization. Now, although sensitization may not comport with the common notion of memory—it’s not like the memory of my Midwestern grandmother’s superb blueberry pies, for example—it nevertheless has unambiguous status as memory.  ...  [didactic lesson continues] ...  We do not claim in our paper that declarative memories—such as my memory of my grandmother’s blueberry pies—or even simpler forms of associative memories like those induced during classical conditioning—can be transferred by RNA. That remains to be seen.”

    OK, so Glanzman gets to define what memory is. But later on he's caught in a trap and has to admit:
    “Of course, there are many phenomena that can be loosely regarded as memory—the crease in folded paper, for example, can be said to represent the memory of a physical action.”

    That was in response to a commenter who said:
    “So a transfer of RNA that activates a cellular mechanism associated with touch isn't memory, but rather just exogenously turning on a cellular pathway. By that logic, gene therapy to treat sickle cell anemia changes blood "memory".” 2

    However, my favorite comment was from Smut Clyde:
    “Kandel set the precedent that reflexes in Aplysia are "memories", and now we're stuck with it.”

    This reminded me of Dr. Kandel's bold [outlandish?] attempt to link psychoanalysis, Aplysia withdrawal reflexes, and human anxiety (Kandel, 1983). I was a bit flabbergasted that gill withdrawal in a sea slug was considered “mentation” (thought) and could support Freudian views.3
    In the past, ascribing a particular behavioral feature to an unobservable mental process essentially excluded the problem from direct biological study because the complexity of the brain posed a barrier to any complementary biological analysis. But the nervous systems of invertebrates are quite accessible to a cellular analysis of behavior, including certain internal representations of environmental experiences that can now be explored in detail. This encourages the belief that elements of cognitive mentation relevant to humans and related to psychoanalytic theory can be explored directly [in Aplysia] and need no longer be merely inferred.




    So anticipatory anxiety in humans is isomorphic to invertebrate responses in a classical aversive conditioning paradigm, and chronic anxiety is recreated by long-term sensitization paradigms. Perhaps I missed the translational advances here, and any application to Psychoanalytic and Neuropsychoanalytic practice that has been fully realized.

    If we want to accept a flexible definition of learning and memory in animals, why not consider associative learning experiments in pea plants, where a neutral cue predicting the location of a light source had a greater effect on the direction of plant growth than innate phototropism (Gagliano et al., 2016)? Or review the literature on associative and non-associative learning in Mimosa? (Abramson & Chicas-Mosier, 2016). Or evaluate the field of ‘plant neurobiology’ and even the ‘Philosophy of Plant Neurobiology’ (Calvo, 2016). Or are the possibilities of chloroplast-based consciousness and “mentation” without neurons too threatening (or too fringe)?

    But in the end, we know we've reached peak plant cognition when a predictive coding model appears — Predicting green: really radical (plant) predictive processing (Calvo & Friston, 2017).


    Further Reading

    The Big Ideas in Cognitive Neuroscience, Explained (especially the sections on Gallistel and Ryan)

    What are the Big Ideas in Cognitive Neuroscience? (you can watch the videos of their 2017 CNS talks)


    Footnotes

    1edited to indicate my emphasis on reflex; more specifically, the gill withdrawal reflex in Aplysia, which can only go so far as a model of other forms of memory, in my view.

    2Another skeptic (but for different reasons) is Dr. Tomás Ryan, who was paraphrased in Scientific American:
    But [Ryan] doesn’t think the behavior of the snails, or the cells, proves that RNA is transferring memories. He said he doesn’t understand how RNA, which works on a time scale of minutes to hours, could be causing memory recall that is almost instantaneous, or how RNA could connect numerous parts of the brain, like the auditory and visual systems, that are involved in more complex memories.

    3But I haven't won the Nobel Prize, so what do I know?


    References

    Abramson CI, Chicas-Mosier AM. (2016). Learning in plants: lessons from Mimosa pudica. Frontiers in Psychology 7:417.

    Bédécarrats A, Chen S, Pearce K, Cai D, Glanzman DL. (2018). RNA from Trained Aplysia Can Induce an Epigenetic Engram for Long-Term Sensitization in Untrained Aplysia. eNeuro. May 14:ENEURO-0038.

    Calvo P. (2016). The philosophy of plant neurobiology: a manifesto. Synthese 193(5):1323-43.

    Calvo P, Friston K. (2017). Predicting green: really radical (plant) predictive processing. Journal of the Royal Society Interface 14(131):20170096.

    Gagliano M, Vyazovskiy VV, Borbély AA, Grimonprez M, Depczynski M. (2016). Learning by association in plants. Scientific Reports Dec 2;6:38427.

    Kandel ER. (1983). From metapsychology to molecular biology: explorations into the nature of anxiety. Am J Psychiatry 140(10):1277-93.

    Citric Acid Increases Balloon Inflation (aka sour taste makes you more risky)


    from Balloon Analog Risk Task (BART) – Joggle Research for iPad


    Risk taking and risk preference1 are complex constructs measured by self-report questionnaires (“propensity”), laboratory tasks, and the frequency of real-life behaviors (smoking, alcohol use, etc.). A recent mega-study of 1507 healthy adults by Frey et al. (2017) measured risk preference using six questionnaires (and their subscales), eight behavioral tasks, and six frequency measures of real-life behavior.


    Table 1 (Frey et al., 2017). Risk-taking measures used in the Basel-Berlin Risk Study.



    The authors were interested in whether they could extract a general factor of risk preference (R), analogous to the general factor of intelligence (g). They used a bifactor model to account for the general factor as well as specific, orthogonal factors (seven in this case). The differing measures above are often used interchangeably and called “risk”, but the general factor R only...
    ...explained substantial variance across propensity measures and frequency measures of risky activities but did not generalize to behavioral measures. Moreover, there was only one specific factor that captured common variance across behavioral measures, specifically, choices among different types of risky lotteries (F7). Beyond the variance accounted for by R, the remaining six factors captured specific variance associated with health risk taking (F1), financial risk taking (F2), recreational risk taking (F3), impulsivity (F4), traffic risk taking (F5), and risk taking at work (F6).

    In other words, the behavioral tasks didn't load on R at all, and most of them didn't even share common variance with the other behavioral tasks (F7 below).
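    To see how a measure can fail to load on a general factor, consider a toy simulation (this is not the authors' bifactor model; the loadings and structure are invented for illustration). Propensity measures that share a general factor correlate with each other, while a task with a zero loading on that factor correlates with neither:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1507  # sample size in Frey et al. (2017)

# Latent variables: a general risk-preference factor R, plus one
# specific factor shared only by the two propensity measures.
R = rng.normal(size=n)
F_spec = rng.normal(size=n)

# Hypothetical loadings: self-report measures load on R and F_spec;
# the behavioral task has a zero loading on R.
propensity_1 = 0.6 * R + 0.5 * F_spec + rng.normal(scale=0.6, size=n)
propensity_2 = 0.6 * R + 0.5 * F_spec + rng.normal(scale=0.6, size=n)
behavioral = rng.normal(size=n)

# Shared latent factors produce a substantial correlation between
# the propensity measures, but not with the behavioral task.
r_props = np.corrcoef(propensity_1, propensity_2)[0, 1]
r_task = np.corrcoef(propensity_1, behavioral)[0, 1]
print(f"propensity-propensity r = {r_props:.2f}")
print(f"propensity-task r = {r_task:.2f}")
```

    The point of the sketch is only that zero loading on R predicts near-zero correlation with R-loaded measures, which is the pattern Frey et al. report for the behavioral tasks.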



    Fig. 2 (Frey et al., 2017). Bifactor model with all risk-taking measures, grouped by measurement tradition. BART is outlined in red.


    Here's where we come to the recent study on “risk” and taste. The headlines were either misleading (A Sour Taste in Your Mouth Means You’re More Likely to Take Risks), downright false, since no lemons were used (When Life Gives You Lemons, You Take More Risks), or this doozy (The Fruit That Helps You Take Risks – May Help Depressed And Anxious).

    To assess risk-taking, Vi and Obrist (2018) administered the Balloon Analog Risk Task (BART) to 70 participants in the UK and 71 in Vietnam. They were randomly assigned to one of five taste groups [yes, n=14 each] of Bitter (caffeine), Salty (sodium chloride), Sour (citric acid), Umami (MSG), and Sweet (sugar, presumably). They were given two rounds of BART and consumed 20 ml of flavored drink or plain water before each (in counterbalanced order).

    [Remember that BART didn't load on a general factor of risk-taking, nor did it capture common variance across behavioral tasks.]

    As in the animation above (and a video made by the authors)2, the participant “inflates” a virtual balloon via mouse click until they either stop and win a monetary reward, or else they pop the balloon and lose money. The number of clicks (pumps) indicates risk-taking behavior. Overall, the Vietnamese students (all recruited from the School of Biotechnology and Food Technology at Hanoi University) appeared to be riskier than the UK students (but I don't know if this was tested directly). The main finding was that both groups clicked more after drinking citric acid than the other solutions.
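    A common way to summarize BART performance is the “adjusted score”: the mean number of pumps on balloons that did not pop, since popped balloons censor the participant's intended stopping point. Here is a toy simulation of that scoring logic; every parameter (number of balloons, pop-threshold distribution, the Gaussian pumping strategy) is invented for illustration and not taken from Vi and Obrist:

```python
import random

def bart_adjusted_score(intended_pumps, n_balloons=30, max_pumps=128, seed=1):
    """Mean pumps on unexploded balloons for a hypothetical participant
    who aims for roughly `intended_pumps` pumps per balloon.

    Each balloon's pop threshold is uniform on 1..max_pumps, so every
    additional pump raises the chance of losing that balloon's money.
    """
    rng = random.Random(seed)
    banked = []
    for _ in range(n_balloons):
        pop_at = rng.randint(1, max_pumps)
        pumps = max(1, round(rng.gauss(intended_pumps, 5)))
        if pumps < pop_at:  # stopped before the pop: money is banked
            banked.append(pumps)
    return sum(banked) / len(banked) if banked else 0.0

# A riskier strategy yields a higher adjusted score (more pumps banked).
risky = bart_adjusted_score(intended_pumps=50)
cautious = bart_adjusted_score(intended_pumps=20)
print(risky, cautious)
```

    On this scoring, the paper's main finding amounts to a higher adjusted score after the citric acid drink than after the other solutions.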



    Why would this balloon pumping be more vigorous after tasting a sour solution? We could also ask, why were the Vietnamese subjects more risk-averse after drinking salt water, and riskier (relative to UK subjects) after drinking sugar water?3 We simply don't know the answer to any of these questions, but the authors weren't shy about extrapolating to clinical populations:
    For example, people who are risk-averse (e.g., people with anxiety disorders or depression) may benefit from a sour additive in their diet.

    Smelling lemon oil is relaxing, but tasting citric acid promotes risk:
    Prior work has, for instance, shown that in cases of psychiatric disorders such as depression, anxiety, or stress-related disorders the use of lemon oils proved efficient and was further demonstrated to reduce stress. While lemon and sour are not the same, they share common properties that can be further investigated with respect to risk-taking.

    We're really not sure how any of this works. The authors offered many more analyses in the Supplementary Materials, but they didn't help explain the results. Although the sour finding was interesting and observed cross culturally, would it replicate using groups larger than n=14?


    Footnotes

    1 From Frey et al. (2017):
    The term “risk” refers to properties of the world, yet without a clear agreement on its definition, which has ranged from probability, chance, outcome variance, expected values, undesirable events, danger, losses, to uncertainties. People’s responses to those properties, on the other hand, are typically described as their “risk preference.”

    2 The video conveniently starts by illustrating risk as skydiving, which bears no relation to being an adventurous eater.

    3 The group difference in umami had a cultural explanation.


    References

    Frey R, Pedroni A, Mata R, Rieskamp J, Hertwig R. (2017). Risk preference shares the psychometric structure of major psychological traits. Science Advances 3(10):e1701381.

    Vi CT, Obrist M. (2018). Sour promotes risk-taking: an investigation into the effect of taste on risk-taking behaviour in humans. Scientific Reports 8(1):7987.




    The Lie of Precision Medicine



    This post will be my own personalized rant about the false promises of personalized medicine. It will not be about neurological or psychiatric diseases, the typical topics for this blog. It will be about oncology, for very personal reasons: misery, frustration, and grief. After seven months of research on immunotherapy clinical trials, I couldn't find a single [acceptable] one1 in either Canada or the US that would enroll my partner with stage 4 cancer. For arbitrary reasons, for financial reasons, because it's not the “right” kind of cancer, because the tumor's too rare, because it's too common, because of unlisted exclusionary criteria, because one trial will not accept the genomic testing done for another trial.2 Because of endless waiting and bureaucracy.

    But first, I'll let NIH explain a few terms. Is precision medicine the same as personalized medicine? Yes and no. Seems to me it's a bit of a branding issue.
    What is the difference between precision medicine and personalized medicine?

    There is a lot of overlap between the terms "precision medicine" and "personalized medicine." According to the National Research Council, "personalized medicine" is an older term with a meaning similar to "precision medicine."

    Here's a startling paper from 1971, Can Personalized Medicine Survive? (by W.M. GIBSON, MB, ChB in Canadian Family Physician).




    [it's a defense of the old-fashioned family doctor (solo practitioner) by Gibson]:
    ...will the solo practitioner's demise be welcomed, his replacement being a battery of experts in the fields of medicine, surgery, psychiatry and all the new allied health sciences, infinitely better trained than their singlehanded predecessor?

    We wouldn't want any confusion between a $320 million initiative and the ancient art of medicine. NIH again:
    However, there was concern that the word "personalized" could be misinterpreted to imply that treatments and preventions are being developed uniquely for each individual; in precision medicine, the focus is on identifying which approaches will be effective for which patients based on genetic, environmental, and lifestyle factors.

    The Council therefore preferred the term "precision medicine" to "personalized medicine." However, some people still use the two terms interchangeably.

    So “precision medicine” is considered a more contemporary and cutting-edge term.


    Archived from The White House Blog (Obama edition), January 30, 2015.


    What about pharmacogenomics? 
    Pharmacogenomics is a part of precision medicine. Pharmacogenomics is the study of how genes affect a person’s response to particular drugs. This relatively new field combines pharmacology (the science of drugs) and genomics (the study of genes and their functions) to develop effective, safe medications and doses3 that are tailored to variations in a person’s genes.

    At present, precision pharmacogenomics is just a “tumor grab” with no promise of treatment in most cases. There are some serious and admirable efforts, but accessibility and costs are major barriers.


    But we've been promised such a utopia for quite a while.
    Personalized medicine in oncology: the future is now (Schilsky, 2010):

    Cancer chemotherapy is in evolution from non-specific cytotoxic drugs that damage both tumour and normal cells to more specific agents and immunotherapy approaches. Targeted agents are directed at unique molecular features of cancer cells, and immunotherapeutics modulate the tumour immune response; both approaches aim to produce greater effectiveness with less toxicity. The development and use of such agents in biomarker-defined populations enables a more personalized approach to cancer treatment than previously possible and has the potential to reduce the cost of cancer care.



    IT'S 2018, WHERE IS THAT FUTURE YOU PROMISED US?

    But wait, let's go back further, to 1999:
    New Era of Personalized Medicine 
    Targeting Drugs For Each Unique Genetic Profile

    Certainly, there are success stories for specific types of cancer (e.g., Herceptin). A more recent example is the PD-1 inhibitor pembrolizumab (Keytruda®), which has shown remarkable results in patients with melanoma, including Jimmy Carter. The problem is, direct-to-consumer marketing creates false hope about the probability that a patient with another form of cancer will respond to this treatment, or one of the many other immunotherapies with PR machines. But if there's a 25% chance or even a 10% chance it'll extend the life of your loved one, you'll go to great lengths to try to acquire it, one way or another. Speaking from personal experience.



    But exaggerated claims and the use of superlatives in describing massively expensive cancer drugs (e.g., “breakthrough,” “game changer,” “miracle,” “cure,” “home run,” “revolutionary,” “transformative,” “life saver,” “groundbreaking,” and “marvel”) are highly questionable (Abola & Prasad, 2016) and even harmful.

    It's a truly horrible feeling when you realize there are no options available, and all your hope is gone.


    References

    Abola MV, Prasad V. (2016). The use of superlatives in cancer research. JAMA oncology. 2(1):139-41.

    Gibson WM. (1971). Can personalized medicine survive? Can Fam Physician. 17(8):29-88.

    Langreth R, Waldholz M. (1999). New era of personalized medicine: targeting drugs for each unique genetic profile. Oncologist 4(5):426-7.

    Schilsky RL. (2010). Personalized medicine in oncology: the future is now. Nat Rev Drug Discov. 9(5):363-6.  {PDF}


    Footnotes

    1 

    2  But hey, we'll do yet another biopsy of your tumor, and let you know the results in 2-3 months, when you're too ill to be enrolled in any trial. Here's a highly relevant article The fuzzy world of precision medicine: deliberations of a precision medicine tumor board but I'm afraid to read it.

    3 OMFG, you have got to be kidding me. Here is a subset of the possible side effects from one toxic monoclonal antibody duo:

    Very likely (21% or more, or more than 20 people in 100):
    • fatigue/tiredness
    • decrease or loss of appetite, which may result in weight loss
    • cough
    • inflammation of the small intestine and / or large bowel causing abdominal pain and diarrhea which may be severe and life threatening

    Less likely (5 – 20% or between 5 and 20 people in 100):
    • pain and/or inflammation in various areas including: muscles, joints, belly, back, chest, headache
    • flu-like symptoms such as body aches, fever, chills, tiredness, loss of appetite, cough
    • constipation
    • dizziness
    • shortness of breath
    • infection which may rarely be serious and become life threatening
    • nausea and vomiting
    • dehydration
    • skin inflammation causing hives or rash which may rarely be severe and become life threatening
    • anemia which may cause tiredness, or may require blood transfusion
    • itching
    • abnormal liver function seen by blood tests. This may rarely lead to jaundice (yellowing of the skin and whites of eyes) and be severe or life threatening
    • abnormal function of your thyroid gland which cause changes in hormonal levels. A decrease in thyroid function as seen on blood tests may cause you to feel tired, cold or gain weight while an increase in thyroid function may cause you to feel shaky, have a fast pulse or lose weight.
    • Swelling of arms and/or legs (fluid retention)
    • Changes in the level of body salts as seen on blood tests. You may not have symptoms.
    • Inflammation of the pancreas that results in increased levels of digestive enzymes (lipase, amylase) seen in blood tests and may cause abdominal pain
    • Inflammation of the lungs (including fluid in the lungs) which could cause shortness of breath, chest pain, new or worse cough. It could be serious and/or life threatening. May occur more frequently if you are receiving radiation treatment to your chest or if you are Japanese.
    • Serious bleeding events leading to death may occur in patients with head and neck tumors. Please talk to your doctor immediately if you are experiencing bleeding.
    • Decrease of a protein in your blood called albumin that may cause fluid retention and results in swelling of your legs or arms

    You get the idea. I'll skip:

    Rarely (1 – 4% or less than 5 in 100 people)

    Very Rare (less than 1% or less than 1 in 100 people)

    An epidemic of "Necessary and Sufficient" neurons

    A great deal of neuroscience has become “circuit cracking.”
    — Alex Gomez-Marin


    A miniaturized holy grail of neuroscience is discovering that activation or inhibition of a specific population of neurons (e.g., prefrontal parvalbumin interneurons) or neural circuit (e.g., basolateral amygdala → nucleus accumbens) is “necessary and sufficient” (N&S) to produce a given behavior.



    from: Optogenetics, Sex, and Violence in the Brain: Implications for Psychiatry1 


    In the last year or so, it has become acceptable to question the dominant systems/circuit paradigm of “manipulate and measure” as THE method to gain insight into how the brain produces behavior (Krakauer et al., 2017; Gomez-Marin, 2017). Detailed analysis of an organism's natural behavior is indispensable for progress in understanding brain-behavior relationships. Claims that optogenetic and other manipulations of a neuronal population can demonstrate that it is “N&S” for a complex behavior have also been challenged. Gomez-Marin (2017) pulled no punches and stated:
    I argue that to upgrade intervention to explanation is prone to logical fallacies, interpretational leaps and carries a weak explanatory force, thus settling and maintaining low standards for intelligibility in neuroscience. To claim that behavior is explained by a “necessary and sufficient” neural circuit is, at best, misleading.

    The latest entry into this fault-fest goes further, indicating that most N&S claims in biology violate the principles of formal logic and should be called ‘misapplied-N&S’ (Yoshihara & Yoshihara, 2018). They say the use of “necessary and sufficient” terminology should be banned and replaced with “indispensable and inducing” (except for a handful of instances). 2



    modified from Fig. 1A (Yoshihara & Yoshihara, 2018). The relationship between squares and rectangles as a typical example of true necessary (being a rectangle; pale green) and sufficient condition (being a square; magenta) in formal logic.
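    The square/rectangle relationship in the caption can be written out as two predicates (a toy illustration of the formal logic, not taken from the paper): being a square is sufficient for being a rectangle, and being a rectangle is necessary, but not sufficient, for being a square.

```python
def is_rectangle(shape):
    """A shape given as (width, height); any positive sides qualify."""
    w, h = shape
    return w > 0 and h > 0

def is_square(shape):
    """A square is a rectangle with equal sides."""
    return is_rectangle(shape) and shape[0] == shape[1]

shapes = [(2, 2), (3, 5), (4, 4), (1, 6)]
squares = [s for s in shapes if is_square(s)]
rectangles = [s for s in shapes if is_rectangle(s)]

# 'Square' is sufficient for 'rectangle': every square is a rectangle.
sufficient = all(is_rectangle(s) for s in squares)

# But the converse fails: (3, 5) is a rectangle that is not a square,
# so 'rectangle' does not imply 'square'. Necessity without sufficiency.
converse_fails = any(not is_square(s) for s in rectangles)

print(sufficient, converse_fails)  # True True
```

    The asymmetry is the whole point: showing that a manipulation produces an effect (sufficiency of the intervention) is not the same as showing the two conditions are equivalent.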


    N&S claims are very popular in optogenetics, which has become a crucial technique in neuroscience. But demonstrating true N&S is nearly impossible, because the terminology disregards: activity in the rest of the brain, whether all the activated neurons are “necessary” (instead of only a subset), what actually happens under natural conditions (rather than artificially induced), the requirement of equivalence, etc. Yoshihara & Yoshihara (2018) are especially disturbed by the incorrect use of “sufficient”, which leads to results being overstated and misinterpreted:
    The main problem comes from the word ‘sufficient,’ which is often used to emphasize that artificial expression of only a single gene or activation of only a single neuron can cause a substantial and presumably relevant effect on the whole process of interest. Although it may be sufficient as an experimental manipulation for triggering the effect, it is not actually sufficient for executing the whole effect itself.

    And for optogenetics:
    Rather, the importance of ‘sufficiency’ experiments lies in demonstrating a causal link through optogenetic activation of neurons... Thus, words such as triggers, promotes, induces, switches, or initiates may better reflect or express the desired nuance without creating such confusion.

    Y & Y (2018) aren't shy about naming names in their Commentary, and even say that misapplied-N&S has generated unproductive and misleading studies that offer no scientific insight whatsoever. Although one could say that N&S has a different meaning in biology, or is merely a figure of speech, such strong statements have consequences for the future directions of a field.

    Thanks to BoOrg Lab for the link to Gomez-Marin.


    Footnotes

    1“...neurons necessary and sufficient for inter-male aggression are located within the ventrolateral subdivision of the ventromedial hypothalamic nucleus (VMHvl)...”

    2One of the instances uses the old, discredited “command neuron” concept of Ikeda & Wiersma (1964). They call this a ‘Witch Hunt’ of Command Neurons and note that only three command neurons meet the true N&S criteria (one each in lobster, Aplysia, and Drosophila).


    References

    Gomez-Marin A. (2017). Causal circuit explanations of behavior: Are necessity and sufficiency necessary and sufficient? In: Decoding Neural Circuit Structure and Function (pp. 283-306). Springer, Cham.  {PDF}

    Krakauer JW, Ghazanfar AA, Gomez-Marin A, MacIver MA, Poeppel D. (2017). Neuroscience Needs Behavior: Correcting a Reductionist Bias. Neuron. 93(3):480-490.

    Yoshihara M, Yoshihara M. (2018). 'Necessary and sufficient' in biology is not necessarily necessary - confusions and erroneous conclusions resulting from misapplied logic in the field of biology, especially neuroscience. J Neurogenet. 32(2):53-64.

    Improved Brain Health for All! (update on the BRAIN initiative)


    adapted from Figure 3 (Koroshetz et al., 2018). Magnetic resonance angiography highlighting the vasculature in the human brain in high resolution, without the use of any contrast agent, on a 7T MRI scanner. Courtesy of Polimeni & Wald (MGH). [ed. note: here's a great summary on If, how, and when fMRI goes clinical, by Dr. Peter Bandettini.]


    The Journal of Neuroscience recently published a paywalled article on The State of the NIH BRAIN Initiative. This paper reviewed the research and technology development funded by the “moonshot between our ears” [a newly coined phrase]. The program has yielded a raft of publications (461 to date) since its start in 2014. Although the early emphasis has not been on Human Neuroscience, NIH is ramping up its funding for human imaging and neuromodulation.



    They've developed a Neuroethics Division, because...
    ...neuroscience research in general and the BRAIN Initiative specifically, with its focus on unraveling the mysteries of the human brain, generate many important ethical questions about how these new tools could be responsibly incorporated into medical research and clinical practice.

    I don't think most of the current grant recipients are focused on “unraveling the mysteries of the human brain”, however. They're interested in cell types, circuit diagrams, and monitoring and manipulating neural activity in model organisms such as Drosophila, zebrafish, and mice. There are aspirations for a Human Cell Atlas, but many of the other tools are very far away (or impossible) for use in humans.

    - click on image to enlarge -



    Some aspects of the terminology used by Koroshetz et al. (2018) are vague to the savvy but non-expert eye. What is a neural circuit? The authors never actually define the term. You'll get different answers depending on who you ask. We know that “individual neuroscientists have chosen to work at specific spatial scales, ranging from ... ion channels ... to systems level” and that there is a range of temporal scales, “from the millisecond of synaptic firing to the entire lifespan” (Koroshetz et al., 2018):
    Within this diverse set of scales, the circuit is a key point of focus for two primary reasons: (1) neural circuits perform the calculations necessary to produce behavior; and (2) dysfunction at the level of the circuit is the basis of disability in many neurological and psychiatric disorders.

    So maybe key point #1 is a generic working definition of a neural circuit, and is the focus of many NIH BRAIN-funded neuroscientists. But there's a huge leap from the impressive work on, e.g., mapping, manipulating, and controlling stress-related feeding behaviors in rodents, to key point #2: isolating circuit dysfunction and ultimately treating eating disorders in humans. There is a lot of “promise” and many “aspirational goals”, but the concluding sentence is just too aspirational and promises too much:
    With diverse scientists jointly working in novel team structures, often in partnership with industry, and sharing unprecedented types and quantities of data, the BRAIN Initiative offers a unique opportunity to open the door to a golden age in brain science and improved brain health for all.

    The research that gets closest to bridging this gap is electrocorticography (ECoG) and deep brain stimulation (DBS) in human patients.1 The exemplar cited in the NIH paper is by Swann et al. (2018), and involved testing a closed-loop DBS system in two Parkinson's patients. The Activa PC+S system (Medtronic) is able to both stimulate the brain target region (subthalamic nucleus, STN) and record neural activity at the same time. The local field potential (LFP) activity is then fed back to the stimulator, which adjusts its parameters based on a control algorithm derived from the neural data.

    Fig. 4 (Swann et al., 2018). Adaptive DBS.


    The unique aspect here is that the authors recorded gamma oscillations (60–90 Hz in this case) from a subdural lead over motor cortex to adjust stimulation. In earlier work, they showed this gamma power was indicative of dyskinesia (abnormal, uncontrolled, involuntary movement), so STN stimulation was adjusted when gamma was above a certain threshold. The study demonstrated feasibility, and its greatest benefit at this early point was energy savings that preserved the battery.
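    The feedback loop described above can be sketched roughly as follows. This is a toy illustration with invented function names, thresholds, and voltage steps; it is not Medtronic's actual embedded algorithm, only the shape of the logic: estimate cortical gamma power, compare it to a patient-specific dyskinesia threshold, and step the STN stimulation amplitude down or up accordingly.

```python
import math

def band_power(signal, fs, lo=60.0, hi=90.0):
    """Naive DFT power of a real-valued signal in the [lo, hi] Hz band."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq <= hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n)
                     for i, x in enumerate(signal))
            im = sum(x * math.sin(2 * math.pi * k * i / n)
                     for i, x in enumerate(signal))
            power += (re * re + im * im) / n
    return power

def adapt_stimulation(voltage, gamma_power, threshold,
                      step=0.1, v_min=1.0, v_max=3.0):
    """One control-loop iteration (illustrative parameters): lower the
    stimulation amplitude when gamma exceeds the dyskinesia threshold,
    otherwise ramp it back up, clamped to a safe voltage range."""
    if gamma_power > threshold:
        voltage -= step
    else:
        voltage += step
    return min(max(voltage, v_min), v_max)

# One second of a simulated 75 Hz cortical oscillation at fs = 250 Hz
# falls inside the 60-90 Hz band and drives the voltage down one step.
fs = 250
lfp = [math.sin(2 * math.pi * 75 * i / fs) for i in range(fs)]
gamma = band_power(lfp, fs)
new_voltage = adapt_stimulation(2.0, gamma, threshold=1.0)
```

    In the actual device the spectral estimation runs on-board and the control parameters are tuned per patient; the sketch only conveys why a single threshold on one band-power signal is enough to close the loop.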

    It's cool work that has been promoted by NIH, but unfortunately the first author was not mentioned in the press release, not featured in the accompanying video, and her name isn't even visible in the shot of the poster that appears in the video.2 [The last author gets all the credit.]

    Future NIH BRAIN studies will address essential tremor, epilepsy, obsessive-compulsive disorder, major depressive disorder, traumatic brain injury, stroke, tetraplegia, and blindness (apparently).


    Returning to key point #1, some have criticized the distinct lack of emphasis on behavior, which echoes recent papers (see An epidemic of "Necessary and Sufficient" neurons).


    The next tweet is critical too, and an interesting discussion ensued.


    And given all the technology development funded by BRAIN, it's a great time to be a neuroengineer, but not a neuropsychologist, ethologist, or behavioral specialist.
    Indeed, the BRAIN Initiative funded an equal number of investigators trained in engineering relative to those trained in neuroscience in 2016 (Koroshetz et al., 2018).

    Footnotes

    1DARPA is the biggest investor here.

    2We interrupt the NIH press coverage of this paper to acknowledge the first author, Dr. Nicki Swann. Dr. Swann and many of her female colleagues have described the difficulties of traveling and attending conferences while being a new mother, and offered some possible solutions. If the BRAIN Initiative is serious about addressing Neuroethics (for animals and futuristic sci-fi applications to human patients), they should also be actively involved in issues affecting women and minority researchers. And I imagine they are, it just wasn't apparent here.


    References

    Koroshetz W, Gordon J, Adams A, Beckel-Mitchener A, Churchill J, Farber G, Freund M, Gnadt J, Hsu N, Langhals N, Lisanby S. (2018). The State of the NIH BRAIN Initiative. Journal of Neuroscience Jun 19:3174-17.  NOTE: this should really be open access...

    Swann NC, de Hemptinne C, Thompson MC, Miocinovic S, Miller AM, Gilron R, Ostrem JL, Chizeck HJ, Starr PA. (2018). Adaptive deep brain stimulation for Parkinson's disease using motor cortex sensing. J Neural Eng. 15(4):046006.
