Channel: The Neurocritic

A Preventable Tragedy in a Man with Semantic Dementia


TAKE HOME MESSAGE: All suicide attempts and parasuicidal gestures should be taken very seriously in patients with dementia.
“Previous parasuicide is a predictor of suicide. The increased risk of subsequent suicide persists without decline for at least two decades.”

A new case report on a 53-year-old man1 with semantic dementia (SD) presented his prior parasuicidal gestures as “stereotypic behaviour” [ed. NOTE: repeated attempts to hang himself with a cord are “stereotyped behavior”], with tragic consequences:
The patient showed abnormal behaviours such as following around his wife and frequently visiting a drug store to purchase sleeping pills, which necessitated hospitalization. Despite having no depressive symptoms including suicidal ideation, he repeatedly attempted to hang himself with a cord during a temporary stay at home. At the time of the interview, he stated, ‘I found a cord suspended from the ceiling, and so just played with it by hanging myself. It was just a play’, indicating an absence of suicidal ideation and lack of seriousness for the event. In March 2012, he died by hanging himself with a towel inside his hospital room.

...Despite the fact that the man had been severely depressed for two years before his SD diagnosis, had a well-documented history of suicidal ideation, and had made several suicide attempts (Kobayashi et al., 2018):
In April 2009, the patient started to express suicidal ideation such as ‘I would like to hang myself’. From May to June, he was admitted to a psychiatric hospital because of a deliberate overdose. After being discharged, the patient started to show lack of ability to understand what others were saying, kept insisting on his own way, and became excessively fixated on certain things. In July 2010, he was dismissed from his job because of poor performance. In September 2010, the patient was hospitalized after multiple attempts to hang himself with a cord. During this hospitalization, he was found to have difficulty in naming familiar objects.

His difficulty in naming familiar objects could be an early sign of neurodegeneration (especially in a 53-year-old man), but by itself is not diagnostic. But he also had difficulty understanding what other people were saying, i.e., a problem in language comprehension. These symptoms are characteristic of semantic dementia, a type of frontotemporal lobar degeneration associated with a profound loss of meaning: words and objects don't make sense any more. He did very poorly on subsequent neuropsychological testing. Neuroimaging revealed atrophy in bilateral (but L > R) anterior and inferior temporal cortices that is characteristic of SD.



Now, it's easy for me to sit back and be all critical. BUT: I am not a clinician, I was not involved in this case, and hindsight is often 20/20. Still, it always pays to err on the side of caution when suicidal behavior occurs, even in a person who denies being suicidal, and especially in one who may no longer understand exactly what he's doing.


If you are contemplating suicide or know someone who is, please consult:

Online Suicide Help directory


Footnote

1They say he's 50 in the Abstract, but the Case Presentation starts out by saying he's 53.


Reference

Kobayashi R, Hayashi H, Tokairin T, Kawakatsu S, Otani K. (2018). Suicide as a result of stereotypic behaviour in a case with semantic dementia. Psychogeriatrics Jul 30. [Epub ahead of print]


Brain stimulation during sleep does not enhance memory for learned material



“Learn while you sleep” has been the claim of snake oil salesmen since the 1950s. The old pseudoscience methods involved listening to tapes and records. From a 1958 article by Lester David:
Max Sherover, president of the Linguaphone Institute of New York ... coined the word “dormiphonics,” defining it as a “new scientific method that makes quick relaxed learning possible, awake or asleep.” Dormiphonics, declares Mr. Sherover, works by “repeated concentrated impact of selected material on the conscious and subconscious mind.”

An “experiment” was conducted at the Tulare County Prison, where 100 convicts “volunteered to act as guinea pigs” (considered completely unethical by today's standards). During sleep, they were subjected to low-volume recordings that exhorted them to be better human beings: “Love shall rule your life. You shall love God, your family and others. You shall do unto others as you want others to do unto you. . .” The low voice also warned them away from the evils of alcohol.


Knight Education Recordings (1960s)
a commercially available product of the era


Even earlier, the Psycho-Phone (Salinger, 1927) played wax cylinders with different self-help messages (e.g., “Prosperity” and “Life Extension”) on a phonograph while the unwitting customer slept. The Cummings Center Blog has a great post on this odd contraption. Salinger sold the machines for the whopping price of $235 (the equivalent of $3,250 in 2017). He didn't need Kickstarter or Indiegogo.




In the modern era, DIY brain stimulation enthusiasts promote self-experimentation with battery-driven devices. These transcranial direct current stimulation (tDCS) kits are available online, with a primary goal of enhancing cognitive performance. Using state-of-the-art professionally manufactured devices, scientists have published thousands of peer-reviewed papers, with mixed results as to the efficacy of different tDCS protocols.

A newer method is transcranial alternating current stimulation (tACS), which delivers stimulation at a precise frequency with the aim of synchronizing endogenous oscillations in that band (e.g., ~10 Hz for alpha, ~1-4 Hz for delta, etc.). The goal is to modulate ongoing oscillatory brain rhythms to affect behavior.1

Today, the importance of sleep for the consolidation of previously learned material has been well-documented. Conceptually, this is quite different from the discredited “subliminal sleep learning” from days of yore. New research aims to improve retention of information learned during the day by delivering precisely timed and calibrated tACS during slow wave sleep (Ketz et al., 2018).



Fig. 1 (modified from Ketz et al., 2018). (A) Target detection task. (B) Memory was tested on two image types: Repeated (identical to Original) and Generalized (same as Original but from different viewpoint). (C) tDCS montage used during training (left), and tACS montage used to augment slow waves during sleep (right).


During the day, participants were trained on a difficult military task that required them to detect hidden targets (explosive devices, snipers, suicide bombers) that were concealed or disguised (Clark et al., 2012). As in their earlier study, tDCS or sham stimulation was delivered during the training phase (over right frontal or right parietal cortices). Previous findings indicated significant improvements in learning and performance after 30 min of tDCS (anodal 2.0 mA) vs. “sham” (0.1 mA). However, this tDCS finding did not replicate in the current study (see Fig. 3A, left, below). Why? The authors speculated that possible differences in current generation between their previous iontophoresis system (2.0 mA) and the present use of StarStim (1.0 mA) could explain the failure to replicate.

After training, tACS was delivered during sleep. The authors' cool closed-loop approach recorded the dominant slow wave (SW) frequency, and then delivered stimulation to match the phase and frequency of this dominant oscillation (range of 0.5 to 1.2 Hz). Fig. 3A (right) doesn't look terribly impressive, however. tACS did not improve performance for Repeated images, and had highly variable effects for Generalized images. Nonetheless, the two- and three-way interactions were significant, as was the pairwise comparison between active tACS vs. sham for Generalized images (all p's ≈ .015 for n=16).
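To make the closed-loop idea more concrete, here is a minimal sketch of how one might estimate the dominant slow-wave frequency and phase from a short EEG segment and then generate a matched stimulation waveform. This is not the authors' actual pipeline; the function names, filter settings, and segment lengths are my own assumptions.

```python
import numpy as np
from scipy.signal import welch, butter, filtfilt, hilbert

def dominant_sw_params(eeg, fs, band=(0.5, 1.2)):
    """Estimate the dominant slow-wave frequency (Hz) and its current phase (rad)
    from a short EEG segment. Hypothetical sketch, not Ketz et al.'s code."""
    # Dominant frequency = peak of the Welch spectrum within the slow-wave band
    f, pxx = welch(eeg, fs=fs, nperseg=int(4 * fs))   # assumes a segment of at least ~4 s
    in_band = (f >= band[0]) & (f <= band[1])
    f_dom = f[in_band][np.argmax(pxx[in_band])]

    # Instantaneous phase: band-pass around the slow-wave band, then Hilbert transform
    b, a = butter(2, band, btype="bandpass", fs=fs)
    phase = np.angle(hilbert(filtfilt(b, a, eeg)))
    return f_dom, phase[-1]            # phase at the end of the segment

def matched_tacs(f_dom, phase0, duration_s, fs, amp=1.0):
    """Generate a sinusoid that continues the estimated oscillation
    (amplitude in arbitrary units, not a calibrated stimulation current)."""
    t = np.arange(int(duration_s * fs)) / fs
    return amp * np.sin(2 * np.pi * f_dom * t + phase0)
```

In a real system, stimulation artifacts, safety limits, and continual re-estimation of phase would dominate the engineering effort, which this toy version ignores.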



Fig. 3 (modified from Ketz et al., 2018). (A) Waking tDCS effects (left) and SW tACS effects during sleep (right). (B) SW events broken down per sleep stage (left) and total SW events for each stimulation condition (right). Note that active stimulation had fewer total SW events compared with sham.


Why was there no change in performance for Repeated images but a small improvement for Generalized images? The authors recognize this conundrum and say:
...it is unclear why there was no improvement in Repeated images induced by SW tACS, as might be expected based on previous studies.

Then they speculate that consolidation of essential ‘gist’ — rather than recognition of specific items — was impacted by tACS.

Media coverage of this modest finding was predictably overblown, and originated with the Society for Neuroscience press release, Overnight Brain Stimulation Improves Memory: Non-invasive technique enhances memory storage without disturbing sleep. If it enhances memory storage, then why were Repeated images unaffected? Anyway, most commenters at Hacker News were pretty skeptical, which was a pleasant surprise.


Footnote

1However, there is some question whether tACS delivered at typical stimulation intensities can really entrain endogenous rhythms (Lafon et al., 2017).


References

Clark VP, Coffman BA, Mayer AR, Weisend MP, Lane TD, Calhoun VD, Raybourn EM, Garcia CM, Wassermann EM. (2012). TDCS guided using fMRI significantly accelerates learning to identify concealed objects. Neuroimage 59(1):117-28.

Ketz N, Jones AP, Bryant NB, Clark VP, Pilly PK (2018). Closed-Loop Slow-Wave tACS Improves Sleep-Dependent Long-Term Memory Generalization by Modulating Endogenous Oscillations. J Neurosci. 38(33):7314-7326.

Lafon B, Henin S, Huang Y, Friedman D, Melloni L, Thesen T, Doyle W, Buzsáki G, Devinsky O, Parra LC, A Liu A. (2017). Low frequency transcranial electrical stimulation does not entrain sleep rhythms measured by human intracranial recordings. Nat Commun. 8(1):1199.





Hugo Gernsback
December 1921

Dr. Bernard Carroll: blogger, funny tweeter, and critic of scientific and ethical lapses in psychiatry


Dr. Bernard Carroll (Nov 21, 1940 – Sep 10, 2018)




I was friends with Dr. Carroll (“Barney”) on Twitter, and always enjoyed his wit.



Before that he was an early commenter and supporter of my blog, The Neurocritic. Which pleased me to no end, given this brief biography from his blogger site.


My blogs: Health Care Renewal
Occupation: Psychopharmacology
Introduction: Past chairman, FDA Psychopharmacologic Drugs Advisory Committee. Past chairman, Department of Psychiatry, Duke University Medical Center.
Interests: Professional ethics, medicine


He didn't know who I was and didn't care. He assessed me by the quality of my writing, and allowed me entrée into a world I would have no access to otherwise.1

As I'm facing the most catastrophic loss of my life, I will miss him too. He was a brilliant, principled, and compassionate man.


Remembrance from Health Care Renewal: Remembering Dr Bernard Carroll


Obituary in BMJ by Dr. Allen Frances (and Dr. Barney Carroll):

Barney Carroll: the conscience of psychiatry
A pioneer in biological psychiatry, more recently Bernard Carroll (‘‘Barney’’) became a withering critic of its compromised ethics and corruption by industry. Shortly before his death, he helped prepare this obituary—his last chance to help correct the perverse incentives that too often influence the conduct and reporting of scientific research.
. . .

Barney rejected grand biological theories that offered neat, simple-but-wrong explanations of psychopathology. Ever aware of the complexity of the human brain, he was an early rejecter of blind optimism that any simple imbalance of monoamine transmitters could account for the wide variety of mental disorders. More recently, he deplored the ubiquitous hype that suggested that genetics or neuroimaging or big data mining could provide simple answers to deeply complex questions. He predicted—presciently—that these powerful new tools would have great difficulty in producing solid, replicable findings that could be translated to clinical practice.





Footnote

1 i.e., Very senior male psychiatrists. When I wrote my blog post about being female, and my wife's diagnosis of stage 4 cancer...

So yeah, think of this as my “coming out”. Sorry if I've offended anyone with my ability to blend into male-dominated settings.

Thank you for reading, and for your continued support during this difficult time.

...Barney was the first to comment, with his usual wit and grace: “I am pretty sure we can handle that. Bless you both.”

There Is a Giant Hole Where My Heart Used To Be

With profound grief, I announce that Sandra’s journey has come to an end.



Gardens at Government House, Victoria BC (June 2017)


Sandra Dawson was taken from this earth by the indiscriminate brutality of metastatic cancer. She died on October 2, 2018 at the age of 51. This horrific experience was not a “fight.” She did NOT lose a battle against the unchecked proliferation of malignant cells. Instead, Sandra saw the final phase of her life as a journey. She was incredibly brave while facing the ravages of this terrible disease, and she was ultimately accepting of her fate. She was gracious and generous in sharing the final stages of her journey with friends and family, and also with nearly 25,000 followers of her @unsuicide Twitter account.1 There was an outpouring of love and support and visitors and flowers, which buoyed her spirits and made her feel loved.

She really loved flowers.




Sandra was many things – a writer, a blogger, a jewelry designer, a crochet artist, a mental health advocate, a board member of the Mental Health Commission of Canada, and the 2016 winner of a Sovereign's Medal for Volunteers from The Governor General of Canada, for over a decade of work in suicide prevention.



Government House, Victoria BC (June 2017)



September 10 was World Suicide Prevention Day, and Dr. Erin Michalak of CREST.BD wrote a touching tribute to Sandra’s work.

Sandra Dawson’s Legacy

. . .

“Most significantly, Sandra created the Unsuicide directory of online and mobile crisis supports, as well as a popular corresponding Twitter feed (@Unsuicide) with close to 25,000 followers. Her Unsuicide online supports are authentically grounded in her lived experience of bipolar disorder, but also unfailingly focused on helping people, regardless of their geography, to access credible and safe online and mobile support tools. In 2016, she was awarded the Sovereign’s Medal for Volunteers from the Governor General of Canada in acknowledgement of the impact of her work as an advocate for people facing mental health challenges and in suicide prevention.”

But mostly I think of her as a writer.



Radar Queer Reading Series, SF Public Library (October 2016)


She was also my partner and wife of nearly 12 years.


December 2017


We met in 2006 through our respective blogs, The Neurocritic and Neurofuture. The neuroblogging community was quite small then. Neurofuture started in January 2006 as a blog about Brain Science and Neurofuturism that was ahead of its time (so to speak):
The future is now, in many ways. Neuroscience and psychiatry are fields that have experienced tremendous growth, especially in the last few decades, and these advances already have practical applications. … At the same time, much is still unknown…
. . .

Neuroscience, psychiatry, neuroethics and transhumanism are the four areas of focus for this blog. They have applications in a broad range of fields, and I'll be aggregating diverse information. Expect a lot of interesting links. I invite your comments.

In June 2006, she started a video blog, Channel N, that shared interesting content related to neuroscience, psychology, and mental health. Channel N eventually moved to Psych Central, a trusted mega-site for mental health, depression, bipolar, ADHD & psychology information. Sandra also wrote posts for World of Psychology, the main PsychCentral blog, including many Top 10 lists, which were always popular.

Along with Steve Higgins, she blogged for Omni Brain (from December 2006 – January 2008), which was “an exploration of the serious, fun, ridiculous / past, present, future of the brain and the science that loves it” – as part of the long-defunct Science Blogs network.


But Sandra’s real love was writing fiction (mostly under the pseudonym S. Kay). She wrote an unpublished novel (or two), flash fiction, and a novella that was published by Maudlin House (ironically titled Joy).





The advent of Twitter really changed her writing. She started writing microfiction, ultra-short stories in the form of Tweets (140 characters or less). Sometimes they were standalone zaps that told an entire tiny tale.





Other times, she crafted a number of tweets together to tell a longer story. These were published in various venues and included pieces such as Neurotech Light and Dark, Cloud Glitches, Facebook Algorithm of Death,2 and her final piece, Goth Robots (robots were always a favorite theme; see the interview Weird words with S. Kay). Her blueberrio tumblr has a comprehensive list of her published work.




Her masterwork was Reliant, “an apocalypse in tweets” published in 2015 by the late tNY Press (but still available for purchase at Amazon):
“Selfies, sexbots, and drones collide in these interwoven nanofictions about a society before, during, and after its collapse. With dazzling humor and insight, debut author S. Kay reveals a future that looks disconcertingly like the present. Beautifully illustrated by Thoka Maer, Reliant is a bold examination of society's unrequited love for technology.”
There was a nice review in Entropy by Christopher Iacono.




But my proudest literary-moment-by-proxy was when Sandra read at Writers With Drinks, a long-standing, monthly series of readings by spectacular writers, held in a bar and hosted by the talented and amusing Charlie Jane Anders. It was a fun evening and the ideal crowd for reading Reliant.



Writers With Drinks (Nov. 14, 2015)


Sandra's next book, Lost in the Land of Bears (designed and published by Reality Hands), had a truly unique limited edition faux fur cover, but it's still available as an e-book.


James Knight wrote a great review at Sabotage Reviews.


Sandra was an early adopter of all forms of online communication. She was an avid blogger, social media user, and before that an online diarist. She was prescient about the future of social media:
I have no optimism that social media will bring the world together with mutual empathy improving society. Sheep are still sheep and their bleatings still need shepherds to make them a coherent flock. An important lesson for the next decade. The media is still the media and if anything, is more segregated than ever.

Sandra Dawson, January 4, 2007


I could go on and on about her other wildly creative projects, like her Spambot Psychosis origami text cube, her beachpunk jewelry, her minibook necklaces (sample here), her upcycled cashmere brooches, her Postcards from the Post-Apocalypse, and her exhibit of crocheted art hats (and bonus EEG cap) at Femina Potens (the Cultivating Cozy exhibition).



January 18th, 2008



But what I can't express in words right now is how much I'll miss her.






Footnotes

1 Like me, she had many Twitter accounts and blogs and pseudonyms; the latter included Sandra K, Sandra Kiume, and S. Kay.

2Sadly, this was based on a true story that had an even more tragic ending.




I love you.
RIP.


Manifestations of Fear in Cross-Cultural Interpretations of Sleep Paralysis


Frontispiece from: Blicke in die Traum- und Geisterwelt (A look into the dream and spirit world), by Friedrich Voigt (1854).


What are you most afraid of? Not finding a permanent job? Getting a divorce and losing your family? Losing your funding? Not making this month's rent? Not having a roof over your head? Natural disasters? Nuclear war? Cancer? Having a loved one die of cancer?

FAILURE?

There are many types of specific phobias (snakes, spiders, heights, enclosed spaces, clowns, mirrors, etc.), but that's not what I'm talking about here.

What are you really afraid of? Death? Pain? A painful death?

Devils, demons, ghosts, witches, and other supernatural apparitions? This latter category (haunting, demon possession) is common among many cultures with religious or spiritual practices, and can evoke primal fear. As a former Catholic, I am still frightened by movies or TV shows that involve demonic possession, like American Horror Story: Asylum.




I used this show as an exemplar in a post about Possession Trance Disorder in DSM-5.

A fantastic long-form article by Mike Mariani has just appeared in The Atlantic. The author intermixes the individual case study of Louisa Muskovits with the history of exorcism and facts about its modern-day resurgence.

American Exorcism
Priests are fielding more requests than ever for help with demonic possession, and a centuries-old practice is finding new footing in the modern world.
 . . .
  • The official exorcist for Indianapolis has received 1,700 requests so far in 2018.
  • Father Thomas said that as many as 80 percent of the people who come to him seeking an exorcism are sexual-abuse survivors.
  • Some abused children are subjected to such agonizing experiences that they adopt a coping mechanism in which they force themselves into a kind of out-of-body experience. As they mature, this extreme psychological measure develops into a disorder that may manifest unpredictably. “There is a high prevalence of childhood abuse of different kinds with dissociative disorders,” Roberto Lewis-Fernández, a Columbia University psychiatry professor who studies dissociation, told me.

This brings me to another topic I've been meaning to write about for weeks. Sleep paralysis is the terrifying condition of being half awake but unable to move (or speak or scream). It can feel like you're frozen in bed, aware of your surroundings yet completely paralyzed. This happens because the complete muscle atonia typically experienced during REM sleep persists into (or intrudes upon) the waking state. Scary dream imagery can intrude while in this state, making it even worse.

A fascinating new paper covers interpretations of this frightening phenomenon across different cultures (Olunu et al., 2018). A common theme is being attacked, visited, or sat upon by supernatural beings, such as demons, witches, ghosts, and spirits.




The eerie presences are called Jinn in Egypt, Kabus in Iran, Phi Um in Thailand, Old Hag in lots of places, and the especially horrifying Kokma in Saint Lucia, which are “attacks by dead spirits or unbaptized babies that jump into a body and squeeze the throat”. In Nigeria, believers in supernatural explanations exist alongside others who hold rational explanations:
Nigerians describe it as “visitation of an evil spirit, witches, or some form of spiritual attack.” Others have beliefs that it may be due to anxiety or emotions associated with family problems.
The Wikipedia page on the folklore of the night hag also has a pretty good listing.


Interestingly, sleep paralysis was considered as a partial explanation for “demonic possession” in the case of Louisa Muskovits (Atlantic):
Louisa seemed to vacillate between this unhinged state and her normal self. One minute she would snarl and bare her teeth, and the next she would beg for help. “It definitely had this appearance where she was fighting within herself,” Harp [her former therapist] told me.

. . .
[Another time] Louisa ... woke up abruptly, only to find her body locked in place—but with the added shock of what seemed to be visual hallucinations, including one of a giant spider crawling into her bedroom. Louisa was so jolted that she barely ate or slept for three days. “I didn’t feel safe,” she said. “I felt violated.”

. . .
Sleep paralysis seemed like a promising explanation. A phenomenon in which sufferers move too quickly in and out of rem sleep for the body to keep up, sleep paralysis causes a person’s mind to wake up before the body can shake off the effects of sleep. Hovering near full consciousness, the person can experience paralysis and hallucinations.

But Louisa didn’t think this could account for the hand on her collarbone, which she swore she’d felt while she was completely awake [oh of course it can account for this phenomenology!].


What are your experiences of sleep paralysis?


Further Reading

When Waking Up Becomes the Nightmare: Hypnopompic Hallucinatory Pain

The Phenomenology of Pain During REM Sleep

The Neurophysiology of Pain During REM Sleep

Possession Trance Disorder in DSM-5

Spirit Possession as a Trauma-Related Disorder in Uganda

"The spirit came for me when I went to fetch firewood" - Personal Narrative of Spirit Possession in Uganda

Possession Trance Disorder Caused by Door-to-Door Sales

Fatal Hypernatraemia from Excessive Salt Ingestion During Exorcism

Diagnostic Criteria for Demonic Possession

The Devilish Side of Psychiatry


Reference

Olunu E, Kimo R, Onigbinde EO, Akpanobong MU, Enang IE, Osanakpo M, Monday IT, Otohinoyi DA, John Fakoya AO. (2018). Sleep Paralysis, a Medical Condition with a Diverse Cultural Interpretation. Int J Appl Basic Med Res. 8(3):137-142.



Scene from The Wailing. Although it's certainly not for everybody, it is an amazing film.


Folie à deux and Homicide for the Holidays



Nothing says home for the holidays like a series of murders committed by family members with a shared delusion. So sit back, sip your hot apple cider or spiked egg nog, and revel in family dysfunction worse than your own.

{Well! There is an actual TV show called Homicide for the Holidays, which I did not know. Kind of makes my title seem derivative... but it was coincidental.}


“Folie à deux”, or Shared Psychotic Disorder, was a diagnosis in DSM-IV-TR:

(A) A delusion develops in an individual in the context of a close relationship with another person(s), who has an already-established delusion. 

(B) The delusion is similar in content to that of the person who already has the established delusion. 

(C) The disturbance is not better accounted for by another Psychotic Disorder (e.g., Schizophrenia) or a Mood Disorder With Psychotic Features and is not due to the direct physiological effects of a substance (e.g., a drug of abuse, a medication) or a general medical condition.

Folie à deux occurs in the secondary partner, who shares a delusion with the primary partner (diagnosed with schizophrenia, delusional disorder, or psychotic depression). In DSM-5, folie à deux no longer exists as a specific disorder. Instead, the secondary partner is given a diagnosis of “other specified schizophrenia spectrum and other psychotic disorder” with a specifier: “delusional symptoms in the partner of individual with delusional disorder” (APA, 2013).

The first cases were reported in the 19th century by the French psychiatrists Baillarger (1860) and Lasègue & Falret (1877). The latter authors note that insanity isn't contagious, but under special circumstances...
a) In the “folie à deux” one individual is the active element; being more intelligent than the other he creates the delusion and gradually imposes it upon the second one who is the passive element. At the beginning the latter resists but later, little by little, he suffers the pressure of his associate, although at the some degree he also reacts and influences the former to correct, modify, and coordinate the delusion that then becomes their common delusion, to be repeated to all in an almost identical fashion and with the same words.
The two individuals are in a close relationship and typically live in an isolated environment.

A recent paper by Guivarch et al. (2018) covered the history of the disorder, and performed a literature review on folie à deux and homicide. They found 17 articles:
In the cases examined, homicides were committed with great violence, usually against a victim in the family circle, and were sometimes followed by suicide. The main risk factor for homicide was the combination of mystical and persecutory delusions. The homicides occurred in response to destabilization of the delusional dyads.

Body mutilation is not uncommon: “These features appear in the reported case of a mother who was delusional and killed her young son by hitting him on the head 3 times with a hatchet.”

The authors presented a detailed history of induced psychosis involving Mr. A (the secondary) and Mrs. A (the primary, who had a family history of delusion). Shortly after getting married, they had a child who was removed by social services due to inadequate parenting.
Subsequently, the couple engaged in several years of delusional wandering in France and Italy, traveling from village to village to accomplish “a divine mission”, during which time they were hosted in monasteries or abbeys. They expressed delusional feelings but never visited a psychiatrist and were never confronted by the police. The couple's relationship transformed; the partners stopped having sexual relations and quickly established a delusional hierarchical relation, with Mrs. A being called “Your Majesty” and Mr. A considering himself “King of Australia, Secretary of Her Majesty”.

After about 20 years of this, in a fit of overkill, Mr. A murdered a random 11-year-old child by inflicting 44 stab wounds. Earlier, he had felt humiliated and persecuted at a police checkpoint, which provoked an “incident.” The murder of the child was part of their delusional divine mission, to make a necessary sacrifice that would restore balance.


Paranoia of the exalted type in a setting of folie à deux

The famous case of Pauline P (“a dark, rather sulky looking but not unattractive girl of stocky build”) and Juliet H (“a tall, willowy, frail, attractive blonde with large blue eyes”) was also mentioned (Medlicott, 1955). The two girls established a very close bond, constructed an elaborate make-believe world of fictional characters, withdrew from all others, became sexually involved, and developed a superiority complex. They killed Pauline's mother “because one of the girls was going to move with her parents, which would have led to the separation of the delusional dyad (Medlicott, 1955).” This formed the basis of the fantastic 1994 film, Heavenly Creatures, featuring Melanie Lynskey and Kate Winslet.




Granted, their indissoluble bond was pathological, but laughable 1955 views of same-sex relationships were on display in this analysis:
There is of course no doubt that the relationship between these two girls was basically homosexual in nature. Pauline made attempts in 1953 of establishing heterosexual relationships, but in spite of intercourse on one occasion there was no evidence of real erotic involvement. All her escapades were fully discussed with Juliet which is a common feature amongst people basically homosexual in orientation.

Yes, we can generalize and say that all teenage girls in the 1950s commonly bragged about their heterosexual exploits with their lesbian lovers.

From Pauline's 1953 diary:
“To-day Juliet and I found the key to the 4th World.  ... We have an extra part of our brain which can appreciate the 4th World. Only about 10 people have it. When we die we will go to the 4th World, but meanwhile on two days every year we may use the key and look in to that beautiful world which we have been lucky enough to be allowed to know of, on this Day of Finding the Key to the Way through the Clouds.”

Your family gatherings may not always be harmonious, but presumably your delusional children are not plotting to kill you. Happy Holidays.





References

Baillarger J. (1860). Quelques exemples de folie communiquée. Gazette Des Hôpitaux Civils et Militaires 38: 149-151.

Guivarch J, Piercecchi-Marti MD, Poinso F. (2018). Folie à deux and homicide: Literature review and study of a complex clinical case. International Journal of Law and Psychiatry 61:30-39.

Lasègue C, Falret J. (1877). La folie à deux (ou folie communiquée). Annales Médico Psychologiques 18: 321-355. English translation (Dialogues in Philosophy, Mental and Neuro Sciences, December 2016).

Medlicott R. (1955). Paranoia of the exalted type in a setting of folie à deux; a study of two adolescent homicides. British Journal of Medical Psychology 28:205-223.

2018 Was a Year to Forget. Really.


Our memory for the details of real-life events is poor, according to a recent study.

Seven MIT students took a one-hour walk through Cambridge, MA. A day later, they were presented with one-second video clips they may or may not have seen during their walk (the “foils” were taken from another person's recording). Mean recognition accuracy was 55.7%, barely better than guessing.1


Minimal recognition memory for detailed events. Dashed line is chance performance. Adapted from Fig. 2 of Misra et al. (2018).


How did the researchers capture the details of what was seen during each person's stroll about town (2.1 miles / 3.5 km)? The participants were fitted with eye-tracking glasses to follow their eye movements (because you can't remember what you don't see), and a GoPro camera was mounted on a helmet.


from Fig. 1 (Misra et al., 2018).


One problem with this setup, however, was that the eye tracking data had to be excluded. The overwhelmingly bright summer sun prevented the eye tracker from obtaining accurate images of the pupil. Thus, Experiment 2 was performed inside the Boston Museum of Fine Arts with a separate group of 10 students.


from Fig. 1 (Misra et al., 2018).


Recognition performance was better in Experiment 2. Mean accuracy was 63.2%, well above chance (p = .0005) but still not great. Participants correctly identified clips they had seen 59% of the time, and correctly rejected clips they hadn't seen 67% of the time. One participant (#4) was really good, and you'll notice the individual differences below.

Dashed line is chance performance. Adapted from Fig. 2 of Misra et al. (2018).
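As a quick sanity check on those numbers, the reported hit and correct-rejection rates can be combined into overall accuracy and a standard sensitivity index. The assumptions here are mine: equal numbers of old and new clips, with 1 minus the correct-rejection rate treated as the false-alarm rate.

```python
from scipy.stats import norm

hit_rate = 0.59          # "yes" responses to clips actually seen
cr_rate = 0.67           # "no" responses to foil clips
fa_rate = 1 - cr_rate    # false alarms

accuracy = (hit_rate + cr_rate) / 2                 # ~0.63, matching the reported 63.2%
d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)    # ~0.67, modest sensitivity
print(f"accuracy = {accuracy:.2f}, d' = {d_prime:.2f}")
```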


In Exp. 2, the investigators were able to look at the influence of eye fixations on memory performance. Not surprisingly, people were better at remembering what they looked at (fixated on), but this only held for certain categories of items: talking people, objects rated as “distinctive” (but not distinctive faces), and paintings (but not sculptures).




How do the authors interpret this finding? We don't necessarily pay attention to everything we look at.
“What subjects fixated on also correlated with performance (Fig. 4), but it is clear that subjects did not remember everything that they laid eyes on. There is extensive literature showing that subjects may not pay attention or be conscious of what they are fixating on. Therefore, it is quite likely that, in several instances, subjects may have fixated on an object without necessarily paying attention to that object. Additionally, attention is correlated with the encoding of events into memory. Thus, the current results are consistent with the notion that eye fixations correlate with episodic memory but they are neither necessary nor sufficient for successful episodic memory formation.”

For me personally, 2018 was a year to forget.2 Yet, certain tragic images are etched into my mind, cropping up at inopportune times to traumatize me all over again. That's a very different topic for another time and place.


May your 2019 brighten the sky.


The number 2019 is written in the air with a sparkler near a tourist camp outside Krasnoyarsk, Russia, on January 1, 2019. (The Atlantic)


Footnotes

1 However:
“Two subjects from Experiment I were excluded from the analyses. One of these subjects had a score of 96%, which was well above the performance of any of the other subjects (Figure 2). The weather conditions on the day of the walk for this subject were substantially different, and this subject could thus easily recognize his own video clips purely from assessing the weather conditions. Another subject was excluded because he responded 'yes' >90% of the trials.”

2 See:

I should have done this by now...

The Lie of Precision Medicine

Derealization / Dying

There Is a Giant Hole Where My Heart Used To Be

How to Reconstruct Your Life After a Major Loss


Reference

Misra P, Marconi A, Peterson M, Kreiman G. (2018). Minimal memory for details in real life events. Sci Rep. 8(1):16701.





What Can Brain Imaging Tell Us About Violent Extremism?


Before answering that question, I'll tell you about an incredibly impressive ethnographic study and field survey. For a one-year period, the investigators (Pretus, Hamid et al., 2018) conducted field work within the community of young Moroccan men in Barcelona, Spain. As the authors explain, the Moroccan diaspora is an immigrant community susceptible to jihadist forms of radicalization:
Spain hosts Europe’s second largest Moroccan diaspora community (after France) and its least integrated, whereas Catalonia hosts the largest and least integrated Moroccan community in Spain. Barcelona ... was most recently the site of a mass killing ... by a group of young Moroccan men pledging support for the Islamic State. According to Europol’s latest annual report on terrorism trends, Spain had the second highest number of jihadist terrorism-related arrests in Europe (second only to France) in 2016...

After months of observation in selected neighborhoods, the researchers approached prospective participants about completing a survey, with the assurance of absolute anonymity. No names were exchanged, and informed consent procedures were performed orally, to prevent any written record of participation. The very large sample included 535 respondents (average age 23.47 years, range 18–42), who were all Sunni Muslim Moroccan men.

The goal of the study was to look at sacred values in these participants, and whether these values might affect their willingness to engage in violent extremism. “Sacred values are immune or resistant to material tradeoffs and are associated with deontic (duty-bound) reasoning...” (Pretus, Hamid et al., 2018). The term sacred values doesn't necessarily refer to religious beliefs. One of the most common is the basic human value, “it is wrong to kill another human being.” But theoretically speaking, we could include statements such as, “it is wrong to kill endangered species for sport (or for any other reason).”

In this study, Sacred Values included:
  • Palestinian right of return
  • Western military forces being expelled from all Muslim lands
  • Strict sharia as the rule of law in all Muslim countries
  • Armed jihad being waged against enemies of Muslims
  • Forbidding of caricatures of Prophet Mohammed
  • Veiling of women in public

What were the Nonsacred Values? We don't know. I couldn't find examples anywhere in the paper. It's crucial that we know what these were, to help understand the “sacralization” of nonsacred values, which was observed in an fMRI experiment (described later). So I turned to the Supplemental Material of Berns et al. (2012), inferring that the statements below are good examples of nonsacred values in a population of adults in Atlanta.
  • You are a dog person.
  • You are a cat person.
  • You are a Pepsi drinker.
  • You are a Coke drinker.
  • You believe that Target is superior to Walmart.
  • You believe that Walmart is superior to Target.

But what if the nonsacred values in the present study of violent extremism were a little more contentious and meaningful?
  • You are a fan of FC Barcelona.
  • You are a fan of AC Milan.

Anyway, to choose participants for the fMRI experiment, the investigators first divided the entire group into those who were more (n=267) or less (n=268) vulnerable to recruitment into violent extremism (see Appendix for details). An important comparison would have been to directly contrast brain activity in these two groups, but that wasn't done here. Out of the 267 men more vulnerable to violent extremism, 38 agreed to participate in the fMRI study. These 38 were more likely to Endorse Militant Jihadism (score 4.24 out of 7) than the general fMRI pool (3.35) and the non-fMRI pool (2.43).1 

A battery of six sacred and six nonsacred values was constructed individually for each person and presented in the scanner, along with a number of grammatical variants, for a list of 50 different items per condition. The 38 participants were randomly assigned to one of two manipulations in a between-subjects design: exclusion (n=19) and inclusion (n=19) in the ever-popular ball-tossing video game of Cyberball. [PDF]2



Unfortunately, this reduced the study's statistical power. Nonetheless, a major goal of the experiment was to examine how social exclusion affects the processing of sacred values. I don't know if Cyberball studies are ever conducted in a within-subjects design (perhaps with an intervening task), or if exposure to one of the two conditions is too “contaminating”. At any rate, in real life, discrimination against Muslim immigrants is isolating and causes exclusion from social and economic benefits. Feelings of marginalization can result in greater radicalization and support for (and participation in) extremist groups. At this point in time, I don't think neuroimaging can add to the extensive knowledge gained from years of field work.
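For a rough sense of what n = 19 per group buys you, here is a back-of-the-envelope power calculation for a simple two-sample comparison. The assumed effect size (Cohen's d = 0.5) and test are my choices for illustration, not anything specified in the paper.

```python
from statsmodels.stats.power import TTestIndPower

# Power for an independent-samples t-test with 19 participants per group,
# two-sided alpha = .05, and a hypothetical "medium" effect of d = 0.5.
power = TTestIndPower().power(effect_size=0.5, nobs1=19, ratio=1.0, alpha=0.05)
print(f"Estimated power: {power:.2f}")   # roughly 0.3 under these assumptions
```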

Nevertheless, the investigators wanted to extend the findings of Berns et al. (2012) to a very different population. The earlier study wanted to determine whether sacred values are processed in a deontological way (based on strict rules of right and wrong) or in a utilitarian fashion (based on cost/benefit analysis of outcome). As interpreted by those authors, processing sacred values was associated with increased activation of left temporoparietal junction (semantic storage) and left ventrolateral prefrontal cortex (semantic retrieval). Berns et al. suggested that “sacred values affect behaviour through the retrieval and processing of deontic rules and not through a utilitarian evaluation of costs and benefits.” Based on those results, the obvious prediction in the present study is that sacred values should activate left temporoparietal junction (L TPJ) and left ventrolateral prefrontal cortex (L VLPFC).


Fig. 3A (Pretus, Hamid et al., 2018).


Fig. 3A shows that only the latter half of that prediction was observed, and there was no explanation for the lack of activation in L TPJ. Instead, there was a finding in R TPJ in the excluded group which I won't discuss further.

Of note, the excluded participants rated themselves as being more likely to fight and die for nonsacred values, compared to the included participants. This was termed “sacralization” and now you can see why it's so important to know the nonsacred values. Are we talking about fighting and dying for Pepsi vs. Coke? For FC Barcelona vs. AC Milan? Not to be glib, but this would help us understand why social exclusion (in an artificial experimental setting) would radicalize these participants (in an artificial experimental setting).



Fig. 3B (Pretus, Hamid et al., 2018). Nonsacred values activate Left Inferior Frontal Gyrus (IFG, aka VLPFC) in the excluded group, but not in the included group. This was interpreted as a neural correlate of “sacralization”.


Another interpretation of Fig. 3B is that the exclusion manipulation was distracting, making it more difficult for these participants to process stimuli expressing nonsacred values (due to increased encoding demands, syntactic processing, etc.). Exclusion increased emotional intensity ratings, and decreased feelings of belongingness and being in control. This distraction could have carried over to the task of rating one's willingness to fight and die in defense of values.

Even if we say the brain imaging results weren't especially informative, the extensive ethnographic study and field surveys were a highly valuable source of data on a marginalized group of young Muslim men at risk of recruitment by violent extremist groups. It's a vicious cycle: terrorist attacks result in greater discrimination and persecution of innocent Muslim men, which has the unintended effect of further radicalization in some of the most vulnerable individuals. To conclude, I acknowledge that my comments may be out of turn because I have no authority or expertise, and because I'm from a country with an appalling record of discriminating against Muslims.


Footnotes

1I was a bit confused by some of these scores, because they changed from one paragraph to the next, and differed from what was in Table 1. Perhaps one was a composite score, and the other from an individual questionnaire.

2I've written extensively about whether Cyberball is a valid proxy for social exclusion, but I won't get into that here.


References

Berns GS, Bell E, Capra CM, Prietula MJ, Moore S, Anderson B, Ginges J, Atran S. (2012). The price of your soul: neural evidence for the non-utilitarian representation of sacred values. Philos Trans R Soc Lond B Biol Sci. 367(1589):754-62.

Pretus C, Hamid N, Sheikh H, Ginges J, Tobeña A, Davis R, Vilarroya O, Atran S. (2018). Neural and Behavioral Correlates of Sacred Values and Vulnerability to Violent Extremism. Front Psychol. 9:2462.


Appendix


Modified from Table 1 (Pretus, Hamid et al., 2018).

[The] measures included (1) a modified inventory on general radicalization (support for violence as a political tactic) based on a prior longitudinal study on violent extremist attitudes among Swiss adolescents (Nivette et al., 2017); (2) a scale on personal grievances and previously used on imprisoned Islamist militants in the Philippines, and Tamil Tigers in Sri Lanka (Webber et al., 2018); (3) a scale on collective narcissism which has been shown to shape in-group authoritarian identity and support for military aggression against outgroups (de Zavala et al., 2009); (4) a self-report delinquency inventory adapted from Elliott et al. (1985), based on the disproportionate number of Muslim European delinquents who join jihadist terrorist groups (Basra and Neumann, 2016); and (5) a series of items assessing endorsement of militant jihadism (“The fighting of the Taliban, Al Qaida, ISIS is justified,” “The means of jihadist groups are justified,” “Attacks against Western nations by jihadist groups are justified,” “Attacks against Muslim nations by jihadist groups are justified,” “Attacks against civilians by jihadist groups are justified,” “Spreading Islam using force is every part of the world is an act of justifiable jihad,” and “A Caliphate must be resurrected even by force”) that we combined into a reliable composite score, “Endorsement of Militant Jihadism”...

Unlucky Thirteen



Today is the 13th anniversary of this blog. I wanted to write a sharp and subversive post.1 Or at least compose a series of self-deprecating witticisms about persisting this long. Alas, it has been an extremely difficult year.

Instead, I drew inspiration from Twitter (@neuroecology) and a blogger who's been at it even longer than I (@DoctorZen). Very warily I might add, because I knew the results would not be flattering or pretty.

Behold my scores on the “Big Five” personality traits (and weep). Some of the extremes are partly situational, and that's why I'm presenting these traits separately. Sure, negative emotionality is a relatively fixed part of my personality, but the 100% scores on depression and anxiety are influenced by grief (due to the loss of my spouse of 12 years). Personality psychologists would turn this around and say that someone high in trait negative emotionality (formerly known as the more disparaging “neuroticism”) would be predisposed to depression and anxiety.




Another fun trait score is shown below. This one might be even sadder. Yeah, I'm introverted, but people in my situation often tend to withdraw from friends, family, and society.2 Again, reverse the causality if you wish, but social isolation is not an uncommon response.





But hey, I am pretty conscientious, as you can see from my overall test results on the Big Five. You too can take the test HERE.




I'll have something more interesting for you next time.



Footnotes

1Why? To prove to myself that I can still do it? To impress the dwindling number of readers? To show that the blog has not exceeded its expiry date, that it still has relevance in its own modest and quirky way.

2Hey, I actually had two social engagements this weekend! My lack of assertiveness is disturbing, however. But I absolutely do not want to take the lead on anything right now.




Is executive function different from cognitive control? The results of an informal poll


It ended in a tie!




Granted, this is a small and biased sample, and I don't have a large number of followers. The answers might have been different had @russpoldrack (Yes in a landslide) or @Neuro_Skeptic (n=12,458 plus 598 wacky write-in votes) posed the question.

Before the poll I facetiously asked:
Other hypothetical questions (that you don't need to answer) might include:
  • Are you a clinical neuropsychologist? 
  • Do you use computational modeling in your work?1
  • What is your age?
Here, I was thinking:
  • Clinical neuropsychologists would say No
  • Computational researchers would say Yes
  • On average, older people would be more likely to say No than younger people

After the poll I asked, “So what ARE the differences between executive function and cognitive control? Or are the terms arbitrary, and their usage a matter of context / subfield?”

No one wanted to expound on the differences between the terms.2
I answered No, because I think the terms are arbitrary, and their usage a matter of context and subfield. Not that Wikipedia is the ultimate authority, but I was amused to see this:

Executive functions

From Wikipedia, the free encyclopedia
  (Redirected from Cognitive control)
Executive functions (collectively referred to as executive function and cognitive control) are a set of cognitive processes that are necessary for the cognitive control of behavior: selecting and successfully monitoring behaviors that facilitate the attainment of chosen goals. Executive functions include basic cognitive processes such as attentional control, cognitive inhibition, inhibitory control, working memory, and cognitive flexibility

Nature said this:

Cognitive control

Cognitive control is the process by which goals or plans influence behaviour. Also called executive control, this process can inhibit automatic responses and influence working memory. Cognitive control supports flexible, adaptive responses and complex goal-directed thought. Some disorders, such as schizophrenia and ADHD, are associated with impairments of executive function.

They're using the terms interchangeably! The terms cognitive control, executive control, executive function, and executive control functions are not well-differentiated, except in specific contexts. For instance, the Carter Lab definition below sounds specific at first, but then branches out to encompass many “executive functions” not named as such.

Cognitive Control

"Cognitive control" is a construct from contemporary cognitive neuroscience that refers to processes that allow information processing and behavior to vary adaptively from moment to moment depending on current goals, rather than remaining rigid and inflexible. Cognitive control processes include a broad class of mental operations including goal or context representation and maintenance, and strategic processes such as attention allocation and stimulus-response mapping. Cognitive control is associated with a wide range of processes and is not restricted to a particular cognitive domain. For example, the presence of impairments in cognitive control functions may be associated with specific deficits in attention, memory, language comprehension and emotional processing. ...

Actually, the term Cognitive Control dates back to the 1920s, if not further. Two quick examples.

(1) When talking about Charles Spearman and his theory of intelligence and his three qualitative principles, Charles S. Slocombe (1928) said:
“To these he adds five quantitative principles, cognitive control (attention), fatigue, retentivity, constancy of output, and primordial potency...”
Simple! Cognitive Control = Attention.

(2) Frederick Anderson (1942), in The Relational Theory of Mind:
“Meanings, then, are mental processes which, although not themselves objects for consciousness, actively modify and characterize that of which we are for the moment conscious. They differ from other subconscious processes in this respect, that we have cognitive control over them and can at any moment bring them to light if we choose.”
Cognitive Control = having the capacity of “bringing things into consciousness” — is this different from attention, or “paying attention” to something by making it the focus of awareness?


Moving into the 21st century, two of the quintessential contemporary cognitive control papers that [mostly] banish executives from their midst are:

Miller and Cohen (2001):
“The prefrontal cortex has long been suspected to play an important role in cognitive control, in the ability to orchestrate thought and action in accordance with internal goals.”

Botvinick et al. (2001):
“A remarkable feature of the human cognitive system is its ability to configure itself for the performance of specific tasks through appropriate adjustments in perceptual selection, response biasing, and the on-line maintenance of contextual information. The processes behind such adaptability, referred to collectively as cognitive control, have been the focus of a growing research program within cognitive psychology.”

I originally approached this topic during research for a future post on Mindstrong and their “digital phenotyping” technology. Two of their five biomarkers are Executive Function and Cognitive Control. How do they differ? There's an awful lot of overlap, as we'll see in a future post.


Footnotes

1Another fun (and related) determinant might be, “does your work focus on the dorsal anterior cingulate cortex?” In which case, the respondent would answer Yes.

2 except for one deliberately obfuscatory response.


References

Anderson F. (1942). The Relational Theory of Mind. The Journal of Philosophy 39(10):253-60.

Botvinick MM, Braver TS, Barch DM, Carter CS, Cohen JD. (2001). Conflict monitoring and cognitive control. Psychol Rev. 108(3):624-52.

Miller EK, Cohen JD. (2001). An integrative theory of prefrontal cortex function. Annu Rev Neurosci. 24:167-202.

Slocombe CS. (1928). Of mental testing—a pragmatic theory. Journal of Educational Psychology 19(1):1-24.


Appendix

Many, many articles use the terms interchangeably. I won't single out anyone in particular. Instead, here is a valiant attempt by Nigg (2017) to make a slight differentiation between them in a review paper entitled:
On the relations among self-regulation, self-control, executive functioning, effortful control, cognitive control, impulsivity, risk-taking, and inhibition for developmental psychopathology.
But in the end he concludes, “Executive functioning, effortful control, and cognitive control are closely related.”

Depth Electrodes or Digital Biomarkers? The future of mood monitoring


Mood Monitoring via Invasive Brain Recordings or Smartphone Swipes

Which Would You Choose?


That's not really a fair question. The ultimate goal of invasive recordings is one of direct intervention, by delivering targeted brain stimulation as a treatment. But first you have to establish a firm relationship between neural activity and mood. Well, um, smartphone swipes (the way you interact with your phone) aim to establish a firm relationship between your “digital phenotype” and your mood. And then refer you to an app for a precision intervention. Or to your therapist / psychiatrist, who has to buy into use of the digital phenotyping software.

On the invasive side of the question, DARPA has invested heavily in deep brain stimulation (DBS) as a treatment for many disorders: Post-Traumatic Stress Disorder (PTSD), Major Depression, Borderline Personality Disorder, General Anxiety Disorder, Traumatic Brain Injury, Substance Abuse/Addiction, Fibromyalgia/Chronic Pain, and memory loss. None of the work has led to effective treatments (yet?), but the DARPA research model has established large centers of collaborating scientists who record from the brains of epilepsy patients. And a lot of very impressive papers have emerged – some promising, others not so much.

One recent study (Kirkby et al., 2018) used machine learning to discover brain networks that encode variations in self-reported mood. The metric was coherence between amygdala and hippocampal activity in the β-frequency band (13-30 Hz). I can't do justice to their work in the context of this post, but I'll let the authors' graphical abstract speak for itself (and leave questions like, why did it only work in 13 out of 21 of your participants? for later).
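
For readers who want a concrete sense of the metric, here is a minimal sketch (not the authors' pipeline) of how β-band coherence between two recorded signals might be computed with SciPy. The signals, sampling rate, and noise levels are simulated placeholders.

# Minimal sketch, not the Kirkby et al. pipeline: coherence between two
# signals in the beta band (13-30 Hz). Everything here is simulated.
import numpy as np
from scipy.signal import coherence

fs = 512                            # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)        # 60 s of data
rng = np.random.default_rng(0)

# two noisy signals sharing a 20 Hz component (stand-ins for amygdala
# and hippocampal recordings)
shared = np.sin(2 * np.pi * 20 * t)
amygdala = shared + rng.normal(scale=2.0, size=t.size)
hippocampus = shared + rng.normal(scale=2.0, size=t.size)

f, Cxy = coherence(amygdala, hippocampus, fs=fs, nperseg=2 * fs)
beta = (f >= 13) & (f <= 30)
print("mean beta-band coherence:", Cxy[beta].mean())

The real study did far more than this (machine learning over many electrode pairs and recording sessions), but the basic quantity is spectral coherence restricted to 13-30 Hz.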




Mindstrong

Then along comes a startup tech company called Mindstrong, whose Co-Founder and President is none other than Dr. Thomas Insel, former director of NIMH, and one of the chief architects1 of the Research Domain Criteria (RDoC), “a research framework for new approaches to investigating mental disorders” that eschews the DSM-5 diagnostic bible. The Appendix chronicles the timeline of Dr. Insel's evolution from “mindless” RDoC champion to “brainless” wearables/smartphone tech proselytizer.2


From Wired:
. . .

At Mindstrong, one of the first tests of the [“digital phenotype”] concept will be a study of how 600 people use their mobile phones, attempting to correlate keyboard use patterns with outcomes like depression, psychosis, or mania. “The complication is developing the behavioral features that are actionable and informative,” Insel says. “Looking at speed, looking at latency or keystrokes, looking at error—all of those kinds of things could prove to be interesting.”

Curiously, in their list of digital biomarkers, they differentiate between executive function and cognitive control — although their definitions were overlapping (see my previous post, Is executive function different from cognitive control? The results of an informal poll).
Mindstrong tracks five digital biomarkers associated with brain health: Executive function, cognitive control, working memory, processing speed, and emotional valence. These biomarkers are generated from patterns in smartphone use such as swipes, taps, and other touchscreen activities, and are scientifically validated to provide measurements of cognition and mood.
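
Mindstrong hasn't published its feature set, so purely as a toy illustration, here is how keyboard-interaction features of the kind Insel mentions (speed, latency, errors) could be summarized from timestamped touch events. The event format and every field name are invented for this sketch.

# Toy illustration only - the actual Mindstrong features are proprietary.
# Summarize latency and "error" features from invented keystroke events.
import numpy as np

events = [  # (timestamp in seconds, was_backspace)
    (0.00, False), (0.21, False), (0.45, False),
    (0.80, True),  (1.02, False), (1.18, False),
]

timestamps = np.array([t for t, _ in events])
backspaces = np.array([b for _, b in events])

inter_key_intervals = np.diff(timestamps)            # keystroke latency
features = {
    "median_latency_ms": float(np.median(inter_key_intervals) * 1000),
    "latency_variability_ms": float(np.std(inter_key_intervals) * 1000),
    "backspace_rate": float(backspaces.mean()),      # crude error proxy
    "keys_per_second": len(events) / float(timestamps[-1] - timestamps[0]),
}
print(features)

Which of these toy features would count as "executive function" and which as "cognitive control" is exactly the question raised above.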

Whither RDoC?

NIMH established a mandate requiring that all clinical trials should postulate a neural circuit “mechanism” that would be responsible for any efficacious response. Thus, clinical investigators were forced to make up simplistic biological explanations for their psychosocial interventions:

“I hypothesize that the circuit mechanism for my elaborate new psychotherapy protocol which eliminates fear memories (e.g., specific phobias, PTSD) is implemented by down-regulation of amygdala activity while participants view pictures of fearful faces using the Hariri task.”



[a fictitious example]


I'm including a substantial portion of the February 27, 2014 text here because it's important.
NIMH is making three important changes to how we will fund clinical trials.

First, future trials will follow an experimental medicine approach in which interventions serve not only as potential treatments, but as probes to generate information about the mechanisms underlying a disorder. Trial proposals will need to identify a target or mediator; a positive result will require not only that an intervention ameliorated a symptom, but that it had a demonstrable effect on a target, such as a neural pathway implicated in the disorder or a key cognitive operation. While experimental medicine has become an accepted approach for drug development, we believe it is equally important for the development of psychosocial treatments. It offers us a way to understand the mechanisms by which these treatments are leading to clinical change.

OK, so the target could be a key cognitive operation. But let's say your intervention is a Housing First initiative in homeless individuals with severe mental illness and co-morbid substance abuse. Your manipulation is to compare quality of life outcomes for Housing First with Assertive Community Treatment vs. Congregate Housing with on-site supports vs. treatment as usual. What is the key cognitive operation here? Fortunately, this project was funded by the Canadian government and did not need to compete for NIMH funding.

I think my ultimate issue is one of fundamental fairness. Is it OK to skate away from the wreckage and profit by making millions of dollars? From Wired:
“I spent 13 years at NIMH really pushing on the neuroscience and genetics of mental disorders, and when I look back on that I realize that while I think I succeeded at getting lots of really cool papers published by cool scientists at fairly large costs—I think $20 billion—I don’t think we moved the needle in reducing suicide, reducing hospitalizations, improving recovery for the tens of millions of people who have mental illness,” Insel says. “I hold myself accountable for that.”

But how? You've admitted to spending $20 billion on cool projects and cool papers and cool scientists who do basic research. This has great value. But the big mistakes were an unrealistic promise of treatments and cures, and the charade of forcing scientists who study C. elegans to explain how they're going to cure psychiatric disorders.


Footnotes

1 Dr. Bruce Cuthbert was especially instrumental, as well as a large panel of experts. But since this post is about digital biomarkers, the former director of NIMH is the focus here.

2 The Insel archives of the late Dr. Mickey Nardo in his prolific blog, 1boringoldman.com, are a must-read. I also wish the late Dr. Barney Carroll was still here to issue his trenchant remarks and trademark witticisms.


Reference

Kirkby LA, Luongo FJ, Lee MB, Nahum M, Van Vleet TM, Rao VR, Dawes HE, Chang EF, Sohal VS. (2018). An Amygdala-Hippocampus Subnetwork that Encodes Variation in Human Mood. Cell 175(6):1688-1700.e14.


Additional Reading - Digital Phenotyping

Jain SH, Powers BW, Hawkins JB, Brownstein JS. (2015). The digital phenotype. Nat Biotechnol. 33(5):462-3. [usage of the term here means data mining of content such as Twitter and Google searches, rather than physical interactions with a smartphone]

Insel TR. (2017). Digital Phenotyping: Technology for a New Science of Behavior. JAMA 318(13):1215-1216. [smartphone swipes, NOT content: “Who would have believed that patterns of typing and scrolling could reveal individual fingerprints of performance, capturing our neurocognitive function continuously in the real world?”]

Insel TR. (2017). Join the disruptors of health science. Nature 551(7678):23-26. [conversion to the SF Bay Area/Silicon Valley mindset]. Key quote:
“But what struck me most on moving from the Beltway to the Bay Area was that, unlike pharma and biotech, tech companies enter biomedical and health research with a pedigree of software research and development, and a confident, even cocky, spirit of disruption and innovation. They have grown by learning how to move quickly from concept to execution. Software development may generate a minimally viable product within weeks. That product can be refined through ‘dogfooding’ (testing it on a few hundred employees, families or friends) in a month, then released to thousands of users for rapid iterative improvement.”
[is ‘dogfooding’ a real term?? if that's how you're going to test technology designed to help people with severe mental illnesses — without the input of the consumers themselves — YOU WILL BE DOOMED TO FAILURE.]

Philip P, De-Sevin E, Micoulaud-Franchi JA. (2018). Technology as a Tool for Mental Disorders. JAMA 319(5):504.

Insel TR. (2018). Technology as a Tool for Mental Disorders-Reply. JAMA 319(5):504.

Insel TR. (2018). Digital phenotyping: a global tool for psychiatry. World Psychiatry 17(3):276-277.


Appendix - a selective history of RDoC publications

Post-NIMH Transition (articles start appearing less than a month later)

#CNS2019



It's March of an odd-numbered year, which must mean... it's time for the Cognitive Neuroscience Society Annual Meeting in San Francisco!

I only started looking at the schedule yesterday and noticed the now-obligatory David Poeppel session on BIG stuff 1 on Saturday (March 23, 2019):

Special Session - The Relation Between Psychology and Neuroscience, David Poeppel, Organizer, Grand Ballroom

Then I clicked on the link and saw a rare occurrence: an all-female slate of speakers!



Whether we study single cells, measure populations of neurons, characterize anatomical structure, or quantify BOLD, whether we collect reaction times or construct computational models, it is a presupposition of our field that we strive to bridge the neurosciences and the psychological/cognitive sciences. Our tools provide us with ever-greater spatial resolution and ideal temporal resolution. But do we have the right conceptual resolution? This conversation focuses on how we are doing with this challenge, whether we have examples of successful linking hypotheses between psychological and neurobiological accounts, whether we are missing important ideas or tools, and where we might go or should go, if all goes well. The conversation, in other words, examines the very core of cognitive neuroscience.

Also on the schedule tomorrow is the public lecture and keynote address by Matt Walker, Why Sleep?
Can you recall the last time you woke up without an alarm clock feeling refreshed, not needing caffeine? If the answer is “no,” you are not alone. Two-thirds of adults fail to obtain the recommended 8 hours of nightly sleep. I doubt you are surprised by the answer to this question, but you may be surprised by the consequences. This talk will describe not only the good things that happen when you get sleep, but the alarmingly bad things that happen when you don’t get enough. The presentation will focus on the brain (learning, memory aging, Alzheimer’s disease, education), but further highlight disease-related consequences in the body (cancer, diabetes, cardiovascular disease). The take-home: sleep is the single most effective thing we can do to reset the health of our brains and bodies.

Why sleep, indeed.

Meanwhile, Foals are playing tonight at The Fox Theater in Oakland. Tickets are still available.




view video on YouTube.


Footnote

1 See these posts:

The Big Ideas in Cognitive Neuroscience, Explained #CNS2017

Big Theory, Big Data, and Big Worries in Cognitive Neuroscience #CNS2018

An Amicable Discussion About Psychology and Neuroscience


People like conflict (the interpersonal kind, not BLUE).1 Or at least, they like scientific debate at conferences. Panel discussions that are too harmonious seem to be divisive. Some people will say, “well, now THAT wasn't very controversial.” But as I mentioned last time, one highlight of the 2019 Cognitive Neuroscience Society Annual Meeting was a Symposium organized by Dr. David Poeppel.2

Special Session - The Relation Between Psychology and Neuroscience, David Poeppel, Organizer, Grand Ballroom
Whether we study single cells, measure populations of neurons, characterize anatomical structure, or quantify BOLD, whether we collect reaction times or construct computational models, it is a presupposition of our field that we strive to bridge the neurosciences and the psychological/cognitive sciences. Our tools provide us with ever-greater spatial resolution and ideal temporal resolution. But do we have the right conceptual resolution? This conversation focuses on how we are doing with this challenge, whether we have examples of successful linking hypotheses between psychological and neurobiological accounts, whether we are missing important ideas or tools, and where we might go or should go, if all goes well. The conversation, in other words, examines the very core of cognitive neuroscience.

Conversation. Not debate. So first, let me summarize the conversation. Then I'll get back to the merits (demerits) of debate. In brief, many of the BIG IDEAS motifs of 2017 were revisited...
  • David Marr and the importance of work at all levels of analysis 
  • What are the “laws” that bridge these levels of analysis?
  • “Emergent properties” – a unique higher-level entity (e.g., consciousness, a flock of birds) emerges from lower-level activity (e.g., patterns of neuronal firing, the flight of individual birds)... the whole is greater than the sum of its parts
  • Generative Models – formal models that make computational predictions
...with interspersed meta-commentary on replication, publishing, and Advice to Young Neuroscientists. Without further ado:

Dr. David Poeppel – Introductory Remarks that examined the very core of cognitive neuroscience (i.e., “we have to face the music”).
  • the conceptual basis of cognitive neuroscience shouldn't be correlation 
For example, fronto-parietal network connectivity (as determined by resting state fMRI) is associated with some cognitive function, but that doesn't mean it causes or explains the behavior (or internal thought). We all know this, and we all know that “we must want more!” But we haven't the vaguest idea of how to relate complex psychological constructs such as attention, volition, and emotion to ongoing biological processes involving calcium channels, dendrites, and glutamatergic synapses.
  • but what if the psychological and the biological are categorically dissimilar??
In their 2003 book, Philosophical Foundations of Neuroscience, Bennett and Hacker warned that cognitive neuroscientists make the cardinal error of “...commit[ting] the mereological fallacy, the tendency to ascribe to the brain psychological concepts that only make sense when ascribed to whole animals.”
“For the characteristic form of explanation in contemporary cognitive neuroscience consists in ascribing psychological attributes to the brain and its parts in order to explain the possession of psychological attributes and the exercise (and deficiencies in the exercise) of cognitive powers by human beings.” (p. 3)

On that optimistic note, the four panelists gave their introductory remarks.

(1) Dr. Lila Davachi asked, “what is the value of the work we do?” Uh, well, that's a difficult question. Are we improving society in some way? Adding to a collective body of knowledge that may (or may not) be the key to explaining behavior and curing disease? Although still difficult, Dr. Davachi posed an easier question, “what are your goals?” To describe behavior, predict behavior (correlation), explain behavior (causation), change behavior (manipulation)? But “what counts as an explanation?” I don't think anyone really answered that question. Instead she mentioned the recurring themes of levels of analysis (without invoking Marr by name), emergent properties (the flock of birds analogy), and bridging laws (that link levels of analysis). The correct level of analysis is/are the one(s) that advance your goals. But what to do about “level chauvinism” in contemporary neuroscience? This question was raised again and again.

(2) Dr. Jennifer Groh jumped right out of the gate with this motif. There are competing narratives in neuroscience, which we can call the electrode level (recording from neurons) vs. the neuroimaging level (recording large-scale brain activations or “network” interactions based on an indirect measure of neural activity). They make different assumptions about what is significant or worth studying. I found this interesting, since her lab is the only one that records from actual neurons. But there are ever more reductionist scientists who always throw stones at those above them. Neurobiologists (at the electrode level and below) are operating at ever more granular levels of detail, walking away from cognitive neuroscience entirely (who wants to be a dualist, anyway?). I knew exactly where she was going with this: the field is being driven by techniques, doing experiments merely because you can (cough — OPTOGENETICS — cough). Speaking for myself, however, the fact that neurobiologists can control mouse behavior by manipulating highly specific populations of cells raises the specter of insecurity... certain areas of research might not be considered “neuroscience” any more by the bulk of practitioners in the field (just attend the Society for Neuroscience annual meeting).

(3) Dr. Catherine Hartley continued with the recurring theme that we need both prediction and explanation to reach our ultimate goal of understanding behavior. Is a prediction system enough? No, we must know how the black box functions by studying “latent processes” such as representation and computation. But what if we're wrong about representations, I thought? The view of @PsychScientists immediately came to mind. Sorry to interrupt Dr. Hartley, but here's Golonka and Wilson in Ecological Representations:
Mainstream cognitive science and neuroscience both rely heavily on the notion of representation in order to explain the full range of our behavioral repertoire. The relevant feature of representation is its ability to designate (stand in for) spatially or temporally distant properties ... While representational theories are potentially a powerful foundation for a good cognitive theory, problems such as grounding and system-detectable error remain unsolved. For these and other reasons, ecological explanations reject the need for representations and do not treat the nervous system as doing any mediating work. However, this has left us without a straight-forward vocabulary to engage with so-called 'representation-hungry' problems or the role of the nervous system in cognition.

They go on to invoke James J Gibson's ecological information functions. But I can already hear Dr. Poeppel's colleague @GregoryHickok and others on Twitter debating with @PsychScientists. Oh. Wait. Debate.

Returning to The Conversation that I so rudely interrupted, Dr. Hartley gave some excellent examples of theories that link psychology and neuroscience. The trichromatic theory of color vision – the finding that three independent channels convey color information – was based on psychophysics in the early-mid 1800s (Young–Helmholtz theory). This was over a century before the discovery of cones in the retina, which are sensitive to three different wavelengths. She also mentioned the more frequently used examples of Tolman's cognitive maps (which predated The Hippocampus as a Cognitive Map by 30 years) and error-driven reinforcement learning (Bush–Mosteller and Rescorla–Wagner, both of which predate knowledge of dopamine neurons). To generate good linking hypotheses in the present, we need to construct formal models that make quantitative predictions (generative models).
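
For anyone who hasn't seen it written down, the Rescorla–Wagner model is about as simple as a generative model gets. Here is a bare-bones sketch of the delta rule; the learning rate and reward schedule are arbitrary choices for illustration.

# Bare-bones Rescorla-Wagner / delta-rule update: associative strength V
# moves toward the outcome by a fraction (alpha) of the prediction error.
alpha = 0.1                      # learning rate (arbitrary)
V = 0.0                          # initial associative strength
rewards = [1] * 20 + [0] * 20    # acquisition, then extinction

for r in rewards:
    prediction_error = r - V
    V = V + alpha * prediction_error

print(round(V, 3))               # V rises toward 1, then decays toward 0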

(4) Dr. Sharon Thompson-Schill gave a brief introduction with no slides, which is good because this post has gotten very long. For this reason, I won't cover the panel discussion and the Q&A period, which continued the same themes outlined above and expanded on “predictivism” (predictive chauvinism and data-driven neuroscience) and raised new points like the value (or not) of introspection in science. When the Cognitive Neuroscience Society updates their YouTube channel, I'll let you know. Another source is the excellent live tweeting of @VukovicNikola. But to wrap up, Dr. Thompson-Schill asked members of the audience whether they consider themselves psychologists or neuroscientists. Most identified as neuroscientists (which is a relative term, I think). Although more people will talk to you on a plane if you say you're a psychologist, the surprising take-home message was that “neuroscience is easy, psychology is hard.”


Debating Debates

I've actually wanted to see more debating at the CNS meeting. For instance, the Society for the Neurobiology of Language (SNL) often features a lively debate at their conferences.3 Several examples are listed below.

2016:
Debate: The Consequences of Bilingualism for Cognitive and Neural Function
Ellen Bialystok & Manuel Carreiras

2014:
What counts as neurobiology of language – a debate
Steve Small, Angela Friederici

2013: Panel Discussions
The role of semantic information in reading aloud
Max Coltheart vs Mark Seidenberg

2012: Panel Discussions
What is the role of the insula in speech and language?
Nina F. Dronkers vs Julius Fridriksson


This one-on-one format has been very rare at CNS. Last year we saw a panel of four prominent neuroscientists address/debate...
Big Theory versus Big Data: What Will Solve the Big Problems in Cognitive Neuroscience?


Added-value entertainment was provided by Dr. Gary Marcus, which speaks to the issue of combative personalities dominating the scene.4


Gary Marcus talking over Jack Gallant. Eve Marder is out of the frame.
image by @CogNeuroNews


I'm old enough to remember the most volatile debate in CNS history, which was held (sadly) at the New York Marriott World Trade Center Hotel in 2001. Dr. Nancy Kanwisher and Dr. Isabel Gauthier debated whether face recognition (and activation of the fusiform face area) is a 'special' example of domain specificity (and perhaps an innate ability), or a manifestation of plasticity due to our exceptional expertise at recognizing faces:
A Face-Off on Brain Studies / How we recognize people and objects is a matter of debate
. . .

At the Cognitive Neuroscience Society meeting in Manhattan last week, a panel of scientists on both sides of the debate presented their arguments. On one side is Nancy Kanwisher of MIT, who first proposed that the fusiform gyrus was specifically designed to recognize faces–and faces alone–based on her findings using a magnetic resonance imaging device. Then, Isabel Gauthier, a neuroscientist at Vanderbilt, talked about her research, showing that the fusiform gyrus lights up when looking at many different kinds of objects people are skilled at recognizing.
Kudos to Newsday for keeping this article on their site after all these years.


Footnotes

1 This is the color-word Stroop task: name the font color, rather than read the word. BLUE (printed in red) elicits conflict between the overlearned response (read the word: "blue") and the task requirement (name the font color: "red").

2 aka the now-obligatory David Poeppel session on BIG STUFF. See these posts:
3 Let me now get on my soapbox to exhort the conference organizers to keep better online archives – with stable URLs – so I don't have to hunt through archive.org to find links to past meetings.

4 Although this is really tangential, I'm reminded of the Democratic Party presidential contenders in the US. Who deserves more coverage, Beto O'Rourke or Elizabeth Warren? Bernie Sanders or Kamala Harris?

Does ketamine restore lost synapses? It may, but that doesn't explain its rapid clinical effects


Bravado SPRAVATO™ (esketamine)
© Janssen Pharmaceuticals, Inc. 2019.


Ketamine is the miracle drug that cures depression:
“Recent studies report what is arguably the most important discovery in half a century: the therapeutic agent ketamine that produces rapid (within hours) antidepressant actions in treatment-resistant depressed patients (4, 5). Notably, the rapid antidepressant actions of ketamine are associated with fast induction of synaptogenesis in rodents and reversal of the atrophy caused by chronic stress (6, 7).”

– Duman & Aghajanian (2012). Synaptic Dysfunction in Depression: Potential Therapeutic Targets. Science 338: 68-72.

Beware the risks of ketamine:
“While ketamine may be beneficial to some patients with mood disorders, it is important to consider the limitations of the available data and the potential risk associated with the drug when considering the treatment option.”

– Sanacora et al. (2017). A Consensus Statement on the Use of Ketamine in the Treatment of Mood Disorders. JAMA Psychiatry 74: 399-405.

Ketamine, dark and light:
Is ketamine a destructive club drug that damages the brain and bladder? With psychosis-like effects widely used as a model of schizophrenia? Or is ketamine an exciting new antidepressant, the “most important discovery in half a century”?

For years, I've been utterly fascinated by these separate strands of research that rarely (if ever) intersect. Why is that? Because there's no such thing as “one receptor, one behavior.” And because like most scientific endeavors, neuro-pharmacology/psychiatry research is highly specialized, with experts in one microfield ignoring the literature produced by another...

– The Neurocritic (2015). On the Long Way Down: The Neurophenomenology of Ketamine

Confused?? You're not alone.


FDA Approval

The animal tranquilizer and club drug ketamine – now known as a “miraculous” cure for treatment-resistant depression – has been approved by the FDA in a nasal spray formulation. No more messy IV infusions at shady clinics.

Here's a key Twitter thread that marks the occasion:


How does it work?

A new paper in Science (Moda-Sava et al., 2019) touts the importance of spine formation and synaptogenesis – basically, the remodeling of synapses in microcircuits – in prefrontal cortex, a region important for the top-down control of behavior. Specifically, ketamine and its downstream actions are involved in the creation of new spines on dendrites, and in the formation of new synapses. But it turns out this is NOT linked to the rapid improvement in 'depressive' symptoms observed in a mouse model.



So I think we're still in the dark about why some humans can show immediate (albeit short-lived) relief from their unrelenting depression symptoms after ketamine infusion. Moda-Sava et al. say:
Ketamine’s acute effects on depression-related behavior and circuit function occur rapidly and precede the onset of spine formation, which in turn suggests that spine remodeling may be an activity-dependent adaptation to changes in circuit function (83, 88) and is consistent with theoretical models implicating synaptic homeostasis mechanisms in depression and the stress response (89, 90). Although not required for inducing ketamine’s effects acutely, these newly formed spines are critical for sustaining the antidepressant effect over time.

But the problem is, depressed humans require constant treatment with ketamine to maintain any semblance of an effective clinical response, because the beneficial effect is fleeting. If we accept the possibility that ketamine acts through the mTOR signalling pathway, in the long run detrimental effects on the brain (and non-brain systems) may occur (e.g., bladder damage, various cancers, psychosis, etc.).

But let's stay isolated in our silos, with our heads in the sand.


Thanks to @o_ceifero for alerting me to this study.

Further Reading

Ketamine for Depression: Yay or Neigh?

Warning about Ketamine in the American Journal of Psychiatry

Chronic Ketamine for Depression: An Unethical Case Study?

still more on ketamine for depression

Update on Ketamine in Palliative Care Settings

Ketamine - Magic Antidepressant, or Expensive Illusion? - by Neuroskeptic

Fighting Depression with Special K - by Scicurious

On the Long Way Down: The Neurophenomenology of Ketamine


Reference

Moda-Sava RN, Murdock MH, Parekh PK, Fetcho RN, Huang BS, Huynh TN, Witztum J, Shaver DC, Rosenthal DL, Alway EJ, Lopez K, Meng Y, Nellissen L, Grosenick L, Milner TA, Deisseroth K, Bito H, Kasai H, Liston C. (2019). Sustained rescue of prefrontal circuit dysfunction by antidepressant-induced spine formation. Science 364(6436). pii: eaat8078.

The Paracetamol Papers


I have secretly obtained a large cache of files from Johnson & Johnson, makers of TYLENOL®, the ubiquitous pain relief medication (generic name: acetaminophen in North America, paracetamol elsewhere). The damaging information contained in these documents has been suppressed by the pharmaceutical giant, for reasons that will become obvious in a moment.1

After a massive upload of materials to Wikileaks, it can now be revealed that Tylenol not only...
...but along with the good comes the bad. Acetaminophen (paracetamol) also has ghastly negative effects that tear at the very fabric of society. These OTC tablets...

In a 2018 review of the literature, Ratner and colleagues warned:
“In many ways, the reviewed findings are alarming. Consumers assume that when they take an over-the-counter pain medication, it will relieve their physical symptoms, but they do not anticipate broader psychological effects.”

In the latest installment of this alarmist saga, we learn that acetaminophen blunts positive empathy, i.e. the capacity to appreciate and identify with the positive emotions of others (Mischkowski et al., 2019). I'll discuss those findings another time.

But now, let's evaluate the entire TYLENOL® oeuvre by taking a step back and examining the plausibility of the published claims. To summarize, one of the most common over-the-counter, non-narcotic, non-NSAID pain-relieving medications in existence supposedly alleviates the personal experience of hurt feelings and social pain and heartache (positive outcomes). At the same time, TYLENOL® blunts the phenomenological experiences of positive emotion and diminishes empathy for other people's experiences, both good and bad (negative outcomes). Published articles have reported that many of these effects can be observed after ONE REGULAR DOSE of paracetamol. These findings are based on how undergraduates judge a series of hypothetical stories. One major problem (which is not specific to The Paracetamol Papers) concerns the ecological validity of laboratory tasks as measures of the cognitive and emotional constructs of interest. This issue is critical, but outside the main scope of our discussion today. More to the point, an experimental manipulation may cause a statistically significant shift in a variable of interest, but ultimately we have to decide whether a circumscribed finding in the lab has broader implications for society at large.
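
To make that last point concrete, here is a quick simulation of my own (not taken from any of the acetaminophen papers): a tiny shift on a rating scale will almost always reach "statistical significance" with a large enough sample, which by itself says nothing about consequences for society at large.

# My own simulation, not from the acetaminophen literature: a small group
# difference (d = 0.15) is almost always "significant" with n = 3000 per
# group, which says nothing about real-world importance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 3000
placebo = rng.normal(loc=0.00, scale=1.0, size=n)
drug = rng.normal(loc=0.15, scale=1.0, size=n)    # small shift in ratings

t_stat, p_val = stats.ttest_ind(drug, placebo)
d = (drug.mean() - placebo.mean()) / np.sqrt((drug.var() + placebo.var()) / 2)
print(f"t = {t_stat:.2f}, p = {p_val:.2g}, Cohen's d = {d:.2f}")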


Why TYLENOL® ?

Another puzzling element is, why choose acetaminophen as the exclusive pain medication of interest? Its mechanisms of action for relieving fever, headache, and other pains are unclear. Thus, the authors don't have a specific, principled reason for choosing TYLENOL® over Advil (ibuprofen) or aspirin. Presumably, the effects should generalize, but that doesn't seem to be the case. For instance, ibuprofen actually Increases Social Pain in men.

The analgesic effects of acetaminophen are mediated by a complex series of cellular mechanisms (Mallet et al., 2017). One proposed mechanism involves descending serotonergic bulbospinal pathways from the brainstem to the spinal cord. This isn't exactly Prozac territory, so the analogy between Tylenol and SSRI antidepressants isn't apt. The capsaicin receptor TRPV1 and the Cav3.2 calcium channel might also be part of the action (Mallet et al., 2017). A recently recognized player is the CB1 cannabinoid receptor. AM404, a metabolite of acetaminophen, indirectly activates CB1 by inhibiting the breakdown and reuptake of anandamide, a naturally occurring cannabinoid in the brain (Mallet et al., 2017).



Speaking of cannabinoids, cannabidiol (CBD) – the non-intoxicating cousin of THC – has a high profile now because of its soaring popularity for many ailments. Ironically, CBD has a very low affinity for CB1 and CB2 receptors and may act instead via serotonergic 5-HT1A receptors {PDF}, as a modulator of μ- and δ-opioid receptors, and as an antagonist and inverse agonist at several G protein-coupled receptors. Most CBD use seems to be in the non-therapeutic (placebo) range, because the effective dose for, let's say, anxiety is 10-20 times higher than what the average commercial product contains. You'd have to eat 3-6 bags of cranberry gummies for 285-570 mg of CBD (close to the 300-600 mg recommended dose). Unfortunately, you would also ingest 15-30 mg of THC, which would be quite intoxicating.
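
Working backward from those numbers, the dose arithmetic is simple. My assumption here is that each bag contains roughly 95 mg of CBD and 5 mg of THC, which reproduces the figures quoted above.

# Back-of-envelope dose math (assumption: ~95 mg CBD and ~5 mg THC per bag).
cbd_per_bag_mg = 95
thc_per_bag_mg = 5

for bags in (3, 6):
    print(f"{bags} bags -> {bags * cbd_per_bag_mg} mg CBD, "
          f"{bags * thc_per_bag_mg} mg THC")
# 3 bags -> 285 mg CBD, 15 mg THC
# 6 bags -> 570 mg CBD, 30 mg THC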



Words Have Meanings

If acetaminophen were so effective in “mending broken hearts”, “easing heartaches”, and providing a “cure for a broken heart”, we would be a society of perpetually happy automatons, wiping away the suffering of breakup and divorce with a mere OTC tablet. We'd have Tylenol epidemics and Advil epidemics to rival the scourge of the present Opioid Epidemic.

Meanwhile, social and political discourse in the US has reached a new low. Ironically, the paracetamol “blissed-out” population is enraged because they can't identify with the feelings or opinions of the masses who are 'different' than they are. Somehow, I don't think it's from taking too much Tylenol. A large-scale global survey could put that thought to rest for good.




Footnotes

1 This is not true, of course; I was only kidding. All of the information presented here is publicly available in peer-reviewed journal articles and published press reports.

2 Except for when it doesn’t – “In contrast, effects on perceived positivity of the described experiences or perceived pleasure in scenario protagonists were not significant” (Mischkowski et al., 2019).

3 Yes, I made this up too. It is entirely fictitious; no one has ever claimed this, to the best of my knowledge.


References

Mallet C, Eschalier A, Daulhac L. (2017). Paracetamol: update on its analgesic mechanism of action. In: Pain Relief – From Analgesics to Alternative Therapies.

Mischkowski D, Crocker J, Way BM. (2019). A Social Analgesic? Acetaminophen (Paracetamol) Reduces Positive Empathy. Front Psychol. 10:538.

The Secret Lives of Goats

Goats Galore (May 2019)


If you live in a drought-ridden, wildfire-prone area on the West Coast, you may see herds of goats chomping on dry grass and overgrown brush. This was initially surprising for many who live in urban areas, but it's become commonplace where I live. Announcements appear on local message boards, and families bring their children.


Goats Goats Goats (June 2017)


Goats are glamorous, and super popular on social media now (e.g. Instagram, more Instagram, and Twitter). Over 41 million people have watched Goats Yelling Like Humans - Super Cut Compilation on YouTube. We all know that goats have complex vocalizations, but very few of us know what they mean.





For the health and well-being of livestock, it's advantageous to understand the emotional states conveyed by vocalizations, postures, and other behaviors. A 2015 study measured the acoustic features of different goat calls, along with their associated behavioral and physiological responses. Twenty-two adult goats were put in four situations:
(1) control (neutral)
(2) anticipation of a food reward (positive)
(3) food-related frustration (negative)
(4) social isolation (negative)
Dr. Elodie Briefer and colleagues conducted the study at a goat sanctuary in Kent, UK (Buttercups Sanctuary for Goats). The caprine participants had lived at the sanctuary for at least two years and were fully habituated to humans. Heart rate and respiration were recorded as indicators of arousal, so this dimension of emotion could be considered separately from valence (positive/negative). For conditions #1-3, the goats were tested in pairs (adjacent pens) to avoid the stress of social isolation. They were habituated to the general set-up, to the Frustration and Isolation scenarios, and to the heart rate monitor before the actual experimental sessions, which were run on separate days. Additional details are presented in the first footnote.1





Audio A1. One call produced during a negative situation (food frustration), followed by a call produced during a positive situation (food reward) by the same goat (Briefer et al., 2015).


Behavioral responses during the scenarios were timed and scored; these included tail position, locomotion, rapid head movement, ear orientation, and number of calls. The investigators recorded the calls and produced spectrograms that illustrated the frequencies of the vocal signals.



The call on the left (a) was emitted during food frustration (first call in Audio A1). The call on the right (b) was produced during food reward; it has a lower fundamental frequency (F0) and smaller frequency modulations. Modified from Fig. 2 (Briefer et al., 2015).
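
If you want to try something similar on your own recordings, a bare-bones version of this analysis (not the authors' pipeline) might look like the sketch below. The WAV filename is a placeholder, and real bioacoustics work uses dedicated pitch trackers rather than this crude F0 estimate.

# Bare-bones acoustic sketch (not the authors' pipeline): plot a spectrogram
# of a goat call and crudely estimate its fundamental frequency (F0).
# 'goat_call.wav' is a placeholder filename.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, call = wavfile.read("goat_call.wav")
if call.ndim > 1:                      # collapse stereo to mono
    call = call.mean(axis=1)

f, t, Sxx = spectrogram(call, fs=fs, nperseg=1024)
plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12))
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.show()

# crude F0 estimate: the frequency bin with the most energy below 1 kHz
low = f < 1000
f0 = f[low][np.argmax(Sxx[low, :].mean(axis=1))]
print(f"rough fundamental frequency: {f0:.0f} Hz")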


Both negative and positive food situations resulted in greater goat arousal (measured by heart rate) than the neutral control condition and the low arousal negative condition (social isolation). Behaviorally speaking, arousal and valence had different indicators:
During high arousal situations, goats displayed more head movements, moved more, had their ears pointed forwards more often and to the side less often, and produced more calls. ... In positive situations, as opposed to negative ones, goats had their ears oriented backwards less often and spent more time with the tail up.
Happy goats have their tails up, and do not point their ears backwards. I think I would need a lot more training to identify the range of goat emotions conveyed in my amateur video. At least I know not to stare at them, but next time I should read more about their reactions to human head and body postures.
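
Purely as a mnemonic, the indicators reported above can be coded as a crude rule of thumb. This is not a validated classifier of goat emotion.

# Crude rule-of-thumb coding of the indicators above - a mnemonic only.
def read_the_goat(tail_up, ears_back, ears_forward, many_calls, much_movement):
    valence = "positive" if (tail_up and not ears_back) else "negative"
    arousal = "high" if (ears_forward or many_calls or much_movement) else "low"
    return valence, arousal

print(read_the_goat(tail_up=True, ears_back=False, ears_forward=True,
                    many_calls=True, much_movement=True))
# ('positive', 'high') - e.g., a goat anticipating food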


Do goats show a left or right hemisphere advantage for vocal perception?

Now that the researchers have characterized the valence and arousal communicated by goat calls, another study asked whether goats show a left hemisphere or right hemisphere “preference” for the perception of different calls (Baciadonna et al., 2019). How is this measured, you ask?

Head-Turning in Goats and Babies

The head-turn preference paradigm is widely used in studies of speech perception in infants.

Figure from Prosody cues word order in 7-month-old bilingual infants (Gervain & Werker, 2013).




However, I don't know whether this paradigm is used to assess lateralization of speech perception in babies. In the animal literature, a similar head-orienting response is a standard experimental procedure. For now, we will have to accept the underlying assumption that orienting left or right may be an indicator of a contralateral hemispheric “preference” for that specific vocalization (i.e., orienting to the left side indicates a right hemisphere dominance, and vice versa).
The experimental procedure usually applied to test functional auditory asymmetries in response to vocalizations of conspecifics and heterospecifics is based on a major assumption (Teufel et al. 2007; Siniscalchi et al. 2008). It is assumed that when a sound is perceived simultaneously in both ears, the head orientation to either the left or right side is an indicator of the side of the hemisphere that is primarily involved in the response to the stimulus presented. There is strong evidence that this is the case in humans ... The assumption is also supported by the neuroanatomic evidence of the contralateral connection of the auditory pathways in the mammalian brain (Rogers and Andrew 2002; Ocklenburg et al. 2011).

The experimental set-up to test this in goats is shown below.



A feeding bowl (filled with a tasty mixture of dry pasta and hay) was fixed at the center of the arena opposite to the entrance. The speakers were positioned at a distance of 2 meters from the right and left side of the bowl and were aligned to it. 'X' indicates the position of the Experimenter. Modified from Fig. 2 (Baciadonna et al., 2019).


Four types of vocalizations were played over the speakers: food anticipation, food frustration, isolation, and dog bark (presumably a negative stimulus). Three examples of each vocalization were played, each from a different and unfamiliar goat (or dog).

The various theories of brain lateralization of emotion predicted different results. The right hemisphere model predicts right hemisphere dominance (head turn to the left) for high-arousal emotion regardless of valence (food anticipation, food frustration, dog barks). In contrast, the valence model predicts right hemisphere dominance for processing negative emotions (food frustration, isolation, dog barks), and left hemisphere dominance for positive emotions (food anticipation). The conspecific model predicts left hemisphere dominance for all goat calls (“familiar and non-threatening”) and right hemisphere dominance for dog barks. Finally, a general emotion model predicts right hemisphere dominance for all of the vocalizations, because they're all emotion-laden.
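
To keep the competing predictions straight, here is my own tabulation of the head-turn direction each model implies for each stimulus (recall that a left turn is taken to indicate right-hemisphere dominance, and vice versa). The isolation call is low-arousal, so the right hemisphere model makes no clear prediction for it.

# My summary of the four models' predicted head-turn directions
# (left turn = right-hemisphere dominance; right turn = left-hemisphere).
stimuli = ["food anticipation", "food frustration", "isolation", "dog bark"]

predictions = {
    # right-hemisphere model: RH for high-arousal calls, whatever the valence
    "right hemisphere": {"food anticipation": "left", "food frustration": "left",
                         "isolation": None, "dog bark": "left"},
    # valence model: RH for negative, LH for positive
    "valence": {"food anticipation": "right", "food frustration": "left",
                "isolation": "left", "dog bark": "left"},
    # conspecific model: LH for goat calls, RH for heterospecific dog barks
    "conspecific": {"food anticipation": "right", "food frustration": "right",
                    "isolation": "right", "dog bark": "left"},
    # general emotion model: RH for anything emotion-laden
    "general emotion": {s: "left" for s in stimuli},
}

for model, preds in predictions.items():
    print(f"{model:>17}: " + ", ".join(f"{s} -> {preds[s]}" for s in stimuli))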

The results sort of supported the conspecific model (according to the authors), if we now accept that dog barks are actually “familiar and non-threatening” [if I understand correctly]. The head-orienting response did not differ significantly between the four vocalizations, and there was a slight bias for head orienting to the right (p=.046 vs. chance level), when collapsed across all stimulus types. 2

The time to resume feeding after hearing a vocalization (a measure of fear) didn't differ between goat calls and dog barks, so the authors concluded that “goats at our study site may have been habituated to dog barks and that they did not perceive dog barks as a serious threat.” However, if a Siberian Husky breaks free of its owner and runs around a fenced-in rent-a-goat herd, chaos may ensue.





Footnotes

1 Methodological details:
“(1) During the control situation, goats were left unmanipulated in a pen with hay (‘Control’). This situation did not elicit any calls, but allowed us to obtain baseline values for physiological and behavioural data. (2) The positive situation was the anticipation of an attractive food reward that the goats had been trained to receive during 3 days of habituation (‘Feeding’). (3) After goats had been tested with the Feeding situation, they were tested with a food frustration situation. This consisted of giving food to only one of the goats in the pair and not to the subject (‘Frustration’). (4) The second negative situation was brief isolation, out of sight from conspecifics behind a hedge. For this situation, goats were tested alone and not in a pair (‘Isolation’).”

2 The replication police will certainly go after such a marginal significance level, but I would like to see them organize a “Many Goats in Many Goat Sanctuaries” replication project.


References

Baciadonna L, Nawroth C, Briefer EF, McElligott AG. (2019). Perceptual lateralization of vocal stimuli in goats. Curr Zool. 65(1):67-74. [PDF]

Briefer EF, Tettamanti F, McElligott AG. (2015). Emotions in goats: mapping physiological, behavioural and vocal profiles. Animal Behaviour 99:131-43. [PDF]


'I Do Not Exist' - Pathological Loss of Self after a Buddhist Retreat


Eve is plagued by a waking nightmare.

‘I do not exist. All you see is a shell with no being inside, a mask covering nothingness. I am no one and no thing. I am the unborn, the non-existent.’


– from Pickering (2019).

Dr. Judith Pickering is a psychotherapist and Jungian Analyst in Sydney, Australia. Her patient ‘Eve’ is an “anonymous, fictionalised amalgam of patients suffering disorders of self.” Eve had a psychotic episode while attending a Tibetan Buddhist retreat.
“She felt that she was no more than an amoeba-like semblance of pre-life with no form, no substance, no past, no future, no sense of on-going being.”



Eve's fractured sense of self preceded the retreat. In fact, she was drawn to Buddhist philosophy precisely because of its negation of self. In the doctrine of non-self (anātman), “there is no unchanging, permanent self, soul, or essence in living beings.” The tenet of emptiness (śūnyatā) that “all things are empty [or void] of intrinsic existence” was problematic as well. When applied and interpreted incorrectly, śūnyatā and anātman can resemble or precipitate disorders of the self.

Dr. Pickering noted:
‘Eve’ is representative of a number of patients suffering both derealisation and depersonalisation. They doubt the existence of the outer world (derealisation) and fear that they do not exist. In place of a sense of self, they have but an empty core inside (depersonalisation).

How do you find your way back to your self after that? Will the psychotic episode respond to neuroleptics or mood stabilizers?

The current article takes a decidedly different approach from this blog's usual themes of neuroimaging, cognitive neuroscience, and psychopharmacology. Spirituality, dreams, and the unconscious play an important role in Jungian psychology. Pickering mentions the Object Relations School, Attachment Theory, Field Theory, The Relational School, the Conversational Model, Intersubjectivity Theory and Infant Research. She cites Winnicott, Bowlby, and Bion (not Blanke & Arzy 2005, Kas et al. 2014, or Seth et al. 2012).

Why did I read this paper? Sometimes it's useful to consider the value of alternate perspectives. Now we can examine the potential hazards of teaching overly Westernized conceptions of Buddhist philosophy.1 


When Westerners Attend Large Buddhist Retreats

Eve’s existential predicament exemplifies a more general area of concern found in situations involving Western practitioners of Buddhism, whether in traditional settings in Asia, or Western settings ostensibly adapted to the Western mind. Have there been problems of translation in regard to Buddhist teachings on anātman (non-self) as implying the self is completely non-existent, and interpretations of śūnyatā (emptiness) as meaning all reality is non-existent, or void?
. . .

This relates to another issue concerning situations where Westerners attend large Buddhist retreats in which personalised psycho-spiritual care may be lacking. Traditionally, a Buddhist master would know the student well and carefully select appropriate teachings and practices according to a disciple’s psychological, physical and spiritual predispositions, proficiency and maturity. For example, teaching emptiness or śūnyatā to someone who is not ready can be extremely harmful. As well as being detrimental for the student, it puts the teacher at risk of a major ethical infringement...

I found Dr. Pickering's discussion of Nameless Dread to be especially compelling.




Nameless Dread

I open the door to a white, frozen mask. I know immediately that Eve has disappeared again into what she calls ‘the void’. She sits down like an automaton, stares in stony silence at the wall as if staring into space. I do not exist for her, she is totally isolated in her own realm of non-existence.

The sense of deadly despair pervades the room. I feel myself fading into nothingness, this realm of absence, unmitigated bleakness and blankness. We sit in silence, sometimes for session after session. I wonder what on earth do I have to offer her? Nothing, it seems.




ADDENDUM (June 18 2019): A reader alerted me to a tragic story two years ago in Pennsylvania, where a young woman ultimately died by suicide after experiencing a psychotic episode during an intensive 10-day meditation retreat. The article noted:
"One of the documented but rare adverse side effects from intense meditation retreats can be depersonalization disorder. People need to have an especially strong ego, or sense of self, to be able to withstand the strictness and severity of the retreats."

Case reports of extreme adverse events are rare, but a 2017 study documented "meditation-related challenges" in Western Buddhists. The authors conducted detailed qualitative interviews in 60 people who engaged in a variety of Buddhist meditation practices (Lindahl et al., 2017). Thematic analysis revealed a taxonomy of 59 experiences across seven domains (I've appended a table at the end of the post). The authors found a wide range of responses: "The associated valence ranged from very positive to very negative, and the associated level of distress and functional impairment ranged from minimal and transient to severe and enduring." The paper is open access, and Brown University issued an excellent press release.


Footnote

1 This is especially important given the appropriation of semi-spiritual versions of yoga and mindfulness, culminating in inanities such as tech bro eating disorders.


References

Blanke O, Arzy S. (2005). The out-of-body experience: disturbed self-processing at the temporo-parietal junction. Neuroscientist 11:16-24.

Kas A, Lavault S, Habert MO, Arnulf I. (2014) Feeling unreal: a functional imaging study in patients with Kleine-Levin syndrome. Brain 137: 2077-2087.

Lindahl JR, Fisher NE, Cooper DJ, Rosen RK, Britton WB. (2017). The varieties of contemplative experience: A mixed-methods study of meditation-related challenges  in Western Buddhists. PLoS One 12(5):e0176239.

Pickering J. (2019). 'I Do Not Exist': Pathologies of Self Among Western Buddhists. J Relig Health 58(3):748-769.

Seth AK, Suzuki K, Critchley HD. (2012). An interoceptive predictive coding model of conscious presence. Front Psychol. 2:395.


Further Reading

Derealization / Dying

Feeling Mighty Unreal: Derealization in Kleine-Levin Syndrome

A Detached Sense of Self Associated with Altered Neural Responses to Mirror Touch



Phenomenology coding structure (Table 4, Lindahl et al., 2017).

- click table for a larger view -

The Shock of the Unknown in Aphantasia: Learning that Visual Imagery Exists


Qualia are private. We don’t know how another person perceives the outside world: the color of the ocean, the sound of the waves, the smell of the seaside, the exact temperature of the water. Even more obscure is how someone else imagines the world in the absence of external stimuli. Most people are able to generate an internal “representation1 of a beach — to deploy imagery — when asked, “picture yourself at a relaxing beach.” We can “see” the beach in our mind’s eye even when we’re not really there. But no one else has access to these private images, thoughts, narratives. So we must rely on subjective report.

The hidden nature of imagery (and qualia more generally)2 explains why a significant minority of humans are shocked and dismayed when they learn that other people are capable of generating visual images, and the request to “picture a beach” isn’t metaphorical. This lack of imagery often extends to other sensory modalities (and to other cognitive abilities, such as spatial navigation and autobiographical memories), which will be discussed another time. For now, the focus is on vision.

Redditors and their massive online sphere of influence were chattering the other day about this post in r/TIFU: A woman was explaining her synesthesia to her boyfriend when he discovered that he has aphantasia, the inability to generate visual images.

TIFU by explaining my synesthesia to my boyfriend

“I have grapheme-color synesthesia. Basically I see letters and numbers in colors. The letter 'E' being green for example. A couple months ago I was explaining it to my boyfriend who's a bit of a skeptic. He asked me what colour certain letters and numbers were and had me write them down.  ...

Tonight we were laying in bed and my boyfriend quized me again. I tried explaining to him I just see the colors automatically when I visualize the letters in my head. I asked him what colour are the letters in his head. He looked at me weirdly like what do you mean in "my head, that's not a thing"

My boyfriend didnt understand what I meant by visualizing the letters. He didn't believe me that I can visualize letters or even visualize anything in my head.

Turns out my boyfriend has aphantasia. When he tries to visualize stuff he just sees blackness. He can't picture anything in his mind and thought that everyone else had it the same way. He thought it was just an expression to say "picture this" or etc...

There are currently 8652 comments on this post, many from individuals who were stunned to learn that the majority of people do have imagery. Other comments were from knowledgeable folks with aphantasia who described what the world is like for them, the differences in how they navigate through life, and how they compensate for what is thought of as "a lack" by the tyranny of the phantasiacs.






There's even a subreddit for people with aphantasia:



How did I find out about this? 3  It was because my 2016 post was suddenly popular again!





That piece was spurred by an eloquent essay on what it's like to discover that all your friends aren't speaking metaphorically when they say, “I see a beach with waves and sand.” Research on this condition blossomed once more and more people realized they had it. Online communities developed and grew, including resources for researchers. This trajectory is akin to the formation of chat groups for individuals with synesthesia and developmental prosopagnosia (many years ago). Persons with these neuro-variants have always existed,4 but they were much harder to locate pre-internet. Studies of these neuro-unique individuals have been going on for a while, but widespread popular dissemination of their existence alerts others – “I am one, too.”

The Vividness of Visual Imagery Questionnaire (VVIQ) “is a proven psychometric measurement often used to identify whether someone is aphantasic or not, albeit not definitive.” But it's still a subjective measure that relies on self-report. Are there more “objective” methods for determining your visual imagery abilities? I'm glad you asked. An upcoming post will discuss a couple of cool new experiments.


Footnotes

1 This is a loaded term that I won’t explain – or debate – right now.

2 Some people don’t believe that qualia exist (as such), but I won’t elaborate on that, either.

3 I don’t hang out on Reddit, and my Twitter usage has declined.

4 Or at least, they've existed for quite some time.


Further Reading

Aphantasia Index

The Eye's Mind

Bonus Episode: What It's Like to Have no Mind's Eye, a recent entry of BPS Research Digest. There's an excellent collection of links, as well as a 30 minute podcast (download here).

Imagine These Experiments in Aphantasia (my 2016 post).

Involuntary Visual Imagery (if you're curious about what has been haunting me).

In fact, while I was writing this post, intrusive imagery of the Tsawwassen Ferry Terminal in Delta BC (the ferry from Vancouver to Vancouver Island) appeared in my head. I searched Google Images and can show you the approximate view.



I was actually standing a little further back, closer to where the cars are parked. But I couldn't quite capture that view. Here is the line of cars waiting to get on the ferry.



During this trip two years ago (with my late wife), this sign had caught my eye so I ran across the street for coffee...

Is there an objective test for Aphantasia?




How well do we know our own inner lives? Self-report measures are a staple of psychiatry, neuroscience, and all branches of psychology (clinical, cognitive, perceptual, personality, social, etc.). Symptom scales, confidence ratings, performance monitoring, metacognitive efficiency (meta-d'/d'), vividness ratings, preference/likeability judgements, and affect ratings are all examples. Even monkeys have an introspective side! 1

In the last post we learned about a condition called aphantasia, the inability to generate visual images. Although the focus has been on visual imagery, many people with aphantasia cannot form “mental images” of any sensory experience. Earworms, those pesky songs that get stuck in your head, are not a nuisance for some individuals with aphantasia (but many others do get them). Touch, smell, and taste are even less studied; mental imagery of these senses is generally more muted, if it occurs at all (even in the fully phantasic).

The Vividness of Visual Imagery Questionnaire (VVIQ, Marks 1973)2 is the instrument used to identify people with poor to non-existent visual imagery (i.e., aphantasia). For each item on the VVIQ, the subject is asked to “try to form a visual image, and consider your experience carefully. For any image that you do experience, rate how vivid it is using the five-point scale described below. If you do not have a visual image, rate vividness as ‘1’. Only use ‘5’ for images that are truly as lively and vivid as real seeing.” By its very nature, it's a subjective measure that relies on introspection.
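
For orientation, here is a rough sketch of how the VVIQ is typically scored: 16 items, each rated from 1 ("no image at all") to 5 ("as vivid as real seeing"), then summed. The cutoff below is a placeholder, because studies differ on where to draw the aphantasia line.

# Rough VVIQ scoring sketch: 16 items rated 1-5, summed to a 16-80 total.
# The cutoff is a placeholder, not an official diagnostic threshold.
ratings = [1, 1, 2, 1, 1, 1, 1, 2, 1, 1, 1, 1, 2, 1, 1, 1]   # one respondent
assert len(ratings) == 16 and all(1 <= r <= 5 for r in ratings)

total = sum(ratings)
print("VVIQ total:", total)      # 16 would mean "no image" on every item
if total <= 23:                  # placeholder cutoff
    print("consistent with aphantasia (by this arbitrary threshold)")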

But how well do we really know the quality of our private visual imagery? Eric Schwitzgebel has argued that it's really quite poor:3
“...it is observed that although people give widely variable reports about their own experiences of visual imagery, differences in report do not systematically correlate with differences on tests of skills that [presumably] require visual imagery, such as mental rotation, visual creativity, and visual memory.”

And it turns out that many of these cognitive skills do not require visual imagery. A recent study found that participants with aphantasia were slower to perform a mental rotation task (relative to controls), but they were more accurate (Pounder et al., 2018). The test asked participants to determine whether a pair of objects is identical, or mirror images of each other. Response times generally increase as a function of the angular difference in the orientations of the two objects. The overall slowing and accuracy advantage in those with aphantasia held across all levels of difficulty, so these participants must be using a different strategy than those without aphantasia.
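
The standard way to quantify that angle-dependent slowing is a linear fit of response time against angular disparity. Here is a sketch with invented numbers, chosen so that the "aphantasia" series is slower overall but has a comparable slope.

# Sketch of the usual mental-rotation analysis: response time grows roughly
# linearly with angular disparity, so the slope (ms/degree) is the quantity
# of interest. All numbers are invented for illustration.
import numpy as np

angles = np.array([0, 45, 90, 135, 180])                  # degrees
rt_control = np.array([900, 1100, 1300, 1500, 1700])      # ms, made up
rt_aphantasia = np.array([1200, 1380, 1560, 1740, 1920])  # slower overall

for label, rt in [("control", rt_control), ("aphantasia", rt_aphantasia)]:
    slope, intercept = np.polyfit(angles, rt, 1)
    print(f"{label}: slope {slope:.1f} ms/deg, intercept {intercept:.0f} ms")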




Another study found that people with aphantasia were surprisingly good at reproducing the details of a complex visual scene from memory (Bainbridge et al., 2019).4

What test does require visual imagery? The phenomenon of binocular rivalry involves presenting a different image to each eye, using specialized methods or simple 3D glasses. Instead of fusing into a unified percept, the images presented to the left and right eyes seem to alternate. Thus, binocular rivalry involves perceptual switching. The figure below was taken from the informative video by Carmel and colleagues (2010) in JoVE. I highly recommend the video, which I've embedded at the end of this post.


A recent study examined binocular rivalry in aphantasia using the setup shown in Fig. 1 (Keogh & Pearson, 2018). The key trick is that participants were cued to imagine one of two images for 6 seconds. They then gave a vividness rating, followed by a brief presentation of the binocular rivalry display. Finally, they reported which color they saw.




The study population included 15 self-identified aphantasics recruited via Facebook, direct contact with the investigators, or referral from Professor Adam Zeman, and 209 control participants recruited from the general population. The VVIQ verified poor or non-existent visual imagery in the aphantasia group.

For the binocular rivalry test, the general population showed a priming effect from the imagined stimulus: they reported that the subsequent test display matched the color of the imagined stimulus (green or red) at a greater than chance level (i.e., better than guessing). As a group, the individuals with aphantasia did not show above-chance priming. However, as can be seen in Fig. 2E, results from this test were not completely diagnostic. Some participants with aphantasia showed better-than-chance priming, while a sizeable percentage of the controls did not show the binocular rivalry priming effect.
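For anyone wondering how a priming score like this is computed and tested against chance, here's a rough sketch of the logic (not the authors' analysis code). Each trial records the imagined color and the color reported after the rivalry display; the trial data below are invented.

```python
from scipy.stats import binomtest  # requires SciPy >= 1.7

# Each trial records (imagined_color, reported_color). The data are invented;
# this is a sketch of the logic, not the authors' analysis code.
trials = [("red", "red"), ("green", "green"), ("red", "green"),
          ("green", "green"), ("red", "red"), ("green", "red"),
          ("red", "red"), ("green", "green")]

matches = sum(1 for imagined, reported in trials if imagined == reported)
priming_score = matches / len(trials)            # proportion of imagery-congruent reports
print(f"priming score: {priming_score:.2f}")     # 0.50 would be chance

# Exact two-sided binomial test against the 50% chance level.
result = binomtest(matches, n=len(trials), p=0.5)
print(f"p = {result.pvalue:.3f}")
```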


Fig. 2E (Keogh & Pearson, 2018). Frequency histogram for imagery priming scores for aphantasic participants (yellow bars and orange line) and general population (grey bars and black dashed line). The green dashed line shows chance performance (50% priming).


Furthermore, scores on the VVIQ in the participants with aphantasia did not correlate with their priming scores (although with n = 15, a true correlation would be hard to detect). Earlier work by these investigators suggested that the VVIQ does correlate with overall priming scores in controls, and that binocular rivalry priming on an individual trial is related to self-reported vividness on that trial. Correlations for the n = 209 controls in the present paper were not reported, however. This would be quite informative, since the earlier study had far fewer participants (n = 20).
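To see why n = 15 makes a correlation hard to detect, here's a quick sketch that computes the critical value of Pearson's r for that sample size and then runs a correlation on simulated data. The VVIQ totals and priming scores below are invented, not values from the study.

```python
import numpy as np
from scipy import stats

# How large must a Pearson correlation be to reach p < .05 (two-tailed) with n = 15?
n = 15
df = n - 2
t_crit = stats.t.ppf(0.975, df)
r_crit = t_crit / np.sqrt(t_crit**2 + df)
print(f"critical |r| for n = {n}: {r_crit:.2f}")   # roughly 0.51

# Correlation on simulated (invented) VVIQ totals and priming scores.
rng = np.random.default_rng(0)
vviq    = rng.integers(16, 33, size=n)             # hypothetical low-imagery totals
priming = rng.uniform(0.3, 0.7, size=n)            # hypothetical priming scores
r, p = stats.pearsonr(vviq, priming)
print(f"r = {r:.2f}, p = {p:.3f}")
```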

What does this mean? I would say that binocular rivalry priming can be a useful “objective” measure of aphantasia, but it's not necessarily diagnostic at an individual level.


Related Posts

The Shock of the Unknown in Aphantasia: Learning that Visual Imagery Exists

Imagine These Experiments in Aphantasia


Footnotes

1 see Mnemonic introspection in macaques is dependent on superior dorsolateral prefrontal cortex but not orbitofrontal cortex.

2 The VVIQ is not without its detractors...

3 Thanks to Rolf Degen for bringing this paper to my attention.

4 Also see this reddit thread on Sketching from memory.


References

Bainbridge WA, Pounder Z, Eardley A, Baker CI (2019). Characterizing aphantasia through memory drawings of real-world images. Cognitive Neuroscience Society Annual Meeting.

Keogh R, Pearson J. (2018). The blind mind: No sensory visual imagery in aphantasia. Cortex 105:53-60.

Marks DF. (1973). Visual imagery differences in the recall of pictures. British Journal of Psychology 64(1): 17-24.

Pounder Z, Jacob J, Jacobs C, Loveday C, Towell T, Silvanto J. (2018). Mental rotation performance in aphantasia. Vision Sciences Society Annual Meeting.

Schwitzgebel E. (2002). How well do we know our own conscious experience? The case of visual imagery. Journal of Consciousness Studies 9(5-6):35-53.  {PDF}

Shepard RN, Metzler J. (1971). Mental rotation of three-dimensional objects. Science 171(3972): 701-3.


