Channel Description:

Deconstructing the most sensationalistic recent findings in Human Brain Imaging, Cognitive Neuroscience, and Psychopharmacology


    Dr. Bernard Carroll (Nov 21, 1940 – Sep 10, 2018)




    I was friends with Dr. Carroll (“Barney”) on Twitter, and always enjoyed his wit.



    Before that, he was an early commenter on and supporter of my blog, The Neurocritic, which pleased me to no end, given this brief biography from his Blogger profile.


    My blogs: Health Care Renewal
    Occupation: Psychopharmacology
    Introduction: Past chairman, FDA Psychopharmacologic Drugs Advisory Committee. Past chairman, Department of Psychiatry, Duke University Medical Center.
    Interests: Professional ethics, medicine


    He didn't know who I was and didn't care. He assessed me by the quality of my writing, and allowed me entrée into a world I would have no access to otherwise.1

    As I'm facing the most catastrophic loss of my life, I will miss him too. He was a brilliant, principled, and compassionate man.


    Remembrance from Health Care Renewal: Remembering Dr Bernard Carroll


    Obituary in BMJ by Dr. Allen Frances (and Dr. Barney Carroll):

    Barney Carroll: the conscience of psychiatry
    A pioneer in biological psychiatry, more recently Bernard Carroll (‘‘Barney’’) became a withering critic of its compromised ethics and corruption by industry. Shortly before his death, he helped prepare this obituary—his last chance to help correct the perverse incentives that too often influence the conduct and reporting of scientific research.
    . . .

    Barney rejected grand biological theories that offered neat, simple-but-wrong explanations of psychopathology. Ever aware of the complexity of the human brain, he was an early rejecter of blind optimism that any simple imbalance of monoamine transmitters could account for the wide variety of mental disorders. More recently, he deplored the ubiquitous hype that suggested that genetics or neuroimaging or big data mining could provide simple answers to deeply complex questions. He predicted—presciently—that these powerful new tools would have great difficulty in producing solid, replicable findings that could be translated to clinical practice.





    Footnote

    1 i.e., very senior male psychiatrists. When I wrote my blog post about being female, and my wife's diagnosis of stage 4 cancer...

    So yeah, think of this as my “coming out”. Sorry if I've offended anyone with my ability to blend into male-dominated settings.

    Thank you for reading, and for your continued support during this difficult time.

    ...Barney was the first to comment, with his usual wit and grace: “I am pretty sure we can handle that. Bless you both.”


    With profound grief, I announce that Sandra’s journey has come to an end.



    Gardens at Government House, Victoria BC (June 2017)


    Sandra Dawson was taken from this earth by the indiscriminate brutality of metastatic cancer. She died on October 2, 2018 at the age of 51. This horrific experience was not a “fight.” She did NOT lose a battle against the unchecked proliferation of malignant cells. Instead, Sandra saw the final phase of her life as a journey. She was incredibly brave while facing the ravages of this terrible disease, and she was ultimately accepting of her fate. She was gracious and generous in sharing the final stages of her journey with friends and family, and also with nearly 25,000 followers of her @unsuicide Twitter account.1 There was an outpouring of love and support and visitors and flowers, which buoyed her spirits and made her feel loved.

    She really loved flowers.




    Sandra was many things – a writer, a blogger, a jewelry designer, a crochet artist, a mental health advocate, a board member of the Mental Health Commission of Canada, and the 2016 winner of a Sovereign's Medal for Volunteers from The Governor General of Canada, for over a decade of work in suicide prevention.



    Government House, Victoria BC (June 2017)



    September 10 was World Suicide Prevention Day, and Dr. Erin Michalak of CREST.BD wrote a touching tribute to Sandra’s work.

    Sandra Dawson’s Legacy

    . . .

    “Most significantly, Sandra created the Unsuicide directory of online and mobile crisis supports, as well as a popular corresponding Twitter feed (@Unsuicide) with close to 25,000 followers. Her Unsuicide online supports are authentically grounded in her lived experience of bipolar disorder, but also unfailingly focused on helping people, regardless of their geography, to access credible and safe online and mobile support tools. In 2016, she was awarded the Sovereign’s Medal for Volunteers from the Governor General of Canada in acknowledgement of the impact of her work as an advocate for people facing mental health challenges and in suicide prevention.”

    But mostly I think of her as a writer.



    Radar Queer Reading Series, SF Public Library (October 2016)


    She was also my partner and wife of nearly 12 years.


    December 2017


    We met in 2006 through our respective blogs, The Neurocritic and Neurofuture. The neuroblogging community was quite small then. Neurofuture started in January 2006 as a blog about Brain Science and Neurofuturism that was ahead of its time (so to speak):
    The future is now, in many ways. Neuroscience and psychiatry are fields that have experienced tremendous growth, especially in the last few decades, and these advances already have practical applications. … At the same time, much is still unknown…
    . . .

    Neuroscience, psychiatry, neuroethics and transhumanism are the four areas of focus for this blog. They have applications in a broad range of fields, and I'll be aggregating diverse information. Expect a lot of interesting links. I invite your comments.

    In June 2006, she started a video blog, Channel N, that shared interesting content related to neuroscience, psychology, and mental health. Channel N eventually moved to Psych Central, a trusted mega-site for mental health, depression, bipolar, ADHD & psychology information. Sandra also wrote posts for World of Psychology, the main Psych Central blog, including many Top 10 lists, which were always popular.

    Along with Steve Higgins, she blogged for Omni Brain (December 2006 – January 2008), which was “an exploration of the serious, fun, ridiculous / past, present, future of the brain and the science that loves it” – as part of the long-defunct ScienceBlogs network.


    But Sandra’s real love was writing fiction (mostly under the pseudonym S. Kay). She wrote an unpublished novel (or two), flash fiction, and a novella that was published by Maudlin House (ironically titled Joy).





    The advent of Twitter really changed her writing. She started writing microfiction, ultra-short stories in the form of tweets (140 characters or fewer). Sometimes they were standalone zaps that told an entire tiny tale.





    Other times, she crafted a number of tweets together to tell a longer story. These were published in various venues and included pieces such as Neurotech Light and Dark, Cloud Glitches, Facebook Algorithm of Death,2 and her final piece, Goth Robots (robots were always a favorite theme; see the interview Weird words with S. Kay). Her blueberrio tumblr has a comprehensive list of her published work.




    Her masterwork was Reliant, “an apocalypse in tweets” published in 2015 by the late tNY Press (but still available for purchase at Amazon):
    “Selfies, sexbots, and drones collide in these interwoven nanofictions about a society before, during, and after its collapse. With dazzling humor and insight, debut author S. Kay reveals a future that looks disconcertingly like the present. Beautifully illustrated by Thoka Maer, Reliant is a bold examination of society's unrequited love for technology.”
    There was a nice review in Entropy by Christopher Iacono.




    But my proudest literary-moment-by-proxy was when Sandra read at Writers With Drinks, a long-standing, monthly series of readings by spectacular writers, held in a bar and hosted by the talented and amusing Charlie Jane Anders. It was a fun evening and the ideal crowd for reading Reliant.



    Writers With Drinks (Nov. 14, 2015)


    Sandra's next book, Lost in the Land of Bears (designed and published by Reality Hands), had a truly unique limited edition faux fur cover, but it's still available as an e-book.


    James Knight wrote a great review at Sabotage Reviews.


    Sandra was an early adopter of all forms of online communication. She was an avid blogger, social media user, and before that an online diarist. She was prescient about the future of social media:
    I have no optimism that social media will bring the world together with mutual empathy improving society. Sheep are still sheep and their bleatings still need shepherds to make them a coherent flock. An important lesson for the next decade. The media is still the media and if anything, is more segregated than ever.

    Sandra Dawson, January 4, 2007


    I could go on and on about her other wildly creative projects, like her Spambot Psychosis origami text cube, her beachpunk jewelry, her minibook necklaces (sample here), her upcycled cashmere brooches, her Postcards from the Post-Apocalypse, and her exhibit of crocheted art hats (and bonus EEG cap) at Femina Potens (the Cultivating Cozy exhibition).



    January 18th, 2008



    But what I can't express in words right now is how much I'll miss her.






    Footnotes

    1 Like me, she had many Twitter accounts and blogs and pseudonyms; the latter included Sandra K, Sandra Kiume, and S. Kay.

    2 Sadly, this was based on a true story that had an even more tragic ending.




    I love you.
    RIP.

    Survey Skeleton (October 31, 2018)


    Karger Medical and Scientific Publishers has a lovely Survey Skeleton peeking out enticingly on some of their journal websites now.




    It's to lure you to take their survey, where you can win attractive prizes....




    ...such as the unique Vesalius: The Fabric of the Human Body (value CHF 1,500).





    Just thought you should know.





    Frontispiece from: Blicke in die Traum- und Geisterwelt (A look into the dream and spirit world), by Friedrich Voigt (1854).


    What are you most afraid of? Not finding a permanent job? Getting a divorce and losing your family? Losing your funding? Not making this month's rent? Not having a roof over your head? Natural disasters? Nuclear war? Cancer? Having a loved one die of cancer?

    FAILURE?

    There are many types of specific phobias (snakes, spiders, heights, enclosed spaces, clowns, mirrors, etc.), but that's not what I'm talking about here.

    What are you really afraid of? Death? Pain? A painful death?

    Devils, demons, ghosts, witches, and other supernatural apparitions? This latter category (haunting, demon possession) is common among many cultures with religious or spiritual practices, and can evoke primal fear. As a former Catholic, I am still frightened by movies or TV shows that involve demonic possession, like American Horror Story: Asylum.




    I used this show as an exemplar in a post about Possession Trance Disorder in DSM-5.

    A fantastic long-form article by Mike Mariani has just appeared in The Atlantic. The author intermixes the individual case study of Louisa Muskovits with the history of exorcism and facts about its modern-day resurgence.

    American Exorcism
    Priests are fielding more requests than ever for help with demonic possession, and a centuries-old practice is finding new footing in the modern world.
     . . .
    • The official exorcist for Indianapolis has received 1,700 requests so far in 2018.
    • Father Thomas said that as many as 80 percent of the people who come to him seeking an exorcism are sexual-abuse survivors.
    • Some abused children are subjected to such agonizing experiences that they adopt a coping mechanism in which they force themselves into a kind of out-of-body experience. As they mature, this extreme psychological measure develops into a disorder that may manifest unpredictably. “There is a high prevalence of childhood abuse of different kinds with dissociative disorders,” Roberto Lewis-Fernández, a Columbia University psychiatry professor who studies dissociation, told me.

    This brings me to another topic I've been meaning to write about for weeks. Sleep paralysis is the terrifying condition of being half awake but unable to move (or speak or scream). It can feel like you're frozen in bed, aware of your surroundings yet completely paralyzed. This happens when the complete muscle atonia of REM sleep persists into the transition between sleep and wakefulness. Scary dream imagery can intrude while in this state, making it even worse.

    A fascinating new paper covers interpretations of this frightening phenomenon across different cultures (Olunu et al., 2018). A common theme is being attacked, visited, or sat upon by supernatural beings, such as demons, witches, ghosts, and spirits.


    -- click on image for a larger view --


    The eerie presences are called Jinn in Egypt, Kabus in Iran, Phi Um in Thailand, Old Hag in lots of places, and the especially horrifying Kokma in Saint Lucia, which are “attacks by dead spirits or unbaptized babies that jump into a body and squeeze the throat”. In Nigeria, believers in supernatural explanations exist alongside others who hold rational explanations:
    Nigerians describe it as “visitation of an evil spirit, witches, or some form of spiritual attack.” Others have beliefs that it may be due to anxiety or emotions associated with family problems.
    The Wikipedia page on the folklore of the night hag also has a pretty good listing.


    Interestingly, sleep paralysis was considered as a partial explanation for “demonic possession” in the case of Louisa Muskovits (Atlantic):
    Louisa seemed to vacillate between this unhinged state and her normal self. One minute she would snarl and bare her teeth, and the next she would beg for help. “It definitely had this appearance where she was fighting within herself,” Harp [her former therapist] told me.

    . . .
    [Another time] Louisa ... woke up abruptly, only to find her body locked in place—but with the added shock of what seemed to be visual hallucinations, including one of a giant spider crawling into her bedroom. Louisa was so jolted that she barely ate or slept for three days. “I didn’t feel safe,” she said. “I felt violated.”

    . . .
    Sleep paralysis seemed like a promising explanation. A phenomenon in which sufferers move too quickly in and out of REM sleep for the body to keep up, sleep paralysis causes a person’s mind to wake up before the body can shake off the effects of sleep. Hovering near full consciousness, the person can experience paralysis and hallucinations.

    But Louisa didn’t think this could account for the hand on her collarbone, which she swore she’d felt while she was completely awake [oh of course it can account for this phenomenology!].


    What are your experiences of sleep paralysis?


    Further Reading

    When Waking Up Becomes the Nightmare: Hypnopompic Hallucinatory Pain

    The Phenomenology of Pain During REM Sleep

    The Neurophysiology of Pain During REM Sleep

    Possession Trance Disorder in DSM-5

    Spirit Possession as a Trauma-Related Disorder in Uganda

    "The spirit came for me when I went to fetch firewood" - Personal Narrative of Spirit Possession in Uganda

    Possession Trance Disorder Caused by Door-to-Door Sales

    Fatal Hypernatraemia from Excessive Salt Ingestion During Exorcism

    Diagnostic Criteria for Demonic Possession

    The Devilish Side of Psychiatry


    Reference

    Olunu E, Kimo R, Onigbinde EO, Akpanobong MU, Enang IE, Osanakpo M, Monday IT, Otohinoyi DA, John Fakoya AO. (2018). Sleep Paralysis, a Medical Condition with a Diverse Cultural Interpretation. Int J Appl Basic Med Res. 8(3):137-142.



    Scene from The Wailing. Although it's certainly not for everybody, it is an amazing film.






    Nothing says home for the holidays like a series of murders committed by family members with a shared delusion. So sit back, sip your hot apple cider or spiked egg nog, and revel in family dysfunction worse than your own.

    {Well! There is an actual TV show called Homicide for the Holidays, which I did not know. Kind of makes my title seem derivative... but it was coincidental.}


    “Folie à deux”, or Shared Psychotic Disorder, was a diagnosis in DSM-IV-TR:

    (A) A delusion develops in an individual in the context of a close relationship with another person(s), who has an already-established delusion. 

    (B) The delusion is similar in content to that of the person who already has the established delusion. 

    (C) The disturbance is not better accounted for by another Psychotic Disorder (e.g., Schizophrenia) or a Mood Disorder With Psychotic Features and is not due to the direct physiological effects of a substance (e.g., a drug of abuse, a medication) or a general medical condition.

    Folie à deux occurs in the secondary partner, who shares a delusion with the primary partner (diagnosed with schizophrenia, delusional disorder, or psychotic depression). In DSM-5, folie à deux no longer exists as a specific disorder. Instead, the secondary partner is given a diagnosis of “other specified schizophrenia spectrum and other psychotic disorder” with a specifier: “delusional symptoms in partner of individual with delusional disorder” (APA, 2013).

    The first cases were reported in the 19th century by the French psychiatrists Baillarger (1860) and Lasègue & Falret (1877). The latter authors note that insanity isn't contagious, but under special circumstances...
    a) In the “folie à deux” one individual is the active element; being more intelligent than the other he creates the delusion and gradually imposes it upon the second one who is the passive element. At the beginning the latter resists but later, little by little, he suffers the pressure of his associate, although to some degree he also reacts and influences the former to correct, modify, and coordinate the delusion that then becomes their common delusion, to be repeated to all in an almost identical fashion and with the same words.
    The two individuals are in a close relationship and typically live in an isolated environment.

    A recent paper by Guivarch et al. (2018) covered the history of the disorder, and performed a literature review on folie à deux and homicide. They found 17 articles:
    In the cases examined, homicides were committed with great violence, usually against a victim in the family circle, and were sometimes followed by suicide. The main risk factor for homicide was the combination of mystical and persecutory delusions. The homicides occurred in response to destabilization of the delusional dyads.

    Body mutilation is not uncommon: “These features appear in the reported case of a mother who was delusional and killed her young son by hitting him on the head 3 times with a hatchet.”

    The authors presented a detailed history of induced psychosis involving Mr. A (the secondary) and Mrs. A (the primary, who had a family history of delusion). Shortly after getting married, they had a child who was removed by social services due to inadequate parenting.
    Subsequently, the couple engaged in several years of delusional wandering in France and Italy, traveling from village to village to accomplish “a divine mission”, during which time they were hosted in monasteries or abbeys. They expressed delusional feelings but never visited a psychiatrist and were never confronted by the police. The couple's relationship transformed; the partners stopped having sexual relations and quickly established a delusional hierarchical relation, with Mrs. A being called “Your Majesty” and Mr. A considering himself “King of Australia, Secretary of Her Majesty”.

    After about 20 years of this, in a fit of overkill, Mr. A murdered a random 11-year-old child by inflicting 44 stab wounds. Earlier, he had felt humiliated and persecuted at a police checkpoint, which provoked an “incident.” The murder of the child was part of their delusional divine mission, to make a necessary sacrifice that would restore balance.


    Paranoia of the exalted type in a setting of folie à deux

    The famous case of Pauline P (“a dark, rather sulky looking but not unattractive girl of stocky build”) and Juliet H (“a tall, willowy, frail, attractive blonde with large blue eyes”) was also mentioned (Medlicott, 1955). The two girls established a very close bond, constructed an elaborate make-believe world of fictional characters, withdrew from all others, became sexually involved, and developed a superiority complex. They killed Pauline's mother “because one of the girls was going to move with her parents, which would have led to the separation of the delusional dyad (Medlicott, 1955).” This formed the basis of the fantastic 1994 film, Heavenly Creatures, featuring Melanie Lynskey and Kate Winslet.




    Granted, their indissoluble bond was pathological, but laughable 1955 views of same-sex relationships were on display in this analysis:
    There is of course no doubt that the relationship between these two girls was basically homosexual in nature. Pauline made attempts in 1953 of establishing heterosexual relationships, but in spite of intercourse on one occasion there was no evidence of real erotic involvement. All her escapades were fully discussed with Juliet which is a common feature amongst people basically homosexual in orientation.

    Yes, we can generalize and say that all teenage girls in the 1950s commonly bragged about their heterosexual exploits with their lesbian lovers.

    From Pauline's 1953 diary:
    “To-day Juliet and I found the key to the 4th World.  ... We have an extra part of our brain which can appreciate the 4th World. Only about 10 people have it. When we die we will go to the 4th World, but meanwhile on two days every year we may use the key and look in to that beautiful world which we have been lucky enough to be allowed to know of, on this Day of Finding the Key to the Way through the Clouds.”

    Your family gatherings may not always be harmonious, but presumably your delusional children are not plotting to kill you. Happy Holidays.





    References

    Baillarger J. (1860). Quelques exemples de folie communiquée. Gazette Des Hôpitaux Civils et Militaires 38: 149-151.

    Guivarch J, Piercecchi-Marti MD, Poinso F. (2018). Folie à deux and homicide: Literature review and study of a complex clinical case. International Journal of Law and Psychiatry 61:30-39.

    Lasègue C, Falret J. (1877). La folie à deux (ou folie communiquée). Annales Médico Psychologiques 18: 321-355. English translation (Dialogues in Philosophy, Mental and Neuro Sciences, December 2016).

    Medlicott R. (1955). Paranoia of the exalted type in a setting of folie à deux; a study of two adolescent homicides. The British journal of medical psychology 28:205-223.



    Our memory for the details of real-life events is poor, according to a recent study.

    Seven MIT students took a one-hour walk through Cambridge, MA. A day later, they were presented with one-second video clips they may or may not have seen during their walk (the “foils” were taken from another person's recording). Mean recognition accuracy was 55.7%, barely better than guessing.1


    Minimal recognition memory for detailed events. Dashed line is chance performance. Adapted from Fig. 2 of Misra et al. (2018).
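    Is 55.7% actually distinguishable from the 50% chance line? That depends on the number of test clips per subject, which I won't pretend to know. Here's a quick back-of-the-envelope binomial sketch in Python (the 200-trial count is purely hypothetical, not a number from the paper):

```python
from math import comb

def binom_p_one_sided(k, n, p=0.5):
    """P(X >= k) when X ~ Binomial(n, p): the chance of scoring
    at least this well by pure guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical: 200 test clips at 55.7% accuracy -> ~111 correct responses
print(round(binom_p_one_sided(111, 200), 3))
```

    With those made-up numbers, the one-sided p-value hovers around .07, which is the point: "barely better than guessing" can fail to reach significance at plausible trial counts.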


    How did the researchers capture the details of what was seen during each person's stroll about town (2.1 miles / 3.5 km)? They were fitted with eye tracking glasses to follow their eye movements (because you can't remember what you don't see), and a GoPro camera was mounted on a helmet.


    from Fig. 1 (Misra et al., 2018).


    One problem with this setup, however, was that the eye tracking data had to be excluded. The overwhelmingly bright summer sun prevented the eye tracker from obtaining accurate images of the pupil. Thus, Experiment 2 was performed inside the Boston Museum of Fine Arts with a separate group of 10 students.


    from Fig. 1 (Misra et al., 2018).


    Recognition performance was better in Experiment 2. Mean accuracy was 63.2%, well above chance (p = .0005), but still not great. Participants correctly identified clips they had seen 59% of the time, and correctly rejected clips they hadn't seen 67% of the time. One participant (#4) was really good, and you'll notice the individual differences below.

    Dashed line is chance performance. Adapted from Fig. 2 of Misra et al. (2018).
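    For the signal-detection inclined: the 59% hit rate and 67% correct-rejection rate can be folded into a single bias-free sensitivity estimate, d′. A minimal sketch (my own back-of-the-envelope calculation from the group means, not a statistic reported by Misra et al.):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Experiment 2: 59% hits; 67% correct rejections -> 33% false alarms
print(round(d_prime(0.59, 1 - 0.67), 2))  # ≈ 0.67
```

    A d′ around 0.67 is modest sensitivity, consistent with "well above chance, but still not great."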


    In Exp. 2, the investigators were able to look at the influence of eye fixations on memory performance. Not surprisingly, people were better at remembering what they looked at (fixated on), but this only held for certain categories of items: talking people, objects rated as “distinctive” (but not distinctive faces), and paintings (but not sculptures).




    How do the authors interpret this finding? We don't necessarily pay attention to everything we look at.
    “What subjects fixated on also correlated with performance (Fig. 4), but it is clear that subjects did not remember everything that they laid eyes on. There is extensive literature showing that subjects may not pay attention or be conscious of what they are fixating on. Therefore, it is quite likely that, in several instances, subjects may have fixated on an object without necessarily paying attention to that object. Additionally, attention is correlated with the encoding of events into memory. Thus, the current results are consistent with the notion that eye fixations correlate with episodic memory but they are neither necessary nor sufficient for successful episodic memory formation.”

    For me personally, 2018 was a year to forget.2 Yet, certain tragic images are etched into my mind, cropping up at inopportune times to traumatize me all over again. That's a very different topic for another time and place.


    May your 2019 brighten the sky.


    The number 2019 is written in the air with a sparkler near a tourist camp outside Krasnoyarsk, Russia, on January 1, 2019. (The Atlantic)


    Footnotes

    1 However:
    “Two subjects from Experiment I were excluded from the analyses. One of these subjects had a score of 96%, which was well above the performance of any of the other subjects (Figure 2). The weather conditions on the day of the walk for this subject were substantially different, and this subject could thus easily recognize his own video clips purely from assessing the weather conditions. Another subject was excluded because he responded 'yes' >90% of the trials.”

    2 See:

    I should have done this by now...

    The Lie of Precision Medicine

    Derealization / Dying

    There Is a Giant Hole Where My Heart Used To Be

    How to Reconstruct Your Life After a Major Loss


    Reference

    Misra P, Marconi A, Peterson M, Kreiman G. (2018). Minimal memory for details in real life events. Sci Rep. 8(1):16701.







    Before answering that question, I'll tell you about an incredibly impressive ethnographic study and field survey. For a one year period, the investigators (Pretus, Hamid et al., 2018) conducted field work within the community of young Moroccan men in Barcelona, Spain. As the authors explain, the Moroccan diaspora is an immigrant community susceptible to jihadist forms of radicalization:
    Spain hosts Europe’s second largest Moroccan diaspora community (after France) and its least integrated, whereas Catalonia hosts the largest and least integrated Moroccan community in Spain. Barcelona ... was most recently the site of a mass killing ... by a group of young Moroccan men pledging support for the Islamic State. According to Europol’s latest annual report on terrorism trends, Spain had the second highest number of jihadist terrorism-related arrests in Europe (second only to France) in 2016...

    After months of observation in selected neighborhoods, the researchers approached prospective participants about completing a survey, with the assurance of absolute anonymity. No names were exchanged, and informed consent procedures were performed orally, to prevent any written record of participation. The very large sample included 535 respondents (average age 23.47 years, range 18–42), who were all Sunni Muslim Moroccan men.

    The goal of the study was to look at sacred values in these participants, and whether these values might affect their willingness to engage in violent extremism. “Sacred values are immune or resistant to material tradeoffs and are associated with deontic (duty-bound) reasoning...” (Pretus, Hamid et al., 2018). The term sacred values doesn't necessarily refer to religious beliefs. One of the most common is the basic human value, “it is wrong to kill another human being.” But theoretically speaking, we could include statements such as, “it is wrong to kill endangered species for sport (or for any other reason).”

    In this study, Sacred Values included:
    • Palestinian right of return
    • Western military forces being expelled from all Muslim lands
    • Strict sharia as the rule of law in all Muslim countries
    • Armed jihad being waged against enemies of Muslims
    • Forbidding of caricatures of Prophet Mohammed
    • Veiling of women in public

    What were the Nonsacred Values? We don't know. I couldn't find examples anywhere in the paper. It's crucial that we know what these were, to help understand the “sacralization” of nonsacred values, which was observed in an fMRI experiment (described later). So I turned to the Supplemental Material of Berns et al. (2012), inferring that the statements below are good examples of nonsacred values in a population of adults in Atlanta.
    • You are a dog person.
    • You are a cat person.
    • You are a Pepsi drinker.
    • You are a Coke drinker.
    • You believe that Target is superior to Walmart.
    • You believe that Walmart is superior to Target.

    But what if the nonsacred values in the present study of violent extremism were a little more contentious and meaningful?
    • You are a fan of FC Barcelona.
    • You are a fan of AC Milan.

    Anyway, to choose participants for the fMRI experiment, the investigators first divided the entire group into those who were more (n=267) or less (n=268) vulnerable to recruitment into violent extremism (see Appendix for details). An important comparison would have been to directly contrast brain activity in these two groups, but that wasn't done here. Out of the 267 men more vulnerable to violent extremism, 38 agreed to participate in the fMRI study. These 38 were more likely to Endorse Militant Jihadism (score 4.24 out of 7) than the general fMRI pool (3.35) and the non-fMRI pool (2.43).1 

    A battery of six sacred and six nonsacred values was constructed individually for each person and presented in the scanner, along with a number of grammatical variants, for a list of 50 different items per condition. The 38 participants were randomly assigned to one of two manipulations in a between-subjects design: exclusion (n=19) and inclusion (n=19) in the ever-popular ball-tossing video game of Cyberball. [PDF]2



Unfortunately, splitting the sample reduced the study's statistical power. Nonetheless, a major goal of the experiment was to examine how social exclusion affects the processing of sacred values. I don't know if Cyberball studies are ever conducted in a within-subjects design (perhaps with an intervening task), or if exposure to one of the two conditions is too “contaminating”. At any rate, in real life, discrimination against Muslim immigrants is isolating and causes exclusion from social and economic benefits. Feelings of marginalization can result in greater radicalization and support for (and participation in) extremist groups. At this point in time, I don't think neuroimaging can add to the extensive knowledge gained from years of field work.
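To put a rough number on the power problem: with 19 participants per condition, the smallest effect a two-sample t-test can reliably detect can be approximated with a standard normal-approximation formula. This is a back-of-the-envelope sketch, not anything from the paper; the 80% power and α = .05 conventions are my assumptions.

```python
from math import sqrt
from statistics import NormalDist

def min_detectable_d(n_per_group, alpha=0.05, power=0.80):
    """Smallest Cohen's d detectable by a two-sided, two-sample t-test,
    via the normal approximation: d = (z_{1-alpha/2} + z_{power}) * sqrt(2/n)."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * sqrt(2 / n_per_group)

# With n=19 per group, only large effects (d near 0.9) are detectable.
d = min_detectable_d(19)
```

In other words, with 19 per condition anything short of a large effect is likely to be missed, which is the sense in which the between-subjects Cyberball split hurt the study.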

    Nevertheless, the investigators wanted to extend the findings of Berns et al. (2012) to a very different population. The earlier study wanted to determine whether sacred values are processed in a deontological way (based on strict rules of right and wrong) or in a utilitarian fashion (based on cost/benefit analysis of outcome). As interpreted by those authors, processing sacred values was associated with increased activation of left temporoparietal junction (semantic storage) and left ventrolateral prefrontal cortex (semantic retrieval). Berns et al. suggested that “sacred values affect behaviour through the retrieval and processing of deontic rules and not through a utilitarian evaluation of costs and benefits.” Based on those results, the obvious prediction in the present study is that sacred values should activate left temporoparietal junction (L TPJ) and left ventrolateral prefrontal cortex (L VLPFC).


    Fig. 3A (Pretus, Hamid et al., 2018).


    Fig. 3A shows that only the latter half of that prediction was observed, and there was no explanation for the lack of activation in L TPJ. Instead, there was a finding in R TPJ in the excluded group which I won't discuss further.

    Of note, the excluded participants rated themselves as being more likely to fight and die for nonsacred values, compared to the included participants. This was termed “sacralization” and now you can see why it's so important to know the nonsacred values. Are we talking about fighting and dying for Pepsi vs. Coke? For FC Barcelona vs. AC Milan? Not to be glib, but this would help us understand why social exclusion (in an artificial experimental setting) would radicalize these participants (in an artificial experimental setting).



Fig. 3B (Pretus, Hamid et al., 2018). Nonsacred values activate Left Inferior Frontal Gyrus (IFG, aka VLPFC) in the excluded group, but not in the included group. This was interpreted as a neural correlate of “sacralization”.


    Another interpretation of Fig. 3B is that the exclusion manipulation was distracting, making it more difficult for these participants to process stimuli expressing nonsacred values (due to increased encoding demands, syntactic processing, etc.). Exclusion increased emotional intensity ratings, and decreased feelings of belongingness and being in control. This distraction could have carried over to the task of rating one's willingness to fight and die in defense of values.

    Even if we say the brain imaging results weren't especially informative, the extensive ethnographic study and field surveys were a highly valuable source of data on a marginalized group of young Muslim men at risk of recruitment by violent extremist groups. It's a vicious cycle: terrorist attacks result in greater discrimination and persecution of innocent Muslim men, which has the unintended effect of further radicalization in some of the most vulnerable individuals. To conclude, I acknowledge that my comments may be out of turn because I have no authority or expertise, and because I'm from a country with an appalling record of discriminating against Muslims.


    Footnotes

    1I was a bit confused by some of these scores, because they changed from one paragraph to the next, and differed from what was in Table 1. Perhaps one was a composite score, and the other from an individual questionnaire.

    2I've written extensively about whether Cyberball is a valid proxy for social exclusion, but I won't get into that here.


    References

    Berns GS, Bell E, Capra CM, Prietula MJ, Moore S, Anderson B, Ginges J, Atran S. (2012). The price of your soul: neural evidence for the non-utilitarian representation of sacred values. Philos Trans R Soc Lond B Biol Sci. 367(1589):754-62.

    Pretus C, Hamid N, Sheikh H, Ginges J, Tobeña A, Davis R, Vilarroya O, Atran S. (2018). Neural and Behavioral Correlates of Sacred Values and Vulnerability to Violent Extremism. Front Psychol. 9:2462.


    Appendix


    Modified from Table 1 (Pretus, Hamid et al., 2018).

[The] measures included (1) a modified inventory on general radicalization (support for violence as a political tactic) based on a prior longitudinal study on violent extremist attitudes among Swiss adolescents (Nivette et al., 2017); (2) a scale on personal grievances previously used on imprisoned Islamist militants in the Philippines, and Tamil Tigers in Sri Lanka (Webber et al., 2018); (3) a scale on collective narcissism which has been shown to shape in-group authoritarian identity and support for military aggression against outgroups (de Zavala et al., 2009); (4) a self-report delinquency inventory adapted from Elliott et al. (1985), based on the disproportionate number of Muslim European delinquents who join jihadist terrorist groups (Basra and Neumann, 2016); and (5) a series of items assessing endorsement of militant jihadism (“The fighting of the Taliban, Al Qaida, ISIS is justified,” “The means of jihadist groups are justified,” “Attacks against Western nations by jihadist groups are justified,” “Attacks against Muslim nations by jihadist groups are justified,” “Attacks against civilians by jihadist groups are justified,” “Spreading Islam using force in every part of the world is an act of justifiable jihad,” and “A Caliphate must be resurrected even by force”) that we combined into a reliable composite score, “Endorsement of Militant Jihadism”...

  • 01/27/19--22:46: Unlucky Thirteen


• Today is the 13th anniversary of this blog. I wanted to write a sharp and subversive post.1 Or at least compose a series of self-deprecating witticisms about persisting this long. Alas, it has been an extremely difficult year.

    Instead, I drew inspiration from Twitter (@neuroecology) and a blogger who's been at it even longer than I (@DoctorZen). Very warily I might add, because I knew the results would not be flattering or pretty.

Behold my scores on the “Big Five” personality traits (and weep). Some of the extremes are partly situational, and that's why I'm presenting these traits separately. Sure, negative emotionality is a relatively fixed part of my personality, but the 100% scores on depression and anxiety are influenced by grief (due to the loss of my spouse of 12 years). Personality psychologists would turn this around and say that someone high in trait negative emotionality (formerly known as the more disparaging “neuroticism”) would be predisposed to depression and anxiety.




Another fun trait score is shown below. This one might be even sadder. Yeah, I'm introverted, but people in my situation often tend to withdraw from friends, family, and society.2 Again, reverse the causality if you wish, but social isolation is not an uncommon response.





    But hey, I am pretty conscientious, as you can see from my overall test results on the Big Five. You too can take the test HERE.




    I'll have something more interesting for you next time.



    Footnotes

1Why? To prove to myself that I can still do it? To impress the dwindling number of readers? To show that the blog has not exceeded its expiry date, and still has relevance in its own modest and quirky way?

    2Hey, I actually had two social engagements this weekend! My lack of assertiveness is disturbing, however. But I absolutely do not want to take the lead on anything right now.






    It ended in a tie!




    Granted, this is a small and biased sample, and I don't have a large number of followers. The answers might have been different had @russpoldrack (Yes in a landslide) or @Neuro_Skeptic (n=12,458 plus 598 wacky write-in votes) posed the question.

    Before the poll I facetiously asked:
    Other hypothetical questions (that you don't need to answer) might include:
    • Are you a clinical neuropsychologist? 
    • Do you use computational modeling in your work?1
    • What is your age?
    Here, I was thinking:
    • Clinical neuropsychologists would say No
    • Computational researchers would say Yes
    • On average, older people would be more likely to say No than younger people

    After the poll I asked, “So what ARE the differences between executive function and cognitive control? Or are the terms arbitrary, and their usage a matter of context / subfield?”

    No one wanted to expound on the differences between the terms.2
    I answered No, because I think the terms are arbitrary, and their usage a matter of context and subfield. Not that Wikipedia is the ultimate authority, but I was amused to see this:

    Executive functions

    From Wikipedia, the free encyclopedia
      (Redirected from Cognitive control)
    Executive functions (collectively referred to as executive function and cognitive control) are a set of cognitive processes that are necessary for the cognitive control of behavior: selecting and successfully monitoring behaviors that facilitate the attainment of chosen goals. Executive functions include basic cognitive processes such as attentional control, cognitive inhibition, inhibitory control, working memory, and cognitive flexibility

    Nature said this:

    Cognitive control

    Cognitive control is the process by which goals or plans influence behaviour. Also called executive control, this process can inhibit automatic responses and influence working memory. Cognitive control supports flexible, adaptive responses and complex goal-directed thought. Some disorders, such as schizophrenia and ADHD, are associated with impairments of executive function.

    They're using the terms interchangeably! The terms cognitive control, executive control, executive function, and executive control functions are not well-differentiated, except in specific contexts. For instance, the Carter Lab definition below sounds specific at first, but then branches out to encompass many “executive functions” not named as such.

    Cognitive Control

    "Cognitive control" is a construct from contemporary cognitive neuroscience that refers to processes that allow information processing and behavior to vary adaptively from moment to moment depending on current goals, rather than remaining rigid and inflexible. Cognitive control processes include a broad class of mental operations including goal or context representation and maintenance, and strategic processes such as attention allocation and stimulus-response mapping. Cognitive control is associated with a wide range of processes and is not restricted to a particular cognitive domain. For example, the presence of impairments in cognitive control functions may be associated with specific deficits in attention, memory, language comprehension and emotional processing. ...

    Actually, the term Cognitive Control dates back to the 1920s, if not further. Two quick examples.

    (1) When talking about Charles Spearman and his theory of intelligence and his three qualitative principles, Charles S. Slocombe (1928) said:
    “To these he adds five quantitative principles, cognitive control (attention), fatigue, retentivity, constancy of output, and primordial potency...”
    Simple! Cognitive Control = Attention.

    (2) Frederick Anderson (1942), in The Relational Theory of Mind:
    “Meanings, then, are mental processes which, although not themselves objects for consciousness, actively modify and characterize that of which we are for the moment conscious. They differ from other subconscious processes in this respect, that we have cognitive control over them and can at any moment bring them to light if we choose.”
    Cognitive Control = having the capacity of “bringing things into consciousness” — is this different from attention, or “paying attention” to something by making it the focus of awareness?


    Moving into the 21st century, two of the quintessential contemporary cognitive control papers that [mostly] banish executives from their midst are:

    Miller and Cohen (2001):
    “The prefrontal cortex has long been suspected to play an important role in cognitive control, in the ability to orchestrate thought and action in accordance with internal goals.”

    Botvinick et al. (2001):
    “A remarkable feature of the human cognitive system is its ability to configure itself for the performance of specific tasks through appropriate adjustments in perceptual selection, response biasing, and the on-line maintenance of contextual information. The processes behind such adaptability, referred to collectively as cognitive control, have been the focus of a growing research program within cognitive psychology.”

    I originally approached this topic during research for a future post on Mindstrong and their “digital phenotyping” technology. Two of their five biomarkers are Executive Function and Cognitive Control. How do they differ? There's an awful lot of overlap, as we'll see in a future post.


    Footnotes

1Another fun (and related) determinant might be, “does your work focus on the dorsal anterior cingulate cortex?” In which case, the respondent would answer Yes.

2Except for one deliberately obfuscatory response.


    References

    Anderson F. (1942). The Relational Theory of Mind. The Journal of Philosophy 39(10):253-60.

    Botvinick MM, Braver TS, Barch DM, Carter CS, Cohen JD. (2001). Conflict monitoring and cognitive control. Psychol Rev. 108(3):624-52.

    Miller EK, Cohen JD. (2001). An integrative theory of prefrontal cortex function. Annual Rev Neurosci. 2001;24:167-202.

    Slocombe CS. (1928). Of mental testing—a pragmatic theory. Journal of Educational Psychology 19(1):1-24.


    Appendix

    Many, many articles use the terms interchangeably. I won't single out anyone in particular. Instead, here is a valiant attempt by Nigg (2017) to make a slight differentiation between them in a review paper entitled:
    On the relations among self-regulation, self-control, executive functioning, effortful control, cognitive control, impulsivity, risk-taking, and inhibition for developmental psychopathology.
    But in the end he concludes, “Executive functioning, effortful control, and cognitive control are closely related.”



    Mood Monitoring via Invasive Brain Recordings or Smartphone Swipes

    Which Would You Choose?


    That's not really a fair question. The ultimate goal of invasive recordings is one of direct intervention, by delivering targeted brain stimulation as a treatment. But first you have to establish a firm relationship between neural activity and mood. Well, um, smartphone swipes (the way you interact with your phone) aim to establish a firm relationship between your “digital phenotype” and your mood. And then refer you to an app for a precision intervention. Or to your therapist / psychiatrist, who has to buy into use of the digital phenotyping software.

On the invasive side of the question, DARPA has invested heavily in deep brain stimulation (DBS) as a treatment for many disorders: Post-Traumatic Stress Disorder (PTSD), Major Depression, Borderline Personality Disorder, Generalized Anxiety Disorder, Traumatic Brain Injury, Substance Abuse/Addiction, Fibromyalgia/Chronic Pain, and memory loss. None of the work has led to effective treatments (yet?), but the DARPA research model has established large centers of collaborating scientists who record from the brains of epilepsy patients. And a lot of very impressive papers have emerged – some promising, others not so much.

    One recent study (Kirkby et al., 2018) used machine learning to discover brain networks that encode variations in self-reported mood. The metric was coherence between amygdala and hippocampal activity in the β-frequency (13-30 Hz). I can't do justice to their work in the context of this post, but I'll let the authors' graphical abstract speak for itself (and leave questions like, why did it only work in 13 out of 21 of your participants? for later).
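For readers unfamiliar with the metric: magnitude-squared coherence measures, per frequency, how consistently two signals maintain a fixed amplitude and phase relationship across time segments. Below is a minimal sketch of the idea (numpy only). The 20 Hz “shared” component, the noise levels, and the segment parameters are all fabricated for illustration; this is not the Kirkby et al. pipeline.

```python
import numpy as np

def coherence(x, y, fs, nperseg):
    """Magnitude-squared coherence via segment-averaged cross-spectra (Welch-style)."""
    nseg = len(x) // nperseg
    win = np.hanning(nperseg)
    Pxx = np.zeros(nperseg // 2 + 1)
    Pyy = np.zeros(nperseg // 2 + 1)
    Pxy = np.zeros(nperseg // 2 + 1, dtype=complex)
    for k in range(nseg):
        X = np.fft.rfft(x[k * nperseg:(k + 1) * nperseg] * win)
        Y = np.fft.rfft(y[k * nperseg:(k + 1) * nperseg] * win)
        Pxx += np.abs(X) ** 2          # power spectrum of x, summed over segments
        Pyy += np.abs(Y) ** 2          # power spectrum of y
        Pxy += X * np.conj(Y)          # cross-spectrum (keeps phase information)
    return np.fft.rfftfreq(nperseg, 1 / fs), np.abs(Pxy) ** 2 / (Pxx * Pyy)

# Two noisy simulated "recordings" that share a 20 Hz (beta-band) component.
fs, n = 256, 8192
rng = np.random.default_rng(0)
t = np.arange(n) / fs
shared = np.sin(2 * np.pi * 20 * t)
x = shared + rng.standard_normal(n)
y = shared + rng.standard_normal(n)
f, C = coherence(x, y, fs, nperseg=256)
beta_peak = C[(f >= 13) & (f <= 30)].max()  # coherence is high only where activity is shared
```

The point of the segment averaging is that coherence computed from a single segment is identically 1; only by averaging cross-spectra over many windows does the metric distinguish genuinely coupled activity from noise.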




    Mindstrong

    Then along comes a startup tech company called Mindstrong, whose Co-Founder and President is none other than Dr. Thomas Insel, former director of NIMH, and one of the chief architects1 of the Research Domain Criteria (RDoC), “a research framework for new approaches to investigating mental disorders” that eschews the DSM-5 diagnostic bible. The Appendix chronicles the timeline of Dr. Insel's evolution from “mindless” RDoC champion to “brainless” wearables/smartphone tech proselytizer.2


    From Wired:
    . . .

    At Mindstrong, one of the first tests of the [“digital phenotype”] concept will be a study of how 600 people use their mobile phones, attempting to correlate keyboard use patterns with outcomes like depression, psychosis, or mania. “The complication is developing the behavioral features that are actionable and informative,” Insel says. “Looking at speed, looking at latency or keystrokes, looking at error—all of those kinds of things could prove to be interesting.”

    Curiously, in their list of digital biomarkers, they differentiate between executive function and cognitive control — although their definitions were overlapping (see my previous post, Is executive function different from cognitive control? The results of an informal poll).
    Mindstrong tracks five digital biomarkers associated with brain health: Executive function, cognitive control, working memory, processing speed, and emotional valence. These biomarkers are generated from patterns in smartphone use such as swipes, taps, and other touchscreen activities, and are scientifically validated to provide measurements of cognition and mood.
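Mindstrong's actual feature pipeline is proprietary, so as a purely hypothetical illustration of what a “latency or keystrokes” biomarker might look like, here is a toy feature extractor over tap timestamps. Every name and number below is invented.

```python
from statistics import median, pstdev

def latency_features(tap_times_ms):
    """Toy keystroke-dynamics features from a list of tap timestamps (milliseconds)."""
    latencies = [b - a for a, b in zip(tap_times_ms, tap_times_ms[1:])]
    return {
        "median_latency_ms": median(latencies),
        "latency_sd_ms": pstdev(latencies),  # variability, a stand-in for "error/hesitation"
        "taps_per_sec": 1000 * len(latencies) / (tap_times_ms[-1] - tap_times_ms[0]),
    }

# A fabricated burst of typing: six taps over about one second.
feats = latency_features([0, 180, 420, 560, 900, 1020])
```

Whether summary statistics like these actually track executive function or mood, as opposed to, say, whether you're typing on a bus, is exactly the validation question at issue.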

    Whither RDoC?

    NIMH established a mandate requiring that all clinical trials should postulate a neural circuit “mechanism” that would be responsible for any efficacious response. Thus, clinical investigators were forced to make up simplistic biological explanations for their psychosocial interventions:

    “I hypothesize that the circuit mechanism for my elaborate new psychotherapy protocol which eliminates fear memories (e.g., specific phobias, PTSD) is implemented by down-regulation of amygdala activity while participants view pictures of fearful faces using the Hariri task.”



    [a fictitious example]


    I'm including a substantial portion of the February 27, 2014 text here because it's important.
    NIMH is making three important changes to how we will fund clinical trials.

    First, future trials will follow an experimental medicine approach in which interventions serve not only as potential treatments, but as probes to generate information about the mechanisms underlying a disorder. Trial proposals will need to identify a target or mediator; a positive result will require not only that an intervention ameliorated a symptom, but that it had a demonstrable effect on a target, such as a neural pathway implicated in the disorder or a key cognitive operation. While experimental medicine has become an accepted approach for drug development, we believe it is equally important for the development of psychosocial treatments. It offers us a way to understand the mechanisms by which these treatments are leading to clinical change.

    OK, so the target could be a key cognitive operation. But let's say your intervention is a Housing First initiative in homeless individuals with severe mental illness and co-morbid substance abuse. Your manipulation is to compare quality of life outcomes for Housing First with Assertive Community Treatment vs. Congregate Housing with on-site supports vs. treatment as usual. What is the key cognitive operation here? Fortunately, this project was funded by the Canadian government and did not need to compete for NIMH funding.

    I think my ultimate issue is one of fundamental fairness. Is it OK to skate away from the wreckage and profit by making millions of dollars? From Wired:
    “I spent 13 years at NIMH really pushing on the neuroscience and genetics of mental disorders, and when I look back on that I realize that while I think I succeeded at getting lots of really cool papers published by cool scientists at fairly large costs—I think $20 billion—I don’t think we moved the needle in reducing suicide, reducing hospitalizations, improving recovery for the tens of millions of people who have mental illness,” Insel says. “I hold myself accountable for that.”

    But how? You've admitted to spending $20 billion on cool projects and cool papers and cool scientists who do basic research. This has great value. But the big mistakes were an unrealistic promise of treatments and cures, and the charade of forcing scientists who study C. elegans to explain how they're going to cure psychiatric disorders.


    Footnotes

    1Dr. Bruce Cuthbert was especially instrumental, as well as a large panel of experts. But since this post is about digital biomarkers, the former director of NIMH is the focus of RDoC here.

    2 The Insel archives of the late Dr. Mickey Nardo in his prolific blog, 1boringoldman.com, are a must-read. I also wish the late Dr. Barney Carroll was still here to issue his trenchant remarks and trademark witticisms.


    Reference

    Kirkby LA, Luongo FJ, Lee MB, Nahum M, Van Vleet TM, Rao VR, Dawes HE, Chang EF, Sohal VS. (2018). An Amygdala-Hippocampus Subnetwork that Encodes Variation in Human Mood. Cell 175(6):1688-1700.e14.


    Additional Reading - Digital Phenotyping

    Jain SH, Powers BW, Hawkins JB, Brownstein JS. (2015). The digital phenotype. Nat Biotechnol. 33(5):462-3. [usage of the term here means data mining of content such as Twitter and Google searches, rather than physical interactions with a smartphone]

Insel TR. (2017). Digital Phenotyping: Technology for a New Science of Behavior. JAMA 318(13):1215-1216. [smartphone swipes, NOT content: “Who would have believed that patterns of typing and scrolling could reveal individual fingerprints of performance, capturing our neurocognitive function continuously in the real world?”]

    Insel TR. (2017). Join the disruptors of health science. Nature 551(7678):23-26. [conversion to the SF Bay Area/Silicon Valley mindset]. Key quote:
    “But what struck me most on moving from the Beltway to the Bay Area was that, unlike pharma and biotech, tech companies enter biomedical and health research with a pedigree of software research and development, and a confident, even cocky, spirit of disruption and innovation. They have grown by learning how to move quickly from concept to execution. Software development may generate a minimally viable product within weeks. That product can be refined through ‘dogfooding’ (testing it on a few hundred employees, families or friends) in a month, then released to thousands of users for rapid iterative improvement.”
    [is ‘dogfooding’ a real term?? if that's how you're going to test technology designed to help people with severe mental illnesses — without the input of the consumers themselves — YOU WILL BE DOOMED TO FAILURE.]

    Philip P, De-Sevin E, Micoulaud-Franchi JA. (2018). Technology as a Tool for Mental Disorders. JAMA 319(5):504.

Insel TR. (2018). Technology as a Tool for Mental Disorders-Reply. JAMA 319(5):504.

    Insel TR. (2018). Digital phenotyping: a global tool for psychiatry. World Psychiatry 17(3):276-277.


    Appendix - a selective history of RDoC publications

    Post-NIMH Transition (articles start appearing less than a month later) 









  • 03/22/19--16:54: #CNS2019


  • It's March, an odd-numbered year, must mean.... it's time for the Cognitive Neuroscience Society Annual Meeting to be in San Francisco!

I only started looking at the schedule yesterday and noticed the now-obligatory David Poeppel session on BIG stuff1 on Saturday (March 23, 2019):

Special Session - The Relation Between Psychology and Neuroscience, David Poeppel, Organizer, Grand Ballroom

    Then I clicked on the link and saw a rare occurrence: an all-female slate of speakers!



    Whether we study single cells, measure populations of neurons, characterize anatomical structure, or quantify BOLD, whether we collect reaction times or construct computational models, it is a presupposition of our field that we strive to bridge the neurosciences and the psychological/cognitive sciences. Our tools provide us with ever-greater spatial resolution and ideal temporal resolution. But do we have the right conceptual resolution? This conversation focuses on how we are doing with this challenge, whether we have examples of successful linking hypotheses between psychological and neurobiological accounts, whether we are missing important ideas or tools, and where we might go or should go, if all goes well. The conversation, in other words, examines the very core of cognitive neuroscience.

Also on the schedule tomorrow is the public lecture and keynote address by Matt Walker, Why Sleep?
Can you recall the last time you woke up without an alarm clock feeling refreshed, not needing caffeine? If the answer is “no,” you are not alone. Two-thirds of adults fail to obtain the recommended 8 hours of nightly sleep. I doubt you are surprised by the answer to this question, but you may be surprised by the consequences. This talk will describe not only the good things that happen when you get sleep, but the alarmingly bad things that happen when you don’t get enough. The presentation will focus on the brain (learning, memory, aging, Alzheimer’s disease, education), but further highlight disease-related consequences in the body (cancer, diabetes, cardiovascular disease). The take-home: sleep is the single most effective thing we can do to reset the health of our brains and bodies.

    Why sleep, indeed.

    Meanwhile, Foals are playing tonight at The Fox Theater in Oakland. Tickets are still available.




    view video on YouTube.


    Footnote

    1 See these posts:

    The Big Ideas in Cognitive Neuroscience, Explained #CNS2017

    Big Theory, Big Data, and Big Worries in Cognitive Neuroscience #CNS2018



    People like conflict (the interpersonal kind, not BLUE).1 Or at least, they like scientific debate at conferences. Panel discussions that are too harmonious seem to be divisive. Some people will say, “well, now THAT wasn't very controversial.” But as I mentioned last time, one highlight of the 2019 Cognitive Neuroscience Society Annual Meeting was a Symposium organized by Dr. David Poeppel.2

Special Session - The Relation Between Psychology and Neuroscience, David Poeppel, Organizer, Grand Ballroom
    Whether we study single cells, measure populations of neurons, characterize anatomical structure, or quantify BOLD, whether we collect reaction times or construct computational models, it is a presupposition of our field that we strive to bridge the neurosciences and the psychological/cognitive sciences. Our tools provide us with ever-greater spatial resolution and ideal temporal resolution. But do we have the right conceptual resolution? This conversation focuses on how we are doing with this challenge, whether we have examples of successful linking hypotheses between psychological and neurobiological accounts, whether we are missing important ideas or tools, and where we might go or should go, if all goes well. The conversation, in other words, examines the very core of cognitive neuroscience.

    Conversation. Not debate. So first, let me summarize the conversation. Then I'll get back to the merits (demerits) of debate. In brief, many of the BIG IDEAS motifs of 2017 were revisited...
    • David Marr and the importance of work at all levels of analysis 
    • What are the “laws” that bridge these levels of analysis?
    • “Emergent properties” – a unique higher-level entity (e.g., consciousness, a flock of birds) emerges from lower-level activity (e.g., patterns of neuronal firing, the flight of individual birds)... the sum is greater than its parts
    • Generative Models – formal models that make computational predictions
    ...with interspersed meta-commentary on replication, publishing, and Advice to Young Neuroscientists. Without further ado:

    Dr. David Poeppel – Introductory Remarks that examined the very core of cognitive neuroscience (i.e., “we have to face the music”).
    • the conceptual basis of cognitive neuroscience shouldn't be correlation 
    For example, fronto-parietal network connectivity (as determined by resting state fMRI) is associated with some cognitive function, but that doesn't mean it causes or explains the behavior (or internal thought). We all know this, and we all know that “we must want more!” But we haven't the vaguest idea of how to relate complex psychological constructs such as attention, volition, and emotion to ongoing biological processes involving calcium channels, dendrites, and glutamatergic synapses.
    • but what if the psychological and the biological are categorically dissimilar??
    In their 2003 book, Philosophical Foundations of Neuroscience, Bennett and Hacker warned that cognitive neuroscientists make the cardinal error of “...commit[ting] the mereological fallacy, the tendency to ascribe to the brain psychological concepts that only make sense when ascribed to whole animals.”
“For the characteristic form of explanation in contemporary cognitive neuroscience consists in ascribing psychological attributes to the brain and its parts in order to explain the possession of psychological attributes and the exercise (and deficiencies in the exercise) of cognitive powers by human beings.” (p. 3)

    On that optimistic note, the four panelists gave their introductory remarks.

    (1) Dr. Lila Davachi asked, “what is the value of the work we do?” Uh, well, that's a difficult question. Are we improving society in some way? Adding to a collective body of knowledge that may (or may not) be the key to explaining behavior and curing disease? Although still difficult, Dr. Davachi posed an easier question, “what are your goals?” To describe behavior, predict behavior (correlation), explain behavior (causation), change behavior (manipulation)? But “what counts as an explanation?” I don't think anyone really answered that question. Instead she mentioned the recurring themes of levels of analysis (without invoking Marr by name), emergent properties (the flock of birds analogy), and bridging laws (that link levels of analysis). The correct level of analysis is/are the one(s) that advance your goals. But what to do about “level chauvinism” in contemporary neuroscience? This question was raised again and again.

(2) Dr. Jennifer Groh jumped right out of the gate with this motif. There are competing narratives in neuroscience we can call the electrode level (recording from neurons) vs. the neuroimaging level (recording large-scale brain activations or “network” interactions based on an indirect measure of neural activity). They make different assumptions about what is significant or worth studying. I found this interesting, since her lab is the only one that records from actual neurons. But there are ever more reductionist scientists who always throw stones at those above them. Neurobiologists (at the electrode level and below) are operating at ever more granular levels of detail, walking away from cognitive neuroscience entirely (who wants to be a dualist, anyway?). I knew exactly where she was going with this: the field is being driven by techniques, doing experiments merely because you can (cough — OPTOGENETICS — cough). Speaking for myself, however, the fact that neurobiologists can control mouse behavior by manipulating highly specific populations of cells raises the specter of insecurity... certain areas of research might not be considered “neuroscience” any more by a bulk of practitioners in the field (just attend the Society for Neuroscience annual meeting).

    (3) Dr. Catherine Hartley continued with the recurring theme that we need both prediction and explanation to reach our ultimate goal of understanding behavior. Is a prediction system enough? No, we must know how the black box functions by studying “latent processes” such as representation and computation. But what if we're wrong about representations, I thought? The view of @PsychScientists immediately came to mind. Sorry to interrupt Dr. Hartley, but here's Golonka and Wilson in Ecological Representations:
    Mainstream cognitive science and neuroscience both rely heavily on the notion of representation in order to explain the full range of our behavioral repertoire. The relevant feature of representation is its ability to designate (stand in for) spatially or temporally distant properties ... While representational theories are potentially a powerful foundation for a good cognitive theory, problems such as grounding and system-detectable error remain unsolved. For these and other reasons, ecological explanations reject the need for representations and do not treat the nervous system as doing any mediating work. However, this has left us without a straight-forward vocabulary to engage with so-called 'representation-hungry' problems or the role of the nervous system in cognition.

    They go on to invoke James J Gibson's ecological information functions. But I can already hear Dr. Poeppel's colleague @GregoryHickok and others on Twitter debating with @PsychScientists. Oh. Wait. Debate.

    Returning to The Conversation that I so rudely interrupted, Dr. Hartley gave some excellent examples of theories that link psychology and neuroscience. The trichromatic theory of color vision — the finding that three independent channels convey color information — was based on psychophysics in the early-mid 1800s (Young–Helmholtz theory). This was over a century before the discovery of cones in the retina, which are sensitive to three different wavelength ranges. She also mentioned the more frequently used examples of Tolman's cognitive maps (which predated The Hippocampus as a Cognitive Map by 30 years) and error-driven reinforcement learning (Bush–Mosteller and Rescorla–Wagner, both of which predate knowledge of dopamine neurons). To generate good linking hypotheses in the present, we need to construct formal models that make quantitative predictions (generative models).
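    For readers who haven't seen it, the Rescorla–Wagner update rule is simple enough to sketch in a few lines of Python. The parameter values below are illustrative, not taken from any particular study:

```python
# Sketch of the Rescorla-Wagner error-driven learning rule:
# on each trial, associative strength V moves toward the outcome
# by a fraction (alpha) of the prediction error.

def rescorla_wagner(n_trials, alpha=0.3, lam=1.0):
    """Return associative strength V after each of n_trials pairings.

    alpha: learning rate (stimulus salience); lam: asymptote set by the outcome.
    """
    v = 0.0
    history = []
    for _ in range(n_trials):
        prediction_error = lam - v  # the "surprise" that drives learning
        v += alpha * prediction_error
        history.append(v)
    return history

# Learning is negatively accelerated: big early gains, then a plateau.
curve = rescorla_wagner(10)
```

    The prediction-error term is the conceptual bridge to neuroscience here: midbrain dopamine neurons were later found to fire in a pattern resembling exactly this kind of error signal.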

    (4) Dr. Sharon Thompson-Schill gave a brief introduction with no slides, which is good because this post has gotten very long. For this reason, I won't cover the panel discussion and the Q&A period, which continued the same themes outlined above, expanded on “predictivism” (predictive chauvinism and data-driven neuroscience), and raised new points like the value (or not) of introspection in science. When the Cognitive Neuroscience Society updates their YouTube channel, I'll let you know. Another source is the excellent live tweeting of @VukovicNikola. But to wrap up, Dr. Thompson-Schill asked members of the audience whether they consider themselves psychologists or neuroscientists. Most identified as neuroscientists (which is a relative term, I think). And although more people will talk to you on a plane if you say you're a psychologist, the surprising take-home message was that “neuroscience is easy, psychology is hard.”


    Debating Debates

    I've actually wanted to see more debating at the CNS meeting. For instance, the Society for the Neurobiology of Language (SNL) often features a lively debate at their conferences.3 Several examples are listed below.

    2016:
    Debate: The Consequences of Bilingualism for Cognitive and Neural Function
    Ellen Bialystok & Manuel Carreiras

    2014:
    What counts as neurobiology of language – a debate
    Steve Small, Angela Friederici

    2013: Panel Discussions
    The role of semantic information in reading aloud
    Max Coltheart vs Mark Seidenberg

    2012: Panel Discussions
    What is the role of the insula in speech and language?
    Nina F. Dronkers vs Julius Fridriksson


    This one-on-one format has been very rare at CNS. Last year we saw a panel of four prominent neuroscientists address/debate...
    Big Theory versus Big Data: What Will Solve the Big Problems in Cognitive Neuroscience?


    Added-value entertainment was provided by Dr. Gary Marcus, which speaks to the issue of combative personalities dominating the scene.4


    Gary Marcus talking over Jack Gallant. Eve Marder is out of the frame.
    image by @CogNeuroNews


    I'm old enough to remember the most volatile debate in CNS history, which was held (sadly) at the New York Marriott World Trade Center Hotel in 2001. Dr. Nancy Kanwisher and Dr. Isabel Gauthier debated whether face recognition (and activation of the fusiform face area) is a 'special' example of domain specificity (and perhaps an innate ability), or a manifestation of plasticity due to our exceptional expertise at recognizing faces:
    A Face-Off on Brain Studies / How we recognize people and objects is a matter of debate
    . . .

    At the Cognitive Neuroscience Society meeting in Manhattan last week, a panel of scientists on both sides of the debate presented their arguments. On one side is Nancy Kanwisher of MIT, who first proposed that the fusiform gyrus was specifically designed to recognize faces–and faces alone–based on her findings using a magnetic resonance imaging device. Then, Isabel Gauthier, a neuroscientist at Vanderbilt, talked about her research, showing that the fusiform gyrus lights up when looking at many different kinds of objects people are skilled at recognizing.
    Kudos to Newsday for keeping this article on their site after all these years.


    Footnotes

    1 This is the color-word Stroop task: name the font color, rather than read the word. The word BLUE printed in red font elicits conflict between the overlearned response (read the word, "blue") and the task requirement (name the color, "red").

    2 aka the now-obligatory David Poeppel session on BIG STUFF. See these posts:
    3 Let me now get on my soapbox to exhort the conference organizers to keep better online archives — with stable URLs — so I don't have to hunt through archive.org to find links to past meetings.

    4 Although this is really tangential, I'm reminded of the Democratic Party presidential contenders in the US. Who deserves more coverage, Beto O'Rourke or Elizabeth Warren? Bernie Sanders or Kamala Harris?




    Bravado SPRAVATO™ (esketamine)
    © Janssen Pharmaceuticals, Inc. 2019.


    Ketamine is the miracle drug that cures depression:
    “Recent studies report what is arguably the most important discovery in half a century: the therapeutic agent ketamine that produces rapid (within hours) antidepressant actions in treatment-resistant depressed patients (4, 5). Notably, the rapid antidepressant actions of ketamine are associated with fast induction of synaptogenesis in rodents and reversal of the atrophy caused by chronic stress (6, 7).”

    – Duman & Aghajanian (2012). Synaptic Dysfunction in Depression: Potential Therapeutic Targets. Science 338: 68-72.

    Beware the risks of ketamine:
    “While ketamine may be beneficial to some patients with mood disorders, it is important to consider the limitations of the available data and the potential risk associated with the drug when considering the treatment option.”

    – Sanacora et al. (2017). A Consensus Statement on the Use of Ketamine in the Treatment of Mood Disorders. JAMA Psychiatry 74: 399-405.

    Ketamine, dark and light:
    Is ketamine a destructive club drug that damages the brain and bladder? With psychosis-like effects widely used as a model of schizophrenia? Or is ketamine an exciting new antidepressant, the “most important discovery in half a century”?

    For years, I've been utterly fascinated by these separate strands of research that rarely (if ever) intersect. Why is that? Because there's no such thing as “one receptor, one behavior.” And because like most scientific endeavors, neuro-pharmacology/psychiatry research is highly specialized, with experts in one microfield ignoring the literature produced by another...

    – The Neurocritic (2015). On the Long Way Down: The Neurophenomenology of Ketamine

    Confused?? You're not alone.


    FDA Approval

    The animal tranquilizer and club drug ketamine, now known as a “miraculous” cure for treatment-resistant depression, has been approved by the FDA in a nasal spray formulation. No more messy IV infusions at shady clinics.

    Here's a key Twitter thread that marks the occasion:


    How does it work?

    A new paper in Science (Moda-Sava et al., 2019) touts the importance of spine formation and synaptogenesis (basically, the remodeling of synapses in microcircuits) in prefrontal cortex, a region important for the top-down control of behavior. Specifically, ketamine and its downstream actions are involved in the creation of new spines on dendrites, and in the formation of new synapses. But it turns out this is NOT linked to the rapid improvement in 'depressive' symptoms observed in a mouse model.



    So I think we're still in the dark about why some humans can show immediate (albeit short-lived) relief from their unrelenting depression symptoms after ketamine infusion. Moda-Sava et al. say:
    Ketamine’s acute effects on depression-related behavior and circuit function occur rapidly and precede the onset of spine formation, which in turn suggests that spine remodeling may be an activity-dependent adaptation to changes in circuit function (83, 88) and is consistent with theoretical models implicating synaptic homeostasis mechanisms in depression and the stress response (89, 90). Although not required for inducing ketamine’s effects acutely, these newly formed spines are critical for sustaining the antidepressant effect over time.

    But the problem is, depressed humans require constant treatment with ketamine to maintain any semblance of an effective clinical response, because the beneficial effect is fleeting. And if we accept the possibility that ketamine acts through the mTOR signalling pathway, detrimental effects on the brain (and non-brain systems) may occur in the long run (e.g., bladder damage, various cancers, psychosis).

    But let's stay isolated in our silos, with our heads in the sand.


    Thanks to @o_ceifero for alerting me to this study.

    Further Reading

    Ketamine for Depression: Yay or Neigh?

    Warning about Ketamine in the American Journal of Psychiatry

    Chronic Ketamine for Depression: An Unethical Case Study?

    still more on ketamine for depression

    Update on Ketamine in Palliative Care Settings

    Ketamine - Magic Antidepressant, or Expensive Illusion? - by Neuroskeptic

    Fighting Depression with Special K - by Scicurious

    On the Long Way Down: The Neurophenomenology of Ketamine


    Reference

    Moda-Sava RN, Murdock MH, Parekh PK, Fetcho RN, Huang BS, Huynh TN, Witztum J, Shaver DC, Rosenthal DL, Alway EJ, Lopez K, Meng Y, Nellissen L, Grosenick L, Milner TA, Deisseroth K, Bito H, Kasai H, Liston C. (2019). Sustained rescue of prefrontal circuit dysfunction by antidepressant-induced spine formation. Science 364(6436). pii: eaat8078.

  • 04/27/19--23:23: The Paracetamol Papers

    I have secretly obtained a large cache of files from Johnson & Johnson, makers of TYLENOL®, the ubiquitous pain relief medication (generic name: acetaminophen in North America, paracetamol elsewhere). The damaging information contained in these documents has been suppressed by the pharmaceutical giant, for reasons that will become obvious in a moment.1

    After a massive upload of materials to Wikileaks, it can now be revealed that Tylenol not only...
    ...but along with the good comes the bad. Acetaminophen (paracetamol) also has ghastly negative effects that tear at the very fabric of society. These OTC tablets...

    In a 2018 review of the literature, Ratner and colleagues warned:
    “In many ways, the reviewed findings are alarming. Consumers assume that when they take an over-the-counter pain medication, it will relieve their physical symptoms, but they do not anticipate broader psychological effects.”

    In the latest installment of this alarmist saga, we learn that acetaminophen blunts positive empathy, i.e. the capacity to appreciate and identify with the positive emotions of others (Mischkowski et al., 2019). I'll discuss those findings another time.

    But now, let's evaluate the entire TYLENOL® oeuvre by taking a step back and examining the plausibility of the published claims. To summarize, one of the most common over-the-counter, non-narcotic, non-NSAID pain-relieving medications in existence supposedly alleviates the personal experience of hurt feelings and social pain and heartache (positive outcomes). At the same time, TYLENOL® blunts the phenomenological experience of positive emotion and diminishes empathy for other people's experiences, both good and bad (negative outcomes). Published articles have reported that many of these effects can be observed after ONE REGULAR DOSE of paracetamol. These findings are based on how undergraduates judge a series of hypothetical stories. One major problem (which is not specific to The Paracetamol Papers) concerns the ecological validity of laboratory tasks as measures of the cognitive and emotional constructs of interest. This issue is critical, but outside the main scope of our discussion today. More to the point, an experimental manipulation may cause a statistically significant shift in a variable of interest, but ultimately we have to decide whether a circumscribed finding in the lab has broader implications for society at large.


    Why TYLENOL® ?

    Another puzzling element is, why choose acetaminophen as the exclusive pain medication of interest? Its mechanisms of action for relieving fever, headache, and other pains are unclear. Thus, the authors don't have a specific, principled reason for choosing TYLENOL® over Advil (ibuprofen) or aspirin. Presumably, the effects should generalize, but that doesn't seem to be the case. For instance, ibuprofen actually Increases Social Pain in men.

    The analgesic effects of acetaminophen are mediated by a complex series of cellular mechanisms (Mallet et al., 2017). One proposed mechanism involves descending serotonergic bulbospinal pathways from the brainstem to the spinal cord. This isn't exactly Prozac territory, so the analogy between Tylenol and SSRI antidepressants isn't apt. The capsaicin receptor TRPV1 and the Cav3.2 calcium channel might also be part of the action (Mallet et al., 2017). A recently recognized player is the CB1cannabinoid receptor. AM404, a metabolite of acetaminophen, indirectly activates CB1 by inhibiting the breakdown and reuptake of anandamide, a naturally occurring cannabinoid in the brain (Mallet et al., 2017).



    Speaking of cannabinoids, cannabidiol (CBD), the non-intoxicating cousin of THC, has a high profile now because of its soaring popularity for many ailments. Ironically, CBD has a very low affinity for CB1 and CB2 receptors and may act instead via serotonergic 5-HT1A receptors {PDF}, as a modulator of μ- and δ-opioid receptors, and as an antagonist and inverse agonist at several G protein-coupled receptors. Most CBD use seems to be in the non-therapeutic (placebo) range, because the effective dose for, let's say, anxiety is 10-20 times higher than that of the average commercial product. You'd have to eat 3-6 bags of cranberry gummies to get 285-570 mg of CBD (close to the 300-600 mg recommended dose). Unfortunately, you would also ingest 15-30 mg of THC, which would be quite intoxicating.
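    The arithmetic is easy to check. Assuming the per-bag contents implied by those figures (roughly 95 mg CBD and 5 mg THC per bag, which is my back-calculation, not actual label values), a few lines of Python reproduce the ranges:

```python
# Back-of-the-envelope check of the gummy-dose arithmetic.
# Per-bag amounts are back-calculated from the text (285-570 mg CBD
# over 3-6 bags); they are illustrative, not product label values.
CBD_MG_PER_BAG = 95
THC_MG_PER_BAG = 5

def dose_for_bags(n_bags):
    """Total (CBD mg, THC mg) ingested for a given number of bags."""
    return n_bags * CBD_MG_PER_BAG, n_bags * THC_MG_PER_BAG

low = dose_for_bags(3)   # lower end of the range in the text
high = dose_for_bags(6)  # upper end of the range in the text
```

    The point of the exercise: any bag count that reaches the putative anxiolytic dose of CBD also delivers an unmistakably psychoactive dose of THC.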



    Words Have Meanings

    If acetaminophen were so effective in “mending broken hearts”, “easing heartaches”, and providing a “cure for a broken heart”, we would be a society of perpetually happy automatons, wiping away the suffering of breakup and divorce with a mere OTC tablet. We'd have Tylenol epidemics and Advil epidemics to rival the scourge of the present Opioid Epidemic.

    Meanwhile, social and political discourse in the US has reached a new low. Ironically, the paracetamol “blissed-out” population is enraged because they can't identify with the feelings or opinions of the masses who are 'different' than they are. Somehow, I don't think it's from taking too much Tylenol. A large-scale global survey could put that thought to rest for good.




    Footnotes

    1 This is not true, of course; I was only kidding. All of the information presented here is publicly available in peer-reviewed journal articles and published press reports.

    2 except for when it doesn’t – “In contrast, effects on perceived positivity of the described experiences or perceived pleasure in scenario protagonists were not significant” (Mischkowski et al., 2019).

    3 Yes, I made this up too. It is entirely fictitious; no one has ever claimed this, to the best of my knowledge.


    References

    Mallet C, Eschalier A, Daulhac L. (2017). Paracetamol: update on its analgesic mechanism of action. In: Pain Relief – From Analgesics to Alternative Therapies.

    Mischkowski D, Crocker J, Way BM. (2019). A Social Analgesic? Acetaminophen (Paracetamol) Reduces Positive Empathy. Front Psychol. 10:538.


  • 05/19/19--16:09: The Secret Lives of Goats
  • Goats Galore (May 2019)


    If you live in a drought-ridden, wildfire-prone area on the West Coast, you may see herds of goats chomping on dry grass and overgrown brush. This was initially surprising for many who live in urban areas, but it's become commonplace where I live. Announcements appear on local message boards, and families bring their children.


    Goats Goats Goats (June 2017)


    Goats are glamorous, and super popular on social media now (e.g. Instagram, more Instagram, and Twitter). Over 41 million people have watched Goats Yelling Like Humans - Super Cut Compilation on YouTube. We all know that goats have complex vocalizations, but very few of us know what they mean.





    For the health and well-being of livestock, it's advantageous to understand the emotional states conveyed by vocalizations, postures, and other behaviors. A 2015 study measured the acoustic features of different goat calls, along with their associated behavioral and physiological responses. Twenty-two adult goats were put in four situations:
    (1) control (neutral)
    (2) anticipation of a food reward (positive)
    (3) food-related frustration (negative)
    (4) social isolation (negative)
    Dr. Elodie Briefer and colleagues conducted the study at a goat sanctuary in Kent, UK (Buttercups Sanctuary for Goats). The caprine participants had lived at the sanctuary for at least two years and were fully habituated to humans. Heart rate and respiration were recorded as indicators of arousal, so this dimension of emotion could be considered separately from valence (positive/negative). For conditions #1-3, the goats were tested in pairs (adjacent pens) to avoid the stress of social isolation. They were habituated to the general set-up, to the Frustration and Isolation scenarios, and to the heart rate monitor before the actual experimental sessions, which were run on separate days. Additional details are presented in the first footnote.1





    Audio A1. One call produced during a negative situation (food frustration), followed by a call produced during a positive situation (food reward) by the same goat (Briefer et al., 2015).


    Behavioral responses during the scenarios were timed and scored; these included tail position, locomotion, rapid head movement, ear orientation, and number of calls. The investigators recorded the calls and produced spectrograms that illustrated the frequencies of the vocal signals.



    The call on the left (a) was emitted during food frustration (first call in Audio A1). The call on the right (b) was produced during food reward; it has a lower fundamental frequency (F0) and smaller frequency modulations. Modified from Fig. 2 (Briefer et al., 2015).


    Both negative and positive food situations resulted in greater goat arousal (measured by heart rate) than the neutral control condition and the low arousal negative condition (social isolation). Behaviorally speaking, arousal and valence had different indicators:
    During high arousal situations, goats displayed more head movements, moved more, had their ears pointed forwards more often and to the side less often, and produced more calls. ... In positive situations, as opposed to negative ones, goats had their ears oriented backwards less often and spent more time with the tail up.
    Happy goats have their tails up, and do not point their ears backwards. I think I would need a lot more training to identify the range of goat emotions conveyed in my amateur video. At least I know not to stare at them, but next time I should read more about their reactions to human head and body postures.


    Do goats show a left or right hemisphere advantage for vocal perception?

    Now that the researchers have characterized the valence and arousal communicated by goat calls, another study asked whether goats show a left hemisphere or right hemisphere “preference” for the perception of different calls (Baciadonna et al., 2019). How is this measured, you ask?

    Head-Turning in Goats and Babies

    The head-turn preference paradigm is widely used in studies of speech perception in infants.

    Figure from Prosody cues word order in 7-month-old bilingual infants (Gervain & Werker, 2013).




    However, I don't know whether this paradigm is used to assess lateralization of speech perception in babies. In the animal literature, a similar head-orienting response is a standard experimental procedure. For now, we will have to accept the underlying assumption that orienting left or right may be an indicator of a contralateral hemispheric “preference” for that specific vocalization (i.e., orienting to the left side indicates a right hemisphere dominance, and vice versa).
    The experimental procedure usually applied to test functional auditory asymmetries in response to vocalizations of conspecifics and heterospecifics is based on a major assumption (Teufel et al. 2007; Siniscalchi et al. 2008). It is assumed that when a sound is perceived simultaneously in both ears, the head orientation to either the left or right side is an indicator of the side of the hemisphere that is primarily involved in the response to the stimulus presented. There is strong evidence that this is the case in humans ... The assumption is also supported by the neuroanatomic evidence of the contralateral connection of the auditory pathways in the mammalian brain (Rogers and Andrew 2002; Ocklenburg et al. 2011).

    The experimental set-up to test this in goats is shown below.



    A feeding bowl (filled with a tasty mixture of dry pasta and hay) was fixed at the center of the arena opposite to the entrance. The speakers were positioned at a distance of 2 meters from the right and left side of the bowl and were aligned to it. 'X' indicates the position of the Experimenter. Modified from Fig. 2 (Baciadonna et al., 2019).


    Four types of vocalizations were played over the speakers: food anticipation, food frustration, isolation, and dog bark (presumably a negative stimulus). Three examples of each vocalization were played, each from a different and unfamiliar goat (or dog).

    The various theories of brain lateralization of emotion predicted different results. The right hemisphere model predicts right hemisphere dominance (head turn to the left) for high-arousal emotion regardless of valence (food anticipation, food frustration, dog barks). In contrast, the valence model predicts right hemisphere dominance for processing negative emotions (food frustration, isolation, dog barks), and left hemisphere dominance for positive emotions (food anticipation). The conspecific model predicts left hemisphere dominance for all goat calls (“familiar and non-threatening”) and right hemisphere dominance for dog barks. Finally, a general emotion model predicts right hemisphere dominance for all of the vocalizations, because they're all emotion-laden.

    The results sort of supported the conspecific model (according to the authors), if we now accept that dog barks are actually “familiar and non-threatening” [if I understand correctly]. The head-orienting response did not differ significantly between the four vocalizations, and there was a slight bias for head orienting to the right (p = .046 vs. chance level) when collapsed across all stimulus types.2
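    For the curious, a bias-vs.-chance comparison like this is typically an exact binomial test on the turn counts. The sketch below shows the mechanics with made-up counts; the goats' actual raw numbers are not reproduced here:

```python
# Two-sided exact binomial test against chance (p = 0.5), the kind of
# test behind a "bias vs. chance level" claim. The counts used below
# are hypothetical illustrations, not the study's data.
from math import comb

def binom_test_twosided(k, n):
    """Exact two-sided p-value for k successes in n trials vs. p = 0.5."""
    k_extreme = max(k, n - k)  # null is symmetric, so double the upper tail
    tail = sum(comb(n, i) for i in range(k_extreme, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

p = binom_test_twosided(38, 60)  # hypothetical: 38 right turns out of 60
```

    With counts near a 60/40 split, the p-value hovers right around the conventional .05 threshold, which is why a marginal result like p = .046 invites replication.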

    The time to resume feeding after hearing a vocalization (a measure of fear) didn't differ between goat calls and dog barks, so the authors concluded that “goats at our study site may have been habituated to dog barks and that they did not perceive dog barks as a serious threat.” However, if a Siberian Husky breaks free of its owner and runs around a fenced-in rent-a-goat herd, chaos may ensue.





    Footnotes

    1 Methodological details:
    “(1) During the control situation, goats were left unmanipulated in a pen with hay (‘Control’). This situation did not elicit any calls, but allowed us to obtain baseline values for physiological and behavioural data. (2) The positive situation was the anticipation of an attractive food reward that the goats had been trained to receive during 3 days of habituation (‘Feeding’). (3) After goats had been tested with the Feeding situation, they were tested with a food frustration situation. This consisted of giving food to only one of the goats in the pair and not to the subject (‘Frustration’). (4) The second negative situation was brief isolation, out of sight from conspecifics behind a hedge. For this situation, goats were tested alone and not in a pair (‘Isolation’).”

    2 The replication police will certainly go after such a marginal significance level, but I would like to see them organize a “Many Goats in Many Goat Sanctuaries” replication project.


    References

    Baciadonna L, Nawroth C, Briefer EF, McElligott AG. (2019). Perceptual lateralization of vocal stimuli in goats. Curr Zool. 65(1):67-74. [PDF]

    Briefer EF, Tettamanti F, McElligott AG. (2015). Emotions in goats: mapping physiological, behavioural and vocal profiles. Animal Behaviour 99:131-43. [PDF]





    Eve is plagued by a waking nightmare.

    ‘I do not exist. All you see is a shell with no being inside, a mask covering nothingness. I am no one and no thing. I am the unborn, the non-existent.’


    – from Pickering (2019).

    Dr. Judith Pickering is a psychotherapist and Jungian Analyst in Sydney, Australia. Her patient ‘Eve’ is an “anonymous, fictionalised amalgam of patients suffering disorders of self.” Eve had a psychotic episode while attending a Tibetan Buddhist retreat.
    “She felt that she was no more than an amoeba-like semblance of pre-life with no form, no substance, no past, no future, no sense of on-going being.”



    Eve's fractured sense of self preceded the retreat. In fact, she was drawn to Buddhist philosophy precisely because of its negation of self. In the doctrine of non-being (anātman), “there is no unchanging, permanent self, soul, or essence in living beings.” The tenet of emptiness (śūnyatā) that “all things are empty [or void] of intrinsic existence” was problematic as well. When applied and interpreted incorrectly, śūnyatā and anātman can resemble or precipitate disorders of the self.

    Dr. Pickering noted:
    ‘Eve’ is representative of a number of patients suffering both derealisation and depersonalisation. They doubt the existence of the outer world (derealisation) and fear that they do not exist. In place of a sense of self, they have but an empty core inside (depersonalisation).

    How do you find your way back to your self after that? Will the psychotic episode respond to neuroleptics or mood stabilizers?

    The current article takes a decidedly different approach from this blog's usual themes of neuroimaging, cognitive neuroscience, and psychopharmacology. Spirituality, dreams, and the unconscious play an important role in Jungian psychology. Pickering mentions the Object Relations School, Attachment Theory, Field Theory, The Relational School, the Conversational Model, Intersubjectivity Theory and Infant Research. She cites Winnicott, Bowlby, and Bion (not Blanke & Arzy 2005, Kas et al. 2014, or Seth et al. 2012).

    Why did I read this paper? Sometimes it's useful to consider the value of alternate perspectives. Now we can examine the potential hazards of teaching overly Westernized conceptions of Buddhist philosophy.1 


    When Westerners Attend Large Buddhist Retreats

    Eve’s existential predicament exemplifies a more general area of concern found in situations involving Western practitioners of Buddhism, whether in traditional settings in Asia, or Western settings ostensibly adapted to the Western mind. Have there been problems of translation in regard to Buddhist teachings on anātman (non-self) as implying the self is completely non-existent, and interpretations of śūnyatā (emptiness) as meaning all reality is non-existent, or void?
    . . .

    This relates to another issue concerning situations where Westerners attend large Buddhist retreats in which personalised psycho-spiritual care may be lacking. Traditionally, a Buddhist master would know the student well and carefully select appropriate teachings and practices according to a disciple’s psychological, physical and spiritual predispositions, proficiency and maturity. For example, teaching emptiness or śūnyatā to someone who is not ready can be extremely harmful. As well as being detrimental for the student, it puts the teacher at risk of a major ethical infringement...

    I found Dr. Pickering's discussion of Nameless Dread to be especially compelling.




    Nameless Dread

    I open the door to a white, frozen mask. I know immediately that Eve has disappeared again into what she calls ‘the void’. She sits down like an automaton, stares in stony silence at the wall as if staring into space. I do not exist for her, she is totally isolated in her own realm of non-existence.

    The sense of deadly despair pervades the room. I feel myself fading into nothingness, this realm of absence, unmitigated bleakness and blankness. We sit in silence, sometimes for session after session. I wonder what on earth do I have to offer her? Nothing, it seems.




    ADDENDUM (June 18 2019): A reader alerted me to a tragic story from two years ago in Pennsylvania, where a young woman ultimately died by suicide after experiencing a psychotic episode during an intensive 10-day meditation retreat. The article noted:
    "One of the documented but rare adverse side effects from intense meditation retreats can be depersonalization disorder. People need to have an especially strong ego, or sense of self, to be able to withstand the strictness and severity of the retreats."

    Case reports of extreme adverse events are rare, but a 2017 study documented "meditation-related challenges" in Western Buddhists. The authors conducted detailed qualitative interviews with 60 people who engaged in a variety of Buddhist meditation practices (Lindahl et al., 2017). Thematic analysis revealed a taxonomy of 59 experiences across seven domains (I've appended a table at the end of the post). The authors found a wide range of responses: "The associated valence ranged from very positive to very negative, and the associated level of distress and functional impairment ranged from minimal and transient to severe and enduring." The paper is open access, and Brown University issued an excellent press release.


    Footnote

    1 This is especially important given the appropriation of semi-spiritual versions of yoga and mindfulness, culminating in inanities such as tech bro eating disorders.


    References

    Blanke O, Arzy S. (2005). The out-of-body experience: disturbed self-processing at the temporo-parietal junction. Neuroscientist 11:16-24.

    Kas A, Lavault S, Habert MO, Arnulf I. (2014). Feeling unreal: a functional imaging study in patients with Kleine-Levin syndrome. Brain 137:2077-2087.

    Lindahl JR, Fisher NE, Cooper DJ, Rosen RK, Britton WB. (2017). The varieties of contemplative experience: A mixed-methods study of meditation-related challenges in Western Buddhists. PLoS One 12(5):e0176239.

    Pickering J. (2019). 'I Do Not Exist': Pathologies of Self Among Western Buddhists. J Relig Health 58(3):748-769.

    Seth AK, Suzuki K, Critchley HD. (2012). An interoceptive predictive coding model of conscious presence. Front Psychol. 2:395.


    Further Reading

    Derealization / Dying

    Feeling Mighty Unreal: Derealization in Kleine-Levin Syndrome

    A Detached Sense of Self Associated with Altered Neural Responses to Mirror Touch



    Phenomenology coding structure (Table 4, Lindahl et al., 2017).




    Qualia are private. We don’t know how another person perceives the outside world: the color of the ocean, the sound of the waves, the smell of the seaside, the exact temperature of the water. Even more obscure is how someone else imagines the world in the absence of external stimuli. Most people are able to generate an internal “representation”1 of a beach — to deploy imagery — when asked, “picture yourself at a relaxing beach.” We can “see” the beach in our mind’s eye even when we’re not really there. But no one else has access to these private images, thoughts, narratives. So we must rely on subjective report.

    The hidden nature of imagery (and qualia more generally)2 explains why a significant minority of humans are shocked and dismayed when they learn that other people are capable of generating visual images, and the request to “picture a beach” isn’t metaphorical. This lack of imagery often extends to other sensory modalities (and to other cognitive abilities, such as spatial navigation and autobiographical memories), which will be discussed another time. For now, the focus is on vision.

    Redditors and their massive online sphere of influence were chattering the other day about this post in r/TIFU: A woman was explaining her synesthesia to her boyfriend when he discovered that he has aphantasia, the inability to generate visual images.

    TIFU by explaining my synesthesia to my boyfriend

    “I have grapheme-color synesthesia. Basically I see letters and numbers in colors. The letter 'E' being green for example. A couple months ago I was explaining it to my boyfriend who's a bit of a skeptic. He asked me what colour certain letters and numbers were and had me write them down.  ...

    Tonight we were laying in bed and my boyfriend quized me again. I tried explaining to him I just see the colors automatically when I visualize the letters in my head. I asked him what colour are the letters in his head. He looked at me weirdly like what do you mean in "my head, that's not a thing"

    My boyfriend didnt understand what I meant by visualizing the letters. He didn't believe me that I can visualize letters or even visualize anything in my head.

    Turns out my boyfriend has aphantasia. When he tries to visualize stuff he just sees blackness. He can't picture anything in his mind and thought that everyone else had it the same way. He thought it was just an expression to say "picture this" or etc...

    There are currently 8652 comments on this post, many from individuals who were stunned to learn that the majority of people do have imagery. Other comments were from knowledgeable folks with aphantasia who described what the world is like for them, the differences in how they navigate through life, and how they compensate for what is thought of as "a lack" by the tyranny of the phantasiacs.






    There's even a subreddit for people with aphantasia:



    How did I find out about this?3 It was because my 2016 post was suddenly popular again!





    That piece was spurred by an eloquent essay on what it's like to discover that all your friends aren't speaking metaphorically when they say, “I see a beach with waves and sand.” Research on this condition blossomed once more and more people realized they had it. Online communities developed and grew, including resources for researchers. This trajectory is akin to the formation of chat groups for individuals with synesthesia and developmental prosopagnosia (many years ago). Persons with these neuro-variants have always existed,4 but they were much harder to locate pre-internet. Studies of these neuro-unique individuals have been going on for a while, but widespread popular dissemination of their existence alerts others – “I am one, too.”

    The Vividness of Visual Imagery Questionnaire (VVIQ) “is a proven psychometric measurement often used to identify whether someone is aphantasic or not, albeit not definitive.” But it's still a subjective measure that relies on self-report. Are there more “objective” methods for determining your visual imagery abilities? I'm glad you asked. An upcoming post will discuss a couple of cool new experiments.


    Footnotes

    1 This is a loaded term that I won’t explain – or debate – right now.

    2 Some people don’t believe that qualia exist (as such), but I won’t elaborate on that, either.

    3 I don’t hang out on Reddit, and my Twitter usage has declined.

    4 Or at least, they've existed for quite some time.


    Further Reading

    Aphantasia Index

    The Eye's Mind

    Bonus Episode: What It's Like to Have no Mind's Eye, a recent entry of BPS Research Digest. There's an excellent collection of links, as well as a 30 minute podcast (download here).

    Imagine These Experiments in Aphantasia (my 2016 post).

    Involuntary Visual Imagery (if you're curious about what has been haunting me).

    In fact, while I was writing this post, intrusive imagery of the Tsawwassen Ferry Terminal in Delta BC (the ferry from Vancouver to Victoria on Vancouver Island) appeared in my head. I searched Google Images and can show you the approximate view.



    I was actually standing a little further back, closer to where the cars are parked. But I couldn't quite capture that view. Here is the line of cars waiting to get on the ferry.



    During this trip two years ago (with my late wife), this sign had caught my eye so I ran across the street for coffee...






    How well do we know our own inner lives? Self-report measures are a staple of psychiatry, neuroscience, and all branches of psychology (clinical, cognitive, perceptual, personality, social, etc.). Symptom scales, confidence ratings, performance monitoring, metacognitive efficiency (meta-d'/d'), vividness ratings, preference/likeability judgements, and affect ratings are all examples. Even monkeys have an introspective side!1

    In the last post we learned about a condition called aphantasia, the inability to generate visual images. Although the focus has been on visual imagery, many people with aphantasia cannot form “mental images” of any sensory experience. Earworms, those pesky songs that get stuck in your head, are not a nuisance for some individuals with aphantasia (but many others do get them). Touch, smell, and taste are even less studied; mental imagery of these senses is generally more muted, if it occurs at all (even in the fully phantasic).

    The Vividness of Visual Imagery Questionnaire (VVIQ, Marks 1973)2 is the instrument used to identify people with poor to non-existent visual imagery (i.e., aphantasia). For each item on the VVIQ, the subject is asked to “try to form a visual image, and consider your experience carefully. For any image that you do experience, rate how vivid it is using the five-point scale described below. If you do not have a visual image, rate vividness as ‘1’. Only use ‘5’ for images that are truly as lively and vivid as real seeing.” By its very nature, it's a subjective measure that relies on introspection.

    But how well do we really know the quality of our private visual imagery? Eric Schwitzgebel has argued that it's really quite poor:3
    “...it is observed that although people give widely variable reports about their own experiences of visual imagery, differences in report do not systematically correlate with differences on tests of skills that [presumably] require visual imagery, such as mental rotation, visual creativity, and visual memory.”

    And it turns out that many of these cognitive skills do not require visual imagery. A recent study found that participants with aphantasia were slower to perform a mental rotation task (relative to controls), but they were more accurate (Pounder et al., 2018). The test asked participants to determine whether the two objects in a pair are identical or mirror images of each other. Response times generally increase as a function of the angular difference in the orientations of the two objects. The overall slowing and accuracy advantage in those with aphantasia held across all levels of difficulty, so these participants must be using a different strategy than those without aphantasia.
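The hallmark of the mental rotation task is that linear relationship between response time and angular disparity. Here's a minimal sketch of how one might estimate the "rotation rate" from such data; the numbers are entirely made up for illustration, not taken from Pounder et al. or Shepard & Metzler.

```python
# Illustrative (hypothetical) mental rotation data: mean response time
# grows roughly linearly with the angular disparity between the two
# objects. These values are invented for demonstration only.
angles = [0, 40, 80, 120, 160]      # degrees of rotation
rts = [1.1, 1.9, 2.8, 3.6, 4.5]     # mean response time, in seconds

# Ordinary least-squares fit of RT = intercept + slope * angle.
n = len(angles)
mean_a = sum(angles) / n
mean_r = sum(rts) / n
slope = (sum((a - mean_a) * (r - mean_r) for a, r in zip(angles, rts))
         / sum((a - mean_a) ** 2 for a in angles))
intercept = mean_r - slope * mean_a

# The slope estimates the extra processing time needed per degree of
# rotation; its reciprocal is the implied mental "rotation rate".
print(f"~{slope * 1000:.1f} ms of extra processing per degree of rotation")
```

A flatter slope (or equal accuracy at all angles) would be one signature of a non-rotational strategy, which is what the aphantasic participants' pattern of slower-but-more-accurate responding hints at.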




    Another study found that people with aphantasia were surprisingly good at reproducing the details of a complex visual scene from memory (Bainbridge et al., 2019).4

    What test does require visual imagery? The phenomenon of binocular rivalry involves the presentation of a different image to each eye using specialized methods or simple 3D glasses. Instead of forming a unified percept, the images presented to the left and right eye seem to alternate. Thus, binocular rivalry involves perceptual switching. The figure below was taken from the informative video of Carmel and colleagues (2010) in JoVE. I highly recommend the video, which I've embedded at the end of this post.


    A recent study examined binocular rivalry in aphantasia using the setup shown in Fig 1 (Keogh & Pearson, 2018). The key trick is that participants were cued to imagine one of two images for 6 seconds. Then they performed a vividness rating, followed by a brief presentation of the binocular rivalry display. Finally, the subjects had to report which color they saw.




    The study population included 15 self-identified aphantasics recruited via Facebook, direct contact with the investigators, or referral from Professor Adam Zeman, and 209 control participants recruited from the general population. The VVIQ verified poor or non-existent visual imagery in the aphantasia group.

    For the binocular rivalry test, the general population showed a priming effect from the imagined stimulus: they were more likely to report that the subsequent test display matched the color of the imagined stimulus (green or red) at a greater than chance level (better than guessing). As a group, the individuals with aphantasia did not show priming that was greater than chance. However, as can be seen in Fig. 2E, results from this test were not completely diagnostic. Some with aphantasia showed better-than-chance priming, while a significant percentage of the controls did not show the binocular rivalry priming effect.


    Fig. 2E (Keogh & Pearson, 2018). Frequency histogram for imagery priming scores for aphantasic participants (yellow bars and orange line) and general population (grey bars and black dashed line). The green dashed line shows chance performance (50% priming).


    Furthermore, scores on the VVIQ in the participants with aphantasia did not correlate with their priming scores (although n=15 would make this hard to detect). Earlier work by these investigators suggested that the VVIQ does correlate with overall priming scores in controls, and binocular rivalry priming on an individual trial is related to self-reported vividness on that trial. Correlations for the n=209 controls in the present paper were not reported, however. This would be quite informative, since the earlier study had a much lower number of participants (n=20).

    What does this mean? I would say that binocular rivalry priming can be a useful “objective” measure of aphantasia, but it's not necessarily diagnostic at an individual level.


    Related Posts

    The Shock of the Unknown in Aphantasia: Learning that Visual Imagery Exists

    Imagine These Experiments in Aphantasia


    Footnotes

    1 see Mnemonic introspection in macaques is dependent on superior dorsolateral prefrontal cortex but not orbitofrontal cortex.

    2 The VVIQ is not without its detractors...

    3 Thanks to Rolf Degen for bringing this paper to my attention.

    4 Also see this reddit thread on Sketching from memory.


    References

    Bainbridge WA, Pounder Z, Eardley A, Baker CI (2019). Characterizing aphantasia through memory drawings of real-world images. Cognitive Neuroscience Society Annual Meeting.

    Keogh R, Pearson J. (2018). The blind mind: No sensory visual imagery in aphantasia. Cortex 105:53-60.

    Marks DF. (1973). Visual imagery differences in the recall of pictures. British Journal of Psychology 64(1):17-24.

    Pounder Z, Jacob J, Jacobs C, Loveday C, Towell T, Silvanto J. (2018). Mental rotation performance in aphantasia. Vision Sciences Society Annual Meeting.

    Schwitzgebel E. (2002). How well do we know our own conscious experience? The case of visual imagery. Journal of Consciousness Studies 9(5-6):35-53.  {PDF}

    Shepard RN, Metzler J. (1971). Mental rotation of three-dimensional objects. Science 171(3972):701-703.







    What Color is Monday? This video on synesthesia is one of the Top Ten videos in the Society for Neuroscience Brain Awareness Video Contest.  

    Voting for the 2019 People's Choice Award closes 12 p.m. Eastern time on August 30, 2019.

    However, it wasn't immediately apparent to me how you're supposed to cast your vote...

    The entire playlist is on YouTube.  


    1. Multitasking (4:09)
    2. How Ketamine Treats Depression (4:43)
    3. Procrastination: I'll Think of a Title Later (4:40)
    4. Seeing Culture in Our Brain (3:55)
    5. Theory of Mind (3:58)
    6. How Neuroscience Informs Behavioural Economics (4:00)
    7. What Color is Monday (4:13)
    8. Why do adolescents go to sleep late? (4:13)
    9. An Inside Look: Alzheimer's Disease (3:00)
    10. Technology Makes Us Bigger (3:57)





    What is a hallucination? The question seems simple enough. “A hallucination is a perception in the absence of external stimulus that has qualities of real perception. Hallucinations are vivid, substantial, and are perceived to be located in external objective space.” When we think of visual hallucinations, we often think of trippy colorful images induced by psychedelic drugs (hallucinogens).

    Are dreams hallucinations? How about visual imagery? Optical illusions of motion from viewing a non-moving pattern? No, no, and no (according to this narrow definition). Hallucinations are subjective and inaccessible to others, much as my recent posts discussed the presence or absence of visual imagery in individual humans. However, people can tell us what they're seeing (unlike animals).

    Visual hallucinations can occur in psychotic disorders such as schizophrenia and schizoaffective disorder, although auditory hallucinations are more common in those conditions. Visual hallucinations are more often associated with neurodegenerative disorders. Among patients with Parkinson's Disease, 33% to 75% experience visual hallucinations, usually related to dopaminergic or anticholinergic drug therapy.

    In contrast, hallucinations in dementia with Lewy Bodies (DLB) are diagnostic of the disease, and not related to pharmacological treatment. “Recurrent complex visual hallucinations ... are typically well-formed, often consisting of figures, such as people or animals.” The cause may be related to pathology in subcortical visual structures such as the superior colliculus and the pulvinar, rather than the visual cortex itself. A more specific hypothesis is that loss of α7 nicotinic receptors in the thalamic reticular nucleus could lead to hallucinations in DLB.


    Charles Bonnet Syndrome (CBS)

    Visual hallucinations are also caused by certain types of visual impairment, e.g. age-related macular degeneration, which leads to the loss of central vision. Damage to the macular portion of the retina can cause people to “see” simple patterns of colors or shapes that aren't there, or even images of people, animals, flowers, planets, and scary figures. Individuals with CBS know that the hallucinations aren't real, but they're distressing nonetheless.


    image from the Macular Society1


    “Why are you discussing DLB and CBS here,” you might ask, “when these conditions don't involve abnormal stimulation of the visual cortex?” I brought them up because visual hallucinations in humans can occur for any number of reasons, not just from manipulation of highly specific cell types in primary visual cortex (which only occurs in optogenetic experiments with animals).



    Electrical Stimulation Studies in Humans

    A typical starting point here would be Wilder Penfield and the history of surgical epileptology, but I'll skip ahead to the modern day. Patients with intractable epilepsy present teams of neurosurgeons, neurologists, neurophysiologists, and neuroscience researchers with a unique opportunity to probe the inner workings of the human brain. Stimulating and recording from regions thought to be the seizure focus (or origin) guide neurosurgeons to the precise tissue to remove, and data acquired from neighboring brain bits is used to make inferences about neural function and electrophysiological mechanisms.




    An exciting study by Dr. Josef Parvizi and colleagues (2012) stimulated regions of the fusiform face area (FFA) in the inferior temporal cortex while a patient was undergoing surgical monitoring. Two FFA subregions were identified using both fMRI and electrocorticography (ECoG).



    The location of the face-selective regions converged across ECoG and fMRI studies that presented various stimuli and recorded brain responses in the FFA and nearby regions (1 = posterior fusiform; 2 = medial fusiform). Then the investigators stimulated these two focal points while the patient viewed faces, objects, and photos of famous faces and places. Electrical brain stimulation (EBS) of the FFA produced visual distortions while the patient viewed real faces. Sham stimulation, and EBS of nearby regions, did not produce these perceptual distortions. The article included a video of the experiment, which is worth watching.




    Another patient viewed pictures of faces during FFA stimulation and reported the persistence of facial images once they were gone (a phenomenon known as palinopsia) and the mixing of facial features, but no distortions. A third study induced the scary phenomenon of seeing yourself (self-face hallucination, or autoscopic hallucination), upon EBS of a non-FFA region (right medial occipitoparietal cortex). A video of this experiment is on YouTube.

    “But wait,” you say, “you've been describing complex visual hallucinations and distortions of the face because the EBS was in higher-order visual areas that are specialized for faces. What happens when you stimulate primary visual cortex?” The answer is less exciting (but not unexpected): phosphenes, those non-specific images of light that appear when you close your eyes and press on your eyeballs (Winawer & Parvizi, 2016). These can be mapped retinotopically according to their location in the visual field. {Also see this 1930 article by Foerster & Penfield:2
    "Stimulation of the occipital pole in area 17 produces an attack which is ushered in by an optic aura such as light, flames, stars, usually in the opposite visual field."}

    But EBS of primary visual cortex is a coarse instrument. Here's where the latest refinements in optogenetics finally enter the picture (Marshel et al., 2019).



    I won't attempt to cover the complex and novel techniques in Panel 1 and Panel 2 above. So I'll quote others who rave about what a breakthrough they are (and they are): amazing work, incredible breakthrough, Key advances in current paper include multiSLM to stimulate neurons based on function, and a red-shifted opsin allowing simultaneous 2p. And one day (hypothetically speaking), I'd like to present more than direct quotes and my cartoonish version of the optogenetic ensemble and behavioral training methods. But today isn't that day.
    Using ChRmine [a fancy new opsin] together with custom holographic devices to create arbitrarily specified light patterns [horizontally or vertically drifting gratings], we were able to measure naturally occurring large-scale 3D ensemble activity patterns during visual experience and then replay these natural patterns at the level of many individually specified cells. We found that driving specific ensembles of cells on the basis of natural stimulus-selectivity resulted in recruitment of a broad network with dynamical patterns corresponding to those elicited by real visual stimuli and also gave rise to the correctly selective behaviors even in the absence of visual input.

    Briefly, the investigators captured patterns of activity in V1 layer 2/3 neurons and layer 5 neurons that responded to horizontal or vertical gratings, and then played back the same patterns to those neurons in the absence of a visual stimulus. There goes the coarseness of EBS-induced phosphenes in humans... But obviously, the one great advantage of human studies is that your subjects can tell you what they see. Nonetheless, everyone wants to say that laser-activated nerve cells cause the mice to hallucinate vertical bars.

    What really happened is that mice were trained to discriminate between horizontal and vertical gratings. The task required them to respond to the vertical, but not the horizontal. After training, visual stimulation with gratings was compared to optogenetic stimulation of classifier-identified neural ensembles in the absence of gratings. How well did the mice perform with optogenetic-only stimulation?

    Modified from Fig. 5 (Marshel et al., 2019). (A) Discrimination performance during visual-only stimulation (black) and tuned-ensemble stimulation (red) over several weeks. (B) Discrimination performance for tuned-ensemble stimulation versus visual trials (P > 0.1 paired t test, two-tailed, n = 112 sessions across five mice).


    Eventually the mice did just about as well on the discrimination task with optogenetic stimulation of the horizontally or vertically-tuned neurons, compared to when the horizontal or vertical stimuli were actually presented. Were these mice “hallucinating” vertical gratings?  Or did they merely learn to respond when a specific neural ensemble was activated? Isn't this somewhat like neurofeedback? During training, the mice were rewarded or punished based on their correct or incorrect response to the “vertical” ensemble stimulation. They can't tell us what, if anything, they saw under those conditions.
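The "just about as well" claim rests on a paired t test across sessions (Fig. 5B): each session contributes one accuracy for visual trials and one for optogenetic-only trials, and a small t statistic means no detectable difference. A minimal sketch of that comparison, with invented per-session accuracies (the real study used 112 sessions across five mice):

```python
import math

def paired_t(x, y):
    """Paired t statistic for two matched samples, e.g. per-session
    discrimination accuracy on visual vs. optogenetic-only trials."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical per-session accuracies (fractions correct), invented
# purely to illustrate the analysis.
visual = [0.91, 0.88, 0.93, 0.90, 0.89, 0.92]
opto   = [0.89, 0.90, 0.91, 0.88, 0.90, 0.91]

t = paired_t(visual, opto)
print(f"t = {t:.2f}")  # |t| well below ~2 -> no detectable difference
```

Of course, "no detectable difference in behavior" is exactly where the interpretive dispute begins: identical choice behavior is consistent with a grating-like percept, but also with the animal simply having learned that activation of that ensemble predicts reward.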

    And the authors themselves noted the following limitation, that “mice initially required some training involving paired optogenetic and visual stimuli before optogenetic activation alone sufficed to drive behavioral discrimination.” Marshel et al. correctly invoked the “it takes a village” explanation that many other cortical and subcortical regions are required to generate a full natural visual percept.

    My frustration with the press coverage stems from inaccurate language and overblown interpretations.3  [So what else is new?]  From the New York Times:

    Why Are These Mice Hallucinating? Scientists Are in Their Heads
    In a laboratory at the Stanford University School of Medicine, the mice are seeing things. And it’s not because they’ve been given drugs.

    With new laser technology, scientists have triggered specific hallucinations in mice by switching on a few neurons with beams of light. The researchers reported the results on Thursday in the journal Science.

    The technique promises to provide clues to how the billions of neurons in the brain make sense of the environment. Eventually the research also may lead to new treatments for psychological disorders, including uncontrollable hallucinations.

    The Stanford press release doesn't use “hallucination” in the title, but a few are sprinkled throughout the text for dramatic effect: “Hallucinations are spooky” and “Hallucinating mice.”

    Should we classify the following as a spooky hallucination: optical stimulation of 20 vertical-bar neurons in behaviorally trained mice, who then perform the task as if drifting vertical gratings were present in their visual field? I would say no. To be fair, in the Science paper the authors used the word “hallucinations” only once, and it wasn't to describe mouse percepts.
    Studying specific sensory experiences with ensemble stimulation under different conditions may help advance development of therapeutic strategies . . . for neuropsychiatric symptoms such as hallucinations or delusions. More broadly, the ability to track and control large cellular-resolution ensembles over time during learning, and to selectively link cells and ensembles together into behaviorally relevant circuitry, may have important implications for studying and leveraging plasticity underlying learning and memory in health and disease.

    I'm focusing on only one small aspect of the study, albeit the one that grabs media attention. The results were highly informative in many other ways, and I do not want to detract from the monumental technical achievements of the research team.


    Footnotes

    1 This is a terrific resource, with loads of information, additional artistic renderings, an eBook, and a must-see video.

    2 There's no escaping Penfield...

    3 See Appendix for expert opinion, since I am not an expert...


    References

    Foerster O, Penfield W. (1930). The structural basis of traumatic epilepsy and results of radical operation. Brain 53:99-119.

    Marshel JH, Kim YS, Machado TA, Quirin S, Benson B, Kadmon J, Raja C, Chibukhchyan A, Ramakrishnan C, Inoue M, Shane JC, McKnight DJ, Yoshizawa S, Kato HE, Ganguli S, Deisseroth K. (2019). Cortical layer-specific critical dynamics triggering perception. Science Jul 18.

    Parvizi J, Jacques C, Foster BL, Witthoft N, Rangarajan V, Weiner KS, Grill-Spector K. (2012). Electrical stimulation of human fusiform face-selective regions distorts face perception. J Neurosci. 32(43):14915-20.

    Winawer J, Parvizi J. (2016). Linking Electrical Stimulation of Human Primary Visual Cortex, Size of Affected Cortical Area, Neuronal Responses, and Subjective Experience. Neuron 92(6): 1213-1219.


    Appendix

    Before lodging this critique, I consulted select experts on Twitter...









    A Precog capable of predicting future crimes in the film version of Minority Report.


    In a strange twist suitable for the dystopian reality show broadcast from the West Wing dining room, a charity formed to fight pancreatic cancer has morphed into project SAFE HOME: “Stopping Aberrant Fatal Events by Helping Overcome Mental Extremes”.



    After three highly publicized mass shootings killed 34 people in the US, a variation on the “guns don't kill people...” trope was issued by President Trump: “mental illness and hatred pulls [sic] the trigger, not the gun.” He was right about hatred: two of the shooters espoused white supremacist views, the other was a misogynist. But rather than anger the NRA with tiny incremental changes to control access to firearms, a better approach is to develop a national plan to stigmatize people with mental illnesses, who are more likely to be the victims of violent crime than the perpetrators:
    White House considers new project seeking links between mental health and violent behavior

    Bob Wright, the former NBC chair and a Trump friend, is one of the proposal’s supporters.

    The White House has been briefed on a proposal to develop a way to identify early signs of changes in people with mental illness that could lead to violent behavior.

    Supporters see the plan as a way President Trump could move the ball forward on gun control following recent mass shootings as efforts seem to be flagging to impose harsher restrictions such as background checks on gun purchases.

    The proposal is part of a larger initiative to establish a new agency called the Health Advanced Research Projects Agency or HARPA, which would sit inside the Health and Human Services Department. Its director would be appointed by the president, and the agency would have a separate budget, according to three people with knowledge of conversations around the plan.

    The Suzanne Wright Foundation, started by Bob Wright to fight pancreatic cancer after his wife died from the disease, has advocated for the formation of a DARPA-like federal agency called HARPA. The original vision for HARPA was to “leverage federal research assets and private sector tools to develop capabilities for diseases, like pancreatic cancer, that have not benefited from the current system.”



    91% of pancreatic cancer patients die within 5 years – often because the cancer is too advanced to treat by the time of diagnosis. An early detection test for pancreatic cancer would be the most effective weapon to save lives from this disease. ... CodePurple advocates for HARPA ... as the most promising vehicle to develop a pancreatic cancer detection test.



    According to the Washington Post:
    The HARPA proposal was initially pitched as a project to improve the mortality rate of pancreatic cancer through innovative research to better detect and cure diseases. Despite internal support over the past two years, the model ran into what was described as “institutional barriers to progress,” according to a person familiar with the conversations. 

    So why not flip your game by seizing a tragic moment in time to transform yourself into legacy-making material?
    “[Trump is] very achievement oriented and I think all presidents have difficulties with science,” Wright said in an interview. “I think their political advisers say, ‘No that’s not a game for you,’ so they sort of back off a bit.”

    He added: “But the president has a real opportunity here to leave a legacy in health care.”

    The newly-realized HARPA would use artificial intelligence, machine learning, commercial surveillance technology (e.g., Apple Watches, Fitbits, Amazon Echo, Google Home), and “powerful tools [NOT] collected by health-care providers like fMRIs, tractography and image analysis.”
    HARPA would develop “breakthrough technologies with high specificity and sensitivity for early diagnosis of neuropsychiatric violence,” says a copy of the proposal. “A multi-modality solution, along with real-time data analytics, is needed to achieve such an accurate diagnosis.”

    And because of her vast experience in these technologies and her theoretical contributions to the neuroethics of predicting violent behavior, Ivanka Trump is the best person to lead such an effort:
    “It would be perfect for her to do it — we need someone with some horsepower — someone like her driving it. ... It could get done,” said one official familiar with the conversations.

    Further Reading

    Oh Good, White House Reportedly Considering Dystopian Plan to Try to Detect the Next Mass Shooter

    The Minority Report, by Philip K. Dick


    Further Watching

    Person of Interest, created by Jonathan Nolan (Memento)

       ( How Person of Interest Became Essential Science Fiction Television )




    “I can guarantee that someone in the world thinks you are evil. Do you eat meat? Do you work in banking? Do you have a child out of wedlock? You will find that things that seem normal to you don't seem normal to others, and might even be utterly reprehensible. Perhaps we are all evil. Or, perhaps none of us are.”

    – Julia Shaw, Evil: The Science Behind Humanity's Dark Side

    Earlier this month, Science magazine and Fondation Ipsen co-sponsored a webinar on Impulses, intent, and the science of evil. “Can research into humankind’s most destructive inclinations help us become better people?”

    It's freely available on demand. Let the controversy commence...


    Are There Evil People or Only Evil Acts?

    Moderator (Sean Sanders, Ph.D. Science/AAAS):  “... How do we define evil? ... Are there evil people or only evil acts?”

    In brief, Dr. Abigail Marsh said no, there are absolutely not evil people; Dr. Gary Brucato mostly agreed with that; and Dr. Michael Stone gave an elaborate example using an offensive term ("gay pedophile"– as if anyone would refer to a male pedophile who targets little girls as a "straight pedophile").



    Dr. Marsh was not amused...

    More detail below.


    Michael Stone, M.D. Columbia University:  [I'm skipping his first response on etymology and religion.]

    Abigail Marsh, Ph.D. Georgetown University:  “... I don't think it's ever appropriate to refer to a person as evil. Actions are certainly evil and some people are highly predisposed to keep committing evil actions, but evil does have this very supernatural connotation.


    Um, and like so many supernatural ideas, I think the concept of evil is pulled in whenever we have trouble understanding why someone would do such a thing, right, we talk about evil spirits or forces because it's so difficult to understand, um, for most people why anybody would be driven to do something to cause people pain and suffering for no reason. Um... but there is an explanation, we may not know what it is yet, but there is an explanation for these behaviors, and so uh... but the use of the word 'evil' doesn't get us any closer to understanding that. It leaves us in this supernatural rut rather than thinking of these behaviors as things that do have unfortunately human motivations .. but that are not the totality of the person. Evil is a very essentialist term as well. It assumes this sort of homogeneity within the person which is not usually true.” [I'm biased in this direction.]

    Gary Brucato, Ph.D. Columbia University:  [after the moderator has implied that Stone & Brucato's book suggests that although rare, there are truly evil people.]  “...... What we have to clarify is that rarely, even in the most egregious repeat offenders, do you see somebody that from dusk to dawn is committing acts that are considered evil.  ... [I'll note here that Dr. Marsh is subtly nodding her head.]

    Dr. Stone:  “Therefore there are a very very small number of people ... who do evil things as it were from the minute they wake up in the morning until they go to sleep at night. The one who comes closest to mind is the one I interviewed for the Discovery channel program some years ago and that was uh David Paul Brown his real name, who then changed his name when he was in prison the first time to Benjamin Nathaniel Bar-Jonah [actually, it was Nathaniel Benjamin Levi Bar-Jonah] who was a gay pedophile [sic] who would seduce boys coming out of a theater and then try to capture them if he could and kill them and so on. Some of them escaped and managed to identify him.1 [He was imprisoned and then released] ... OK. So. Out in Montana, he dressed as a policeman with a fake badge... and would seduce little boys ... coming out of a school ... he would ... kill them, eat part of the boy ... [more details about cannibalism] ... He had thousands of pictures of boys and on the walls making up very bad comments and puns as if uh uh some young kid as if that were a Chinese menu item, on a menu, some young kid” [other sources say girls were among the victims]. “He could be counted on, one of the few people I know of, who was evil day in and day out. That's very rare...”



    Dr. Stone seems amused...



    Dr. Marsh looks dejected


    Labels Don't Get Us Anywhere

    Moderator:  “... I feel this disgust and you know repulsion uh thinking about this. And so I'm assuming that this is what drives people to label someone as evil. Um and I wonder if that label is useful. You know if we look at maybe the children that you Abby are doing your research with um if you see these inclinations is it helpful to put labels on them and where does that get us you know scientifically and and in terms of treatment?

    Dr. Marsh:  “I don't think it gets us anywhere, it's one of the many reasons I wouldn't ever refer to that term uh to call a human being evil. Um... the children I work with didn't make a choice to have the personalities they do or to have the life experiences that have led them to the place that they are and instead we know that psychopathy— again this condition of having very low levels of remorse and caring and compassion for other people has all the hallmarks of a mental illness — has a strong heritability component, having negative life experiences causes the prognosis to get worse, there are very clear characteristic brain and cognitive changes. It looks like any other psychological disorder in these key ways and so calling people who are affected by this condition evil is not helping us to develop treatments to try to improve their prognosis and to try to improve the odds that they won't go on to do things that affect the rest of us negatively. Um, because what calling someone evil does is rob us of the ability to view someone compassionately.”


    It's Nearly Impossible to Predict...

    Moderator: Do we all have the propensity to do evil deeds?

    Dr. Marsh:  “...[regarding] 'horrible and unpredicted' acts, shooting up dozens of innocent people ... I think that when acts like that are so unpredictable, it often leads us to draw the incorrect conclusion, I guess anybody is capable of an act of evil so serious because if we can't predict who it can be, I guess it can be anybody. Um it is true that it is very hard to predict accurately who will engage in acts of significant violence like that especially when dealing with young men in whom various psychological disorders may be emerging for the first time that contribute to those actions. But it's absolutely not the case that everybody is capable of actions like that...”


    Prevention, Not Prediction

    This brings us to my previous post on a proposal to predict mass shootings via Apple Watches, Fitbits, Amazon Echo, Google Home and AI, and how this effort would be futile (not to mention horribly intrusive and stigmatizing). But we wouldn't want to anger the NRA, now would we?

    An FBI study on pre-attack behaviors of 63 active shooters in the US found that only 25% had ever been diagnosed with a mental illness (only three of whom were diagnosed with a psychotic disorder).
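The arithmetic behind that futility is worth spelling out. Here is a quick Bayes'-rule sketch, with entirely hypothetical accuracy figures of my own choosing, of why screening a whole population for an extremely rare outcome swamps any detector in false positives:

```python
# Illustrative base-rate arithmetic. All numbers are hypothetical,
# chosen only to show the order of magnitude of the problem.
def positive_predictive_value(sensitivity, specificity, base_rate):
    """P(violent | flagged), via Bayes' rule."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# Grant the detector 99% sensitivity AND 99% specificity -- far
# beyond anything plausible -- and assume the outcome occurs in
# 1 out of 100,000 people.
ppv = positive_predictive_value(0.99, 0.99, 1e-5)
print(f"P(violent | flagged) = {ppv:.5f}")
```

Even under those wildly generous assumptions, roughly a thousand innocent people would be flagged for every true positive.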

    A Department of Defense report on Predicting Violent Behavior says:
    There is no panacea for stopping all targeted violence. Attempting to balance risks, benefits, and costs, the Task Force found that prevention as opposed to prediction should be the Department's goal. Good options exist in the near term for mitigating violence by intervening in the progression of violent ideation to violent behavior.
    It should seem obvious that...

    Dr. Stone: “...it's much more easy to get rid of the weaponry that allows these things to happen than it is to do psychotherapy, particularly on people with psychopathic tendencies who are not very amenable to psychotherapy anyway...”

    Most Americans favor stricter gun control, and many of us think that our lax gun control laws are the greatest insanity, as are the politicians who refuse to do anything about it.


    Further Reading

    Aggression Detectors: The Unproven, Invasive Surveillance Technology Schools Are Using to Monitor Students

    Trump's claims and what experts say about mental illness and mass shootings

    Ivanka Trump to Head New Agency of Precrime


    Predicting Mass Shootings via Intrusive Surveillance and Scapegoating of the Mentally Ill

    No news from the hypothetical HARPA organization (Health Advanced Research Projects Agency) or the Suzanne Wright Foundation since the initial Washington Post report on their joint proposal for project SAFE HOME— “Stopping Aberrant Fatal Events by Helping Overcome Mental Extremes.”


    Footnote

    1 I had initially included more of the gory details, then decided a truncated version was better.




    Can we reduce the persistent, unbearable pain of losing a loved one to 15-20 voxels of brain activity in the nucleus accumbens (O'Connor et al., 2008)? No? Then what if I told you that unrelenting grief — and associated feelings of sheer panic, fear, terminal aloneness, and existential crisis — isn't “suffering”. It's actually rewarding!

    Well I'm here to tell you that it isn't.

    Looking back on a post from 2011, you never realize it's going to be you.1


    The top figure shows that activity in the nucleus accumbens was greater in response to grief-related words vs. neutral words in a group of 11 women with “Complicated” Grief (who lost a mother or sister to breast cancer in the last 5 years), compared to a group of 10 women with garden-variety Non-complicated Grief (O'Connor et al., 2008). Since the paper was published in 2008, and the standards for conducting fMRI studies have changed (larger sample sizes are necessary, no more “voodoo correlations”), I won't go on about that here.
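For readers who haven't seen why small samples plus non-independent voxel selection are a problem, here is a toy simulation (my own construction, not the study's data): with 11 subjects and pure-noise "voxels," the voxels selected for correlating with a behavioral score show spectacular correlations in that same data, which vanish in an independent dataset.

```python
import numpy as np

# Toy demonstration of the "voodoo correlations" / circular-analysis
# problem. The numbers are invented; only n=11 echoes the 2008 study.
rng = np.random.default_rng(0)
n_subjects, n_voxels = 11, 20000

scores = rng.standard_normal(n_subjects)              # behavioral measure
voxels = rng.standard_normal((n_subjects, n_voxels))  # pure-noise "activity"

# Pearson correlation of every voxel with the score; keep the top 20.
z = (scores - scores.mean()) / scores.std()
r = (voxels - voxels.mean(0)) / voxels.std(0)
corrs = r.T @ z / n_subjects
top = np.argsort(np.abs(corrs))[-20:]
print(f"mean |r| of selected voxels: {np.abs(corrs[top]).mean():.2f}")

# The same voxels in independent (also pure-noise) data show nothing.
voxels2 = rng.standard_normal((n_subjects, n_voxels))
r2 = (voxels2 - voxels2.mean(0)) / voxels2.std(0)
corrs2 = r2.T @ z / n_subjects
print(f"mean |r| in independent data:  {np.abs(corrs2[top]).mean():.2f}")
```

The selected voxels correlate at |r| ≈ 0.85 despite there being no signal at all, which is why selecting voxels and computing their correlation on the same small sample is uninterpretable.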


    When Grief Gets Complicated?

    Grief is never simple, it's always complicated. The death of a cherished loved one can create a situation that seems totally intolerable. Almost everyone agrees that navigating such loss doesn't rely on one acceptable road map. Yet here it is. Normal people are supposed to move through a one year mourning period of “sorrow, numbness, and even guilt and anger. Gradually these feelings ease, and it's possible to accept loss and move forward.” If you don't, well then it's Complicated. This is a stigmatizing and limiting view of what it means to grieve the loss of a loved one.2

    But is there really such a thing as Complicated Grief? Simply put, it's “a chronic impairing form of grief brought about by interference with the healing process.” There are “maladaptive thoughts and dysfunctional behaviors” according to The Center for Complicated Grief. However, it's not named as an actual disorder in either of the major psychiatric manuals. In ICD-11, preoccupation with and longing for the deceased, accompanied by significant emotional distress and functional impairment beyond six months, is called Prolonged Grief Disorder. In DSM-5, Complicated Grief has morphed into Persistent Complex Bereavement Disorder, a not-exactly-reified condition subject to further study.


    Dopamine Reward

    Dopamine and its putative reward circuitry are way more complex than a simple one-to-one mapping. Studies in rodents have demonstrated that the nucleus accumbens (NA) can code for negative states, as well as positive ones, as shown by the existence of “hedonic coldspots” that generate aversive reactions, in addition to the usual hotspots (Berridge & Kringelbach, 2015). These studies involved microinjections of opioids into tiny regions of the NA.




    If a chronically anguished state is portrayed as rewarding, it's time to recalibrate these terms. As I said in 2011:

    If tremendous psychological suffering and loss are associated with activity in brain regions such as the ventral tegmental area and nucleus accumbens, isn't it time to abandon the simplistic notion of dopamine as the feel-good neurotransmitter? To quote the authors of Mesolimbic Dopamine in Desire and Dread (Faure et al., 2008):
    It is important to understand how mesocorticolimbic mechanisms generate positive versus negative motivations. Dopamine (DA) in the nucleus accumbens is well known as a mechanism of appetitive motivation for reward. However, aversive motivations such as pain, stress, and fear also may involve dopamine in nucleus accumbens (at least tonic dopamine signals).

    Grief-Related Words Are Rewarding

    So what happens when you take a disputed diagnostic label and combine it with reverse inference in a neuroimaging study? (Reverse inference: operating under the assumption that activity in a particular brain region must mean that a specific cognitive process or psychological state was present.)
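A worked toy example shows why reverse inference is weak. All of the probabilities below are invented for illustration; the point is that because the nucleus accumbens also activates in aversive states, observing NA activity only modestly favors "reward":

```python
# Bayes'-rule illustration of reverse inference. The nucleus accumbens
# (NA) responds in aversive states as well as rewarding ones, so NA
# activity alone is weak evidence. All probabilities are made up.
p_active_given_reward = 0.8  # P(NA active | reward process engaged)
p_active_given_other = 0.4   # P(NA active | some other process)
p_reward = 0.3               # prior probability reward is engaged

p_active = (p_active_given_reward * p_reward
            + p_active_given_other * (1 - p_reward))
p_reward_given_active = p_active_given_reward * p_reward / p_active
print(f"P(reward | NA active) = {p_reward_given_active:.2f}")
```

Here the posterior is below 50%: seeing the region light up leaves "reward" less likely than not, unless the region is far more selective than the animal literature suggests.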

    The NA activity was observed while the participants viewed grief words vs. neutral words that were superimposed over a photograph: a photo of the participant's deceased mother or a photo of someone else's mother. And it didn't matter whose mother was pictured, the difference was due to the words, not the images.3



    Sample stimulus provides an [unintentional?] example of the emotional Stroop effect.


    That's pretty hard to explain by saying that “the pangs of grief would continue to occur with NA activity, with reward activity in response to the cues motivating reunion with the deceased” if the effect is not specific to an image of the deceased.


    Yearning and the Subgenual Cingulate

    Why beat a dead horse, you ask? Because a recent study (McConnell et al., 2018) did not heed the advice above (sample size should be increased, beware reverse inference). The participants were 9 women with Complicated Grief (CG), 7 women with Non-complicated Grief (NG), and 9 Non-Bereaved (NB). The NA finding did not replicate, nor were there any differences between CG and NG and NB (over the entire brain). A post-hoc analysis then extracted a single question from a 19-item inventory and found that yearning for the dead spouse in all 16 Bereaved participants was correlated with activity in the subgenual cingulate (“depression-land” or perhaps “rumination-land”), for the comparison of an anticipation period vs. presentation of spouse photo. There were 5 spouse photos and 5 photos of strangers (note that it was not possible to predict which would be presented). The authors recognized the limitations of the study, yet pathologized yearning in Complicated and Non-complicated Grief alike.

    I realize that the general motivation behind these experiments might be admirable, but you really can't come to any conclusions about how grief — a highly complex emotional response unique to each individual — might be represented in the brain.


    Footnotes

    1See There Is a Giant Hole Where My Heart Used To Be from October 2, 2018.

    The posts on illness and death that I never wrote:
    (yes, I was really serious about these)

    2I was skeptical when someone sent me this book, It's OK That You're Not OK: Meeting Grief and Loss in a Culture That Doesn't Understand (by Megan Devine). I thought it was going to be overly 'self-helpy'. But it's actually been immensely helpful.

    3 The idea of creating a self-relevant stimulus set was utterly horrifying to me.


    References

    Berridge KC, Kringelbach ML. (2015). Pleasure systems in the brain. Neuron 86(3):646-64.

    Faure A, Reynolds SM, Richard JM, Berridge KC. (2008). Mesolimbic dopamine in desire and dread: enabling motivation to be generated by localized glutamate disruptions in nucleus accumbens. J Neurosci. 28:7184-92.

    McConnell MH, Killgore WD, O'Connor MF. (2018). Yearning predicts subgenual anterior cingulate activity in bereaved individuals. Heliyon 4(10):e00852.

    O'Connor MF, Wellisch DK, Stanton AL, Eisenberger NI, Irwin MR, Lieberman MD. (2008). Craving love? Enduring grief activates brain's reward center. Neuroimage 42:969-72.






    November 2nd is the Day of the Dead, a Mexican holiday to honor the memory of lost loved ones. If you subscribe to certain paranormal belief systems, the ability to communicate with the dearly departed is possible via séance, which is conducted by a Medium who channels the spirit of the dead.

    Since I do not subscribe to a paranormal belief system, I do not think it's possible to communicate with my dead wife. Nor am I especially knowledgeable about the differences between mediumship vs. channeling:
    Mediumship is mostly about receiving and interpreting messages from other worlds.

    Mediums often deliver messages from loved ones and spirit guides during readings.
    . . .

    ...channeling is often about receiving messages from other types of entities, such as nature spirits, spirit guides, or even angels.

    In short, Channels can communicate with a broader class of non-corporeal entities, for instance Mahatma Gandhi or Cleopatra (not only the dead relatives of paying clients).

    What seems to be uncontroversial, however, is that Channels who enter into a trance state to convey the wisdom of Gandhi may experience an altered or “expanded” state of consciousness (regardless of the veracity of their communications). This permuted state of arousal should be manifest in the electroencephalogram (EEG) as an alteration in spectral power across the range of frequency bands (e.g., theta, alpha, beta etc.) that have been associated with different states of consciousness.

    A group of researchers at the Institute of Noetic Sciences adopted this view in a study of persons who claimed the ability to channel (Wahbeh et al., 2019). The participants (n=13; 11 ♀, 2 ♂) were, on average, 57-year-old white women of upper middle class socioeconomic status, representative of the study site in Marin County, California. The authors screened 155 individuals to arrive at their final sample size.1 Among the stringent inclusion criteria was the designation of being a Channel who directly and actively conveys the communications of a discarnate entity or spirit (rather than being a passive relay).2 The participants were free of major psychiatric disorders, including psychosis and dissociation (according to self-report). Oh, and they had the ability to remain still during the channeling episodes, which was advantageous for the physiological measurements.

    The participants alternated between channeling and no-channeling in 5 minute blocks while EEG and peripheral physiological signals (skin conductance, heart rate, respiration, temperature) were recorded. At the end of each counterbalanced session (run on separate days), voice recordings were obtained while the participants read stories.




    Contrary to the authors' predictions, they found no significant differences between the channeling and no-channeling conditions for any of the physiological measures, nor for the EEG analyzed in standard frequency bands (theta 3–7 Hz; alpha 8–12 Hz; beta 13–20 Hz and low gamma 21–40 Hz) across 64 electrodes. I'll note here that the data acquisition and analysis methods were top-notch. The senior author (Arnaud Delorme) developed the widely used EEGLAB toolbox for data analysis, which was described in one of the most highly cited articles in neuroscience.3
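For the curious, band power of this sort can be computed from a single channel in a few lines. This is only an illustrative numpy sketch on synthetic data (not the authors' EEGLAB pipeline, and the sampling rate is my assumption):

```python
import numpy as np

# Synthetic single-channel "EEG": a 10 Hz alpha rhythm buried in noise.
fs = 256                       # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)   # 10 seconds of data
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)

# One-sided periodogram (power spectral density estimate).
spectrum = np.abs(np.fft.rfft(eeg)) ** 2 / (fs * eeg.size)
spectrum[1:-1] *= 2            # fold negative frequencies into positive
freqs = np.fft.rfftfreq(eeg.size, 1 / fs)
df = freqs[1] - freqs[0]

# The bands used in the study (Wahbeh et al., 2019).
bands = {"theta": (3, 7), "alpha": (8, 12),
         "beta": (13, 20), "low gamma": (21, 40)}
power = {name: spectrum[(freqs >= lo) & (freqs <= hi)].sum() * df
         for name, (lo, hi) in bands.items()}
for name, p in power.items():
    print(f"{name:10s} {p:.4f}")
```

The embedded 10 Hz rhythm shows up as dominant alpha-band power; comparing such band powers between conditions (channeling vs. no-channeling) is the essence of the analysis that came up empty.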

    Modest differences in voice parameters were observed: the channeled readings were softer in volume and slower in pace. The authors acknowledged that the participants could have impersonated an alternate voice during the channeling segments, whether consciously or unconsciously.

    So does this mean that channeling is a sham? The authors don't think so. Instead, they recommended further investigation: “future studies should include other measures such as EEG connectivity analyses, fMRI and biomarkers.”


    Footnotes

    1This is a rather esoteric population, so I won't fault the researchers for having a small sample size.

    2“The channeler goes into a trance state at will (the depth of the trance may vary) and the disincarnate entity/spirit uses the channeler’s body with permission to communicate directly through the channeler's voice, body movements, etc. (rather than the channeler receiving information mentally or otherwise and then relaying what is being received).”

    3 I was rather critical of a previous study by this research group, which was ultimately retracted from Frontiers in Neuroscience. See Scientific Study Shows Mediums Are Wrong 46.2% of the Time.


    Reference

    Wahbeh H, Cannard C, Okonsky J, Delorme A. (2019). A physiological examination of perceived incorporation during trance. F1000Research 8:67.



    Bev Tull, the fake medium on Bad Girls.




    Smell Dating, an interactive exhibit by Tega Brain and Sam Lavigne


    A conceptual art installation, an extended olfactory performance piece, an elaborate participatory project, or an actual smell-based dating service? Smell Dating is all of these and more!




    How it works
    1. We send you a t-shirt
    2. You wear the shirt for three days and three nights without deodorant.
    3. You return the shirt to us in a prepaid envelope.
    4. We send you swatches of t-shirts worn by a selection of other individuals.
    5. You smell the samples and tell us who you like.
    6. If someone whose smell you like likes the smell of you too, we'll facilitate an exchange of contact information.
    7. The rest is up to you.
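The mutual-match logic in step 6 is simple enough to sketch. This is purely illustrative (my own toy code, not Smell Dating's actual backend), with hypothetical member names:

```python
# Contact info is exchanged only when two members each liked the
# other's smell sample -- i.e., the "like" relation is mutual.
likes = {
    "alice": {"bob", "carol"},
    "bob": {"alice"},
    "carol": {"dave"},
    "dave": {"carol"},
}

def mutual_matches(likes):
    """Return unordered pairs of members whose liking is mutual."""
    return {frozenset((a, b))
            for a, liked in likes.items()
            for b in liked
            if a in likes.get(b, set())}

# alice<->bob and carol<->dave are mutual; alice->carol is not.
print(mutual_matches(likes))
```

Using unordered `frozenset` pairs means each match appears once regardless of who liked whom first.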

    My initial view of the project was based on a recent showing of the interactive exhibit, where the participants could sniff small swatches of cloth, rate the unknown wearer's attractiveness (UNATTRACTIVE — NEUTRAL — ATTRACTIVE), learn how others voted, and see basic background information about the wearer (e.g., 30 year old female bisexual pescatarian). The first two I sniffed were odorless, but then there was #8...

    The art installation is part of Useless Press, “a publishing collective that creates eclectic Internet things.” I assumed it was an elaborate joke, not an actual matchmaking service, but the artists must have had a grant to implement the idea in real life.





    In Shanghai, people signed up over a two week period and paid ¥100 to become a “member.”
    Smell Dating @ Shanghai [culminated] in the Sweat Lab, a participatory installation event... Visitors are invited to volunteer in the Smell Dating Sweat Lab and intimately experience the smells of strangers. During this event we will prepare the smell samples from our members' t-shirts. Shirts will be meticulously cut up and batched to be sent back to Smell Dating members.

    Smell Dating premiered in New York in March 2016 and received extensive press coverage, most of which took it seriously. Young female writers at The Guardian, Business Insider, Time, Racked, and a gay man at HuffPo tried out the service. The Buzzfeed reporter realized, “Yes, this is mostly a stunt-y gag” but also touched on the science behind smell and attraction. The health reporter at Time wrote about the underlying science in detail (e.g., major histocompatibility complex) and interviewed smell scientists, including Dr. Noam Sobel (founder of SmellSpace.com), Dr. Richard Doty (author of The Great Pheromone Myth), and Dr. Gary Beauchamp (Emeritus Director of the Monell Chemical Senses Center).

    The creators of Smell Dating (Tega Brain and Sam Lavigne) consulted with olfactory scientists and provided an extensive reading list on the web site.

    Most everyone agrees that odors evoke emotion, and the sense of smell has a unique relationship to autobiographical memory. But, as Richard Doty asks, do human pheromones exist?
    While it is apparent that, like music and lighting, odors and fragrances can alter mood states and physiological arousal, is there evidence that unique agents exist, namely pheromones, which specifically alter such states?

    It turns out that scientific opinion on this matter is decidedly mixed, even polarizing, as I'll discuss in the next post.


    Reference

    Doty RL. (2014). Human Pheromones: Do They Exist? In: Mucignat-Caretta C, editor. Neurobiology of Chemical Communication. Boca Raton (FL): CRC Press/Taylor & Francis; Chapter 19.




    Smell Dating from Tega Brain.

