https://www.biosciencetechnology.com/news/2017/10/brain-imaging-science-identifies-individuals-suicidal-thoughts?
Mon, 10/30/2017 - 12:44pm
by Carnegie Mellon University
Researchers led by Carnegie Mellon University's Marcel Just and the University of Pittsburgh's David Brent have developed an innovative and promising approach to identifying suicidal individuals by analyzing the alterations in how their brains represent certain concepts, such as death, cruelty and trouble.
Suicidal risk is notoriously difficult to assess and predict, and suicide is the second-leading cause of death among young adults in the United States. Published in Nature Human Behaviour, the study offers a new approach to assessing psychiatric disorders.
"Our latest work is unique insofar as it identifies concept alterations that are associated with suicidal ideation and behavior, using machine-learning algorithms to assess the neural representation of specific concepts related to suicide. This gives us a window into the brain and mind, shedding light on how suicidal individuals think about suicide and emotion related concepts. What is central to this new study is that we can tell whether someone is considering suicide by the way that they are thinking about the death-related topics," said Just, the D.O. Hebb University Professor of Psychology in CMU's Dietrich College of Humanities and Social Sciences.
For the study, Just and Brent, who holds an endowed chair in suicide studies and is a professor of psychiatry, pediatrics, epidemiology and clinical and translational science at Pitt, presented a list of 10 death-related words, 10 words related to positive concepts (e.g., "carefree") and 10 words related to negative concepts (e.g., "trouble") to two groups: 17 people with known suicidal tendencies and 17 neurotypical individuals.
They applied the machine-learning algorithm to six word-concepts that best discriminated between the two groups as the participants thought about each one while in the brain scanner. These were death, cruelty, trouble, carefree, good and praise. Based on the brain representations of these six concepts, their program was able to identify with 91 percent accuracy whether a participant was from the control or suicidal group.
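The article does not spell out the analysis pipeline, so the sketch below is only an illustration of the general idea, in Python with scikit-learn: each participant is summarized by the activation patterns evoked by the six concepts, and a simple classifier is scored with leave-one-out cross-validation. The classifier choice (GaussianNB), the number of voxels and the random placeholder data are assumptions for illustration, not the study's actual features or model.

    # Illustrative sketch only: random placeholder data stand in for the real
    # per-concept fMRI activation patterns; GaussianNB and leave-one-out
    # cross-validation are assumptions about the general approach.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(0)

    n_per_group = 17   # 17 suicidal ideators and 17 controls (from the article)
    n_concepts = 6     # death, cruelty, trouble, carefree, good, praise
    n_voxels = 50      # hypothetical length of each concept's activation pattern

    # One row per participant: the six concept patterns concatenated into a vector.
    X = rng.normal(size=(2 * n_per_group, n_concepts * n_voxels))
    y = np.array([1] * n_per_group + [0] * n_per_group)  # 1 = ideator, 0 = control

    scores = cross_val_score(GaussianNB(), X, y, cv=LeaveOneOut())
    print(f"leave-one-out accuracy: {scores.mean():.2f}")

With real activation patterns in place of the random placeholders, the mean of the leave-one-out scores would correspond to the kind of classification accuracy the article reports.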
Then, focusing on the suicidal ideators, they used a similar approach to see whether the algorithm could distinguish participants who had made a previous suicide attempt from those who had only thought about it. The program identified the nine who had attempted to take their lives with 94 percent accuracy.
"Further testing of this approach in a larger sample will determine its generality and its ability to predict future suicidal behavior, and could give clinicians in the future a way to identify, monitor and perhaps intervene with the altered and often distorted thinking that so often characterizes seriously suicidal individuals," said Brent.
To further understand what caused the suicidal and non-suicidal participants to have different brain activation patterns for specific thoughts, Just and Brent used an archive of neural signatures for emotions (particularly sadness, shame, anger and pride) to measure the amount of each emotion that was evoked in a participant's brain by each of the six discriminating concepts. Based on the differences in the emotion signatures of the concepts, the machine-learning program predicted which group a participant belonged to with 85 percent accuracy.
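As an interpretation of this description rather than the published method, one way to turn the emotion-signature idea into features is to correlate each concept's activation pattern with archived signature patterns for sadness, shame, anger and pride, giving a small concept-by-emotion score matrix per participant that a classifier can then use. The signature contents, pattern lengths and the use of correlation as the similarity measure below are all placeholders and assumptions.

    # Sketch of the emotion-signature scoring step; the archive contents and
    # pattern lengths are placeholders, and correlation is an assumed similarity
    # measure, not necessarily the one used in the study.
    import numpy as np

    emotions = ["sadness", "shame", "anger", "pride"]
    concepts = ["death", "cruelty", "trouble", "carefree", "good", "praise"]
    n_voxels = 50  # hypothetical pattern length

    rng = np.random.default_rng(1)
    emotion_signatures = {e: rng.normal(size=n_voxels) for e in emotions}

    def emotion_features(concept_patterns):
        """Correlate each concept's pattern with each emotion signature,
        returning a (concepts x emotions) matrix of scores."""
        feats = np.zeros((len(concepts), len(emotions)))
        for i, c in enumerate(concepts):
            for j, e in enumerate(emotions):
                feats[i, j] = np.corrcoef(concept_patterns[c],
                                          emotion_signatures[e])[0, 1]
        return feats

    # One participant's (placeholder) concept patterns and their emotion scores.
    participant_patterns = {c: rng.normal(size=n_voxels) for c in concepts}
    print(emotion_features(participant_patterns).round(2))

Feeding these per-concept emotion scores into the same kind of classifier as above is what makes the approach "explainable": the features themselves name the emotions that separate the groups.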
"The benefit of this latter approach, sometimes called explainable artificial intelligence, is more revealing of what discriminates the two groups, namely the types of emotions that the discriminating words evoke," Just said. "People with suicidal thoughts experience different emotions when they think about some of the test concepts. For example, the concept of 'death' evoked more shame and more sadness in the group that thought about suicide. This extra bit of understanding may suggest an avenue to treatment that attempts to change the emotional response to certain concepts."
Just and Brent are hopeful that the findings from this basic cognitive neuroscience research can be used to save lives.
"The most immediate need is to apply these findings to a much larger sample and then use it to predict future suicide attempts," said Brent.
Just and his CMU colleague Tom Mitchell first pioneered this application of machine learning to brain imaging that identifies concepts from their brain activation signatures. Since then, the research has been extended to identify emotions and multi-concept thoughts from their neural signatures and also to uncover how complex scientific concepts are coded as they are being learned.