Published in News

AI can tell if you are suicidal

13 June 2017


I think you ought to know that you are feeling very depressed

AI boffins have come up with a machine-learning algorithm that can accurately predict the likelihood that a person will attempt suicide.

Colin Walsh, a data scientist at Vanderbilt University Medical Center, hopes his software can give friends and family a chance to intervene.

In trials, results have been 80-90 percent accurate when predicting whether someone will attempt suicide within the next two years, and 92 percent accurate in predicting whether someone will attempt suicide within the next week.

The AI crunches numbers widely available from all hospital admissions, including age, gender, zip codes, medications, and prior diagnoses.

Walsh and his team gathered data on 5,167 patients from Vanderbilt University Medical Center who had been admitted with signs of self-harm or suicidal ideation. They read each of these cases to identify the 3,250 instances of suicide attempts.

This set of more than 5,000 cases was used to train the machine to distinguish patients at risk of attempting suicide from those who self-harmed but showed no evidence of suicidal intent.

The researchers also built algorithms to predict attempted suicide among a group of 12,695 randomly selected patients with no documented history of suicide attempts. The system proved even more accurate at making suicide risk predictions within this larger general population of patients admitted to the hospital.
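The article does not say which model Walsh's team used or exactly how the admission records were encoded, so the following is purely a hypothetical sketch: synthetic stand-ins for admission-style features (age, gender, prior diagnoses, medication count), a made-up label rule, and an off-the-shelf random-forest classifier standing in for whatever the researchers actually trained.

```python
# Hypothetical sketch only: synthetic data and an arbitrary classifier choice,
# not the actual Vanderbilt model or feature set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000

# Synthetic stand-ins for fields available at hospital admission
X = np.column_stack([
    rng.integers(18, 90, n),   # age
    rng.integers(0, 2, n),     # gender (encoded 0/1)
    rng.integers(0, 10, n),    # number of prior diagnoses
    rng.integers(0, 15, n),    # number of current medications
])

# Invented label rule, loosely tied to the features so there is signal to learn
logits = 0.3 * X[:, 2] + 0.2 * X[:, 3] - 3.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Rank-based accuracy on held-out patients; clf.predict_proba gives a
# per-patient risk score that could flag cases for human review
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"held-out AUC: {auc:.2f}")
```

In a deployment like the one described, the per-patient probability would be the useful output: a threshold on it decides who gets flagged for clinicians, which is exactly the kind of machine-driven prompt Walsh says he finds unnerving.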

Walsh is now working to establish whether his algorithm is effective with a completely different data set from another hospital. And, once confident that the model is sound, Walsh hopes to work with a larger team to establish a suitable method of intervening.

One surprising finding is that taking melatonin seemed to be a significant factor in calculating the risk. It is not that melatonin causes suicidal thinking, but sleep disorders can be a factor.

Researchers still have to work out how much AI-based decisions should determine patient care. As a practicing primary care doctor, Walsh says it is unnerving to recognise that he could effectively be following orders from a machine.

“Is there a problem with the fact that I might get a prediction of elevated risk when that’s not part of my clinical picture? Are you changing the way I have to deliver care because of something a computer’s telling me to do?”
