Science AMA Series: I’m Dr. Jessica Ribeiro, a professor at Florida State University, and I’m here with NBC News MACH. I'll be answering questions about my research on using AI to predict suicide for about an hour beginning at 12:00 p.m. ET.

Abstract

The mission of my research program is to accurately detect risk, especially for suicidal behavior, for all people at all points in time. To this end, there are four major elements of my research: (1) discovery and assessment of novel constructs; (2) short-term prediction; (3) large-scale prediction; and (4) the conceptualization of suicide as a complex classification problem. My approach represents a radical shift from the status quo, with the aim of substantially advancing risk identification. My goal is to make major progress on this front over the next 10 years.

For more information, check out this NBC News MACH article about this kind of research: https://www.nbcnews.com/mach/innovation/ai-coming-help-doctors-predict-suicide-n763166, or my lab’s website: www.risklabfsu.com.

Hi everyone! Thank you so much for taking the time to ask questions. I really enjoyed answering your questions, and very much appreciated the interest in this massive public health problem. It's time for me to sign out, but I'll check back later and answer a few more. Have a nice afternoon!

Thank you for doing this AMA! I'm sorry if this is a dumb question, but I was wondering how the AI would be able to tell the difference between someone who is seriously contemplating suicide and someone who is just down and feels that what he/she does is "worthless" and "meaningless"? What are the traits that show the distinction between these two situations that people may be in? Additionally, how do you think this research will aid in counselling and therapy and even conditioning of the mind prior to various conditions, instead of just "preventive" measures? Thank you so much :)

xkamekame

These are great questions! Our algorithms in this study were designed to detect nonfatal suicide attempts. To do this, my colleague, Joe Franklin, and I (who are experts in suicide research) read through thousands of electronic health records that had been labeled with diagnostic codes for self-injury and determined which patients did and did not attempt suicide. The algorithms were then trained to identify those kinds of cases. This helps us ensure that what we’re predicting is actually suicide attempts and not something similar or related, such as feelings of worthlessness. We didn’t directly test differences between individuals who attempted versus individuals who felt some of the states you’re describing, so I can’t speak to what differentiates those cases.

In terms of your final question, the goal of this work is to develop tools that can be integrated into clinical systems to improve a clinician's ability to detect risk and take appropriate action. Ideally, that means pairing accurate risk detection with highly effective, low-cost, and scalable interventions.


As someone who had suicidal thoughts in the past, how are you going to find early signs? Will the program be able to tell if the person is trying to hide signs from their friends/family?

tdub2217

Thanks for your question! The timing of risk detection is something my colleagues and I are very interested in. In this study, we designed our approach so that we could examine how well our tool could predict suicide attempts from 2 years to 1 week prior to the event. What we know based on this study is that it’s possible to detect risk of a suicide attempt a few years before the person engages in the behavior. This may help us bridge the gap to effective prevention strategies among individuals at very high risk of an eventual suicide attempt.

Regarding your second question, our machine learning algorithms relied on a lot of information available in electronic health records, so they do not exclusively rely on self-report. This can help get around the difficulty in assessing or detecting risk among individuals who may be motivated to conceal it and/or may be unaware of their true level of risk.


Did working with a machine learning algorithm show new (surprising) patterns or factors to look at in suicidal people's behavior (compared to what psychologists would look at)? What were the most influential factors?

-witchye-

Thanks for your question! I actually get this kind of question a lot, and it makes a lot of sense to ask. But the power of machine learning is its ability to model complexity – that is, the complex interactions among a large number of factors. With that in mind, asking which particular factors mattered is a bit like asking which brush strokes are most important in a painting, or which notes are most important in a song. Before machine learning, we generally approached prediction of suicidal behaviors in exactly this way – what single factor, or small number of factors, predicts? What our results suggest is that machine learning is so effective at prediction because it can generate an algorithm that combines factors in complex ways. Once you start considering factors in isolation, prediction falls back down to near-chance levels (which is where the field had been prior to machine learning).
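
[Editor's note] To make that point concrete, here is a toy sketch in Python – purely illustrative, not the study's data or pipeline – of how two factors can each predict at chance in isolation yet be highly predictive in combination:

```python
# Toy illustration (not the study's data or pipeline): two factors that
# predict at chance in isolation but are highly predictive together.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(5000, 2)).astype(float)
y = (X[:, 0] != X[:, 1]).astype(int)   # outcome depends on the interaction
X += rng.normal(0, 0.1, size=X.shape)  # a little measurement noise

for j in range(2):  # each factor alone: ~50% accuracy (chance)
    acc = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                          X[:, [j]], y, cv=5).mean()
    print(f"factor {j} alone: {acc:.2f}")

acc = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                      X, y, cv=5).mean()
print(f"both factors together: {acc:.2f}")  # near-perfect
```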


Hi Dr. Ribeiro,

I have a couple of questions:

  • What is the dataset that you're using for your problem? Is it publicly available?

  • What algorithms do you employ for your problem? Are they an extension of standard machine learning techniques?

  • How do you keep track of prediction error over time? In other words, if a person is not (as indicated by the present data) inclined to be suicidal, what triggers the predictor to indicate that the person would be suicidal as new data arrives?

  • Is this a continuous problem, where streams of data are analyzed continuously to update prediction of risk?

sudarshan85

The dataset was drawn from a large curated data repository at Vanderbilt University Medical Center. Our primary ML approach was random forest; however, we also found similar results with LASSO and SVM. We haven’t yet applied the algorithm in vivo, so I can’t speak to the exact questions you’re asking in points 3 and 4. However, our precision and recall estimates were strong (>.90 across time for both metrics).
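
[Editor's note] For a feel for what that model comparison looks like in practice, here is a minimal sketch with synthetic stand-in data – not the Vanderbilt records or the study's code. An L1-penalized (LASSO-style) logistic regression stands in for the LASSO classifier:

```python
# Minimal sketch of the model comparison described above. X and y are
# synthetic stand-ins for EHR-derived features and chart-review labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_validate

# Imbalanced classes, to mimic the rarity of suicide attempts in a cohort.
X, y = make_classification(n_samples=5000, n_features=40, weights=[0.95],
                           random_state=0)

models = {
    "random forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "LASSO logistic": LogisticRegression(penalty="l1", solver="liblinear"),
    "linear SVM": LinearSVC(dual=False),
}

for name, model in models.items():
    scores = cross_validate(model, X, y, cv=10, scoring=("precision", "recall"))
    print(f"{name}: precision={scores['test_precision'].mean():.2f}, "
          f"recall={scores['test_recall'].mean():.2f}")
```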


After someone has been predicted to commit suicide by your AI, what sort of outcomes are you hoping to have going forward? Will this be something documented for that person's medical history?

novakedy

Our ultimate goal is to pair accurate and scalable risk detection tools with effective, low-cost (or free), and scalable interventions. Right now, our work really represents the first steps toward developing accurate tools that can potentially be integrated as clinical decision support within electronic health records across health care settings. Improving our ability to detect risk is critical to prevention (at least, that’s how we view it). Before the integration of this kind of approach, our ability to predict suicide was at near-chance levels. So, we see this as an important first step. But it’s by no means the final step – there’s a long way to go toward seeing large-scale reductions in suicidal behaviors. We’re hopeful, though, that this kind of work can start making inroads on this massive public health problem.


Hi! Thanks for all your hard work and research into this.

My question is regarding the military and the high rates of suicide in veterans. Is there any way your research could be used to detect potential suicidal tendencies in personnel as they exit the military and begin to transition into civilian life?

Honeycombz99

Great extension! Military suicide is something that I have also researched for a considerable portion of my career. My team, along with a few others, is applying machine learning to predict suicide (and other unwanted outcomes) among military personnel. Research does suggest that the transition you mention can be a really critical time, so focusing on detecting risk among servicemembers in those periods would be an important next step. Thanks for your thoughts!


My grandmother committed suicide in her early 40s (I never knew her), as did her father before her. My father, who never exhibited any signs of depression or suicidal behavior, hanged himself at 45 (with absolutely no warning) when I was 22. I am now 38, and have suffered from depression since an early age. It's obvious the indicators are there. I am aware of it, take medication to control it, talk about it, but am scared to death that one day I'll "snap" and nothing can stop the "inevitable". Is there a way to predict far enough in advance, when the subject is already fully aware of their condition, the possible outcome, and has done everything to stop it? How "sensitive" are these predictions, and is there anything in the works to help stop the cycle? I hope this makes sense. Thank you for what you are doing.

cwittyprice

Thanks for your questions, and I'm sorry for the loss of your father and family members. Using the methods in the paper, we could predict with fairly high accuracy who would attempt suicide as far as two years before the event. Many of these individuals were aware of their suicidal thoughts or had histories of self-injurious behaviors. Beyond overall accuracy, our algorithms also demonstrated strong positive predictive value, meaning we were highly accurate at predicting the positive events (i.e., when someone would attempt, not just when someone would not).

In terms of intervention, there are a few things out there that I think hold a lot of promise as potentially scalable, low-cost interventions. Among them is some of the work led by my co-author and colleague, Dr. Joe Franklin -- an app called Tec-Tec, which is available for free on the App Store. Here's a link to the website if you're interested: http://psytaplab.com/tec-sui/


I lost my mother to suicide. At the end of the day what are you hoping to accomplish from this? The person with suicidal ideations already knows it. Is this supposed to be used on just anyone we know? Like I can do a risk assessment on someone I know?

I suppose if somehow it alerts friends and family, the majority of them aren't equipped to deal with someone they know is suicidal. In my family, I felt like I was the only person who cared that she was in such a terrible place mentally.

In my case I knew and the family knew she had these issues but at the end of the day with the depression even she couldn't help herself. If a person knows far in advance, what are they supposed to do? Know they aren't allowed to get depressed? Get therapy? What if they don't have insurance?

tocamix90

These are all great points, and I'm very sorry for your loss. The work here really focused on answering the question of "how can we get better at detecting risk, even among individuals who are already at high risk?" The latter part of the question is crucial -- it's actually even more difficult to detect who will attempt suicide when they've been chronically suicidal for a while. The other critical part of the equation is answering "and then what?" That's where work on interventions comes in. Sadly, our field has a lot of room to improve on that end too. There are a few researchers who have started to focus on developing interventions that (a) work, (b) are low cost or free, (c) are scalable, and (d) are private. Pairing large-scale risk detection with large-scale, effective treatments is critical, in my view, for actually effecting change and seeing large-scale reductions in suicide rates.


As someone who withdraws more into chat programs with my friends when my depression gets really bad and into scary territory, are you looking through social media activity? I know you'd find more lurker and commenting behavior for me, but that's been a trend for me lately anyway.

How is AI going to learn to read between the lines for the cry for help behavior? Occasionally there are red flag words or an increase of stilted phrasing but I recognize my own isolation pattern as I cry for attention without feeling like I deserve it when I'm really feeling alone.

EryduMaenhir

Thanks for your questions! I think they bring up really good issues. The algorithms we developed for this paper use electronic health record data; however, information that can be collected from other large data sources (e.g., social media) can also provide rich context. We’re especially interested in the prospect of pairing social media information with things like electronic health record data.

On your second issue, our algorithm was not designed to rely solely on information directly taken from a person’s reporting; however, such information (e.g., saying that you’re thinking about or intending to engage in suicidal behavior) may very well be informative and can be integrated more directly into algorithms, if we have access to that kind of data in the future.

There are special forms of machine learning techniques designed to pick up patterns in language data – these fall under the umbrella of Natural Language Processing. Our team is working on a few projects that integrate more of those techniques.
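
[Editor's note] As a rough sense of what such an NLP pipeline can look like – a minimal sketch with invented example notes, not the team's actual approach or data:

```python
# Toy NLP sketch: TF-IDF features from free text feeding a linear
# classifier. The notes and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "patient reports feeling hopeless and withdrawn from friends",
    "routine follow-up, no acute concerns noted",
    "expressed passive ideation during intake interview",
    "annual physical, labs within normal limits",
]
labels = [1, 0, 1, 0]  # hypothetical chart-review labels

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(notes, labels)
print(clf.predict(["patient describes feeling hopeless lately"]))
```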


Thank you for your work and research Dr. Ribeiro.

How are the questions administered? Is it a written (typed) or oral test? Does one passively or actively interact with the AI?

Will you please have some Pitaria for me? I'm hundreds of miles away and it's what I miss from Tallahassee the most.

NegativityVent

Thanks for your questions! No questions were administered – our algorithms relied just on data available in electronic health records. This was so that we could develop a tool that would be readily scalable across healthcare settings.

Pitaria is great! I actually went there last week, and I’ll happily go back this week. :)


When I think of FSU and suicide research I think of Dr. Joiner; do you collaborate with him at all? He is very open about why he pursued suicide research, due to the early death of his father. Do you have a personal connection to the topic?

What is your favorite restaurant in Tallahassee?

Sqk7700

Dr. Joiner is really a pioneer in suicide theory. I was fortunate to have him as my graduate school major professor, and now I'm also his colleague at FSU. :)

For me, my interest in suicide was born out of recognizing in college the seriousness, scope, and urgency of the problem. The more I learned about it, the more curious I became and the more pressure I felt to change its trajectory.

On restaurants: it's hard to pick! There are so many great ones these days. I'm partial to Kool Beanz, though :)


As someone who has lost 8 family members and friends over the past 10 years to suicide, is the domino effect included in your AI's algorithm at all?

Edit: and thank you. Because this all needs to stop.

Edit2: how much of a role does alcohol play?

themadwife

I'm sorry for your loss. We do have access to genetic data, and hope to integrate that into future iterations of our algorithms. So, although they may be captured indirectly in our current algorithms, familial history and genetics weren't directly modeled. Alcohol and substance use disorders were integrated into our algorithms; however, the power of this approach is in considering the algorithm as a whole -- no single variable was particularly useful for prediction in isolation. Thank you for your question!


Many people have tried this before, and I have actually done a little work in this area. My conclusion was that, as the incidence of suicide is very low, any screening tool will have a very poor positive predictive value. This is the reality for any screening tool, regardless of the approach you take.

The problem with screening tools is that their usefulness depends on BOTH their accuracy (ie sensitivity and specificity) AND the likelihood of the condition existing in the population in the first place. It's not simply the 'accuracy' that matters.

Here are some numbers:

Suicide incidence is on the order of .01% per annum, so even a 98% accurate tool (impossible!) would only have a positive predictive value of 0.5%.

Let's go further: even if you only apply a model to people who have attempted suicide before, where the risk of suicide is perhaps 30-40 times higher, even then a 98% accurate model would only have a lifetime positive predictive value of 15% - thus the utility of the tool at any given moment for a given patient would be terrible.

Furthermore, any screening tool involves subjective and vague criteria and usually depends on honest answers from the patient.

Determining suicide post-mortem is difficult and coroners vary considerably in the verdicts they give to individuals who probably died by suicide so validation of any prospective model is very hard.

Thus, any such effort is likely doomed to failure. And the fact that your 'goal' is to make progress over the 'next 10 years' is a flag to me that you know this and this is all rather pointless, despite using a buzzword like 'AI'.

So, my question is do you realise that your goal is probably hopeless?

ABabyAteMyDingo

The low base rates of suicide do make it much more difficult to predict; however, not impossible. So, no, I do not think that our goal is hopeless -- and I think the results presented in this study, and in several other studies that have recently come out, speak to that. In this study, we recognized the problem re: base rates, and so also reported precision (PPV) and recall. Our PPVs and recall were above .90 across all time windows. I think our field's approach in the past was flawed, and our ability to predict suffered for it; however, just because we did not have the tools or approaches to predict accurately in the past does not mean there is no hope for the future. From where I stand, there's a lot of hope -- there's a lot of work that can and should be done to advance our ability to predict and prevent suicidal behavior, and I'm confident in our ability to do that.
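
[Editor's note] For readers checking the arithmetic in this exchange, the PPV figures follow directly from Bayes' rule. The sketch below (illustrative only) reproduces the questioner's ~0.5% figure and shows how strongly PPV depends on the base rate:

```python
# Bayes'-rule calculation behind the PPV debate above (illustrative only).
# PPV = P(attempt | flagged) = sens*p / (sens*p + (1 - spec)*(1 - p))
def ppv(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A "98% accurate" screener at a 0.01% annual incidence, as in the
# question above: PPV works out to about 0.5%.
print(f"{ppv(0.98, 0.98, 0.0001):.2%}")

# The same screener in a higher-base-rate group (e.g., ~0.35% prevalence,
# roughly 35x higher) lands near the questioner's 15% figure.
print(f"{ppv(0.98, 0.98, 0.0035):.2%}")
```

Note that the cohort described earlier in this AMA was drawn from records flagged with self-injury diagnostic codes, where base rates are far higher than in the general population, which is part of why high PPVs are attainable in that setting.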


Therapist here, and thank you so much for doing this research! In your literature reviews, what is currently the best suicide risk assessment inventory you've been able to find?

BoopsForTheSoul

For this question, I'll refer you to some of our recent meta-analytic work that examined all published studies through 2015 that tried to use some variable to predict a suicide outcome. From that work, we know that no screeners or risk assessments are particularly accurate at predicting suicidal behaviors. The value of machine learning is its ability to integrate a large number of factors in complex ways that we as clinicians are unable to model. Because these machine learning tools are not yet available to clinicians, I would encourage using screeners and risk assessments that are empirically informed. And, of course, asking regularly throughout treatment and acting accordingly to ensure safety (with means restriction, safety planning, etc.). Here is a link to the meta-analysis I referenced, if you're interested: https://www.apa.org/pubs/journals/releases/bul-bul0000084.pdf


What if someone absolutely refuses to use the Proxi app or insists that they are perfectly fine? How would you respond, despite them being at high risk of suicide?

SkyfireZX

Thanks for the question! Our algorithms don't rely on individuals providing additional information -- they only use information that's already available in electronic health records, so this wouldn't be an issue necessarily (though it is for other approaches).


Suicide is a problem with complex causes, many of which are cultural and/or societal in nature. I realize that, according to the linked article, you are more or less focusing on incidents in the US. How might this research be used in a country like Japan, where suicides cause a huge loss of revenue by stopping trains, and where the roots may be very different from what people in the US experience?

Aino_Aida

Great question! My team and I agree – suicide is a complex problem, and accurately predicting it will likely require considering a large combination of factors and the complex relationships among them. That’s, in part, why we think machine learning is a really useful tool for detecting risk. It’s possible that different algorithms will be needed to predict suicidal behaviors in other cultural groups. Our study did include a very large pool of individuals, and our results suggested our estimates were fairly reliable; however, the sample was representative of individuals in the United States, so whether these algorithms would perform just as well outside of that group still needs to be directly tested.


Hello Dr. Ribeiro,

Is it possible to use the AI on something like social media in order to get a more accurate picture of a person's mental health? The article mentions health records and even communication, but it seems to me like social media would only improve the accuracy of the model. Do you think this is possible to integrate in the next few years? Would it be worth adding to the algorithm?

Thank you very much!

cottonholloa

Excellent thoughts! Our position is that social media may provide useful information for prediction; however, we have not directly tested this yet. Ideally, we may want to integrate information from those platforms with other sources of data, such as electronic health records. Doing so may end up helping us answer not only who may be at risk, but when those individuals may engage in potentially lethal suicidal behavior.


Is this project to predict an individual suicide or a more general purpose societal increase in suicide?

For example, two weeks ago my town had three bridge jumpers on the same day. It seems like being able to predict spikes in suicidal behavior among communities would be very useful!

jordanlund

This would be at the level of the individual, but this kind of approach could be used to predict societal increases as well.


This is really interesting. How are you going to involve the families and support systems of the people predicted to commit suicide? How can you be so sure that the algorithms will detect a high percent of predicted suicide victims? How can you maintain accuracy?

banacha

Excellent questions re: next steps. I think the "what then?" question is a big one that we still need to research. This particular study is a good first step in advancing our ability to predict risk, but "then what?" For me and my team, the answer to that question is pairing it with low cost, highly effective, and scalable interventions. Integrating the family and support systems may be one element that fits into that intervention piece.


Grad student here studying readmission rates in hospitals using ML methods...

Are you or your colleagues working with recurrent neural networks for suicide prediction?

More generally, what ML methods have you found weren't effective and are you exploring new techniques?

Thank you!

TlGHTSHIRT

We haven't yet applied recurrent neural networks; for the study noted here, we applied random forest (as well as LASSO and SVM, which had comparable results). We're exploring other techniques in current projects, though... Thanks for your question! Good luck with your research!


Hello! This is amazing! As a mother, this topic terrifies me. How accessible will this capability be to the general population? E.g., will parents have access to help our children?

inotamexican

Great question. As it stands, we're in the middle of developing tools that could be integrated into electronic health records across health care settings.


License

This article and its reviews are distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and redistribution in any medium, provided that the original author and source are credited.