Can artificial intelligence transform psychiatry?
Thanks to advances in artificial intelligence, computers can now assist doctors in diagnosing disease and help monitor patient vital signs from hundreds of miles away. Now researchers are working to apply machine learning to psychiatry, with a speech-based mobile app that can categorize a patient's mental health status as well as, or better than, a human can.
What the researchers say: "We are not in any way trying to replace clinicians," said the co-author of a new paper in Schizophrenia Bulletin that lays out the promise and potential pitfalls of AI in psychiatry. "But we do believe we can create tools that will allow them to better monitor their patients."
By even conservative estimates, one in five U.S. adults lives with a mental illness, many in remote areas where access to psychiatrists or psychologists is scarce. Others can't afford to see a clinician frequently, don't have the time, or can't get an appointment.
Even when a patient does make it in for an occasional visit, therapists base their diagnosis and treatment plan largely on listening to a patient talk—an age-old method that can be subjective and unreliable.
"Humans are not perfect. They can get distracted and sometimes miss out on subtle speech cues and warning signs," the researchers claim. "Unfortunately, there is no objective blood test for mental health." (Actually, there are for some mental illnesses—researchers should do their research).
In pursuit of an AI version of a blood test, the team developed machine learning technology able to detect day-to-day changes in speech that hint at mental health decline.
“For instance, sentences that don't follow a logical pattern can be a critical symptom in schizophrenia. Shifts in tone or pace can hint at mania or depression. And memory loss can be a sign of both cognitive and mental health problems,” they say.
Again, some further research is needed before they make such pronouncements. Memory loss can be a sign of impending stroke, or the result of a mini-stroke (which can be quite harmless), or even of ocular migraine (which is quite common).
"Language is a critical pathway to detecting patient mental states," said the lead researcher. "Using mobile devices and AI, we are able to track patients daily and monitor these subtle changes."
The new mobile app asks patients to answer a 5- to 10-minute series of questions by talking into their phone.
Among other tasks, they're asked about their emotional state, asked to tell a short story, asked to listen to a story and repeat it, and given a series of touch-and-swipe motor-skills tests.
The AI system assesses those speech samples, compares them to previous samples from the same patient and to the broader population, and rates the patient's mental state.
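The paper doesn't spell out how that baseline comparison works, but the core idea of "compare today's speech to the patient's own history" can be illustrated with a minimal sketch. Everything here is hypothetical: the feature (words per minute), the function name, and the two-standard-deviation threshold are illustrative assumptions, not details from the study.

```python
# Hypothetical sketch: the feature, threshold, and function name are
# illustrative assumptions, not details taken from the study.
from statistics import mean, stdev

def flag_deviation(history, today, z_threshold=2.0):
    """Flag a reading that drifts more than z_threshold standard
    deviations from this patient's own baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:          # no variation in baseline; nothing to compare
        return False
    z = (today - mu) / sigma
    return abs(z) > z_threshold

# e.g. a week of daily "words per minute" readings for one patient
baseline_wpm = [148, 152, 150, 147, 151, 149, 150]
print(flag_deviation(baseline_wpm, 151))  # typical day -> False
print(flag_deviation(baseline_wpm, 120))  # marked slowdown -> True
```

A real system would track many such features (pace, tone, coherence) and, as the article notes, weigh them against population norms as well as the individual baseline; this sketch shows only the per-patient comparison.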
In one recent study, the team asked human clinicians to listen to and assess speech samples of 225 participants—half with severe psychiatric issues; half healthy volunteers—in rural Louisiana and Northern Norway. They then compared those results to those of the machine learning system.
"We found that the computer's AI models can be at least as accurate as clinicians," they said.
The researchers envision a day when the AI systems they're developing for psychiatry could be in the room with a therapist and a patient to provide additional data-driven insight or serve as a remote-monitoring system for the severely mentally ill.
If the app detected a worrisome change, it could notify the patient's doctor to check in.
So, what? I can think of so many reasons why this is a bad idea that it would take several TR-length essays to cover them all. Firstly, as I have noted above, many physical illnesses can have cognitive consequences—heart disease is just one of them. Secondly, one person's "normal" is another's "abnormal," and that bias is going to be built into the AI system (look at the differences in symptomologies and diagnoses of common mental problems across the various editions of the standard psychiatric reference book, the DSM).
AI in this area would make an efficient technology for use by a coercive state looking to detect changes in allegiance or nonconformity to their preferred norms of thought or behavior.
A friend of ours recently endured a 4-hour compulsory psychiatric exam as part of applying for a job. Pretty soon there’ll be an AI app for that too.
Finally (for now), God preserve us from AI researchers who have done insufficient research in the area they are attempting to bring AI into (like the ones who gave us the now laughed-at sentencing app relied on by judges). The lead researcher behind this study is the originator of the AI that grades school essays. Just imagine the biases incorporated in that!
More from this issue of TR
You might be interested in
Men without work face a worrying well-being crisis
The number of prime-age males without work is increasing worldwide. This development goes hand in hand with an increase in ill-being driven by high levels of stress, desperation and anger.
Severity of crime increases jury’s belief in guilt.
The more severe a crime, the more evidence you should have to prove someone did it. But a new study, appearing in Nature Human Behavior, has shown that the type of alleged crime can increase jurors’ confidence in guilt.
Our brains are obsessed with being social.
New research demonstrates how our brains consolidate new social information—even during rest. Our brains are obsessed with being social even when we’re not in social situations. The study, published in Cerebral Cortex, finds that the brain may engage in social encoding (learning from a recent social situation) even when it’s at rest.
Join our tribe
Subscribe to Dr. Bob Murray’s Today’s Research, a free weekly roundup of the latest research in a wide range of scientific disciplines. Explore leadership, strategy, culture, business and social trends, and executive health.