AI May Soon Be Trained To Diagnose Mental Illness
Scientists across multiple fields of psychology are actively gathering data and running experiments in an effort to teach artificial intelligence programs to diagnose mental illness in humans. This is according to a report in The Verge, part of its Real World AI issue, written by B. David Zarley, who himself has borderline personality disorder.
Zarley met with multiple scientists who are each taking their own approach to machine learning in the service of finding a better way to diagnose psychological disorders.
The current model, in which psychiatrists consult the DSM to make diagnoses based on a patient's self-reported symptoms, is inherently biased and considered by many in the field of psychology to be flawed. The current director of the National Institute of Mental Health (NIMH), Dr. Joshua Gordon, shares that view.
“We have to acknowledge in psychiatry that our current methods of diagnosis—based upon the DSM—our current methods of diagnosis are unsatisfactory anyway,” Gordon told Zarley in an interview.
Diagnosing mental illness from purely physical data is not yet within reach the way it is for physical illness. With advances in computer science, however, it is finally possible to train AI software to compile data and recognize patterns on a scale that a human brain simply could not handle.
“Machine learning is crucial to getting [Psychologist Pearl Chiu’s] work out of the lab and to the patients they are meant to help,” Zarley writes. “‘We have too much data, and we haven’t been able to find these patterns’ without the algorithms, Chiu says. Humans can’t sort through this much data—but computers can.”
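Zarley's report does not describe the specific algorithms Chiu's lab uses, but the kind of pattern-finding he alludes to can be illustrated with a toy example. The sketch below runs a minimal k-means clustering routine, in pure Python, over entirely made-up "feature vectors"; real research pipelines use far richer data and more sophisticated models, so this shows only the general idea of a computer sorting unlabeled data into groups.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Toy k-means: group feature vectors into k clusters."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda i: sum((a - b) ** 2
                                        for a, b in zip(p, centroids[i])))
            clusters[idx].append(p)
        # Move each centroid to the mean of its assigned points.
        for i, cluster in enumerate(clusters):
            if cluster:
                centroids[i] = tuple(sum(dim) / len(cluster)
                                     for dim in zip(*cluster))
    return centroids, clusters

# Entirely synthetic data: two loose groups of 2-D "features".
data = [(0.1, 0.2), (0.2, 0.1), (0.0, 0.3),
        (5.0, 5.1), (5.2, 4.9), (4.8, 5.0)]
centroids, clusters = kmeans(data, k=2)
```

With only six points a person could eyeball the grouping; the point of the algorithms Chiu describes is that the same procedure keeps working when the data has thousands of dimensions and millions of rows, where no human could.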
Additionally, scientists envision using MRI technology to help discover the root of certain mental illnesses or their symptoms and even treat them by allowing patients to directly see the results of their thoughts and better understand how their brains function.
“[Research coordinator Whitney] Allen was asked to project her brain into the future, or focus on the immediate present, in an attempt to help find out what goes on under the hood when thinking about instant or delayed gratification, knowledge which could then be used to help rehabilitate people who cannot seem to forgo the instant hit, like addicts.”
Many of the scientists Zarley spoke with believe that AI-diagnosed mental illness will be a reality in the space of years, not decades. However, there are both practical and ethical concerns to be considered.
AI built and taught by humans, who are biased, cannot help but be biased itself. Zarley points out that "different cultures think of certain colors or numbers differently." Data for an AI program must also be collected from human samples, and that is much easier done in a developed nation, in an area with a university. That leaves entire populations of poorer nations, and even rural populations in the U.S., largely out of the picture.
There are also numerous ethical concerns any time the idea of artificial intelligence is raised. In their paper "The Ethics of Artificial Intelligence," Nick Bostrom of the Future of Humanity Institute and Eliezer Yudkowsky of the Machine Intelligence Research Institute address several of them.
“Responsibility, transparency, auditability, incorruptibility, predictability, and a tendency to not make innocent victims scream with helpless frustration: all criteria that apply to humans performing social functions; all criteria that must be considered in an algorithm intended to replace human judgment of social functions; all criteria that may not appear in a journal of machine learning considering how an algorithm scales up to more computers.”
Regardless, AI is on its way, and the scientists Zarley interviewed are optimistic about future results.