April 20, 2024
AI

Rethinking personalized medicine: the limits of AI in clinical trials

Summary: A new study reveals limitations in the current use of mathematical models for personalized medicine, particularly in the treatment of schizophrenia. Although these models can predict patient outcomes within specific clinical trials, they fail when applied to different trials, calling into question the reliability of AI-based algorithms across settings.

This study highlights the need for algorithms to demonstrate their effectiveness in multiple contexts before they can be truly trusted. The findings point to a significant gap between the promise of personalized medicine and its current practical application, especially given the variability across clinical trials and real-world medical settings.

Key facts:

  1. The mathematical models currently used for personalized medicine are effective in specific clinical trials, but fail to generalize across different trials.
  2. The study raises concerns about the application of AI and machine learning in personalized medicine, especially for diseases such as schizophrenia, where response to treatment varies greatly between individuals.
  3. The research suggests that more comprehensive data sharing and the inclusion of additional environmental variables could improve the reliability and accuracy of AI algorithms in medical treatments.

Source: Yale

The pursuit of personalized medicine, a medical approach in which professionals use a patient’s unique genetic profile to tailor individual treatment, has become a key goal in the healthcare sector. But a new Yale-led study shows that the mathematical models currently available for predicting treatments have limited effectiveness.

In an analysis of clinical trials for multiple schizophrenia treatments, researchers found that mathematical algorithms could predict patient outcomes within the specific trials for which they were developed, but did not work for patients participating in different trials.

The findings were published January 11 in the journal Science.

“This study really challenges the status quo of algorithm development and raises the bar for the future,” said Adam Chekroud, adjunct assistant professor of psychiatry at Yale School of Medicine and corresponding author of the paper. “Right now, I would say we need to see algorithms working in at least two different environments before we can really get excited about them.”

“I’m still optimistic,” he added, “but as medical researchers we have some important things to figure out.”

Chekroud is also president and co-founder of Spring Health, a private company that provides mental health services.

Schizophrenia, a complex brain disorder that affects about 1% of the US population, perfectly illustrates the need for more personalized treatments, researchers say. Up to 50% of patients diagnosed with schizophrenia do not respond to the first antipsychotic drug prescribed, but it is impossible to predict which patients will respond to therapies and which will not.

Researchers hope that new technologies using machine learning and artificial intelligence can generate algorithms that better predict which treatments will work for different patients and help improve outcomes and reduce costs of care.

However, due to the high cost of conducting a clinical trial, most algorithms are developed and tested using only a single clinical trial. The hope was that these algorithms would still work when applied to patients with similar profiles who received similar treatments.

For the new study, Chekroud and his Yale colleagues wanted to see if this hope was actually true. To do so, they aggregated data from five clinical trials of schizophrenia treatments available through the Yale Open Data Access (YODA) Project, which advocates and supports the responsible sharing of clinical research data.

They found that, in most cases, the algorithms effectively predicted patient outcomes in the clinical trial in which they were developed. However, they failed to effectively predict the outcomes of schizophrenia patients treated in different clinical trials.

“The algorithms almost always worked the first time,” Chekroud said. “But when we tested them in patients from other trials, the predictive value was no better than chance.”
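To make that contrast concrete, here is a minimal sketch, not code from the study, using purely synthetic data: it compares cross-validated performance within one simulated trial against performance on a second, independently generated trial whose outcome pattern differs. The model choice, feature counts, and sample sizes are arbitrary placeholders.

```python
# Illustrative sketch only (synthetic data, not the study's code): a model can look
# accurate under within-trial cross-validation yet fall to roughly chance level when
# evaluated on an independent trial whose outcome pattern differs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def make_trial(n_patients, n_features=20):
    """Generate a synthetic 'trial' whose outcome depends on a trial-specific pattern."""
    X = rng.normal(size=(n_patients, n_features))
    w = rng.normal(size=n_features)  # trial-specific link between features and outcome
    y = (X @ w + rng.normal(scale=2.0, size=n_patients) > 0).astype(int)
    return X, y

X_dev, y_dev = make_trial(300)   # trial used to develop the model
X_ext, y_ext = make_trial(300)   # independent trial used for external validation

model = LogisticRegression(max_iter=1000)

# Within-trial estimate: 5-fold cross-validation on the development trial
within_auc = cross_val_score(model, X_dev, y_dev, cv=5, scoring="roc_auc").mean()

# Cross-trial estimate: fit on the development trial, score on the external trial
model.fit(X_dev, y_dev)
external_auc = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])

print(f"within-trial AUC: {within_auc:.2f}   cross-trial AUC: {external_auc:.2f}")
```

Because each synthetic trial is given its own outcome pattern, the within-trial score looks respectable while the cross-trial score hovers near 0.5. Real trials differ in far subtler ways, but the evaluation logic, internal cross-validation versus external validation on a separate trial, is the same.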

The problem, according to Chekroud, is that most of the mathematical algorithms used by medical researchers were designed for use on much larger data sets. Clinical trials are expensive and time-consuming to conduct, so studies typically enroll fewer than 1,000 patients.

Applying powerful AI tools to the analysis of these smaller data sets, he said, can often result in “overfitting,” in which a model learns response patterns that are idiosyncratic, or specific only to the initial test data, but that disappear when additional new data are included.
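As a hypothetical illustration of that risk (again with synthetic data and an arbitrary model choice, not the study's methods), a flexible learner fit to a few hundred patients with many measured features can score almost perfectly on the data it was trained on while performing at chance on new data:

```python
# Illustrative sketch only: overfitting a flexible model to a small, trial-sized sample.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

n_patients, n_features = 200, 50               # small sample, many baseline measures
X_train = rng.normal(size=(n_patients, n_features))
y_train = rng.integers(0, 2, size=n_patients)  # outcomes unrelated to features by design
X_new = rng.normal(size=(n_patients, n_features))
y_new = rng.integers(0, 2, size=n_patients)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("accuracy on the fitting data:", accuracy_score(y_train, model.predict(X_train)))  # ~1.0
print("accuracy on new data:        ", accuracy_score(y_new, model.predict(X_new)))      # ~0.5
```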

“The reality is that we need to think about developing algorithms in the same way we think about developing new drugs,” he said. “We need to see algorithms working in multiple different times or contexts before we can truly believe them.”

In the future, the researchers added, including other environmental variables may or may not improve how well algorithms analyze clinical trial data. For example, does the patient abuse drugs? Does he or she have personal support from family or friends? These are the kinds of factors that can affect treatment outcomes.

Most clinical trials use precise criteria to improve the chances of success, such as guidelines for which patients should be included (or excluded), careful measurement of outcomes, and limits on the number of doctors administering treatments. Meanwhile, researchers say real-world settings have a much wider variety of patients and greater variation in treatment quality and consistency.

“In theory, clinical trials should be the easiest place for algorithms to work. But if algorithms can’t generalize from one clinical trial to another, it will be even more difficult to use them in clinical practice,” said co-author John Krystal, Robert L. McNeil, Jr. Professor of Translational Research and professor of psychiatry, neuroscience, and psychology at Yale School of Medicine. Krystal is also chair of the Yale Department of Psychiatry.

Chekroud suggests that greater data-sharing efforts among researchers, along with additional large-scale data collection by healthcare providers, could help increase the reliability and accuracy of AI-powered algorithms.

“Although the study addressed trials in schizophrenia, it raises difficult questions for personalized medicine in general and its application in cardiovascular disease and cancer,” said Philip Corlett, associate professor of psychiatry at Yale and co-author of the study.

Other authors of the Yale study are Hieronimus Loho; Ralitza Gueorguieva, senior research scientist at the Yale School of Public Health; and Harlan M. Krumholz, Harold H. Hines Jr. Professor of Medicine (Cardiology) at Yale.

About this AI and personalized medicine research news

Author: Bess Connolly
Source: Yale
Contact: Bess Connolly – Yale
Image: The image is credited to Neuroscience News.

Original research: Closed access.
“Illusory generalizability of clinical prediction models” by Adam Chekroud et al. Science


Abstract

Illusory generalizability of clinical prediction models.

It is widely expected that statistical models can improve decision making related to medical treatments. Due to the cost and scarcity of data on medical outcomes, this hope typically relies on researchers observing the success of a model in one or two data sets or clinical contexts.

We examined this optimism by testing how well a machine learning model performed across several independent clinical trials of antipsychotic medications for schizophrenia.

The models predicted patient outcomes with high accuracy within the trial in which they were developed, but performed no better than chance when applied out of sample. Pooling data across trials to predict outcomes in the left-out trial did not improve predictions.

These results suggest that models predicting treatment outcomes in schizophrenia are highly context-dependent and may have limited generalizability.
