

Study Raises Concerns About Generalizability of Clinical Prediction Models in AI Healthcare

A recent study published in the journal Science has raised concerns about the generalizability of clinical prediction models, particularly in the context of artificial intelligence (AI) in healthcare.

The study, conducted by Adam M. Chekroud, Matt Hawrilenko, Hieronimus Loho, Julia Bondar, and others, examined the performance of machine learning models in predicting patient outcomes in independent clinical trials of antipsychotic medication for schizophrenia.

One of the key findings of the study was that machine learning models demonstrated high accuracy in predicting patient outcomes within the specific trial in which the model was developed. However, when these same models were applied to independent clinical trials, their predictive performance dropped to chance levels, indicating a lack of generalizability.
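The gap between within-trial and cross-trial performance can be illustrated with a toy simulation (this is a hypothetical sketch, not the study's actual data or models): a simple classifier is fit on one "trial" where a feature genuinely predicts outcome, then evaluated both on held-out patients from that same trial and on a second trial where the feature–outcome relationship does not transfer.

```python
import random

def make_trial(n, signal, seed):
    """Simulate one trial: (feature, outcome) pairs.
    signal=True  -> feature is shifted by outcome class (predictive)
    signal=False -> feature is independent of outcome (relationship
                    does not transfer to this context)"""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        y = rng.randint(0, 1)
        mean = (1.0 if y else -1.0) if signal else 0.0
        data.append((rng.gauss(mean, 1.0), y))
    return data

def fit_threshold(train):
    # "Model": midpoint between the two class means.
    pos = [x for x, y in train if y == 1]
    neg = [x for x, y in train if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(thresh, test):
    return sum((x > thresh) == (y == 1) for x, y in test) / len(test)

trial_a = make_trial(2000, signal=True, seed=1)
trial_b = make_trial(2000, signal=False, seed=2)

thresh = fit_threshold(trial_a[:1000])
within = accuracy(thresh, trial_a[1000:])  # held-out patients, same trial
cross = accuracy(thresh, trial_b)          # independent trial

print(f"within-trial accuracy: {within:.2f}")  # well above 0.5
print(f"cross-trial accuracy:  {cross:.2f}")   # near 0.5 (chance)
```

The point of the sketch is that standard held-out validation inside a single trial can look excellent while telling you nothing about performance in a new clinical context, which is the evaluation gap the study highlights.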

Furthermore, the study revealed that even when data from multiple similar multisite trials were aggregated to build more robust models, the predictive performance remained poor when applied to new patient samples, highlighting the context-dependency of these models.

This research challenges the widespread optimism about the potential of statistical models to improve decision-making in medical treatment. That optimism is often based on strong model performance observed in limited datasets or clinical contexts, which may not reflect how well the models generalize in the real world.

The implications of these findings are particularly significant in the realm of precision medicine and AI-driven healthcare, where the ability to accurately predict and identify the best course of care for patients is a central promise. The study’s results suggest that caution is warranted in relying solely on machine learning models for clinical decision-making, especially without rigorous prospective testing on independent patient samples.

The study’s authors emphasize the need for further research and scrutiny in evaluating the generalizability of clinical prediction models, particularly in the context of AI applications in healthcare.

For more details, the full research article can be accessed in the journal Science, Vol. 383, Issue 6679, pp. 164-167, DOI: 10.1126/science.adg8538.
