AI in Healthcare Faces Bias and Privacy Challenges Amid Growing Adoption

Key Points
- Google asserts it treats AI model bias "extremely seriously" and is developing privacy safeguards.
- OpenEvidence provides AI‑generated medical summaries with source citations for clinicians.
- UCL and King's College London created the Foresight model using anonymized data from tens of millions of patients.
- Chris Tomlinson highlighted the advantage of national‑scale data for representing diverse demographics.
- The NHS paused the Foresight project after a data‑protection complaint from the BMA and Royal College of General Practitioners.
- European researchers developed Delphi‑2M to predict disease susceptibility using UK Biobank data.
- Experts warn AI systems can hallucinate, posing risks in clinical decision‑making.
- MIT’s Marzyeh Ghassemi emphasized AI’s potential to address major health gaps rather than deliver incremental performance gains.

Medical AI tools are expanding their reach, but experts warn they may downplay symptoms in women and ethnic minorities and raise privacy concerns. Google says it treats model bias seriously and is developing techniques to protect sensitive data. OpenEvidence, used by hundreds of thousands of doctors, backs its outputs with citations from medical journals and regulatory sources. Research projects such as UCL and King’s College London’s Foresight model, trained on anonymized data from tens of millions of patients, aim to predict health outcomes, while the European Delphi-2M model predicts disease susceptibility. The NHS paused Foresight after a data‑protection complaint, highlighting the tension between innovation and patient privacy.
Growing Use of AI in Clinical Settings
Artificial intelligence is increasingly being integrated into medical workflows. OpenEvidence, a tool used by hundreds of thousands of physicians, draws on medical journals, U.S. Food and Drug Administration labels, health guidelines, and expert reviews to summarize patient histories and retrieve information. Each AI‑generated output is accompanied by a citation to its source, giving clinicians a way to verify every claim.
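OpenEvidence has not published its implementation, but the behavior described here, source-grounded answers with per-claim citations, matches the widely used retrieval-augmented generation pattern. The sketch below illustrates that general pattern only; the corpus entries and the `retrieve` and `answer_with_citations` helpers are hypothetical, not OpenEvidence's actual API.

```python
from dataclasses import dataclass

@dataclass
class Source:
    title: str  # e.g. a journal article, FDA label, or guideline (hypothetical)
    text: str

# Hypothetical mini-corpus standing in for journals, FDA labels, and guidelines.
CORPUS = [
    Source("FDA label: metformin",
           "Metformin is contraindicated in patients with severe renal impairment."),
    Source("Guideline: type 2 diabetes",
           "Metformin is recommended as first-line therapy for type 2 diabetes."),
]

def retrieve(query: str, corpus: list[Source], k: int = 2) -> list[Source]:
    """Rank sources by naive keyword overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    return sorted(corpus,
                  key=lambda s: -len(q_words & set(s.text.lower().split())))[:k]

def answer_with_citations(query: str) -> str:
    """Pair every retrieved statement with the source it came from, so a
    clinician can check each claim; a production system would feed these
    snippets to a language model rather than return them verbatim."""
    return "\n".join(f"- {s.text} [{s.title}]" for s in retrieve(query, CORPUS))

print(answer_with_citations("Is metformin first-line therapy for type 2 diabetes?"))
```

The design point is the return format: every statement travels with its source, so nothing reaches the clinician without a verifiable citation attached.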
Addressing Bias in Medical AI
Google has emphasized that it takes model bias "extremely seriously" and is developing privacy techniques that can sanitize sensitive datasets while safeguarding against discrimination. Researchers suggest that reducing bias begins with careful selection of training data, advocating for diverse and representative health datasets.
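The article does not detail Google's techniques or the researchers' selection criteria, but the typical first step in "careful selection of training data" is an audit of subgroup representation. The following is a minimal hypothetical sketch; the records and reference population shares are invented for illustration.

```python
from collections import Counter

# Hypothetical training records: (sex, ethnicity, diagnosis_code).
records = [
    ("F", "Black", "I21"), ("M", "White", "I21"), ("M", "White", "E11"),
    ("F", "Asian", "E11"), ("M", "White", "I21"),
]

def audit_representation(records, field, population_share):
    """Compare each subgroup's share of the training data with its share of
    the reference population and flag groups that fall well below it."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    for group, expected in population_share.items():
        observed = counts.get(group, 0) / total
        status = "UNDER-REPRESENTED" if observed < 0.8 * expected else "ok"
        print(f"{group}: {observed:.0%} of data vs {expected:.0%} of population ({status})")

# Hypothetical reference shares; a real audit would use census or NHS figures.
audit_representation(records, field=0, population_share={"F": 0.51, "M": 0.49})
```

A model trained on data that under-samples a group tends to perform worse for that group, which is why such audits precede any debiasing of the model itself.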
Large‑Scale Research Initiatives
University College London and King’s College London collaborated with the UK’s National Health Service to develop a generative AI model called Foresight. The model was trained on anonymized patient data from tens of millions of individuals, encompassing records of hospital admissions and Covid‑19 vaccinations. Lead researcher Chris Tomlinson noted that the national‑scale data "allows us to represent the full kind of kaleidoscopic state of England in terms of demographics and diseases," offering a stronger foundation than more generic datasets.
European scientists have also created an AI model named Delphi‑2M, which predicts long‑term disease susceptibility using anonymized medical records from hundreds of thousands of participants in the UK Biobank.
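Neither team's code appears in the article, but both Foresight and Delphi‑2M belong to a family of generative models that read a patient's history as an ordered sequence of coded events and predict the events likely to follow. As a rough intuition only, the hypothetical sketch below replaces their large sequence models with simple transition counts; the event codes and timelines are invented.

```python
from collections import Counter, defaultdict

# Hypothetical anonymized-style patient timelines of coded medical events.
timelines = [
    ["hypertension", "statin_rx", "mi_admission"],
    ["hypertension", "mi_admission"],
    ["covid_vaccination", "hypertension", "statin_rx"],
]

# Count event-to-event transitions. Foresight and Delphi-2M use far richer
# sequence models, but the underlying task is analogous: given a history,
# predict what happens next in a patient's record.
transitions = defaultdict(Counter)
for timeline in timelines:
    for prev, nxt in zip(timeline, timeline[1:]):
        transitions[prev][nxt] += 1

def predict_next(event):
    """Return candidate next events with estimated probabilities."""
    counts = transitions[event]
    total = sum(counts.values())
    if total == 0:
        return []
    return [(nxt, n / total) for nxt, n in counts.most_common()]

print(predict_next("hypertension"))
# e.g. [('statin_rx', 0.67), ('mi_admission', 0.33)] (rounded)
```

A production system swaps the transition table for a transformer trained on millions of timelines, but the interface is the same: a history goes in, a distribution over possible future events comes out, which is also why the demographic breadth of the training data matters so much.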
Privacy Concerns and Regulatory Scrutiny
The NHS Foresight project was paused to allow the UK Information Commissioner’s Office to consider a data‑protection complaint filed by the British Medical Association and the Royal College of General Practitioners. The complaint highlighted concerns over the use of sensitive health data in model training.
Risks of Hallucination and Clinical Impact
Experts caution that AI systems can "hallucinate", producing fabricated answers, a failure mode that could be especially harmful in medical contexts. Despite these risks, MIT researcher Marzyeh Ghassemi expressed optimism, stating that AI brings "huge benefits to healthcare" and should focus on closing critical health gaps rather than merely improving marginal task performance.