
Transformative Potential of a Healthcare-Specific Foundational Model


In the past two years, generalist foundational models like GPT-4 have evolved significantly, offering unprecedented capabilities thanks to larger datasets, increased model sizes, and architectural improvements. These models can be adapted to a wide range of tasks across many fields. Healthcare AI, by contrast, is still characterized by models designed for specific tasks. For instance, a model trained to analyze X-rays for bone fractures can only identify fractures; it cannot generate a comprehensive radiology report. Most of the 500 AI models approved by the Food and Drug Administration are limited to one or two use cases. Foundational models, known for their broad applicability across different tasks, are setting the stage for a transformative approach to healthcare applications.

While there have been initial attempts to develop foundational models for medical applications, this broader approach has not yet become prevalent in healthcare AI. This slow adoption is mainly due to the challenges associated with accessing large and diverse healthcare datasets, as well as the need for models to reason across different types of medical data. The practice of healthcare is inherently multimodal and incorporates information from images, electronic health records (EHRs), sensors, wearables, genomics, and more. Thus, a foundational healthcare model must also be inherently multimodal. Nonetheless, recent progress in multimodal architectures and self-supervised learning, which can handle various data types without needing labeled data, is paving the way for a healthcare foundational model.
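The self-supervised approach mentioned above can be illustrated with a toy example. The sketch below shows a CLIP-style contrastive objective that pairs image embeddings with report embeddings without any manual labels; the two-dimensional embeddings and the helper functions are invented for illustration, not drawn from any real model.

```python
# Toy sketch of contrastive self-supervised pairing (CLIP-style):
# matched image/report embeddings are pulled together and mismatched
# pairs pushed apart, with no manual labels required.
# Embeddings here are hand-made toy vectors, purely for illustration.
import math

def dot(u, v):
    """Plain dot product between two embedding vectors."""
    return sum(a * b for a, b in zip(u, v))

def contrastive_loss(image_embs, text_embs, temperature=1.0):
    """Mean InfoNCE loss over pairs: -log softmax of the matched similarity."""
    loss = 0.0
    for i, img in enumerate(image_embs):
        scores = [dot(img, txt) / temperature for txt in text_embs]
        log_denom = math.log(sum(math.exp(s) for s in scores))
        loss += -(scores[i] - log_denom)
    return loss / len(image_embs)

# Correctly matched pairs yield a lower loss than shuffled pairs.
aligned = contrastive_loss([[1, 0], [0, 1]], [[1, 0], [0, 1]])
shuffled = contrastive_loss([[1, 0], [0, 1]], [[0, 1], [1, 0]])
```

Minimizing this loss teaches the encoders to align modalities, which is the property a multimodal healthcare model would build on.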

Current State of Generative AI in Healthcare

Healthcare has traditionally been slow to adopt technology; however, it seems to have embraced Generative AI more swiftly. At HIMSS24, the largest global conference for healthcare technology professionals, Generative AI was the focal point of nearly every presentation.

One of the first widely adopted use cases of Generative AI in healthcare focuses on alleviating the administrative load of clinical documentation. Traditionally, documenting patient interactions and care processes consumes a substantial portion of physicians’ time (more than two hours per day), often taking time away from direct patient care.

AI models like GPT-4 or MedPalm-2 are being used to monitor patient data and physician-patient interactions to draft key documents such as progress notes, discharge summaries, and referral letters. These drafts capture essential information accurately, requiring only physician review and approval. This significantly reduces paperwork time, allowing physicians to focus more on patient care, enhancing quality of service and reducing burnout.
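A minimal sketch of this drafting workflow is shown below. The prompt wording, the `build_note_prompt` and `draft_note` names, and the stubbed model callable are all assumptions for illustration; in practice the callable would invoke a model such as GPT-4 behind appropriate privacy controls.

```python
# Hypothetical sketch: assembling a prompt that asks an LLM to draft a
# progress note from a visit transcript. The model call itself is stubbed
# with a placeholder callable; names and prompt text are illustrative only.

def build_note_prompt(transcript: str, note_type: str = "progress note") -> str:
    """Construct a drafting prompt that constrains the model to the transcript."""
    return (
        f"You are a clinical scribe. Draft a {note_type} from the visit "
        "transcript below. Use only facts stated in the transcript; flag "
        "anything uncertain for physician review.\n\n"
        f"Transcript:\n{transcript}\n\nDraft:"
    )

def draft_note(transcript: str, llm=lambda p: "[draft for physician review]") -> str:
    """Send the prompt to an LLM; stubbed here so the sketch runs standalone."""
    return llm(build_note_prompt(transcript))

draft = draft_note("Pt reports 3 days of cough, afebrile, lungs clear.")
```

The key design point is that the prompt explicitly restricts the model to transcript facts and routes the output to physician review, mirroring the review-and-approve loop described above.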

However, the broader applications of foundational models in healthcare have yet to fully materialize. Generalist foundational models like GPT-4 have several limitations; thus, there is a need for a healthcare-specific foundational model. For example, GPT-4 lacks the capability to analyze medical images or understand longitudinal patient data, which is critical for providing accurate diagnoses. Additionally, it does not possess the most up-to-date medical knowledge, as it was trained on data available only up to December 2023. Google’s MedPalm-2 represents the first attempt to build a healthcare-specific foundational model, capable of both answering medical queries and reasoning about medical images. However, it still does not capture the full potential of AI in healthcare.

Building a Healthcare Foundational Model

The process of building a healthcare foundational model begins with data derived from both public and private sources, including biobanks, experimental data, and patient records. This model would be capable of processing and combining different data types, such as text with images or laboratory results, to perform complex medical tasks.
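One simple way to combine data types, sketched below, is late fusion: encode each modality separately and concatenate the embeddings into one joint representation. The toy encoders and the lab-value keys are invented stand-ins; a real system would use learned encoders and cross-attention rather than concatenation.

```python
# Minimal late-fusion sketch (illustrative, not a real clinical model):
# each modality is encoded separately, then the embeddings are concatenated
# into one joint vector that a downstream prediction head could consume.
from typing import Dict, List

def encode_text(note: str, dim: int = 4) -> List[float]:
    """Toy text encoder: character-frequency features (stand-in for a transformer)."""
    return [note.count(c) / max(len(note), 1) for c in "aeio"][:dim]

def encode_labs(labs: Dict[str, float], keys=("pao2", "hr", "sbp")) -> List[float]:
    """Toy lab encoder: fixed feature ordering, 0.0 for missing values."""
    return [float(labs.get(k, 0.0)) for k in keys]

def fuse(*embeddings: List[float]) -> List[float]:
    """Late fusion by concatenation; real systems often use cross-attention."""
    return [x for e in embeddings for x in e]

joint = fuse(
    encode_text("severe thoracic trauma"),
    encode_labs({"pao2": 62.0, "hr": 118.0}),  # sbp missing -> 0.0
)
```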

Additionally, it could reason about new situations and articulate its outputs in medically precise language. This capability extends to inferring and utilizing causal relationships between medical concepts and clinical data, especially when providing treatment recommendations based on observational data. For instance, it could predict acute respiratory distress syndrome from recent severe thoracic trauma and declining arterial oxygen levels, despite an increased oxygen supply.

Furthermore, the model would access contextual information from resources like knowledge graphs or databases to obtain up-to-date medical knowledge, enhancing its reasoning and ensuring that its advice reflects the latest advancements in medicine.
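This retrieval step can be sketched as follows. The in-memory `KNOWLEDGE` dictionary, the keyword matcher, and the snippet texts are all illustrative stand-ins for a real knowledge graph or guideline database.

```python
# Illustrative retrieval-augmentation sketch: look up guideline snippets
# from a small in-memory "knowledge base" (a stand-in for a knowledge graph
# or database) and prepend them to the model's context before reasoning.
from typing import List

KNOWLEDGE = {
    "ards": "ARDS: acute hypoxemia with bilateral infiltrates; see Berlin criteria.",
    "sepsis": "Sepsis: suspected infection plus organ dysfunction (SOFA >= 2).",
}

def retrieve(query: str) -> List[str]:
    """Naive keyword retrieval; production systems use embeddings or graph queries."""
    q = query.lower()
    return [text for key, text in KNOWLEDGE.items() if key in q]

def augmented_context(query: str) -> str:
    """Prepend retrieved facts so the model's answer is grounded in them."""
    facts = retrieve(query)
    return "Reference knowledge:\n" + "\n".join(facts) + f"\n\nQuestion: {query}"
```

Because the knowledge store is external to the model, it can be updated as guidelines change without retraining, which is the point of the retrieval design.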

Applications and Impact of Healthcare Foundational Model

The potential uses for a healthcare foundational model are extensive. In diagnostics, such a model could reduce the dependence on human analysis. For treatment planning, the model could aid in crafting individualized treatment strategies by considering a patient’s entire medical record, genetic details, and lifestyle factors. Some other applications include:

  • Grounded radiology reports: The healthcare foundational model could transform digital radiology by creating versatile assistants that support radiologists, automating report drafting and reducing workload. It would also be able to integrate the entire patient history. For instance, radiologists could query the model about changes in conditions over time: “Can you identify any changes in the tumor size since the last scan?”
  • Bedside clinical decision support: Leveraging clinical knowledge, it would offer clear, free-text explanations and data summaries, alerting medical staff to immediate patient risks and suggesting next steps. For example, the model could alert, “Warning: This patient is about to go into shock,” and provide links to relevant data summaries and checklists for action.
  • Drug discovery: Designing proteins that bind specifically and strongly to a target is the foundation of drug discovery. Early models like RFdiffusion have begun to generate proteins based on basic inputs such as a target for binding. Building on these initial models, a healthcare-specific foundational model could be trained to understand both language and protein sequences. This would allow it to offer a text-based interface for designing proteins, potentially speeding up the development of new drugs.
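The bedside-alert idea above can be caricatured as a hand-written rule, which is what a foundational model would replace with learned patterns. The thresholds below are invented for illustration and are not clinical guidance.

```python
# Hypothetical rule-of-thumb alert, only to illustrate the bedside-alert idea.
# Thresholds are invented and NOT medical advice; a foundational model would
# learn such risk patterns from data rather than rely on hand-written rules.
from typing import Optional

def shock_alert(systolic_bp: float, heart_rate: float) -> Optional[str]:
    """Flag a possible impending shock (illustrative thresholds only)."""
    if systolic_bp < 90 and heart_rate > 110:
        return "Warning: this patient may be about to go into shock."
    return None

alert = shock_alert(systolic_bp=85, heart_rate=120)
```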


Although building a healthcare-specific foundational model remains the ultimate goal, and recent advancements have made it more feasible, there are still significant challenges in developing a single model capable of reasoning across diverse medical concepts:

  • Mapping data across multiple modalities: The model must be trained on various data modalities such as EHR data, medical imaging data, and genetic data. Reasoning across these modalities is challenging because sourcing high-fidelity data that accurately maps interactions across all these modalities is difficult. Moreover, representing various biological modalities, from cellular dynamics to molecular structures and genome-wide genetic interactions, is complex. Optimal training on human data is unfeasible and unethical, so researchers rely on less predictive animal models or cell lines, which creates a challenge in translating laboratory measurements to the intricate workings of whole organisms.
  • Validation and Verification: Healthcare foundational models are challenging to validate due to their versatility. Traditionally, AI models are validated for specific tasks like diagnosing a type of cancer from an MRI. However, foundational models can perform new, unseen tasks, making it hard to anticipate all possible failure modes. They require detailed explanations of their testing and approved use cases and should issue warnings for off-label use. Verifying their outputs is also complex, as they handle diverse inputs and outputs, potentially requiring a multidisciplinary panel to ensure accuracy.
  • Social Biases: These models risk perpetuating biases, as they may train on data that underrepresents certain groups or contains biased correlations. Addressing these biases is crucial, particularly as the scale of models increases, which can intensify the problem.
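One concrete starting point for the bias concern is a subgroup audit: comparing a model's accuracy across demographic groups to surface performance gaps. The sketch below uses synthetic records; real audits require careful cohort definitions and statistical rigor.

```python
# Sketch of a simple subgroup audit: compare a model's accuracy across
# groups to surface performance disparities. Records here are synthetic
# (group label, model prediction, true label) tuples, for illustration only.
from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of (group, prediction, label). Returns group -> accuracy."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, pred, label in records:
        totals[group] += 1
        hits[group] += int(pred == label)
    return {g: hits[g] / totals[g] for g in totals}

audit = subgroup_accuracy([
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 0),  # group A: 2 of 3 correct
    ("B", 1, 1), ("B", 1, 1),               # group B: 2 of 2 correct
])
```

A gap between groups in such an audit is a signal to investigate training-data representation before deployment, not a fix in itself.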

Path Forward

Generative AI has already begun to reshape healthcare by alleviating the documentation burden on clinicians, but its full potential lies ahead. The future of foundational models in healthcare promises to be transformative. Imagine a healthcare system where diagnostics are not only faster but also more accurate, where treatment plans are precisely tailored to the genetic profiles of individual patients, and where new drugs could be discovered in a few months rather than years.

Creating a healthcare-specific foundational AI model presents challenges, especially when it comes to integrating the diverse and scattered medical and clinical data. However, these obstacles can be addressed through collaborative efforts among technologists, clinicians, and policymakers. By working together, we can develop commercial frameworks that incentivize various stakeholders (EHRs, imaging companies, pathology labs, providers) to unify this data and construct AI model architectures capable of processing complex, multimodal interactions within healthcare.

Moreover, it is crucial that this advancement proceeds with a clear ethical compass and robust regulatory frameworks to ensure that these technologies are used responsibly and equitably. By maintaining high standards of validation and fairness, the healthcare community can build trust and foster acceptance among both patients and practitioners.

The journey toward fully realizing the potential of healthcare foundational models is an exciting frontier. By embracing this innovative spirit, the healthcare sector can anticipate not just meeting current challenges but transforming medical science. We are on the brink of a bold new era in healthcare—one brimming with possibilities and driven by the promise of AI to improve lives on a global scale.
