Artificial Intelligence: The Future of Health Care Unleashed?

This article explains generative AI and explores the pros and cons of using AI technology in health care.


August 23, 2023

In what has been called the fastest technology adoption in history, ChatGPT hit 100 million users just 2 months after its release.1 This meteoric rise has electrified public interest in the current and future use of generative artificial intelligence (AI) and sparked rigorous discourse about its potential and drawbacks. Given AI’s current influence on health care and its fast-growing potential, employers, employees and health care service providers likely have questions about how advancements in AI may help or hinder the Quintuple Aim.2

This article begins to answer some of these questions by highlighting key ways AI is being deployed in health care, both in clinical settings and in employer-sponsored programs. It also includes important considerations about the use of AI, particularly those related to ethics and equity.3

Generative AI

Generative AI is a field of artificial intelligence focused on creating new and original content using neural networks, which are computational models inspired by the human brain. These networks learn from vast amounts of data to understand patterns, styles and structures, enabling them to generate realistic and coherent outputs, such as images, text or music, that mimic human creativity. Neural networks within generative AI processes use layers of interconnected nodes that work collaboratively to transform input data into meaningful and innovative creations.4
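To make the idea of layered, interconnected nodes more concrete, the short Python sketch below shows a tiny feed-forward network: each layer multiplies its input by a weight matrix and applies a nonlinearity, transforming raw input data step by step into an output. This is a minimal illustration only; the weights here are random placeholders, whereas a real generative model learns its weights from vast amounts of training data.

```python
import numpy as np

# A minimal sketch of "layers of interconnected nodes." Weights are random
# placeholders; a real generative model would learn them from training data.
rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity applied between layers
    return np.maximum(0, x)

# Three weight matrices connect 8 input nodes -> 16 -> 16 -> 4 output nodes.
layer_sizes = [8, 16, 16, 4]
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    # Each layer transforms the output of the previous one.
    for i, (W, b) in enumerate(zip(weights, biases)):
        x = x @ W + b
        if i < len(weights) - 1:   # keep the final (output) layer linear
            x = relu(x)
    return x

sample_input = rng.normal(size=8)   # stand-in for a vector of input features
print(forward(sample_input))        # the network's 4-dimensional output
```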


How Is AI Being Used in Clinical Care Today?

With eye-catching headlines like “AI Improves Breast Cancer Detection Rate by 20%,” it may come as no surprise that the use of AI in the health care system is widespread.5 According to Grand View Research, the global health care market for artificial intelligence is expected to grow at an annual rate of 37.5%, reaching $143 billion by 2030.6

The adoption of AI in clinical settings has shown that it has the potential to help health systems process vast amounts of medical data, leading to improved diagnostic accuracy,7 personalized treatment plans8 and, ultimately, better patient satisfaction.9,10 For example, AI algorithms are being deployed in medical imaging to identify tumors in CT scans; to diagnose disease by analyzing patient data and assist with treatment planning, medication management and prediction of patient outcomes; and to power wearable devices so that health care providers can monitor patients' vital signs remotely, detect anomalies and undertake data-informed interventions.11,12 These examples are in addition to the ways AI is being used to assist health care providers with routine tasks like transcribing patient notes and answering simple medical questions.

A growing body of research supports the use of AI in clinical care. A Harvard study found that an AI algorithm was able to instantly identify the type and severity of a specific brain tumor, something that previously took weeks to accomplish.13 And a study published in JAMA in 2023 suggests that chatbots may be well received for general medical questions; when chatbot and physician answers to patient questions were compared, evaluators rated the chatbot answers as more empathetic than the physician answers and preferred the chatbot responses 79% of the time.14

How Can AI Support Employer-sponsored Health Care?

Listed below are numerous ways that AI is being used to support employer-sponsored health care.

  • Prior authorization and administrative support: AI-powered tools are being used to complete tasks that may enhance the employee and provider experience, including improving the prior authorization process. A McKinsey analysis determined that AI-enabled prior authorization programs had the potential to automate 75% of the process, enabling faster and more accurate decision-making, reducing costs and ultimately, improving the patient experience.15
  • Benefit selection and navigation: Benefit platforms that leverage AI-enabled chatbots can improve employees' understanding of health benefits by addressing their questions and offering personalized guidance and clarifications about available offerings, which can be particularly helpful during open enrollment. These chatbots have been deployed to provide an adaptive, step-by-step tutorial to members walking through the enrollment process.16
  • Health care fraud, waste and abuse reduction: AI solutions are being used by health insurers to combat fraud, waste and abuse, which rose during the pandemic, reaching $300 billion in annual costs.17 For example, one AI-based business intelligence tool evaluated fee-for-service payments over 2 years in the Iowa Medicaid population and recovered $41.5 million in payments.18
  • Risk identification and early intervention: AI can analyze large datasets from sources like health assessments, wearables and electronic health records to identify and predict potential health issues associated with certain medical conditions, such as diabetes, musculoskeletal problems or maternal health complications, that may warrant early intervention.19,20,21 Additionally, AI is being used to provide personalized recommendations and interventions, such as health coaching and nutrition counseling, so that employees can better manage their health.22 (An illustrative sketch of this kind of risk flagging follows this list.)
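As a purely illustrative sketch of the risk identification described in the last bullet, the snippet below trains a simple classifier on synthetic, made-up features of the kind an AI tool might draw from health assessments, wearables and claims data, then flags members whose predicted risk may warrant outreach. None of the feature names, numbers or thresholds come from any real vendor or program.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins for features an AI tool might pull from health
# assessments, wearables and electronic health records. All values are
# illustrative, not drawn from any real population.
rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.normal(100, 15, n),    # e.g., a fasting glucose reading
    rng.normal(70, 10, n),     # e.g., resting heart rate from a wearable
    rng.integers(0, 2, n),     # e.g., prior musculoskeletal claim (0/1)
])
# Synthetic outcome: higher glucose and a prior claim raise the risk label.
risk = 0.03 * (X[:, 0] - 100) + 0.8 * X[:, 2] + rng.normal(0, 0.5, n)
y = (risk > 0.5).astype(int)

model = LogisticRegression().fit(X, y)

# Members with high predicted risk could be offered early interventions
# such as health coaching or nutrition counseling.
new_members = np.array([[130.0, 82.0, 1.0], [95.0, 65.0, 0.0]])
print(model.predict_proba(new_members)[:, 1])  # predicted risk probabilities
```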

[We must train AI] so that it does not perpetuate and exacerbate existing disparities in health care.


Sonoo Thadaney Israni, co-author of the National Academy of Medicine’s Special Publication, AI in Healthcare: The Hope, The Hype, The Promise, The Peril, and the executive director of Presence + Program in Bedside Medicine at Stanford University

What Pitfalls Related to the Current and Future AI Landscape Should Employers Keep in Mind?

While AI offers promise for improving the efficiency and quality of health care, its marked growth has occurred quickly and without coordinated oversight and regulation. As AI technologies continue to advance, it becomes paramount to understand potential risks, which include but aren’t limited to those related to:


Data privacy and security

To yield maximum benefits from AI tools, large integrated datasets are needed. This presents major challenges, as employers and their partners seek to protect these datasets, which often contain personal health data, from unauthorized access. Thus, it is imperative that all data housing systems and their administrators comply with data protection regulations and employ secure storage and transmission protocols. Further, employers should consider how best to inform patients about how their data may be used and to obtain their consent for that use.


Bias

Ensuring that AI is unbiased is another large concern, particularly since algorithms have been found to contain bias, which can influence health care access and outcomes. This is the result of algorithms “learning from a biased dataset in a biased society and health care system,” said Ziad Obermeyer, MD, Blue Cross of California Distinguished Associate Professor of Health Policy and Management at UC Berkeley School of Public Health on a Business Group on Health podcast.23


False information (i.e., AI hallucinations)

AI tools can generate false or fabricated information, often referred to as hallucinations. Such inaccuracies can erode the trust and reliability of chatbots or virtual assistants, affecting their usefulness and credibility. One study from Stanford showed that in clinical scenarios, ChatGPT 3.5 generated fabricated responses 9% of the time.26 In May 2023, the World Health Organization called for caution in the deployment of AI in health care due to the possibility that it can be used to generate misleading or inaccurate content that spreads disinformation.27


Is it Possible to Overcome These Pitfalls to Fully Realize the Potential of AI?

As excitement over the potential of AI builds, experts in the field offer a word of caution, pointing to the “two AI winters”: the first, in the late 1970s, was characterized by reduced funding and enthusiasm due to unmet expectations; the second, in the late 1980s, resulted from limited progress and skepticism about the viability of artificial intelligence technologies. “These were two chapters in the not-so-distant past where there was a whole bunch of overpromising and underdelivering, which led to the loss of funding, stagnation, and faith in what could be delivered and how and when,” says Israni. Today, there is hope that a deeper understanding of the perils of AI will help us overcome them and avoid another winter.

Charting a positive path forward will require, among other things, transparency, due diligence and the development of a consistent regulatory framework. These are essential to understanding and validating the reasoning behind AI-generated insights, ensuring that AI tools integrate appropriately with other technical and legal requirements and, importantly, correcting missteps when they occur. For employers, this may involve requiring vendor partners to disclose and explain how AI is being used to augment benefits and programs, the practices they have in place to protect confidential patient and program data, their approaches to identifying and rectifying potential biases, and how they take corrective action when issues or discrepancies arise.

The path forward will also require a focus on human-centered design and collaboration, including partnerships between employers and the health and well-being companies they work with. As summarized by Israni, “Collectively, we must conscientiously develop this technology: to humans, for humans and by humans to build an AI-augmented world, where the human experience is complemented by AI technology.”


For additional information, listen to the Business Group’s podcast, Artificial Intelligence in Health Care: Its Perils (Bias) and Potential.



TABLE OF CONTENTS

  1. How Is AI Being Used in Clinical Care Today?
  2. How Can AI Support Employer-sponsored Health Care?
  3. What Pitfalls Related to the Current and Future AI Landscape Should Employers Keep in Mind?
  4. Is it Possible to Overcome These Pitfalls to Fully Realize the Potential of AI?