Navigating the AI Frontier: The Promising Horizons and Ethical Quandaries in Health Care

This blog post discusses how the rapid adoption of AI has underscored both its potential and its pitfalls.

ChatGPT reached 100 million users just two months after its release, making it the fastest technology adoption in history. This meteoric rise has electrified public interest in the current and future use of generative artificial intelligence (AI) and sparked vigorous discourse about its potential and drawbacks. Given AI’s current influence on health care and its fast-growing potential, employers, employees and health care service providers likely have questions about how AI is being used, as well as how it can help or hinder the Quintuple Aim.

Generative AI

Generative AI is a field of artificial intelligence focused on creating new and original content using neural networks, which are computational models inspired by the human brain. These networks learn from vast amounts of data to understand patterns, styles and structures, enabling them to generate realistic and coherent outputs, such as images, text or music, that mimic human creativity. The neural networks behind generative AI are built from layers of interconnected nodes that work together to transform input data into meaningful and innovative creations.
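To make the idea of "layers of interconnected nodes" concrete, here is a minimal, purely illustrative sketch in Python of an untrained two-layer network passing an input through successive layers of weighted nodes. The sizes and values are hypothetical and not drawn from any specific health care tool; real generative models stack many such layers and learn their weights from large training datasets.

```python
# Minimal illustrative sketch (hypothetical, not a production system):
# a tiny two-layer neural network that transforms an input vector
# by passing it through layers of weighted, interconnected nodes.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    """One layer of nodes: weighted sum of the inputs plus a nonlinearity."""
    return np.tanh(inputs @ weights + biases)

# Illustrative sizes: 4 input features, 8 hidden nodes, 3 output values.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

x = rng.normal(size=(1, 4))      # example input data
hidden = layer(x, w1, b1)        # first layer of nodes transforms the input
output = layer(hidden, w2, b2)   # second layer produces the final output
print(output)
```

In an actual generative model, training adjusts those weights so that the outputs reflect the patterns, styles and structures found in the training data.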


AI is being adopted in clinical settings and has proven to be an effective tool for processing vast amounts of medical data, leading to improved diagnostic accuracy, personalized treatment plans and, ultimately, better patient satisfaction.

AI is being used in numerous ways related to employer-sponsored health care, including but not limited to:

  • Prior authorization and administrative support: AI-powered tools are being used to complete tasks that may improve the employee and provider experience, including streamlining the prior authorization process and reducing costs.
  • Benefit selection and navigation: Benefit platforms that leverage AI-enabled chatbots can improve employees' understanding of health benefits by addressing their questions and offering personalized guidance and clarifications about available offerings, which can be particularly helpful during open enrollment.
  • Health care fraud, waste and abuse reduction: AI solutions are being used by health insurers to combat fraud, waste and abuse, which rose during the pandemic, reaching $300 billion in annual costs.
  • Risk identification and early intervention: AI can analyze large datasets from sources like health assessments, wearables and electronic health records to identify and predict potential health issues and trends that necessitate early intervention.

As AI technologies continue to advance, it becomes paramount to understand not only their potential but also their risks. For example, yielding maximum benefits from AI tools requires large, integrated datasets. This presents challenges for employers and their partners because these datasets contain personal health data that must be protected from unauthorized access. Thus, it is imperative that all data systems and administrators comply with data protection regulations and security protocols.

AI also may perpetuate health inequities, as algorithms have been found to contain bias that influences health care access and outcomes. For this important reason, AI must be trained “so that it does not perpetuate and exacerbate existing disparities in health care,” notes Sonoo Thadaney Israni, co-author of the National Academy of Medicine’s Special Publication, AI in Healthcare: The Hope, The Hype, The Promise, The Peril, and executive director of Presence + Program in Bedside Medicine at Stanford University. Israni recently spoke with the Business Group on the topic of equity and AI. Lastly, employers should be aware of AI hallucinations, a phenomenon in which these tools confidently provide incorrect information, which can erode trust in their reliability.

The path forward will require a focus on human-centered design and collaboration, including partnerships between employers and the health and well-being companies they work with. As summarized by Israni, “Collectively, we must conscientiously develop this technology: to humans, for humans, and by humans to build an AI-augmented world, where the human experience is complemented by AI technology.”