August 23, 2023
As the fastest technology adoption in history, ChatGPT hit 100 million users just 2 months after its release.1 This meteoric rise has electrified public interest in the current and future use of generative artificial intelligence (AI) and sparked rigorous discourse about its potential and drawbacks. Given AI’s current influence on health care and its fast-growing potential, employers, employees and health care service providers likely have questions about how advancements in AI may help or hinder the Quintuple Aim.2
This article begins to answer some of these questions by highlighting key ways AI is being deployed in health care, both in clinical settings and in employer-sponsored programs. The article also includes important considerations about the use of AI, particularly those related to ethics and equity.3
Generative AI is a field of artificial intelligence focused on creating new and original content using neural networks, which are computational models inspired by the human brain. These networks learn from vast amounts of data to understand patterns, styles and structures, enabling them to generate realistic and coherent outputs, such as images, text or music, that mimic human creativity. Neural networks within generative AI processes use layers of interconnected nodes that work collaboratively to transform input data into meaningful and innovative creations.4
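The layered transformation described above can be made concrete with a toy example. The sketch below is illustrative only: the layer sizes, the random placeholder weights and the tanh nonlinearity are arbitrary choices, not drawn from any real generative model. It shows how interconnected nodes pass input data through successive layers to produce an output:

```python
import math
import random

random.seed(0)  # fixed seed so the placeholder weights are reproducible

def layer(inputs, weights, biases):
    """One fully connected layer: each output node sums its weighted
    inputs, adds a bias, then applies a nonlinearity (tanh here)."""
    return [
        math.tanh(sum(w * x for w, x in zip(node_weights, inputs)) + b)
        for node_weights, b in zip(weights, biases)
    ]

def tiny_network(inputs):
    """Two stacked layers: 3 inputs -> 4 hidden nodes -> 2 outputs.
    Weights are random placeholders; real networks learn them from data."""
    w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
    b1 = [0.0] * 4
    w2 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]
    b2 = [0.0] * 2
    hidden = layer(inputs, w1, b1)   # first transformation of the input
    return layer(hidden, w2, b2)     # second transformation -> output

outputs = tiny_network([0.5, -0.2, 0.8])
print(outputs)  # two values in (-1, 1), one per output node
```

Production generative models follow the same principle, but with billions of weights learned from vast training data rather than random placeholders.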
How Is AI Being Used in Clinical Care Today?
With eye-catching headlines like “AI Improves Breast Cancer Detection Rate by 20%,” it may come as no surprise that the use of AI in the health care system is widespread.5 According to Grandview Research, the global health care market for artificial intelligence is expected to grow at a compound annual rate of 37.5% from 2023 to 2030, reaching $143 billion.6
The adoption of AI in clinical settings has shown its potential to help health systems process vast amounts of medical data, leading to improved diagnostic accuracy,7 personalized treatment plans8 and ultimately better patient satisfaction.9,10 For example, AI algorithms are being deployed in medical imaging to identify tumors in CT scans; to diagnose disease by analyzing patient data, assisting with treatment planning, medication management and the prediction of patient outcomes; and to power wearable devices so that health care providers can monitor patients' vital signs remotely, detect anomalies and undertake data-informed interventions.11,12 These examples are in addition to the ways AI is being used to assist health care providers with routine tasks like transcribing patient notes and answering simple medical questions.
A growing body of research supports the use of AI in clinical care. A Harvard study found that an AI algorithm was able to instantly identify the type and severity of a specific brain tumor, something that previously took weeks to accomplish.13 And a study published in JAMA Internal Medicine in 2023 compared chatbot and physician answers to patient questions posted to a public online forum; the licensed health care professionals who evaluated the responses rated the chatbot answers as more empathetic than the physician answers and preferred the chatbot responses in 79% of evaluations.14
How Can AI Support Employer-sponsored Health Care?
Listed below are numerous ways that AI is being used to support employer-sponsored health care.
- Prior authorization and administrative support: AI-powered tools are being used to complete tasks that may enhance the employee and provider experience, including improving the prior authorization process. A McKinsey analysis determined that AI-enabled prior authorization programs had the potential to automate 75% of the process, enabling faster and more accurate decision-making, reducing costs and ultimately, improving the patient experience.15
- Benefit selection and navigation: Benefit platforms that leverage AI-enabled chatbots can improve employees' understanding of health benefits by addressing their questions and offering personalized guidance and clarifications about available offerings, which can be particularly helpful during open enrollment. These chatbots have been deployed to provide an adaptive, step-by-step tutorial to members walking through the enrollment process.16
- Health care fraud, waste and abuse reduction: AI solutions are being used by health insurers to combat fraud, waste and abuse, which rose during the pandemic, reaching $300 billion in annual costs.17 For example, one AI-based business intelligence tool evaluated fee-for-service payments over 2 years in the Iowa Medicaid population and recovered $41.5 million in payments.18
- Risk identification and early intervention: AI can analyze large datasets from sources like health assessments, wearables and electronic health records to identify and predict potential health issues and trends as a result of certain medical conditions. These conditions, including diabetes, musculoskeletal problems or maternal health complications, may necessitate early intervention.19,20,21 Additionally, AI is being used to provide personalized recommendations and interventions, such as health coaching and nutrition counseling, so that employees can better manage their health.22
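The risk-identification pattern described above can be sketched in a few lines. Everything in this example is hypothetical: the record fields, the weights and the outreach threshold are invented for illustration, and a production system would use a model trained on real claims and health-assessment data rather than hand-set rules:

```python
# Illustrative only: fields, weights, and threshold are hypothetical,
# not taken from any real employer program or clinical model.
def diabetes_risk_score(record):
    """Crude weighted score over a few health-assessment fields."""
    score = 0.0
    if record["bmi"] >= 30:
        score += 0.35
    if record["fasting_glucose_mg_dl"] >= 100:
        score += 0.40
    if record["family_history"]:
        score += 0.15
    if record["age"] >= 45:
        score += 0.10
    return score

def flag_for_outreach(records, threshold=0.5):
    """Return member IDs whose score crosses the threshold,
    i.e., candidates for early intervention such as coaching."""
    return [r["member_id"] for r in records if diabetes_risk_score(r) >= threshold]

members = [
    {"member_id": "A1", "age": 52, "bmi": 31.0,
     "fasting_glucose_mg_dl": 104, "family_history": True},
    {"member_id": "B2", "age": 29, "bmi": 24.0,
     "fasting_glucose_mg_dl": 88, "family_history": False},
]
print(flag_for_outreach(members))  # ['A1']
```

The flag-and-intervene loop is the same regardless of how the score is produced; a real program would route flagged members to health coaching, nutrition counseling or clinical outreach.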
What Pitfalls Related to the Current and Future AI Landscape Should Employers Keep in Mind?
While AI offers promise for improving the efficiency and quality of health care, its marked growth has occurred quickly and without coordinated oversight and regulation. As AI technologies continue to advance, it becomes paramount to understand potential risks, which include but aren’t limited to those related to:
Data privacy and security
Yielding maximum benefit from AI tools requires large integrated datasets. This presents major challenges, as employers and their partners seek to protect these datasets, which often contain personal health data, from unauthorized access. Thus, it is imperative that all data housing systems and their administrators comply with data protection regulations and employ secure storage and transmission protocols. Further, employers should consider the best way to inform patients about how their data may be used and to obtain their consent for that use.
Algorithmic bias
Ensuring that AI is unbiased is another major concern, particularly since algorithms have been found to contain bias, which can influence health care access and outcomes. This is the result of algorithms “learning from a biased dataset in a biased society and health care system,” said Ziad Obermeyer, MD, Blue Cross of California Distinguished Associate Professor of Health Policy and Management at UC Berkeley School of Public Health, on a Business Group on Health podcast.23
False information (i.e., AI hallucinations)
AI tools can generate false or fabricated information, a phenomenon known as hallucination; chatbots, for example, have been shown to cite journal articles that do not exist.25 Such inaccuracies can erode the trust and reliability of chatbots or virtual assistants, affecting their usefulness and credibility. One study from Stanford showed that in clinical scenarios, GPT-3.5 generated fabricated responses 9% of the time.26 In May, the World Health Organization called for caution in the deployment of AI in health care due to the possibility that it can be used to generate misleading or inaccurate information that can spread disinformation.27
Is it Possible to Overcome These Pitfalls to Fully Realize the Potential of AI?
As excitement over the potential of AI builds, experts in the field offer a word of caution, pointing to the two “AI winters”: the first, in the late 1970s, was characterized by reduced funding and enthusiasm due to unmet expectations; the second, in the late 1980s, resulted from limited progress and skepticism about the viability of artificial intelligence technologies. “These were two chapters in the not-so-distant past where there was a whole bunch of overpromising and underdelivering, which led to the loss of funding, stagnation, and faith in what could be delivered and how and when,” says Sonoo Thadaney Israni, coauthor of a JAMA viewpoint on humanizing artificial intelligence.24 Today, there is hope that a deeper understanding of the perils of AI will help us overcome them and avoid another winter.
Charting a positive path forward will require, among other things, transparency, due diligence and the development of a consistent regulatory framework. These are essential to understand and validate the reasoning behind AI-generated insights, ensure that they integrate appropriately with other technical and legal requirements and importantly, to correct missteps when they occur. For employers, this may involve requiring vendor partners to disclose and explain the ways AI is being used to augment benefits and programs, the practices they have in place to protect confidential patient and program data, approaches to identify and rectify potential biases or missteps, and how they take corrective actions to address any issues or discrepancies that may arise.
The path forward will also require a focus on human-centered design and collaboration, including partnerships between employers and the health and well-being companies they work with. As summarized by Israni, “Collectively, we must conscientiously develop this technology: to humans, for humans and by humans to build an AI-augmented world, where the human experience is complemented by AI technology.”
For additional information, listen to the Business Group’s podcast, Artificial Intelligence in Health Care: Its Perils (Bias) and Potential.
- 1 | Hu K. ChatGPT sets record for fastest-growing user base - analyst notes. Reuters. February 1, 2023. https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/. Accessed July 24, 2023.
- 2 | Nundy S, Cooper LA, Mate KS. The Quintuple Aim for Health Care Improvement: A New Imperative to Advance Health Equity. JAMA. 2022;327(6):521–522. doi:10.1001/jama.2021.25181
- 3 | Nundy S, Cooper L, Kelsay E. Employers Can Do More to Advance Health Equity. January 1, 2023. https://hbr.org/2023/01/employers-can-do-more-to-advance-health-equity. Accessed August 4, 2023.
- 4 | McKinsey & Company. What is generative AI? January 19, 2023. https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-generative-ai. Accessed July 24, 2023.
- 5 | Furlong A. AI improves breast cancer detection rate by 20 percent. August 2, 2023. https://www.politico.eu/article/ai-improves-breast-cancer-detection-rate-20-percent-swedish-study/. Accessed July 24, 2023.
- 6 | Grandview Research. Artificial Intelligence in Health Care. July 24, 2023. https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-healthcare-market#:~:text=The%20global%20artificial%20intelligence%20in,37.5%25%20from%202023%20to%202030. Accessed August 4, 2023.
- 7 | Furlong A. AI improves breast cancer detection rate by 20 percent. August 2, 2023. https://www.politico.eu/article/ai-improves-breast-cancer-detection-rate-20-percent-swedish-study/. Accessed July 24, 2023.
- 8 | Rudy M. AI tool gives doctors personalized Alzheimer’s treatment plans for dementia patients. New York Post. May 8, 2023. https://nypost.com/2023/05/08/ai-tool-gives-doctors-personalized-alzheimers-treatment-plans/. Accessed July 24, 2023.
- 9 | Whitestone N, et al. Feasibility and acceptance of artificial intelligence-based diabetic retinopathy screening in Rwanda. Br J Ophthalmol. 2023. doi:10.1136/bjo-2022-322683.
- 10 | Yang YC, Islam SU, Noor A, Khan S, Afsar W, Nazir S. Influential usage of big data and artificial intelligence in healthcare. Comput Math Methods Med. 2021 Sep 6;2021:5812499. doi: 10.1155/2021/5812499. PMID: 34527076; PMCID: PMC8437645.
- 11 | U.S. Government Accountability Office. Machine Learning’s Potential to Improve Medical Diagnosis. https://www.gao.gov/blog/machine-learnings-potential-improve-medical-diagnosis. Accessed July 24, 2023.
- 12 | Dileep G, Gianchandani Gyani SG. Artificial intelligence in breast cancer screening and diagnosis. Cureus. Oct 2022;14(10):e30318. doi:10.7759/cureus.30318.
- 13 | Talaga R. AI tool can predict a brain tumor’s profile instantly: Study. Becker’s Health IT. July 7, 2023. https://www.beckershospitalreview.com/innovation/ai-tool-can-predict-a-brain-tumors-profile-instantly-study.html. Accessed July 24, 2023.
- 14 | Ayers JW, Poliak A, Dredze M, et al. Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Intern Med. 2023;183(6):589-596. doi:10.1001/jamainternmed.2023.1838.
- 15 | Sharma AD. Transformative AI to revamp prior authorizations. Sagility. 2023. https://www.healthcaredive.com/spons/transformative-ai-to-revamp-prior-authorizations/646831/. Accessed August 4, 2023.
- 16 | Society for Human Resource Management (SHRM). Using artificial intelligence for employment purposes. https://www.shrm.org/resourcesandtools/tools-and-samples/toolkits/pages/artificial-intelligence-employment-purposes.aspx. Accessed August 4, 2023.
- 17 | PYMNTS. Artificial Intelligence is ‘Shining Star’ in Fight Against Healthcare Payments Fraud. October 12, 2021. https://www.pymnts.com/healthcare/2021/artificial-intelligence-shining-star-fight-against-fraud/. Accessed July 24, 2023.
- 18 | Johnson KB, Wei WQ, Weeraratne D, et al. Precision medicine, AI, and the future of personalized health care. Clin Transl Sci. Jan 2021;14(1):86-93. doi:10.1111/cts.12884.
- 19 | Business Wire. Value-based healthcare platform identifies individuals at risk for diabetes with over 80% accuracy. April 18, 2023. https://www.businesswire.com/news/home/20230418005451/en/Value-Based-Healthcare-Platform-Identifies-Individuals-at-Risk-for-Diabetes-With-Over-80-Accuracy. Accessed July 24, 2023.
- 20 | Hartnett K. How Northwell closes maternal health disparities with an AI chatbot. Modern Healthcare. June 23, 2023. https://www.modernhealthcare.com/digital-health/northwell-closes-maternal-health-disparities-ai-chatbot. Accessed July 24, 2023.
- 21 | Baum S. An AI-enabled approach to improve access to physical therapy for self-insured employers. MedCity News. July 18, 2023. https://medcitynews.com/2023/07/an-ai-enabled-approach-to-improve-access-to-physical-therapy-for-self-insured-employers/. Accessed July 24, 2023.
- 22 | Business Wire. Hello Heart adds breakthrough artificial intelligence (AI) capabilities to empower users to make better choices. https://www.businesswire.com/news/home/20221114005754/en/Hello-Heart-Adds-Breakthrough-Artificial-Intelligence-AI-Capabilities-to-Empower-Users-to-Make-Better-Choices. Accessed July 24, 2023.
- 23 | Business Group on Health. Artificial intelligence in health care: Its perils (bias) and potential. https://www.businessgrouphealth.org/en/resources/artificial-intelligence-in-health-care-its-perils-and-potential. Accessed July 24, 2023.
- 24 | Israni ST. Humanizing artificial intelligence. Viewpoint. JAMA. 2019;doi:10.1001/jama.2018.19398.
- 25 | Chen A, Chen DO. Accuracy of chatbots in citing journal articles. JAMA Network Open. 2023;6(8):e2327647. doi:10.1001/jamanetworkopen.2023.27647.
- 26 | Dash D, Horvitz E, Shah N. How well do large language models support clinician information needs? Stanford University Human-Centered Artificial Intelligence. March 31, 2023. https://hai.stanford.edu/news/how-well-do-large-language-models-support-clinician-information-needs. Accessed July 24, 2023.
- 27 | World Health Organization. WHO calls for safe and ethical AI for health. May 16, 2023. https://www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health. Accessed July 24, 2023.