Artificial intelligence (AI) can enhance the delivery of medical care and create both clinical and administrative efficiencies, but it also poses new risks for healthcare delivery organizations. We review the substantial opportunities AI presents for healthcare organizations and examine steps they can take to mitigate the risks of deploying it.
Artificial intelligence can improve healthcare dramatically by enabling deeper data analytics, facilitating image processing, and increasing operational efficiency. For instance, AI can improve the interpretation of imaging tests, identifying pixel-level changes that could be missed even by an experienced eye. AI can help decrease the rate of missed diagnoses by ingesting clinical data and suggesting obscure diagnoses that might not occur to clinicians, and it can help shorten the time required to discover new drugs. AI-based algorithms can better predict risk, enabling low-volume maternity hospitals to transfer high-risk pregnant patients to higher-volume facilities and reduce maternal and fetal adverse outcomes. Pharmaceutical and laboratory companies are already using machine learning to design algorithms that better predict drug effectiveness or patient outcomes. Providers are already using generative AI to transcribe their office visits, optimize their schedules, and draft communications about laboratory results.
But integration of artificial intelligence into medical care delivery and operations entails new risks. For instance, artificial intelligence is often trained on real-world data collected in an environment that includes substantial disparities and discrimination. Researchers found that algorithms based on machine learning and vast quantities of claims data underestimated the clinical needs of people of color because those patients had historically used fewer medical services. Artificial intelligence is also known to hallucinate: researchers have “caught” generative AI models fabricating references, and answers to simple clinical questions are often demonstrably wrong (see, for example, “Diagnostic Accuracy of a Large Language Model in Pediatric Case Studies”). Flesh-and-blood physicians give false answers, too, but our tolerance of confabulation from machines will be much lower than our tolerance of foibles in human beings.
Some uses of artificial intelligence could also raise patient privacy concerns. Generative AI models learn from the data they are given and could inadvertently disclose patient names or confidential health information in answering subsequent queries. We have already seen examples of industrial secrets leaking after being uploaded into a generative AI model. Some AI models appear to have been trained on material that is protected by copyright, and some authors and artists are currently suing major AI companies for inappropriate use of their material. Royalty payments could increase the future cost of AI, although we have not yet seen litigation targeting users of AI for copyright infringement embedded in the underlying model.
Artificial intelligence can also enable “deep fakes” and can create or amplify misinformation. AI can generate songs falsely attributed to real artists and create realistic videos depicting public figures saying things they never said. AI could also generate more personalized and convincing “phishing” or other malicious communications. We already suffer from declining trust in many institutions, and difficulty ascertaining “truth” could undermine patient faith in the healthcare delivery system.
Finally, artificial intelligence uses a prodigious amount of energy to power its enormously complex calculations. Building AI models requires highly talented engineers and large amounts of capital, which may limit competition in this space. That could lead to high prices, which in turn could increase the cost of care.
Knowing where artificial intelligence is being used by and for your organization can help identify business opportunities and mitigate risk. When the Government Accountability Office audited AI use at U.S. government agencies, it found that 20 of 23 agencies had identified 1,200 use cases for artificial intelligence; about a fifth were already in production, and half of those had been in use for over a year. Healthcare organizations can build a similar inventory, covering both current and proposed future uses of AI as well as AI used by vendors, who should also comply with organizational standards.
Artificial intelligence systems are trained on data from the real world, a world filled with disparities. Use cases for implementing AI should explicitly state how the tools are designed to promote health equity, including drawing on data from diverse sources and, in most instances, avoiding race-specific algorithms. Companies can periodically audit AI programs to assess whether algorithms should be adjusted to achieve health equity.
Artificial intelligence can inadvertently lead to breaches of privacy, as systems can be queried to provide personally identifiable information. Systems should have strong privacy safeguards, and private information should be available only where it is essential. An auditable trail of disclosures, already standard in electronic health record systems, can protect against privacy breaches and allow forensic evaluation. Companies should be especially vigilant in ensuring that vendors with access to claims or medical records have appropriate privacy protections in place.
Proprietary information, if uploaded into a generally available artificial intelligence system, can be queried by competitors or obtained by hackers or others with malicious intent. Some companies are deploying AI systems that keep proprietary information within the company firewall, and these systems are likely to become more common in the coming years. Employees with access to secure AI tools inside the firewall are less likely to inadvertently upload proprietary data into public AI models.
Healthcare organizations need clear policies and procedures for the use of AI but must be careful to avoid overly restrictive rules, which could impair innovation and lead to employee dissatisfaction. Restricting AI use to more senior employees could backfire, since more junior colleagues may be quicker to find new ways of adding value with AI. Burdensome approval processes increase administrative cost and dishearten employees. Employees and customers should also know when they are interacting with an AI model rather than a human being.
The field of artificial intelligence is evolving rapidly, and rules established now might soon be obsolete. Healthcare organizations can establish a working group to keep abreast of advances in the field and of evolving business needs and uses. This group can report regularly to senior leadership, which helps keep its work aligned with corporate goals and risk tolerance.
Directors and officers (D&O) and cyber insurance can protect healthcare delivery organizations from risks arising from artificial intelligence. D&O coverage tends to be written broadly and generally applies as long as employees or officers are acting in good faith within their capacities. However, regulatory coverage is often more limited, and many companies “stack” policies to gain higher coverage for adverse regulatory rulings. WTW has not yet identified any explicit exclusions for cyber losses arising from AI in the current marketplace. However, some cyber policies may contain exclusions for wrongful collection and wrongful use of data, which could be problematic if organizations are collecting patient or customer data to inform their AI models.
The good news is that the cyber marketplace is currently favorable for buyers, with stable premiums and availability of expansive coverage despite a significant uptick in ransomware attacks. Additionally, companies that develop software and services that leverage AI as part of their product should closely evaluate the need for technology errors and omissions (Tech E&O) coverage. There is no insurance available for business underperformance due to flawed implementation of AI.
Artificial intelligence can help healthcare organizations meet their missions by increasing efficiency, decreasing errors, and even reducing provider burnout. Healthcare organizations can take the steps outlined here to create guardrails and mitigate risk while their employees and patients benefit from this new technology.
Willis Towers Watson hopes you found the general information provided in this publication informative and helpful. The information contained herein is not intended to constitute legal or other professional advice and should not be relied upon in lieu of consultation with your own legal advisors. In the event you would like more information regarding your insurance coverage, please do not hesitate to reach out to us. In North America, Willis Towers Watson offers insurance products through licensed entities, including Willis Towers Watson Northeast, Inc. (in the United States) and Willis Canada Inc. (in Canada).