
AI in healthcare: How to manage artificial intelligence risk

By Rachel Phillips | November 6, 2023

Artificial intelligence (AI) in healthcare is developing rapidly for patient-facing services. We consider the possibilities of AI and the risks healthcare organisations need to identify and manage.

Applying AI to healthcare areas, such as diagnostics and predictive analytics, offers significant opportunities for improving healthcare outcomes. But while AI-enabled change is rapid and potentially transformative, these advances are generating new risk management considerations.

To help your healthcare organisation navigate emerging AI risks, in this insight we examine:

  • The evolving landscape of AI in healthcare
  • Regulation of AI in healthcare
  • Identifying risks of AI in healthcare
  • Managing the risks of AI in healthcare

The evolving landscape of AI in healthcare

Back in 2020, the World Economic Forum (WEF) predicted that AI would access multiple sources of data to reveal patterns in disease and aid treatment and care. The WEF also forecast that healthcare systems would be able to predict an individual’s risk of certain diseases and suggest preventative measures, and highlighted how AI would help reduce waiting times for patients and improve efficiency in hospitals and health systems.

We can expect to see wider and increasingly advanced applications of AI in healthcare in line with WEF’s predictions, supported by private and public sector investment. For example, this year the U.K. Health and Social Care Secretary announced a £21m AI Diagnostic Fund, which aims to accelerate the deployment of the most promising AI imaging and decision-support tools to help diagnose patients more quickly for conditions such as cancers, strokes, and heart conditions.

In addition, in June the U.K. government announced its intent to host the first global summit on AI safety, which took place last week.

By rapidly analysing vast amounts of data, AI can classify data points, accomplish specific tasks and learn from experience by harnessing machine learning. Examples of AI applications in healthcare include clinical pathway decision-making, wearable tech and off-site or home-based patient monitoring.

AI can also support increased efficiency, helping to address workforce shortages and reduce training costs. For example, radiology departments have increased productivity using healthcare technology that leverages AI to enable faster scan times with higher image resolution. This allows them to scan more patients in a day with diagnostic confidence, while the shorter scan times also improve the patient experience.

We have also seen AI developers working on modelling that can predict heart failure-related health outcomes for veterans, a project launched by a collaboration between regulators and healthcare organisations in the U.K. and U.S.

We’re also seeing AI being used within domiciliary care to analyse data from care workers’ visit reports and produce risk assessments of individuals, predicting the likelihood of falls and hospital admission. Where an alert is triggered, regional service managers compare the automated assessment with the care worker’s written report to make informed decisions about the individual’s care needs, potential intervention measures and escalation to other agencies.
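
As a purely illustrative sketch of the alerting pattern described above, the following assumes risk features have already been extracted from visit reports and scaled to a 0–1 range; the feature names, weights and alert threshold are hypothetical, not taken from any real system.

```python
# Minimal sketch of the alerting pattern described above: an automated
# risk score drawn from visit-report features, with an alert threshold
# that routes the case to a human reviewer. All feature names, weights
# and the threshold are illustrative assumptions, not a real model.

FEATURE_WEIGHTS = {
    "recent_falls": 0.35,        # falls recorded in recent visit reports
    "mobility_decline": 0.25,    # care worker noted reduced mobility
    "missed_medication": 0.20,   # medication adherence issues
    "weight_loss": 0.10,
    "confusion_noted": 0.10,
}

ALERT_THRESHOLD = 0.5  # scores at or above this trigger manager review


def fall_risk_score(visit_features: dict[str, float]) -> float:
    """Combine 0-1 scaled visit-report features into a single risk score."""
    return sum(
        weight * visit_features.get(name, 0.0)
        for name, weight in FEATURE_WEIGHTS.items()
    )


def triage(visit_features: dict[str, float]) -> str:
    """Return an action. Note the human-in-the-loop step: alerts are
    compared with the care worker's written report by a regional
    service manager, not acted on blindly."""
    score = fall_risk_score(visit_features)
    if score >= ALERT_THRESHOLD:
        return f"ALERT ({score:.2f}): refer to service manager for review"
    return f"No alert ({score:.2f}): continue routine monitoring"


if __name__ == "__main__":
    print(triage({"recent_falls": 1.0, "mobility_decline": 0.8}))
```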

Whether for clinical tasks or administrative work, AI can manage repetitive tasks and increase efficiency while reducing error rates for work processes. AI may also help reduce caregiver burnout by taking over some of these tasks.

Earlier this year, for example, care home review site carehome.co.uk reported more than half of care home staff think homes should use AI, such as smart devices, to help care for residents. Carehome.co.uk says AI can help people with limited mobility to regain some of their autonomy by using their voice to control their environment, such as light switches and temperature, as well as enabling them to call friends and family.

Ultimately, healthcare providers should first consider which area of their business requires support or a new approach, and whether AI is the answer.

Regulation of AI in healthcare

Regulatory frameworks and guidelines can play a crucial role in ensuring your healthcare organisation uses and governs AI responsibly. Governments and organisations worldwide are actively working on establishing standards and frameworks for implementing AI ethically and safely. The U.K. government's approach emphasises voluntary compliance using existing regulators and laws, while the EU's proposed AI Act takes a risk-based approach and introduces stringent standards for high-risk AI systems.

In late 2022, the Medicines and Healthcare products Regulatory Agency (MHRA) updated its ‘Software and AI as a Medical Device Change Programme’ to help ensure regulatory requirements for software and AI are clear and that patients are protected.

Meanwhile, the multi-agency AI and Digital Regulations Service was launched in June 2023 to advise the NHS and wider care system on using digital and AI technologies.

Identifying risks of AI in healthcare

While AI holds great promise for healthcare organisations, there are risks and challenges you should be ready to respond to, including ethical issues. These can arise when false information is propagated, and also from the inability of AI to interpret human nuance, which can result in biases, lapses, and unintended consequences in care.

Some assumptions are ‘baked into’ technological programmes, and their effects can emerge only after longer periods of use. For example, in May 2023 the U.S. National Eating Disorders Association (NEDA) replaced its volunteer-run helpline with an AI chatbot. A study suggested that at times the chatbot unexpectedly reinforced harmful behaviours.

To address the risk of AI-made decisions worsening healthcare outcomes for patients based on their profile and background, the NHS is trialling a programme designed to identify algorithmic biases in systems used to administer healthcare.

Healthcare organisations will need to clearly understand what AI can and cannot do, and you will need to perform due diligence in key areas, specifically:

  • Biased decision making: A report from the World Health Organization highlights the challenges and risks of AI, including unethical collection and use of health data, biases encoded in algorithms, and risks to patient safety, cybersecurity and the environment
  • Socioeconomic inequality: Algorithms may create an opportunity for abuses, such as deepfakes, and also raise concerns amongst employees around job security due to increased automation
  • Privacy violations: You will need to take steps to ensure data privacy for virtual care or AI, and avoid breaches of personal health information
  • The validity of AI outputs in light of historical data: AI requires massive data sets in order to ‘learn’, and for these outputs to be valid you will need to seek assurances around the historical patient data that AI applications draw upon
  • Human factors: AI may not be able to interpret all human nuances, which could result in biases, lapses, and unintended consequences in care.

There are also challenges around attitudes to AI in healthcare, including educating your people on what AI can and cannot do and the role of AI in the future of your healthcare organisation.

Some employees may distrust AI or have concerns as to how it may impact them and their job security, which can impact their mental wellbeing and may further compound workforce challenges.

Patients can also be dubious. A Harvard Business Review report, ‘AI Can Outperform Doctors, So Why Don’t Patients Trust It?’, indicated patients are reluctant to use healthcare provided by medical AI even when it outperforms human doctors. This is because they see their medical needs as unique, believing that ‘AI does not take into account one’s idiosyncratic characteristics and circumstances’. In other words, some people don’t believe their healthcare can be adequately addressed by algorithms.

Managing the risks of AI in healthcare

While AI can help reduce some risks, it does not eliminate the possibility of errors resulting in injury, which means healthcare providers, software developers and algorithm designers need to consider the complex challenges and exposures to potential liabilities. For example, with the introduction of AI, who makes the ultimate decision on patient care: the healthcare provider or the technology itself?

To avoid disputes over responsibility and liability, we may expect to see historically separate lines of insurance, such as medical malpractice, cyber insurance, and technology errors and omissions, increasingly combined into one policy underwritten by a single insurer, addressing concerns over the proximate cause of loss and avoiding arguments over which insurer is liable.

Healthcare providers seeking to implement AI should adopt risk mitigation strategies to ensure patient safety and regulatory compliance. These strategies include:

  • Establish policies and procedures – Policies for AI-based applications, devices and wearables should address ethical concerns in AI system development and set guardrails for using AI responsibly to protect patient rights and avoid potential harm
  • Multi-disciplinary product review – Before you implement a new AI-enabled product, service or device, a multi-disciplinary team that includes end users should review it to help guard against unexpected outcomes
  • Systematically test for failures – You should obtain a detailed analysis of the effectiveness of your AI-enabled product, service or device using ‘failure mode and effects analysis’ (FMEA), which works to identify potential modes of failure and their impact (see the first sketch after this list)
  • Invest in training – As AI becomes more integrated into healthcare workflows, ensuring your workforce is adequately trained and educated is essential. Check that your staff understand the capabilities and limitations of AI systems to help mitigate risk and enhance the effective use of AI in providing quality care. These moves could include developing training checklists and educating care teams on escalation strategies where there are concerns over the integrity of a product, service or device, or when injuries occur
  • Review your insurance – Consider how adopting AI-enabled technologies could impact your risk profile and engage your organisation's insurers and/or brokers to review the insurance implications
  • Monitor and record incidents – Your organisation should routinely track and trend all AI-enabled product, service or device incidents that could or did cause harm, and ensure care teams understand the process for reporting such incidents, including notifying regulators as appropriate
  • Include privacy requirements – Proceed with caution with contracted AI vendors and ensure you insert privacy requirements into any agreement
  • Monitor and record accuracy – You should consistently monitor any AI-enabled system after deployment, both to ensure its safety and to measure how accurately AI outputs match clinicians’ choice of action (see the second sketch after this list).
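
To make the ‘failure mode and effects analysis’ step more concrete, the first sketch below uses the standard FMEA risk priority number (RPN), the product of severity, occurrence and detection ratings. The failure modes and ratings shown are illustrative assumptions, not findings from any real system.

```python
# Minimal FMEA sketch for an AI-enabled diagnostic tool. Each failure
# mode is rated 1-10 for severity, occurrence and detection (higher =
# worse / harder to detect); the risk priority number is their product.
# The failure modes and ratings below are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class FailureMode:
    description: str
    severity: int    # impact on the patient if the failure occurs
    occurrence: int  # how often the failure is expected to occur
    detection: int   # how hard the failure is to detect before harm

    @property
    def rpn(self) -> int:
        """Risk priority number: severity x occurrence x detection."""
        return self.severity * self.occurrence * self.detection


modes = [
    FailureMode("Model misses early-stage tumour on scan", 9, 3, 6),
    FailureMode("Output delayed, slowing clinical decision", 4, 5, 2),
    FailureMode("Training-data bias degrades accuracy for a subgroup", 8, 4, 7),
]

# Review and mitigate the highest-RPN failure modes first.
for mode in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {mode.rpn:>3}: {mode.description}")
```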
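
And as one possible approach to the ‘monitor and record accuracy’ step, the second sketch compares logged AI recommendations against clinicians’ eventual choices of action to produce a simple agreement rate; the record format, labels and cases are illustrative assumptions.

```python
# Minimal sketch of post-deployment accuracy monitoring: compare logged
# AI recommendations against the clinician's eventual choice of action
# and report the agreement rate. The record format and labels are
# illustrative assumptions, not a real audit log.
from collections import Counter

# Each logged case pairs the AI recommendation with the clinician's action.
cases = [
    {"ai": "refer_oncology", "clinician": "refer_oncology"},
    {"ai": "routine_followup", "clinician": "refer_oncology"},
    {"ai": "routine_followup", "clinician": "routine_followup"},
]

agreement = sum(c["ai"] == c["clinician"] for c in cases) / len(cases)
print(f"AI-clinician agreement: {agreement:.0%}")

# Disagreements should be reviewed case by case, and a sustained drop
# in agreement should trigger investigation (e.g. data drift or a
# model update).
disagreements = Counter(
    (c["ai"], c["clinician"]) for c in cases if c["ai"] != c["clinician"]
)
for (ai, clinician), n in disagreements.most_common():
    print(f"{n}x AI said {ai!r}, clinician chose {clinician!r}")
```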

Artificial intelligence and augmented intelligence have the potential to revolutionise healthcare, but you will need to manage the risks carefully to implement these technologies successfully without risking harm to patients or your organisation.

By understanding the evolving landscape of AI and addressing the associated risks and challenges, healthcare organisations can leverage emerging technologies to improve efficiency, patient care, and overall outcomes.

To discover smarter ways to understand and mitigate the risks around AI in healthcare, get in touch.

Author

Rachel Phillips
WTW Health and Social Care Leader, GB
