Artificial Intelligence (AI) is transforming professional services, accelerating analytics, automating processes, and enhancing decision-making. But with opportunity comes exposure. AI-related risks don’t fit neatly into one category; they span Professional Indemnity (PI), Technology Errors & Omissions (Tech E&O), and Cyber Liability, creating complex and often overlapping liabilities.
Where AI risk lives in your professional services offering
- Professional Indemnity (PI): AI-driven insights are still your advice. If recommendations based on AI outputs lead to client losses, liability remains with your firm.
- Technology Errors & Omissions (Tech E&O): Customising or integrating AI tools makes you responsible for system failures, misconfigurations, or defective performance.
- Cyber Liability: AI often relies on sensitive data, increasing exposure to breaches, adversarial attacks, and privacy violations.
Why this matters
AI risk is multidimensional: its profile can change dramatically depending on how AI is used. To manage these risks effectively from an insurance perspective, organisations must start with visibility and categorisation.
Map AI usage across the organisation
Identify where and how AI is being used:
- Are you deploying third-party AI tools (e.g., ERP systems with embedded AI)? (This makes you a “deployer” in EU AI Act terminology.)
- Are you developing proprietary models, or substantially modifying third-party tools? (A “provider” under the EU AI Act.)
- Are employees using public or unsecured AI platforms?
- Document data flows: What data is being fed into these systems? Is it sensitive, regulated, or client-owned?
Classify risks by use case
- Third-party AI deployment: vendor vulnerabilities, supply chain attacks, contractual liability.
- Proprietary AI development: model poisoning, IP disputes, compliance failures.
- Unsecured AI tools: data leakage, privacy violations, regulatory penalties.
Each category demands different controls and insurance considerations. Real-world incidents often blur the lines between PI, Tech E&O, and Cyber, creating grey areas where coverage disputes arise. Firms that fail to address these exposures risk uninsured losses, regulatory scrutiny, and erosion of client trust.
Emerging Regulations: The compliance pressure
Governments and regulators are moving fast:
- EU AI Act: Introduces strict obligations for high-risk AI systems, including transparency, explainability, and human oversight. Non-compliance can lead to fines up to €35 million or 7% of global turnover, whichever is greater.
- UK AI Regulation Framework: A principles-based framework focused on accountability and risk-based governance, with sector-specific enforcement from regulators such as the ICO and FCA expected and still developing.
- US State-Level AI Laws: Increasingly mandate disclosure and bias testing for AI used in decision-making. However, there is continued uncertainty over where and how these laws apply, given attempts by the federal government to prohibit state-level regulation.
For professional services firms, this means AI governance isn’t optional—it’s a regulatory requirement. Failure to comply could trigger contractual breaches and regulatory penalties.
Real-world examples
- Case 1: AI-Driven Market Analysis Gone Wrong
A consulting firm used an AI tool to synthesise market signals for a client’s acquisition strategy. The model over-weighted unreliable sources, missing a pending regulatory change. The client overpaid and sued for negligent advice. PI responded—but the insurer questioned whether AI configuration constituted Tech E&O, delaying resolution.
- Case 2: Data Breach via AI Workflow
A firm integrated a third-party AI platform for operational optimisation. A vulnerability in the model exposed sensitive client data, triggering a cyber incident and regulatory investigation. Coverage disputes arose over whether the breach fell under Cyber or Tech E&O.
Key actions for professional services firms
- Identify: Map AI usage across the organisation and classify risks by use case.
- Update Contracts: Define AI limitations, allocate liability, and clarify client responsibilities for data quality and validation.
- Strengthen Governance: Implement explainability standards, establish an AI Governance committee, and maintain robust data lineage documentation.
- Review Insurance: Carefully evaluate AI risk across PI, Tech E&O, and Cyber policies to avoid potential gaps.
- Operational Safeguards: Deploy secure AI environments, conduct adversarial testing, and create approval processes with human-in-the-loop validation for workflows and all client-facing outputs.
Bottom Line
AI accelerates delivery and innovation—but also amplifies liability. Treat AI as a core risk in your methodology, contracting, and insurance strategy. Firms that act now will not only protect themselves but also differentiate through trust and resilience in an AI-driven market.
Quote from Sam Haslam, Practice Leader of WTW’s Risk & Resilience Advisory Practice and creator of WTW’s AI Risk Advisory Services: “Too often perceptions of AI risk are overly narrow: for example, a heavy focus on how AI increases the risk of successful, sophisticated cyber-attacks. That remains a key consideration, but it is critical that important risks around areas such as PI and E&O are also given appropriate weighting. With ‘shadow AI’ – the unapproved use of third-party AI systems – increasingly common in organisations, it’s never been more important to establish strong governance procedures, risk assessment processes, assurance of AI outputs and training for colleagues using AI systems. Through these measures, organisations can better identify, understand and mitigate AI risks such as those outlined in this insight.”