Article | FINEX Observer

AI‑driven fraud and corporate crime: Risks, controls and insurance implications

By Dana Wells | February 25, 2026

AI is reshaping corporate crime as fraudsters use deepfakes, voice cloning and automated scams. Companies must strengthen controls, leverage AI detection and review insurance coverage.
Financial, Executive and Professional Risks (FINEX)
Artificial Intelligence

Artificial intelligence (AI) is fundamentally redefining the financial crime landscape. Criminals who previously relied on simple impersonation schemes or phishing emails to access confidential information and private networks are now leveraging AI to commit fraud on an ever-expanding, more efficient scale. Let’s consider five ways AI is actively being used to defraud companies.

How criminals use AI to commit fraud

  1. Voice cloning or spoofing

    Fraudsters create realistic voice clones from publicly available audio clips to impersonate executives, tricking employees and other individuals into transferring funds.

  2. Deepfakes

    Utilizing video recordings obtained from corporate websites and social media, fraudsters use AI to develop deepfake videos to impersonate executives or other employees. These videos are used to initiate or validate fraudulent transfers of funds.

  3. Automated scams

    AI generates flawless phishing emails, texts and fraudulent websites, making scams harder to detect. These tactics are often seen in business email compromise (BEC) scams, where an initial email or ongoing email chain tricks the recipient into clicking a link or sending money to what appears to be a legitimate, trusted source.

  4. Synthetic identities

    Criminals use AI to forge documents and create fake accounts, undermining traditional verification methods.

  5. Casting a wider net

    Criminals are using AI to expand their operations beyond large firms. They’re now targeting midsized and small businesses, where the payoffs may be smaller but AI makes the process efficient and profitable.

According to Feedzai, a global leader in AI-native financial crime protection, more than half of fraud now involves the use of artificial intelligence, indicating that despite mitigating efforts by the Federal Bureau of Investigation (FBI), the Department of Justice and the private sector to thwart these attempts, criminals continue to gain ground. The FBI’s 2024 Internet Crime Report details an alarming trend, reporting that criminals successfully absconded with $16.6 billion in 2024, which is an increase of more than 30% year-over-year from the $12.5 billion reported in 2023.

What AI‑driven fraud means for your business

While these statistics highlight a growing challenge, many companies are proactively working to protect themselves from the threat of AI-driven fraud:

  • Evaluate your controls: AI heightens the ability of fraudsters to exploit procedural weaknesses. Companies should ensure the segregation of duties, conduct employee awareness training around AI-related risks and maintain strong verification processes. Call-back verification to a known number remains the best method for authenticating instructions involving changes to account information or transfer instructions.
  • Fraud detection with AI: Just as criminals use AI to exploit vulnerabilities, machine learning models can analyze vast datasets in real time to uncover anomalies and suspicious activity. Financial institutions are increasingly using AI to affirm the numerous transfer instructions they receive daily. Likewise, retail and other commercial companies have found opportunities to minimize fraud through the use of artificial intelligence.
  • Review your insurance coverage: Crime insurers continue to suggest that the use of AI will not impact an insured’s ability to recover under their crime policy for what would be an otherwise covered claim. However, it is worthwhile to review relevant policy terms that may impact the clear intent of the policy.
    • Social engineering or fraudulent transfer insuring agreements differ by carrier. Confirming that the coverage language does not require the fraudulent instruction to be made by a “natural person” best aligns with what is broadly understood to be the intent of these insuring agreements.
    • Policy language commonly defines employee as a “natural person.” The employee theft insuring agreement would require the internal theft to be at the behest of an employee. Should AI begin to fill roles traditionally performed by “natural person” employees, the availability of coverage may be called into question.
  • Ensuring policy language aligns with your company’s internal procedures and considers the potential for fraudulent, AI-induced transfers may maximize the accessibility of the corporate crime policy in the event of a loss.
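The AI-based fraud detection described above can be illustrated in miniature. The sketch below flags outlier payment amounts with a simple z-score rule — a toy stand-in for the machine learning models the article refers to. The function name, sample data and threshold are all hypothetical; production systems rely on trained models over far richer features than a single amount column.

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.5):
    """Flag payments whose amount deviates more than `threshold`
    standard deviations from the historical mean (z-score rule)."""
    mu, sigma = mean(amounts), stdev(amounts)
    # Guard against a zero standard deviation (all amounts identical)
    return [a for a in amounts if sigma and abs(a - mu) / sigma > threshold]

# Illustrative wire amounts with one outsized transfer mixed in
history = [1_200, 980, 1_050, 1_310, 1_150, 990, 1_020, 48_000, 1_100, 1_240]
print(flag_anomalies(history))  # → [48000]
```

A static threshold like this is only a baseline; the real value of AI-driven detection is that models adapt to each account’s normal behavior rather than a fixed rule.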

Key takeaways: Preparing for corporate crime in the age of AI

AI is both a powerful tool for criminals and a vital defense for companies. For businesses and financial institutions, crime insurance remains a critical safeguard — but policies must evolve to consider emerging risks. By staying informed, strengthening internal controls and working closely with insurers, clients can protect themselves against the next generation of fraud and cybercrime.

Disclaimer

WTW hopes you found the general information provided here informative and helpful. The information contained herein is not intended to constitute legal or other professional advice and should not be relied upon in lieu of consultation with your own legal advisors. In the event you would like more information regarding your insurance coverage, please do not hesitate to reach out to us. In North America, WTW offers insurance products through licensed entities, including Willis Towers Watson Northeast, Inc. (in the United States) and Willis Canada Inc. (in Canada).

Author

Dana Wells
Associate Director – Financial Institutions, FINEX Global