Artificial intelligence (AI) is fundamentally redefining the financial crime landscape. Criminals who previously relied on simple impersonation schemes or phishing emails to access confidential information and private networks are now leveraging AI to commit fraud at an ever-expanding, more efficient scale. Let’s consider five ways AI is actively being used to defraud companies.
01. Fraudsters create realistic voice clones from publicly available audio clips to impersonate executives, tricking employees and other individuals into transferring funds.
02. Using video recordings obtained from corporate websites and social media, fraudsters develop AI deepfake videos to impersonate executives or other employees. These videos are used to initiate or validate fraudulent transfers of funds.
03. AI generates flawless phishing emails, texts and fraudulent websites, making scams harder to detect. These tactics are often seen in business email compromise (BEC) scams, where an initial email or ongoing email chain tricks the recipient into clicking a link or sending money to what appears to be a legitimate, trusted source.
04. Criminals use AI to forge documents and create fake accounts, undermining traditional verification methods.
05. Criminals are using AI to expand their operations beyond large firms. They’re now targeting midsized and small businesses, where payoffs may be smaller but AI makes the process efficient and profitable.
According to Feedzai, a global leader in AI-native financial crime protection, more than half of fraud now involves artificial intelligence — indicating that despite mitigation efforts by the Federal Bureau of Investigation (FBI), the Department of Justice and the private sector, criminals continue to gain ground. The FBI’s 2024 Internet Crime Report details an alarming trend: criminals stole $16.6 billion in 2024, an increase of more than 30% over the $12.5 billion reported in 2023.
While these statistics highlight a growing challenge, many companies are proactively working to protect themselves from the growing threat of AI-driven fraud. Ensuring that policy language aligns with your company’s internal procedures and accounts for the potential of fraudulent AI-induced transfers can maximize the likelihood that a corporate crime policy will respond in the event of a loss.
AI is both a powerful tool for criminals and a vital defense for companies. For businesses and financial institutions, crime insurance remains a critical safeguard — but policies must evolve to address emerging risks. By staying informed, strengthening internal controls and working closely with insurers, clients can protect themselves against the next generation of fraud and cybercrime.
WTW hopes you found the general information provided here informative and helpful. The information contained herein is not intended to constitute legal or other professional advice and should not be relied upon in lieu of consultation with your own legal advisors. In the event you would like more information regarding your insurance coverage, please do not hesitate to reach out to us. In North America, WTW offers insurance products through licensed entities, including Willis Towers Watson Northeast, Inc. (in the United States) and Willis Canada Inc. (in Canada).