
AI risk management: Four ethical problems you shouldn’t ignore

By Eric Sanchez | November 24, 2025

Ethical risks around IP, privacy, bias and sustainability could expose your organization. Here’s what you need to know to stay ahead of governance gaps.

Making the most of AI means addressing ethical questions as well as compliance requirements. From intellectual property rights, privacy and security to algorithmic bias and environmental concerns, your business shouldn’t underestimate the reputational, legal and operational risks at stake.

To help you identify the governance gaps AI could open in your business, this insight examines four ethical challenges you shouldn’t ignore.

AI intellectual property risks: Who owns AI-generated content?

Generative AI’s increasing use in business is driving a growing risk of intellectual property disputes. For example, the question of who owns AI-generated content is far from settled. Recently, the High Court of England and Wales handed down the first UK judgment addressing alleged intellectual property infringement arising from the use of generative AI. In that case, a global image and stock photo licensing agency brought a claim against the developer of an AI image-generation model. However, the case was concluded on procedural grounds before a firm judgment could be made on the ownership question.

While organizations developing and training their own AI models need to ensure their training data is properly licensed, many businesses use third-party models. In these cases, liability for IP infringement is still evolving. Some AI providers, such as Microsoft with Copilot, offer indemnities or legal protections for users. To avoid costly litigation and reputational damage, review your provider’s terms and stay informed on global IP developments, for example by following the work of the World Intellectual Property Organization (WIPO).

AI privacy compliance: The hidden risks of personalization

If you use AI to personalize services based on customer behavior and financial history, you could be using sensitive personal data, which can introduce significant privacy and security risks.

If this data isn’t properly protected, it can be misused or stolen, leading to regulatory penalties and loss of trust. Compounding the challenge, some AI systems operate as ‘black boxes’: you can see the inputs and the outputs but can’t understand how the system arrived at its results, making it difficult to explain decisions or demonstrate compliance with regulations like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).
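As a minimal sketch of how you might begin to probe a ‘black box’, the Python snippet below uses permutation importance, a model-agnostic technique that shuffles one input at a time and measures how much predictive accuracy drops. The synthetic dataset and random-forest model are illustrative assumptions standing in for a real system, not a compliance tool.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative only: a synthetic stand-in for a customer-behavior dataset.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure the
# resulting drop in accuracy, giving a model-agnostic view of which inputs
# the 'black box' actually relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance={score:.3f}")
```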

You need to be transparent about how your AI models collect, process and store data, and use privacy-preserving techniques, such as differential privacy, alongside robust governance frameworks to maintain compliance and protect customer trust.
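To make ‘differential privacy’ concrete, here is a minimal sketch of its basic building block, the Laplace mechanism, which releases an aggregate statistic with calibrated noise. The dataset, clipping bounds and privacy budget (epsilon) below are illustrative assumptions, not recommended settings.

```python
import numpy as np

def dp_mean(values: np.ndarray, lower: float, upper: float, epsilon: float) -> float:
    """Differentially private mean via the Laplace mechanism.

    Clipping each record to [lower, upper] bounds the sensitivity of the
    mean at (upper - lower) / n, so Laplace noise with scale
    sensitivity / epsilon satisfies epsilon-differential privacy.
    """
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(clipped)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(clipped.mean() + noise)

# Illustrative only: average customer spend released with a privacy budget of 1.0.
spend = np.array([120.0, 85.0, 430.0, 60.0, 215.0])
print(dp_mean(spend, lower=0.0, upper=500.0, epsilon=1.0))
```

A smaller epsilon means more noise and stronger privacy; the right budget is a governance decision, not a purely technical one.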

AI bias in recruitment: A cautionary tale

Biased AI can lead to unfair outcomes, legal exposure and reputational damage. For example, in August 2023, the U.S. Equal Employment Opportunity Commission (EEOC) settled its first-ever AI employment discrimination lawsuit, brought against a virtual tutoring company. The company’s AI-driven hiring system was found to have systematically rejected older applicants based on age, violating anti-discrimination laws. The case set a precedent for regulatory scrutiny of AI in recruitment.

Addressing this requires careful attention to training data, regular fairness audits and human oversight in decision-making processes; one common starting point for a fairness audit is sketched below. Building diverse perspectives into AI systems, backed by human oversight, not only upholds ethical standards but also strengthens operational integrity.
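As an illustration of what a basic fairness audit can look like, this sketch applies the ‘four-fifths’ rule of thumb used in U.S. employment-selection guidance: a group whose selection rate falls below 80% of the highest group’s rate may indicate disparate impact. The hiring outcomes and group labels are invented for illustration.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute per-group selection rates from (group, hired) pairs."""
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in decisions:
        totals[group] += 1
        hires[group] += int(hired)
    return {g: hires[g] / totals[g] for g in totals}

def four_fifths_check(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate, the EEOC's 'four-fifths' rule of thumb for
    potential disparate impact."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: (r, r / best >= threshold) for g, r in rates.items()}

# Illustrative data: (age_band, hired) outcomes from an AI screening tool.
outcomes = (
    [("under_40", True)] * 45 + [("under_40", False)] * 55
    + [("40_plus", True)] * 18 + [("40_plus", False)] * 82
)

for group, (rate, passes) in four_fifths_check(outcomes).items():
    print(f"{group}: rate={rate:.2f}, passes four-fifths check={passes}")
```

A failed check is a signal for human review, not proof of discrimination; the rule is a screening heuristic, not a legal standard in itself.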

AI environmental impact: Balancing innovation with sustainability

Training large models consumes vast amounts of computational power. At the current rate of AI growth, U.S. data centers supporting AI could emit 24 to 44 million metric tons of carbon dioxide per year by 2030, according to a recent Cornell University study.

However, it’s important to distinguish between the environmental impact of training large AI models, which is significant, and the much lower impact of using these models. For most organizations, the primary concern is usage, which has become increasingly efficient. Still, at scale, even small per-query costs can add up, so sustainability should remain a consideration.
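To see how ‘small per-query costs can add up,’ here is a back-of-envelope sketch. Every figure in it (per-query energy, query volume, grid carbon intensity) is an illustrative assumption to be replaced with your own measurements.

```python
# Back-of-envelope sketch of how small per-query costs add up at scale.
# Every figure below is an illustrative assumption, not a measured value.
ENERGY_PER_QUERY_WH = 0.3        # assumed energy per AI query, watt-hours
QUERIES_PER_DAY = 1_000_000      # assumed daily query volume
GRID_INTENSITY_KG_PER_KWH = 0.4  # assumed grid carbon intensity, kg CO2/kWh

annual_kwh = ENERGY_PER_QUERY_WH / 1000 * QUERIES_PER_DAY * 365
annual_tonnes_co2 = annual_kwh * GRID_INTENSITY_KG_PER_KWH / 1000

print(f"Annual energy: {annual_kwh:,.0f} kWh")       # ~109,500 kWh
print(f"Annual emissions: {annual_tonnes_co2:,.1f} tonnes CO2")  # ~43.8 tonnes
```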

While AI’s environmental footprint is a concern, it’s also important to recognize its potential to drive sustainability by accelerating climate research, optimizing resource use and enabling greener innovations. For example, in Malaysia, the AIME project uses AI to predict dengue outbreaks by analyzing weather and terrain data. This enables targeted interventions and reduces reliance on harmful pesticides.

Organizations must balance the benefits of AI with its environmental costs. Exploring energy-efficient hardware, renewable-powered data centers and green AI practices can help reduce your impact while supporting your sustainability goals.

To explore how your organization can strengthen its approach to ethical AI, get in touch with our specialists.

Disclaimer

WTW hopes you found the general information provided here informative and helpful. The information contained herein is not intended to constitute legal or other professional advice and should not be relied upon in lieu of consultation with your own legal advisors. In the event you would like more information regarding your insurance coverage, please do not hesitate to reach out to us. In North America, WTW offers insurance products through licensed entities, including Willis Towers Watson Northeast, Inc. (in the United States) and Willis Canada Inc. (in Canada).

Author


Eric Sanchez
Risk Management – Marketing Manager
