Article | WTW Research Network Newsletter

Navigating the complex landscape of AI governance: Challenges, tools, and guidance for a trustworthy future

By Sonal Madhok | March 28, 2024


Introduction

As Artificial Intelligence’s (AI) use in business and government increases by leaps and bounds, the need for transparent, fair, and safe governance standards has moved off the planning list into action. In a vacuum of universal guidelines, many corporations and governments have already begun to create their own standards to address the key topics of model transparency, explainability, and fairness. While these guidelines have been an excellent start, there is still a need for organizations to control and guide their AI development while also keeping the current and emerging regulatory environment in mind.

Due to the nascent nature of the field, there are few widely implemented and agreed-upon best practices. The 2023 World Privacy Forum (WPF) report on Assessing and Improving AI Governance Tools[1] attempts to remedy this gap by highlighting examples across six categories:

  1. Practical Guidance, consisting of general educational information on AI governance,
  2. Self-assessment Questionnaires,
  3. Procedural Framework, with step-by-step workflows for assessing and/or improving AI systems,
  4. Technical Framework,
  5. Technical Code/Software,
  6. Scoring or Classification Output.

WPF uses the following definition of AI governance tools to shape its work: “socio-technical tools for mapping, measuring, or managing AI systems and their risks in a manner that operationalizes or implements trustworthy AI”. In other words, when making decisions with AI, it is necessary to account for the system's transparency, explainability, fairness, and potential societal impact.

Table 1: AI Governance Tool Types Lexicon

Source: World Privacy Forum[2]

Practical Guidance: Includes general educational information, practical guidance, or other consideration factors
Self-assessment Questionnaires: Includes assessment questions or a detailed questionnaire
Procedural Framework: Includes process steps or a suggested workflow for AI system assessments and/or improvements
Technical Framework: Includes technical methods or detailed technical process guidance or steps
Technical Code or Software: Includes technical methods, including the use of specific code or software
Scoring or Classification Output: Includes criteria for determining a classification, or a mechanism for producing a quantifiable score or rating reflecting a particular aspect of an AI system

The tools mentioned above are in active use worldwide, making them an excellent starting point for ensuring compliance with present and future regulations. Without the appropriate tools in place, it is nearly impossible to ensure a trustworthy AI future. A survey conducted by the AI Risk and Security (AIRS) group evaluated the current status of AI governance across its members (Figure 1). The survey revealed that there is room to improve: only 30% of enterprises have established roles or responsibilities for AI systems, and just 20% have a centrally managed and budgeted department dedicated to AI governance.

“Global spending on AI is expected to increase from $150 billion in 2023 to $300 billion by 2026. The use of AI is moving at a rapid pace with regulators’ eyes keeping a close watch, and we’re seeing leaders in the TMT industry create their own governance tools as a commercial and operational imperative.”

George Haitsch | WTW Technology, Media and Telecom Industry Leader

Implementing several AI governance tools fosters a collective understanding and assessment of possible risks and limitations in AI explainability, transparency, and fairness across different departments. All personnel engaging with AI models should be aware of their capabilities and shortcomings, which underscores the importance of maintaining a skeptical perspective toward AI outputs.

Figure 1: The current status of AI governance in enterprises

Notes: Respondents included professionals from technology risk, information security, legal, privacy, architecture, model risk management, and other fields, working in financial and technology organizations, as well as academic institutions.

Emerging AI governance regulations

Regulation is evolving along different pathways around the world, as policymakers across the globe hurry to address the issues, gaps, and limitations of AI-driven decisions.

This has prompted a surge of literature and research into the subject. In July 2023, the Biden administration announced[3] that Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI would self-regulate their AI development.

  • The White House’s Blueprint for an AI Bill of Rights[4] provides practical guidance through a set of five principles and practices for government agencies, along with a call to action for technology companies, researchers, and civil society to build these protections[5].

The EU produced a set of ethical guidelines similar to the US approach, with seven key requirements including transparency, accountability, and respect for privacy and data protection[6]. It also proposed a classification scoring system: in the European Union’s AI Act, obligations are assigned in proportion to the level of risk posed by an AI tool, categorized as unacceptable, high, limited, or minimal risk.

  • Unacceptable risks that would be banned include AI systems that perform social scoring of individuals or real-time facial recognition in public places. Tools carrying less risk, such as software that generates manipulated videos and “deepfake” images, must disclose that people are seeing AI-generated content. Violators could be fined 6% of their global sales. Minimal-risk systems include spam filters and AI-generated video games.

In 2020, Canada introduced a mandatory risk assessment tool, the Algorithmic Impact Assessment (AIA).[7]

  • It consists of 51 risk and 34 mitigation self-assessment questions used to determine the impact level of an automated decision system. The final assessment score is based on factors such as the system’s design, algorithm, decision type, impact, and data.
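To make the idea of a questionnaire-based score concrete, the sketch below turns self-assessment answers into an impact level. The weights, thresholds, and level labels are illustrative assumptions only; they are not the actual AIA scoring rules.

```python
# Hypothetical sketch of a questionnaire-based impact scoring scheme.
# Question weights and level thresholds are illustrative only; they are
# not the actual Algorithmic Impact Assessment (AIA) values.

def impact_level(risk_answers: list[int], mitigation_answers: list[int]) -> str:
    """Map self-assessment answers (each scored 0-3) to an impact level."""
    raw_score = sum(risk_answers)               # higher = more potential impact
    mitigation_score = sum(mitigation_answers)  # higher = stronger safeguards

    # Assume strong mitigation earns a modest reduction of the raw score.
    adjusted = raw_score - 0.15 * mitigation_score

    if adjusted < 30:
        return "Level I (little to no impact)"
    elif adjusted < 60:
        return "Level II (moderate impact)"
    elif adjusted < 90:
        return "Level III (high impact)"
    return "Level IV (very high impact)"

# Example: 51 risk questions answered "2", 34 mitigation questions answered "3".
print(impact_level(risk_answers=[2] * 51, mitigation_answers=[3] * 34))
```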

Singapore launched a technical framework and software, AI Verify[8], in partnership with companies of different sectors and scales, including AWS, DBS Bank, Google, Meta, Microsoft, Singapore Airlines, and Standard Chartered Bank. It became open source in June 2023.

In June 2023, China revealed that it was developing an “Artificial Intelligence Law” that could offer practical guidance, a technical framework, and a classification scoring system. Rather than devising a comprehensive regulatory plan, China has chosen to regulate AI through individual legislation, with specific laws addressing issues such as algorithms, generative AI, and deepfakes.

Global companies will face increasing pressure to comply with regulatory standards in AI governance. This often presents a challenge, as they must also comply with other regulatory measures, such as ESG requirements. As a result, companies may spend more time disclosing information about their algorithms than making tangible progress.

Risks and limitations – Explainability, oversimplification & bias

Several AI governance tools, including fairness AI auditing software, may be limited in their application to specific phases of the AI life cycle. AI fairness tools might solely address fairness concerns during the model training stage of AI development. However, ensuring fairness at one stage does not guarantee its persistence throughout the entire AI life cycle.

Addressing bias in AI models through technical code or software is just the beginning of AI governance. Companies must extend their focus beyond the technical developers’ proof of value. AI fairness auditing implemented across the entire AI life cycle, along with comprehensive documentation, is key. The documentation should incorporate the various AI governance tools in use and be understandable to both technical and non-technical audiences.

Technical developers are focused on achieving explainability by describing the mechanisms of an AI system or algorithm through software/code. One of the major problems in evaluating explainability from the technical developer’s point of view is oversimplification.

  • For example, the utilization of both SHAP[10] (Shapley Additive exPlanations) and LIME[11] (Local Interpretable Model-agnostic Explanations) for model explainability has seen an uptick. This rise in popularity is partly owed to the methods’ model-agnostic nature, allowing evaluation of any model, and to their abundance of user-friendly documentation.

In a typical scenario, a data scientist may opt for SHAP or LIME to show how an output was determined for a single instance of a model input, such as a specific decision or prediction, rather than the entire model. In other words, both methods operate by approximating more intricate, non-linear models (often referred to as "black-box" models) with simpler linear models, potentially resulting in misleading outcomes.
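As an illustration, the snippet below uses the open-source shap package to explain a single prediction from a tree-based classifier. The dataset and model choice are arbitrary examples, and the caveat above still applies: the result is a local, additive approximation of one prediction, not evidence that the model as a whole is fair or transparent.

```python
# Minimal sketch: explaining one prediction of a "black-box" classifier with SHAP.
# Requires the shap and scikit-learn packages; dataset and model are arbitrary examples.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# TreeExplainer attributes a single prediction to per-feature contributions (SHAP values).
explainer = shap.TreeExplainer(model)
single_case = X_test.iloc[[0]]                  # one applicant, patient, claim, etc.
shap_values = explainer.shap_values(single_case)

# This explains one local prediction with a simpler additive surrogate;
# it says nothing about the model's behavior across the full input space.
print(model.predict(single_case), shap_values)
```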

Oversimplification and the potential lack of critical context pose a significant challenge. Some algorithms designed for specific social settings may be inaccurately applied in different contexts, resulting in technical inaccuracies and misguided outcomes. The NIST AI Risk Management Framework[12], examined in Part II of the report, acknowledges the shortcomings of metrics used to measure AI risk, citing the risks of oversimplification, gaming, lack of critical nuance, and unexpected reliance.

This deficiency in contextual understanding could lead to unfair resource allocation or access, with implications across various areas such as mortgage lending, employment screening, college admissions, child welfare, and medical diagnoses.

Business recommendations and model framework

An AI governance framework should encompass internal governance structures and measures, define the extent of human involvement in AI decision-making, address operations management, facilitate stakeholder interaction and communication, and illustrate de-risking strategies for AI across the entire business to mitigate large-scale failures.

  1. Internal governance structures and measures

    Internal governance structures and measures play a crucial role in ensuring robust oversight of an organization's use of AI. They should address risks and integrate ethical considerations, for example through ethics review boards.

    Organizations may explore features for their internal governance structures, such as clear roles and responsibilities for ethically deploying AI. A decentralized governance mechanism might be considered when a centralized approach is suboptimal, bringing ethical considerations into day-to-day decision-making.

    The involvement and support of top management and the board of directors are pivotal. Key tasks include defining roles, responsibilities, and training for personnel involved in AI governance, using risk management frameworks for assessing and managing risks, and establishing monitoring and reporting systems. Regular reviews ensure the continued relevance and effectiveness of internal governance structures.

  2. Determining the level of human involvement in AI decision-making

    Before deploying AI solutions, organizations are advised to determine how much influence humans have over the process. The levels of human oversight in AI decision-making are listed below:

    Human-in-the-loop: In this model, human oversight is active, with people retaining full control. The AI provides recommendations or input to the humans driving the process.

    Human-out-of-the-loop: In this model, there is no human oversight, and the AI system has full control without the option of human override.

    Human-over-the-loop (or human-on-the-loop): This model involves human oversight in a supervisory role, with the ability to take control in the face of unexpected events. Humans can adjust parameters during the algorithm's operation. Examples include AI-assisted medical diagnoses, product recommendations, and GPS navigation systems.
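    A minimal sketch of how these oversight modes might be encoded in a decision workflow is shown below. The class names, confidence threshold, and review mechanism are hypothetical, purely to illustrate the distinction between the three levels.

```python
# Hypothetical sketch of the three human-oversight modes; names and logic are illustrative.
from enum import Enum

class Oversight(Enum):
    HUMAN_IN_THE_LOOP = "human_in_the_loop"          # a person makes the final call
    HUMAN_ON_THE_LOOP = "human_on_the_loop"          # a person supervises and can override
    HUMAN_OUT_OF_THE_LOOP = "human_out_of_the_loop"  # the system acts autonomously

def ask_human(prompt: str) -> str:
    # Placeholder for a review queue, ticketing system, or UI prompt.
    return input(prompt + " ")

def decide(ai_recommendation: str, confidence: float, mode: Oversight) -> str:
    if mode is Oversight.HUMAN_IN_THE_LOOP:
        # The AI only recommends; a person reviews every case before acting.
        return ask_human(f"AI suggests '{ai_recommendation}'. Approve or change?")
    if mode is Oversight.HUMAN_ON_THE_LOOP:
        # The AI acts, but low-confidence or unexpected cases are escalated.
        if confidence < 0.8:  # illustrative escalation threshold
            return ask_human(f"Low confidence ({confidence:.2f}). Override '{ai_recommendation}'?")
        return ai_recommendation
    # Human-out-of-the-loop: the system decides with no option for intervention.
    return ai_recommendation
```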

  3. Data management and governance

    The individuals involved in model training and selection, whether internal staff or external providers, should work collaboratively. Data accountability practices, including understanding data lineage, ensuring data quality, and minimizing inherent bias, are essential. Organizations must understand the lineage of data and address factors affecting data quality.

    Minimizing inherent bias involves being aware of biases in datasets, using heterogeneous datasets, and employing different datasets for training, testing, and validation. Regular review and updating of datasets, even if non-personal, are recommended for accuracy, quality, and reliability. Good data accountability practices apply even when using non-personal data or anonymized personal data in AI model training.

    In deploying AI algorithms, organizations must iterate through model development until achieving the most suitable results for their use case. The interaction between data and algorithms/models is vital. Datasets, sourced from various places, both personal and non-personal, are integral to the AI solution's success.
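    One concrete piece of the data accountability practices described above is keeping training, validation, and test data strictly separate and checking how groups are represented in each split. The sketch below assumes a pandas DataFrame with a hypothetical "group" column and uses scikit-learn for the splits.

```python
# Sketch of basic data accountability checks: non-overlapping splits plus a representation audit.
# Assumes a pandas DataFrame with a target column and a hypothetical "group" column.
import pandas as pd
from sklearn.model_selection import train_test_split

def split_and_audit(df: pd.DataFrame, target: str, group: str, seed: int = 0):
    # Hold out a test set first, then carve a validation set from the remainder,
    # so the datasets used for training, validation, and testing never overlap.
    train_val, test = train_test_split(
        df, test_size=0.2, random_state=seed, stratify=df[target]
    )
    train, val = train_test_split(
        train_val, test_size=0.25, random_state=seed, stratify=train_val[target]
    )

    # Simple representation audit: compare group proportions across the three splits.
    audit = pd.DataFrame({
        "train": train[group].value_counts(normalize=True),
        "validation": val[group].value_counts(normalize=True),
        "test": test[group].value_counts(normalize=True),
    })
    return train, val, test, audit
```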

  4. Stakeholder interaction and communication

    Effective AI governance relies on clear end-to-end communication with various stakeholders such as developers, executives, regulators, external AI tool customers, internal business users, and more. Achieving this involves concise and accessible AI documentation, addressing model gaps and biases, and specifying appropriate use cases.
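    One lightweight way to produce documentation that both technical and non-technical stakeholders can read is a structured "model card". The fields and values below are a hypothetical minimum for the points above (gaps, biases, appropriate use cases), not a prescribed standard.

```python
# Hypothetical minimal "model card" capturing the documentation points above.
# Field names and values are illustrative, not a prescribed standard.
model_card = {
    "model_name": "credit_limit_recommender_v2",        # hypothetical model
    "intended_use": "Suggest credit limit changes for existing retail customers",
    "out_of_scope_uses": ["New-customer underwriting", "Fraud detection"],
    "training_data": "Internal transactions 2019-2023; excludes minors' accounts",
    "known_gaps_and_biases": [
        "Under-represents customers with thin credit files",
        "Performance not validated outside the home market",
    ],
    "human_oversight": "Human-over-the-loop; analysts can override any recommendation",
    "contact_and_feedback": "ai-governance@example.com",  # placeholder address
}
```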

    Transparent communication, simple user interfaces, opt-out mechanisms, and feedback channels are paramount for user understanding and interaction. Organizations should regularly assess their AI governance against evolving ethical standards and share the results with relevant stakeholders. Meta's "Why am I seeing this?"[13] feature exemplifies transparency in advertising and machine learning model training. As AI progresses, AI governance evolves. WTW will continue to keep pace with that change and explore collaboration amongst employees, executives, users, and researchers on examining and enhancing AI for optimal governance.

Sources

  1. World Privacy Forum - Risky Analysis - December 2023
  2. Model Artificial Intelligence Governance Framework - Second Edition
  3. Readout of White House Listening Session on Tech Platform Accountability
  4. A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector
  5. Generative AI: Implications for Trust and Governance
  6. What is AI Verify?
  7. Artificial intelligence act
  8. Creating an international approach to AI for healthcare
  9. AI Risk Management Framework
  10. An introduction to explainable AI with Shapley values
  11. LIME - Local Interpretable Model-Agnostic Explanations
  12. AI Risk Management Framework
  13. Increasing Our Ads Transparency
Author

Analyst, CRB Graduate Development Program