Article | FINEX Observer

Is your organization ready for AI?

By Talene M. Carter | August 17, 2023

We discuss what artificial intelligence (AI) is, how it is being used, the laws regulating the use of AI, the potential risks and some best practices.
Financial, Executive and Professional Risks (FINEX)

Introduction

Artificial intelligence (AI) is developing faster than most people are comfortable with. Regulators are doing their best to keep pace by creating task forces, issuing guidance to prevent discrimination and passing laws to provide some guardrails. There are certainly benefits to the advancement of AI, but there are also risks. In this article, we discuss what AI is, how it is being used, the laws regulating its use, the potential risks and some best practices.

What is AI?

According to Techopedia, "artificial intelligence (AI), also known as machine intelligence, is a branch of computer science that focuses on building and managing technology that can learn to autonomously make decisions and carry out actions on behalf of a human being. AI is not a single technology. Instead, it is an umbrella term that includes any type of software or hardware component that supports machine learning, computer vision, natural language understanding, natural language generation, natural language processing and robotics."

There are two techniques commonly used in AI: natural language processing (NLP) and machine learning.

NLP is a branch of AI focused on giving computers the ability to comprehend text and spoken words in the same way humans can, as in voice assistants such as Alexa and Siri.

Machine learning is the branch of AI concerned with the use of data and algorithms (sets of step-by-step instructions and rules) to imitate the way humans learn, continuing to improve through experience.
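To ground that definition, here is a minimal, purely illustrative Python sketch of machine learning's "learn from data" idea, using the widely available scikit-learn library. The feature names, data and outcome below are invented for illustration and are not drawn from any real employment tool:

```python
# Illustrative only: a model "learns" a pattern from labeled examples
# instead of following hand-written rules. All data here is invented.
from sklearn.linear_model import LogisticRegression

# Hypothetical training examples: [years_experience, certifications]
# paired with a past outcome (1 = advanced, 0 = did not advance)
X_train = [[1, 0], [2, 1], [8, 3], [10, 2], [0, 0], [7, 4]]
y_train = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)  # the "learning" step: fit parameters to the data

# The fitted model can now score a candidate it has never seen
print(model.predict([[5, 2]]))  # e.g., [1]
```

This learned-from-data quality is also what creates the legal risk discussed below: if the historical examples reflect bias, the model can reproduce it.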

How are employers using AI?

Many employers are using AI in employment decision-making across all aspects of the employment cycle: hiring, onboarding, learning and development, performance evaluations and termination. Examples of employer use of AI include analyzing resumes, predicting job performance and performing facial analysis in interviews to evaluate a candidate's stability, optimism or attention span. More benign uses include human resources groups using AI chatbots to assist employees with benefits, finding company policies, learning and development, goal tracking and more.

Some employers are also using AI for employee monitoring, particularly since the pandemic. Pre-pandemic, approximately 30% of large employers conducted some form of employee monitoring; post-pandemic, approximately 60% of large employers are doing so. The potential risks of this type of monitoring are discussed below.

Laws regulating AI in the workplace

EEOC guidance

On May 18, 2023, the Equal Employment Opportunity Commission (EEOC) issued guidance - "Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964" - regarding the use of AI in employment. EEOC Chair Charlotte A. Burrows stated, "As employers increasingly turn to AI and other automated systems, they must ensure that the use of these technologies aligns with the civil rights laws and our national values of fairness, justice and equality." The EEOC guidance is "limited to the assessment of whether an employer's 'selection procedures'—the procedures it uses to make employment decisions such as hiring, promotion, and firing—have a disproportionately large negative effect on a basis that is prohibited by Title VII." Essentially, it is focused on disparate impact claims.

The most significant takeaway from the EEOC guidance is that the agency will be closely scrutinizing employer AI practices to ensure there are no violations of Title VII. In addition, the guidance makes clear that an organization cannot escape liability by using a third-party vendor to develop and implement its AI systems: if the AI system produces discriminatory results, the organization may still be liable.
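The guidance's disparate impact analysis turns on selection rates, and it discusses the long-standing "four-fifths rule" as a general rule of thumb. As a rough illustration of that arithmetic, here is a hedged Python sketch; all applicant and selection counts below are hypothetical:

```python
# Hypothetical four-fifths rule check: compare the selection rates an
# AI screening tool produces for two groups. All numbers are invented.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

rate_a = selection_rate(48, 80)  # 60% selection rate
rate_b = selection_rate(12, 40)  # 30% selection rate

# Impact ratio: lower rate divided by higher rate
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Impact ratio: {impact_ratio:.2f}")  # 0.50

# A ratio below 0.80 is generally treated as a flag for possible
# adverse impact warranting closer review, not proof of a violation.
if impact_ratio < 0.80:
    print("Potential adverse impact - review the selection procedure")
```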

New York City regulation – automated employment decision tools

New York City adopted a first-of-its-kind regulation that went into effect on July 5, 2023. The regulation makes it unlawful for employers to use automated employment decision tools (AEDTs) to screen candidates and employees within New York City unless certain bias audit and notice requirements are met. In short, the requirements are:

  1. There must be a bias audit conducted no more than one year prior to use of the AEDT (a sketch of the audit calculation follows this list);
  2. A summary of the audit must be publicly available; and
  3. Notice must be provided of the use of the AI tool and an opportunity to request an alternative selection process must be provided to each candidate and employee who resides in New York City.
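To make the bias audit requirement concrete, here is a rough Python sketch of the kind of impact-ratio summary such an audit involves. The category names and counts are hypothetical, and the actual rule prescribes its own categories and methodology:

```python
# Hypothetical AEDT bias-audit style summary: selection rate and impact
# ratio per category, each compared with the highest category rate.
selections = {  # category: (selected, total applicants) - invented numbers
    "Category A": (90, 150),
    "Category B": (40, 100),
    "Category C": (55, 110),
}

rates = {group: s / n for group, (s, n) in selections.items()}
top_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / top_rate
    flag = "  <- below 0.80, flag for review" if ratio < 0.80 else ""
    print(f"{group}: rate={rate:.2f}, impact ratio={ratio:.2f}{flag}")
```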

If there are violations of the rule, penalties can range from $375 to $1,500 per violation. It is important to note that each failure to notify is a separate violation, and failure to meet the bias audit requirement can result in separate daily violations as well.

Proposed regulations locally and globally

There is no federal law (yet) governing the use of AI in employment; however, the White House has released a "Blueprint for an AI Bill of Rights" and the National Institute of Standards and Technology has released an AI Risk Management Framework.

At the state and local level, several jurisdictions have proposed their own laws (e.g., New York, New Jersey, California and Washington, D.C.). The proposed laws generally seek to ensure that the use of AI tools does not lead to discriminatory employment decisions. Meanwhile, Illinois and Maryland already have limited AI laws on the books - Illinois's Artificial Intelligence Video Interview Act and Maryland's H.B. 1202 - which impose limitations on the use of video interviews and facial recognition tools, respectively.

From a global perspective, in April 2021 the European Commission proposed an EU regulatory framework (the EU AI Act), which would be the first law on AI by a major regulator. Of note, it could create a global standard, similar to the GDPR. The rule is expected to be finalized by year-end.

Potential risks of using AI in the workplace

Given recently introduced regulations, will firms using artificial intelligence in the hiring process and for employment-related decisions face more employment practices liability (EPL) claims? There is certainly the potential for an increase in claims, such as discrimination claims related to age, disability, race, gender or other protected characteristics. The overarching accusation will likely be that employers, by using AI in the recruiting and hiring process, are screening applicants out based on protected class status.

If employers violate any of the specific rules set out by legislation, they would seem to be walking into the crosshairs of regulators and the plaintiffs' bar, exposing the company to penalties and increased litigation.

Privacy implications

As noted above, some employers are using AI for employee monitoring. This has the potential to open the door to invasion of privacy claims, even though there is no federal right to privacy in the workplace. Moreover, if the employer fails to adhere to the specific notice requirements of the regulations, that too has the potential to lead to invasion of privacy claims.

Third-party EPL claims

Third-party EPL extends coverage for claims made by non-employees - a customer, a vendor or an independent contractor, for example. Covered third-party allegations typically include discrimination and harassment. The increasing use of AI to interact with customers, vendors or other third parties could result in allegations of discrimination.

Best practices

The use of AI is moving at a rapid pace, and regulators are keeping a close watch. While there are many benefits to the use of AI, it does not come without risk. Employers should consider implementing the following best practices:

  1. Always consult with outside counsel to ensure compliance with relevant laws;
  2. Conduct regular self-audits of your AI tools, even if they are implemented by a third party;
  3. Train employees on proper use of AI and how to assess for potential biases;
  4. Have a plan in place if unintentional bias is found;
  5. Communicate use of AI to employees and be transparent; and
  6. Consult with your insurance broker to discuss risk transfer strategies, whether it is an Employment Practices Liability insurance policy or another form of coverage.

Disclaimer

Willis Towers Watson hopes you found the general information provided in this publication informative and helpful. The information contained herein is not intended to constitute legal or other professional advice and should not be relied upon in lieu of consultation with your own legal advisors. In the event you would like more information regarding your insurance coverage, please do not hesitate to reach out to us. In North America, Willis Towers Watson offers insurance products through licensed subsidiaries of Willis North America Inc., including Willis Towers Watson Northeast Inc. (in the United States) and Willis Canada, Inc.

Author

Talene M. Carter
National Employment Practices Liability Product Leader, FINEX North America
