
WTW A&E Technical Brief: Artificial intelligence

November 14, 2023


The opportunity

The ability of artificial intelligence (AI) to process and analyze large amounts of data and make predictions presents a significant opportunity to improve the efficiency of design and project monitoring.

Global spending on AI is expected to increase from $150 billion in 2023 to $300 billion by 2026. AI adoption has increased threefold since 2019, with early adopters reporting a 32% improvement in customer and employee retention.[1]
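
As a back-of-the-envelope check (an illustration, not a figure from the cited research), doubling from $150 billion to $300 billion over three years implies a compound annual growth rate of roughly 26%:

    # Implied compound annual growth rate (CAGR) for the spending projection.
    start, end, years = 150e9, 300e9, 3
    cagr = (end / start) ** (1 / years) - 1
    print(f"Implied CAGR: {cagr:.1%}")  # -> Implied CAGR: 26.0%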

Recent breakthroughs in natural language processing (NLP) using transfer learning and reinforcement learning techniques are accelerating the adoption of recommendation and optimization engines.
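
To illustrate what transfer learning means in practice, the sketch below reuses a pretrained language model and trains only a small task-specific classification head. It relies on the open-source Hugging Face transformers library, which is an assumption of this example rather than a tool named in this brief, and the sample texts and labels are invented:

    # Minimal transfer-learning sketch: freeze a pretrained encoder and train
    # only a new classification head. Assumes: pip install torch transformers
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    model_name = "bert-base-uncased"  # a generic pretrained encoder
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, num_labels=2
    )

    # Freeze the pretrained weights; only the untrained head will learn.
    for param in model.base_model.parameters():
        param.requires_grad = False

    optimizer = torch.optim.AdamW(
        (p for p in model.parameters() if p.requires_grad), lr=1e-3
    )

    # Toy labeled examples standing in for real project records.
    texts = ["change order approved", "structural review flagged an issue"]
    labels = torch.tensor([0, 1])

    inputs = tokenizer(texts, padding=True, return_tensors="pt")
    loss = model(**inputs, labels=labels).loss  # loss computed against labels
    loss.backward()
    optimizer.step()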

Organizations seeing the highest returns on AI investments have:

  1. A plan that prioritizes AI initiatives linked to business value
  2. Senior management committed to a clearly defined AI vision
  3. A credible, appointed leader of AI initiatives
  4. A set of key performance indicators to measure the incremental impact of AI initiatives
  5. A clear framework for AI governance.[2]

Current applications

A sample of the programs available to engineers:

  • BST Insights tracks 35+ digital signals and applies AI and machine learning to predict project outcomes.
  • Civils.AI automates critical procedures, such as structural analysis, material selection and estimating.
  • Autodesk Civil 3D uses visual programming to generate scripts that automate repetitive and complex tasks.
  • Togal.AI detects, measures, compares and labels project spaces on architectural drawings for cost estimating.
  • Midjourney generates images from natural language descriptions, called “prompts,” similar to OpenAI’s DALL-E and Stability AI’s Stable Diffusion.
  • Fotor allows use of text, images or blueprints as prompts for image generation.

Concerns

  • Intellectual property and ownership: Under current U.S. law, purely AI-generated content cannot be copyrighted. Several AI companies are being sued for training their models on data scraped from the internet without permission from the copyright owners. AI tools can create designs in the style of a particular firm, which potentially exposes the designer who uses such a tool to infringement claims.
  • Liability and accountability: Engineers might face liability issues if their AI systems malfunction, produce incorrect results, or cause harm to individuals or property. Cases so far suggest that when an AI tool’s output fails, it is the user of the tool who will be held accountable. Software license agreements will influence outcomes here: the software company is likely protected by indemnity and liability-limiting provisions in the license agreement.
  • Data privacy and security: Engineers must navigate data privacy laws and regulations to ensure that the collection, processing and storage of data comply with legal requirements. Mishandling of sensitive data could lead to legal and regulatory consequences.
  • Regulation and compliance: Engineers working with AI must comply with licensing requirements and the professional standard of care. If a computer is generating content, is that output created under the “responsible control” of the licensed professional? So far there are no cases that suggest use of AI will be treated differently from the way CAD is used. While the professional standard of care will be influenced by the growing use of AI tools, evaluation of compliance with the standard of care is expected to remain a results-based inquiry.
  • Transparency and explainability: As AI systems become more complex, there is a growing need for transparency and explainability. Legal challenges can arise if decisions made by AI systems are not understandable or explainable, particularly in regulated industries like engineering, finance or healthcare. It is recommended that the use of AI be disclosed. In the event of litigation, undisclosed use of AI could be manipulated to make an engineer appear untrustworthy.
  • Ethical considerations: Ethical concerns associated with AI can have legal implications. Engineers may face legal challenges if their AI systems engage in unethical behavior, violate human rights, or otherwise conflict with widely accepted ethical norms. While more than 1000 technology experts, researchers and investors have signed an open letter asking for a six-month halt on the creation of “Giant AI systems” citing “profound risks to society,” there is no sign of a slowdown in the development of AI.[3]
  • International and cross-border issues: AI technology often crosses international borders. Engineers need to consider issues related to data sovereignty, cross-border data transfers, and compliance with different legal frameworks in various jurisdictions.
  • Employee termination: AI algorithms and robots are developing the sophistication to displace human employees, causing some employers to engage in mass layoffs and reductions in force. Employers need to comply with the notification requirements of the Worker Adjustment and Retraining Notification Act and other employment-related laws, including those related to age discrimination.

Globally, about 28% of surveyed companies reported failures of their AI initiatives; among the North American failures, 35% were attributed to AI not performing as expected.

The legal environment

There is currently no known federal legislation specifically regulating artificial intelligence. The Biden administration has been on a fast-track listening tour with AI companies, academics and civil society groups. The effort began in May, when Vice President Kamala Harris met with the chief executives of Microsoft, Google, OpenAI and Anthropic and pushed the tech industry to take safety more seriously. In July, representatives of seven tech companies appeared at the White House to announce a set of principles for making their AI technologies safer, including third-party security checks and watermarking of AI-generated content to help stem the spread of misinformation.

Last fall, the White House introduced a Blueprint for an AI Bill of Rights, a set of guidelines on consumer protections related to the technology. The guidelines, however, are not regulations and are not enforceable. So far, none of the currently proposed regulations appear to have sufficient support to be passed into law. The United States remains far behind Europe, where lawmakers are preparing to enact an AI law this year that would put new restrictions on what are seen as the technology’s riskiest uses.

In 2023, at least 25 states, Puerto Rico and the District of Columbia introduced artificial intelligence bills, and 14 states and Puerto Rico adopted resolutions or enacted legislation. Several states proposed task forces to investigate AI, and others expressed concern about AI’s impact on services like healthcare, insurance and employment.

Common themes of current laws include:

  1. Giving consumers the right to opt out of automated profiling.
  2. Mandating data protection assessments if the automated decision-making poses a heightened risk of harm (including targeted advertising and some types of profiling).
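
A minimal sketch of how the opt-out theme might surface in an engineering workflow; the names and logic here are hypothetical and are not drawn from any particular statute:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Consumer:
        name: str
        opted_out_of_profiling: bool  # the consumer's recorded opt-out choice

    def automated_risk_score(consumer: Consumer) -> Optional[float]:
        """Return an automated profile score, or None to route to human review."""
        if consumer.opted_out_of_profiling:
            return None  # honor the opt-out: skip automated profiling entirely
        return 0.42  # placeholder for a real model's output

    # An opted-out consumer is never scored automatically.
    print(automated_risk_score(Consumer("A. Doe", opted_out_of_profiling=True)))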


Litigation

Copyright litigation cases include:

  • Thaler v. Perlmutter: A 2018 application to register the AI-generated image “A Recent Entrance to Paradise” was denied because the AI system Creativity Machine was the sole creator; the work could not be registered because it was made “without any creative contribution from a human actor.” The Copyright Office’s decision is being appealed, and there is a public inquiry into how the law should apply to the use of copyrighted works in AI training and the resulting treatment of outputs.
  • Getty Images is suing Stability AI in Delaware, alleging that the AI company copied more than 12 million copyrighted images and their associated metadata without permission.
  • Andersen v. Stability AI Ltd. is a class action in California against Stability AI, Midjourney and DeviantArt alleging that Stable Diffusion was trained on billions of images scraped from the internet without consent. The plaintiffs’ prospects do not look good based on Judge Orrick’s comment: “I don’t think the claim regarding output images is plausible at the moment, because there’s no substantial similarity [between the images by the artists and images created by the AI image generators].”
  • In Doe 1 and Doe 2 v. GitHub, Inc., et al., the Joseph Saveri Law Firm is suing Microsoft, GitHub and OpenAI (creator of ChatGPT and the image generator DALL-E 2) over Copilot, an automatic code generator trained on code available online without permission from the engineers who wrote it.
  • In Google LLC v. Oracle America, Inc., the Supreme Court ruled in a 6-2 decision that Google’s use of the Java Application Programming Interfaces constituted fair use under the four statutory factors.

In the past, scraping images or other content for training datasets has been considered “fair use” in U.S. copyright law. For instance, in 2016, the Supreme Court refused an appeal from authors who sued Google for scanning more than 20 million copyrighted books and indexing them for its Google Books website.

Guidance for copyright applicants includes:

  • Applicants have a duty to disclose the inclusion of AI-generated content in a work submitted for registration and to provide a brief explanation of the human author’s contributions to the work.
  • Applicants should not list an AI technology or the company that provided it as an author or co-author simply because they used it when creating their work.
  • AI-generated content that is more than de minimis should be explicitly excluded from the application.
  • Applicants who are unsure of how to fill out the application may simply provide a general statement that a work contains AI-generated material.

Liability litigation cases involving AI include:

  • In Cruz v. Talmadge (Mass. 2017), plaintiffs were injured when a bus struck an overpass. The plaintiffs sued the manufacturers of two GPS devices, arguing the devices did not lead the driver to an alternate route that would have avoided the low overpass. The accident was foreseeable due to previously reported accidents.
  • In Nilsson v. General Motors, an autonomous vehicle (a GM Bolt) swerved and hit a motorcyclist while a backup driver was present. The plaintiff alleged that the vehicle itself drove negligently; GM accepted that the negligence standard applied and settled.

In AI-related litigation, the court must decide how to apply the “reasonable care” standard to a nonhuman actor. Where an AI product acts autonomously, a court must establish how foreseeability is determined. And if products themselves can be held liable, it is unclear who should be responsible for the injury or damage they cause.

Future regulation approaches

Future regulation approaches could take several paths:

  • Individual industries develop their own regulations for the use of AI based on the specific needs and concerns of each industry
  • Regulators develop sector-wide AI regulations
  • Lawmakers develop general AI regulation

Recommendations

  • Disclose the use of AI.
  • The safest path is for companies to train models on their own servers without having to share data with third parties, optimizing the benefits of large in-house datasets and intellectual property.
  • Read and understand your software licensing agreement.
  • There is still a long way to go before AI models can be used without close supervision for safety-critical applications in engineering. Areas with more margin for error, such as demand forecasting or quality assurance, are much better suited (see the sketch after this list).
  • As upskilling is critical to AI staffing, organizations need to be intentional about embedding formal, active continuous learning on AI into employee education. Helping staff understand how AI, data science and machine learning fit into the company’s overall strategy can be as critical as educating people on the concepts and technology themselves.
  • Collaborate with legal experts who specialize in technology, intellectual property and data privacy.
  • Support tailored AI regulations made by lawmakers with the input of a wide range of stakeholders, as opposed to the internal self-policing that AI companies would probably prefer.
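
To make the margin-for-error point concrete, here is a minimal demand-forecasting sketch; the data, window size and use case are invented for illustration:

    # Illustrative moving-average demand forecast. A miss here costs inventory
    # dollars rather than safety, which is why such uses tolerate model error.
    monthly_demand = [120, 135, 128, 140, 150, 145]  # hypothetical unit sales

    def moving_average_forecast(history, window=3):
        """Forecast the next period as the mean of the last `window` observations."""
        recent = history[-window:]
        return sum(recent) / len(recent)

    print(f"Next-month forecast: {moving_average_forecast(monthly_demand):.0f} units")
    # -> Next-month forecast: 145 units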

Possible future roles of the AI subcommittee

  • Share best practices and sample policies regarding the use of AI to promote a better understanding of the professional standard of care
  • Monitor and report legislative and litigation developments
  • Share information about the performance of AI software programs
  • Develop recommendations to be shared with legislative action or lobbying groups

Footnotes

  1. IDC Research, Inc., “Create More Business Value from Your Organizational Data,” 2023.
  2. McKinsey Global Institute, “The state of AI in 2022,” 2022.
  3. Future of Life Institute, “Policymaking in the Pause,” 2023.

Disclaimer

Willis Towers Watson hopes you found the general information provided in this publication informative and helpful. The information contained herein is not intended to constitute legal or other professional advice and should not be relied upon in lieu of consultation with your own legal advisors. In the event you would like more information regarding your insurance coverage, please do not hesitate to reach out to us. In North America, Willis Towers Watson offers insurance products through licensed entities, including Willis Towers Watson Northeast, Inc. (in the United States) and Willis Canada Inc. (in Canada).

Contact

WTW A&E