DAN BUELOW: Hello, and welcome to Talk to Me About A&E. I'm Dan Buelow, Managing Director at WTW A&E, the center of excellence for WTW exclusively dedicated to providing insurance and risk management solutions to design professionals in North America. Our topic today is on AI, artificial intelligence.
In our recent WTW A&E survey of A&E professional liability carriers on emerging risks and claim trends, AI was cited by most carriers as an emerging risk to watch out for. Most of us know generally what AI is. However, there's a lot of uncertainty as to what the impact of AI will be on the design profession and what the implications will be for the professional liability and business risks of a design firm.
So what is AI? As defined in the ACEC guidelines on AI that we will be reviewing today, AI, or artificial intelligence, is conceived of as the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision making, and translation between languages. It encompasses the subfields of machine learning, deep learning, and generative AI.
OK, so what does this all mean as it pertains to design professional liability risk? To help answer this question, I have with me our very own Mr. Mark Blankenship, Director of WTW A&E's risk management. Welcome back, Mark.
MARK BLANKENSHIP: Thank you, Dan. Glad to be back.
DAN BUELOW: Glad to have you. So Mark has been on our programs before. And many of you know Mark as an expert in managing complex architect and engineer claims and negotiating professional agreements. Mark has taken a real interest in this topic of AI, and he, in fact, co-chairs the subcommittee on AI for the National ACEC Risk Management Committee. As part of that committee, Mark helped draft the ACEC guidelines on the use of AI by design professional firms.
So I want to talk to Mark about these new guidelines that ACEC recently released on AI, and the risks and issues design professionals need to consider when it comes to the emerging risks of artificial intelligence. So, Mark, when it comes to technology, I regard myself as the consummate end user. The fax machine has come and gone and I still don't know how that damn thing worked. So now we have AI. Can you explain to us simple folks here what artificial intelligence is and its different subfields, as referenced earlier, such as machine learning, deep learning, and generative AI?
MARK BLANKENSHIP: Well, we think it's a good idea to start with defining our terms. So machine learning is a form of artificial intelligence based on algorithms that are trained on data. Deep learning is a subset of machine learning that uses artificial neural networks to mimic the learning process of the human brain, and generative AI generates content in response to a prompt, such as ChatGPT or DALL-E. Machine learning is focused on analyzing data to find patterns and make accurate predictions, whereas generative AI, or Gen AI, is focused on creating new data that resembles the training data.
DAN BUELOW: OK, so with all that in mind, what was the primary purpose of these guidelines that you and the ACEC Risk Management Committee have put together and released?
MARK BLANKENSHIP: Well, we think that AI is a valuable tool, like any other computer tool that design professionals can use. And the purpose of these guidelines is to provide information that design professionals may take into account in order to promote responsible, professional, and ethical behavior with respect to the adoption and usage of AI in their businesses, which would include the design professional firm's policies, their culture, their confidentiality requirements, and contracts with clients.
DAN BUELOW: All right. So I'd like to talk then about the general and inherent risks of AI when it comes to the design profession. You include at the beginning of the ACEC AI guidelines an overview that notes some of the risks and considerations design firms really should address, beginning with generated content.
And it notes in the overview that, quote, "Anything generated in an open AI system is susceptible to finding its way into the public domain. Firms should be aware that, in general, purely AI generated content cannot be copyrighted." End quote. So how should firms handle AI generated content, especially regarding copyright and public domain considerations?
MARK BLANKENSHIP: Firms should be aware that you have two kinds of AI systems, open and closed. An open system is typically trained on a scrape of the internet. It takes information from multiple sources and synthesizes it into an output. The concern is that the output is going to include copyrighted material that's going to be recognizable and could generate a copyright infringement claim.
So the preferred method would be to use a closed system where the design firm is in control of all of the training data that's used by the system, and then any outputs will only include information or data that's owned by the design firm. That's the best way to assure compliance with copyright laws.
This concern of copyright infringement is heightened in light of modern contractual requirements to indemnify for copyright infringement claims, the reason being that copyright attorneys are expensive. Think $1,000 an hour type lawyers. So you really don't want to be on the receiving end of a copyright infringement claim, especially if there's a duty to defend and indemnify the client against that claim.
DAN BUELOW: That's a very good point. And we know that all carriers are not created equal when it comes to this issue of copyright coverage under their professional liability policy. We know that some are silent and some actually exclude it. So you really have to be careful. So that's, I think, a very important point, Mark, that you're raising there.
So in the overview of the ACEC AI guidelines, you also include caveats when using generated content that note, quote, "AI has the potential to make mistakes, use, misuse, or misappropriate copyrighted data, or even invent facts and sources. Human review and revisions of generated content may reduce or eliminate publishing of mistakes or intellectual property, IP, owned by others." End quote.
I would think this bit about inventing facts and sources should give everyone designing buildings and infrastructure projects pause. So talk to me about AI and the design professional's standard of care.
MARK BLANKENSHIP: Well, using artificial intelligence does create a concern relative to complying with the professional standard of care. In virtually every state, use of the seal implies that the sealing professional has reviewed the output, that the design has been prepared under the direct supervision of that professional. And so I think legally it's required to review that output to verify that it complies with the technical requirements of the statutes or building codes.
There is an additional concern, which is that artificial intelligence can make stuff up. We call this the hallucination problem. And in fact, at one level, you've probably seen AI generated images with people with six fingers or a dog with five legs. We've all seen this. Now, the only legal case that we've seen to date involving artificial intelligence and the hallucination problem actually happened to a lawyer.
There was a lawyer in New York who filed a brief with the court, and he used Chat to generate his brief. And it was a brilliant brief. It had convincing arguments based on definitive legal cases. The problem was Chat hallucinated six of those cases. To compound the problem, the lawyer did not review his brief or check the facts, and he submitted this to the court. Now, opposing counsel did look for these cases and found they were made up. And he filed a motion with the judge.
And that lawyer was sanctioned and he received a $5,000 fine and a disciplinary action stating that he did not comply with his professional responsibilities. So we certainly don't want this to happen to any of our clients.
DAN BUELOW: All right. So Mark, lastly, in the overview of the ACEC AI guidelines, it notes, quote, "Firms will likely recognize the value in the creative use of AI by their employees, as well as the need to obtain required permissions before any such use, and to comply with these standards of practice for any use." End quote. So, Mark, it's fair to say firms should be updating their employee handbooks to adopt responsible and ethical AI practices, isn't it?
MARK BLANKENSHIP: Well, that's right, Dan, because there are approved and there will be unapproved uses of AI. Approved uses of AI will be anything that furthers the interest of the business. But people might decide to use artificial intelligence to generate inappropriate content or content that does not further the interests of the business.
DAN BUELOW: Good point. So again, update the employee handbook. And then there's some-- we've seen some and heard some stories, haven't we, where-- I know you have a good one where one of our clients, one of their employees used AI to generate a proposal and it was interesting what the client came back with.
MARK BLANKENSHIP: Yeah, that is interesting. This was actually the first AE related AI problem that I heard about. And it went like this. One of our clients used Chat to generate a response to an RFP. I know that lots of folks are doing that today. Problem is, he submitted the proposal and the client called and said, hey, I got your proposal. It's only different from your competitor's proposal by five words. Lucy, we need to talk.
DAN BUELOW: Again, have guidelines. Talk to your staff. Communicate on that. All good points. So now, looking at the body of these ACEC guidelines on the use of AI by design professional firms, which you helped develop, there are really three sections: governance and approval process, sources and tools, and guidelines for practice.
So talk to me about the key components of the AI governance and approval process outlined in the guidelines and how design firms should establish oversight and authorization mechanisms for AI tool usage.
MARK BLANKENSHIP: Right. Well, first off, use of AI has profound business implications, those being compliance with the professional standard of care and staying on the right side of copyright law and out of copyright litigation. Due to these very significant implications of the misuse of AI, we think that decisions regarding adoption of AI should be made at the senior level, ideally with board-level authorization for implementation.
A policy for a firm's use of AI might be reviewed by the board of directors, a senior managing body, a specifically established AI governance oversight group, a risk management committee, or at least a senior compliance officer. Generally, the procedure will look like this. The firm should establish a policy for the use of AI, including its privacy safeguards, its compliance measures, its workability, and installation on company computers, and assign responsibility, either individually or collectively, to senior members of the firm to effectively allow for oversight.
DAN BUELOW: Very good points. The next section of these guidelines, then, Mark, is sourcing and tools. So what are the best practices for sourcing content and identifying relevant IP rights when using AI tools, and what are the risks associated with using public or unapproved AI services for company communications?
MARK BLANKENSHIP: Well, this goes back to a theme that we've been playing for a long time, even before AI was invented, and that is documentation, documentation, and documentation. So our recommendation is to document the sources of AI and identify the holder of any relevant IP rights or generated content, whether it be open source AI, closed source AI, generative AI, or others.
Firms will want to manage employees' posting or other publication of any personally identifiable information, intellectual property, or client information on any public and/or unapproved AI service. When generating company communications, policies, or other documentation, we want to avoid uploading content to unapproved AI. This would include copyrighted material, confidential information, or the intellectual property of others, where sharing it might violate privacy or confidentiality rights.
Now, you might think, what does personally identifiable information have to do with design plans? Well, if you're doing a house, the owner's name and address will probably appear in the title block, and that would be personally identifiable information. So we want to be careful about that. Also, AI is currently being integrated into various software vendor tools, and member firms may be unaware that AI is already present in packages they currently use.
So the integration of AI into applications and tools will continue to increase, which could create challenges in informing clients about the use of AI in work product or deliverables, in complying with bid requirements that prohibit the use of AI, and we're seeing that now, and in responding to regulatory disclosure requests.
DAN BUELOW: OK. All very interesting. And then the last section, Mark, in these AI guidelines is guidelines for practice. What steps should be taken to ensure AI generated content is accurate and compliant?
MARK BLANKENSHIP: This is probably the most important part, where the rubber meets the road. Firms will want to purge AI output of any confidential information. They will want to validate AI output by a qualified employee or consultant, and provide human input into that AI output so that the final deliverable is not purely the product of AI.
As of right now, under current copyright law, you cannot copyright material that is solely the result of artificial intelligence. Only human created content can be copyrighted. Much the way that you cannot copyright a painting done by your dog, you cannot copyright information that's generated purely by artificial intelligence.
As far as reliance on AI goes, it can be an effective and time-saving tool for firms in their proposal writing, research, or other business purposes. But firms will likely want to include human editing, in the form of quality control and independent review by the responsible person in charge, to avoid relying wholly on AI for the final output of the firm's deliverables. Firms will also likely want to bear in mind their professional obligations regarding the discipline in which they are qualified.
Avoid the practice of relying on AI to create content in areas where you do not have experience or proficiency. This is based on the fact that it's going to be difficult or impossible to accurately review the output if you do not have proficiency in the area of design.
DAN BUELOW: All good points, Mark. Another area we've seen with AI, though, is its use by contractors, who are using AI tools to review documents, which essentially can be a tool for prospecting for change orders on steroids. So that's a concern. I don't know if you've heard anything more on that, and I don't know what the answer is for the design professional, but I would think this is all evolving fast, isn't it?
MARK BLANKENSHIP: It is evolving very fast. And yes, I have seen that. We got a call from an engineering client who said the contractor asked for their electronic files. They were provided. And a half hour later, the contractor came back with a list of 33 change order requests. I've been trying to identify the program that was used to do that, but my thought is, jeez, let's get our hands on that and use it on a proactive basis to review our output before it gets submitted to the contractor.
DAN BUELOW: And we've also heard from some of our A&E firm clients that their clients have asked them to confirm their use of AI, essentially trying to use this to negotiate lower fees, which I don't think is reasonable, in my opinion, in that this is just a tool and it's not going to take the place of what the licensed professional is going to be providing as a consultant. What are your thoughts on this?
MARK BLANKENSHIP: Oh, yes. We are concerned about this race to the bottom. Clients are aware of AI, and some of them will be expecting that our fees will come down or be reduced due to automation. But I want firms to keep in mind that the value is commensurate with the effort and the risk of doing the project. We still have this obligation to thoroughly review the output, and for life-safety-related work, the only thing we can effectively use AI for right now is automating routine tasks. So we want to avoid this race to the bottom and focus instead on value-based pricing.
DAN BUELOW: All right. So lastly, are you seeing in the contracts that you're reviewing AI is coming up?
MARK BLANKENSHIP: As a matter of fact, yes. It's already becoming an issue related to contracts. First, let's talk about disclosure. Already in California now there is a requirement that use of AI be disclosed, the tool, the purpose, et cetera. We expect this to travel to different states eventually. From a claims perspective, I think it's a good idea to proactively disclose the use of AI.
If we do not, and there is a problem that's associated with the output, and it comes out in the litigation process that the firm used artificial intelligence but did not disclose that use, this is what we would call a bad fact. A good plaintiff's attorney can spin this and make it look like the firm attempted to hide the use of AI and thereby generate unsubstantiated or unjustified profits, and it would just be a bad look, I will say, for us.
DAN BUELOW: Thanks, Mark. This has really been a great overview of the AI risk and business considerations design firms should be thinking about. This technology really is moving pretty fast, isn't it? I mean, we are, I would say, in the Model T stage of AI. And as this technology continues to evolve, it will be important for design professionals to stay abreast of what's going on here. And these guidelines are really a useful tool.
Thank you, and the RMC, the Risk Management Committee, for making this effort and getting this out here. And again, the guidelines are really only a few pages. And they're just that: guidelines for firms to take and adapt for their own practice. So, Mark, any final words of wisdom you'd like to share regarding the responsible use of AI?
MARK BLANKENSHIP: Well, the guidelines are voluntary, but we think it's a good practice to have some guidelines, like you have rules for the rest of your practice, and then follow those guidelines. It will keep you on the right side of the law.
DAN BUELOW: Excellent. So this concludes our discussion on AI. I want to thank my special guest and business partner, Mr. Mark Blankenship, Director of Risk Management for WTW A&E. Thanks, Mark.
MARK BLANKENSHIP: Thank you, Dan.
DAN BUELOW: And if you'd care for a copy of the ACEC guidelines that we went over, please contact anyone on the WTW A&E team, including Mark or myself. I hope you found this program of interest. For a full listing of all of our WTW A&E Talk to Me About A&E podcasts, as well as all of our WTW A&E webinars and on-demand programs, check out our Education Center on our website at wtwae.com. So thanks for joining me for another episode of Talk to Me About A&E. I'm Dan Buelow, and I will talk to you soon.
SPEAKER: Thank you for joining us for this WTW podcast featuring the latest thinking on the intersection of people, capital, and risk. For more information on Willis A&E and our educational programs, visit willisae.com. WTW hopes you found the general information provided in this podcast informative and helpful. The information contained herein is not intended to constitute legal or other professional advice, and should not be relied upon in lieu of consultation with your own legal advisors.
In the event you would like more information regarding your insurance coverage, please do not hesitate to reach out to us. In North America, WTW offers insurance products through licensed entities, including Willis Towers Watson Northeast, Incorporated, in the United States, and Willis Canada, Incorporated, in Canada.