CHRIS PINC: Hey, Gretchen. Great to be here.
GRETCHEN BRODERSON: Thank you, both. It's great to be here together. Well, I think it's safe to say, it's really no secret that AI is a hot topic and an increasingly pervasive part of our everyday lives. I know I'm preaching to the choir, considering how much time you both spend thinking about AI. But I also think it's safe to say we all still have a lot to learn. So I'm excited for today's conversation personally, as well as for our listeners.
I think it's a management truism that strong and innovative teams embrace some discomfort on the path to innovation and great performance. And let's face it. I think AI is still a little uncomfortable for some. So I'd love to embrace and hopefully ease some of the tension around AI by talking about some myths and misunderstandings that are maybe out there about what it can do and also what its limitations are. So let's start with you, Chris. What do you think is a common myth or misunderstanding about AI?
CHRIS PINC: Well, I would say the biggest concern that we hear when we talk to clients about AI really has to do with job replacement. People are afraid that AI is going to take over jobs, both jobs within HR and jobs outside of HR. And I think that that's a totally understandable concern, because anytime there has been a major technology revolution in the past, candidly, there are some jobs that get replaced.
You think about the internet, which replaced a large number of jobs, or just office automation: people being able to type their own emails instead of having an assistant type up correspondence on a typewriter. Automation definitely results in job elimination for specific, targeted roles.
But what economists have found time and time again is that technology and technological innovation, while they do disrupt specific jobs, in aggregate typically add more jobs than they take away, because they create all kinds of new opportunities for new products, services, and solutions in the marketplace that just weren't possible before.
So if you take the example of the internet, yes, there are some bookstores that went out of business, but there's a whole new industry of all kinds of jobs that are only possible because the internet exists. You think about people who host Airbnb homes or think about people who are YouTube influencers or people whose job is to focus on search engine optimization and web design.
I mean, there are countless new jobs that have been created as a result of the internet. And I think that AI is going to be really similar. AI is similar to the internet in that it's a foundational technology. It's a technology that is going to be able to be used across a wide range of industries, a wide range of applications.
And we don't know yet what those new jobs are. We know some of them; there are a lot of jobs emerging around prompt engineering, for example. But there are so many more that are going to be created as a result of this technology, and we don't know what they are yet. So while I think there's reason to be concerned about specific kinds of roles, especially if you're in copywriting and you write content, for example, there are some jobs that are potentially at risk.
But in aggregate, I think this technology is going to add new jobs. And when it comes to HR in particular, I think that there's a tremendous opportunity here to actually emphasize the human in human resources, as opposed to removing the human from human resources. That's the other myth that we hear about, or the concern, when it comes to job replacement: is AI going to take away from human resources' ability to interact with employees on a person-to-person level?
And we actually think that the opposite potential is really there. That there's going to be more opportunity for HR professionals to take some of the more routine, mundane work off of their plate, answering the questions that employees could look up today, right now on the internet, if they knew where to find content, for example.
And those HR professionals are going to be able to use that extra time to engage with employees on the more strategic and more sensitive issues. So maybe you'll spend less time pointing out to an employee where to find the tuition reimbursement policy, but you may be able to spend more time with an employee who has a family member going through a critical illness and needs a lot more personal support. We think that there's real potential for those kinds of improvements in the human relationship in human resources.
So that's what I would say is just in terms of one of the most common myths that I hear on this topic. But Ryan, what would you add on that?
RYAN MCNEILL: Yeah. First, I would completely agree. I'm glad you brought up time. Personally, I feel like one of the biggest issues that we face as adults in the United States is time management. And having the time to really spend on anything, let alone benefits. One of the things at WTW that we're doing is we're trying to deploy artificial intelligence in a way that's going to create efficiencies for users and for people by predicting why they've come to our site or called our service center.
And we feel like, because AI predictive assistants can generate content or navigation that's going to allow them to do what they need to do as efficiently as possible, that's ultimately going to make the experience easy. And it lends itself to gaining what we call user trust, which ultimately allows them to come back and engage with the experience over and over in the future, and not be afraid of the tools, but really embrace the tools and recognize that they are there to help them, not necessarily do something for them.
So we're very excited about the opportunity in this space, and feel like artificial intelligence is going to be an asset for us.
GRETCHEN BRODERSON: Excellent. Well, I'm relieved to hear that the human and human resources is here to stay. And the conversation reminds me of an anecdote I heard from a colleague talking about using AI in his work and saying, it's like I've gone from using a manual screwdriver all these years. If the analogy is a carpenter, I've gone from using a manual screwdriver my whole life to now having a power drill. And with a power drill, I can do things much more efficiently and get rid of that mundane activity that's not the creative work that really makes carpentry fun and interesting.
So I appreciate that perspective from both of you. Let's talk about another myth. I know AI has great power to answer our questions. And my teen daughters might say that Siri is smarter than I am most of the time in answering their questions. But can you unpack this myth that AI has the power to answer all of our questions in HR, on any topic? Or how would you put some thoughts and maybe guardrails around that idea?
RYAN MCNEILL: What I would say is AI is really the tool for getting us to the answer. We really view AI as a copilot in the analytics space. I mean, there's just so much data to sift through these days, whether it's employee survey comments, behavioral statistics, or market trends. And we really feel like the value of benefits professionals is not mining data; it's really creating actions out of that data.
So we really want to create tools that are going to help employers and administrators home in on specific areas. So often we get bogged down in the data, and that becomes an analysis-paralysis situation. Artificial intelligence is going to help you sift through that and get more efficient in getting to those actual outcomes.
When we talk about mining data with our clients, we really talk about how we put some AI tools on top of those large data sets that are going to allow us to do what we call having a conversation with our data. So rather than writing complex queries with outer joins, inner joins, all kinds of fun stuff like that, we really talk about how we can use a chat bot to really have a conversation.
Talk back and forth, ask questions, and then ask subquestions to get deeper and deeper. And that's really going to get you to hone in on a specific area of interest that's going to allow you to really make a legitimate change.
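To make the contrast Ryan draws concrete, here is a toy sketch of the kind of hand-written query he is describing. The benefits tables, columns, and data are invented purely for illustration; a conversational layer would accept a plain-English question and generate SQL like this behind the scenes.

```python
import sqlite3

# Toy benefits dataset: the kind of join an analyst writes by hand today.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, dept TEXT);
CREATE TABLE enrollments (emp_id INTEGER, plan TEXT);
INSERT INTO employees VALUES (1, 'Ana', 'Sales'), (2, 'Bo', 'IT'), (3, 'Cy', 'Sales');
INSERT INTO enrollments VALUES (1, 'HDHP'), (3, 'PPO');
""")

# The hand-written query: a left outer join to find employees with no plan.
rows = conn.execute("""
SELECT e.name
FROM employees e
LEFT OUTER JOIN enrollments en ON en.emp_id = e.id
WHERE en.plan IS NULL
""").fetchall()
print([r[0] for r in rows])  # ['Bo']: the one employee not enrolled in any plan

# A conversational layer would take a question like the one below and
# generate the SQL above, so the user never writes the join themselves.
question = "Which employees aren't enrolled in any plan?"
```

The "conversation with your data" idea is that the follow-up question ("which departments are they in?") becomes another generated query against the same tables, rather than another hand-written join.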
CHRIS PINC: Yeah. I think that's totally right, Ryan. And maybe, Gretchen, just to go back to your question, will AI be able to answer every question anybody ever has? I think we can pretty safely say the answer to that is no. There are these interesting studies-- I saw this survey that was a survey of AI professionals asking them to predict when certain milestones would be hit in the future.
And one of the milestones was having an AI agent serve as a member of the board of directors of a Fortune 500 company. Some people said 2035, some said 2050, and some said 2075. But everybody who was interviewed said it was going to happen at some point.
I think we're a ways away from AI being able to be an omniscient, all powerful voice of truth on any topic that anybody asks of it. But I do think, Ryan, to your point, that there's real potential for AI to help with analytics and to help employees answer really specific targeted questions. And I touched on this a little bit earlier in the podcast.
But just to go into it in a little more detail, when it comes to self-service and getting specific information, there is real potential for AI to automate processes in a much more effective way than the way they're automated today. Right now, when you get stuck on an automated voice agent on the phone, for example, everybody's found themselves in the hell of trying to figure out: should I press option 1, option 2, or option 3? None of them is really my option. I just want to talk to an agent.
One of the things that AI is really good at is understanding the intent of some content that it's taking in. If you speak to an agent, an AI agent and say, this is what I'm trying to accomplish, this is what the problem is that I'm trying to solve, it does a really good job of understanding that, and then matching that to a list of preset potential options and solutions that can answer your question.
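The "match the intent to a list of preset options" step Chris describes can be sketched in miniature. Real AI agents use language models over many signals; this toy version scores a free-text request against invented intents by simple word overlap, just to show the matching idea.

```python
# Hypothetical preset intents, each described by a few keywords.
INTENTS = {
    "find_policy": "where find tuition reimbursement policy document",
    "update_benefits": "change update my benefits enrollment plan",
    "talk_to_agent": "speak talk human agent representative person",
}

def match_intent(utterance: str) -> str:
    """Return the preset intent whose keywords best overlap the request."""
    words = set(utterance.lower().split())
    scores = {name: len(words & set(kw.split())) for name, kw in INTENTS.items()}
    return max(scores, key=scores.get)

print(match_intent("I want to update my plan"))      # update_benefits
print(match_intent("can I talk to a human please"))  # talk_to_agent
```

The point of the sketch is the shape of the pipeline: free text in, a ranked match against preset solutions out, instead of forcing the caller to navigate a numbered menu.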
So you're really going to start to see in the next couple of years AI agents, whether that be through a chat bot that you type into or through an interactive voice response system that you talk to over the phone or on your computer. You're going to really start to see improvements in these day-to-day interactions that we have with bots that make them more effective and free up time for individual employees in a workplace scenario, where they'll be able to get the information they're looking for more quickly and then focus on the things that really matter: the work-related things they need to do, or, if it's a personal question, taking the answer and applying it to what they need to do in their life outside of work.
GRETCHEN BRODERSON: I love that phrase you used, Ryan, around having a conversation with your data. And Chris, some of your comments around just this is all very iterative. And it really implies that it's a dialogue and a balance between people and information and AI, not just relying on AI to have all the answers.
So clearly, AI is playing this bigger role in answering or generating answers to questions. So what about people who say that there's a real risk of bias, that maybe AI is inherently biased? That's another myth or fear that we sometimes hear. It sounds a little sci-fi movie to me to say that AI has its own agenda. But what's your take on bias in AI?
CHRIS PINC: Yeah. So I would say that the people who say that there's a risk of bias are 100% right. The people who say that AI is inherently biased, I think, are overstating the case. So absolutely. There have been multiple cases where the bias in AI has been shown to exist. One of the famous ones is an organization was using AI for recruiting, scanning resumes, evaluating candidates, and selecting people who would go on to the next round of interviews.
And what was found after a pretty detailed analysis was that the organization was selecting based on basically the same criteria it had selected on in the past, which would be fine, if those criteria were objective. But in fact, there was a lot of human bias in the selection process in the past that selected people who were from overly represented groups as opposed to underrepresented groups.
And that bias then got replicated in the automated system. And that's a real concern, because I think there's a tendency for people to just blindly trust: if the AI said it, it must be right; if it came out of the algorithm, it must be accurate. Whereas when there's human decision making involved, there's a recognition that there's bias, and so there are extra checks and balances to make sure that you're accounting for and preventing those kinds of biases from emerging.
So what that really means is that there is a risk of bias, but it can absolutely be mitigated. And many organizations are doing great work to really thoroughly examine the source data that they're using to detect any underlying biases that are being used to train their agents, their chat bots, their AI solutions, their algorithms.
And if that due diligence is done effectively and the data going in is bias-free, then there is not an inherent bias in AI. It's really entirely dependent on the training data and the way that training data is used.
And so my advice to HR professionals out there is that when you're considering using an AI solution, particularly in situations when there's some subjective judgment involved, like a hiring decision or pay increase decisions or promotion decisions, et cetera, it's really, really critical to ensure that the vendors that you're partnering with or exploring have a lot of really robust processes in place to ensure that there aren't biases in their training data, to ensure that there aren't biases in that creation of the algorithm itself so that you're able to get to bias-free recommendations coming out of those systems.
RYAN MCNEILL: Couldn't agree more. I think this is an area where rush to market can really impede our goals of using AI in an effective and efficient way. In order to gain that user trust that we talked about previously, you're absolutely right. We have to make sure that underlying data source is accurate and reliable. Just like a house. If the foundation isn't solid, it doesn't matter how nice your kitchen is.
If your data is not solid and your inputs are not solid, it doesn't matter how slick your AI tools are; the answers provided will not be accurate, or at least not completely accurate. And then the user is not going to utilize those tools any longer. So I completely agree, and we have seen in our own use of artificial intelligence that spending the time and the energy on training a large language model to ensure that engine is completely accurate and reliable is probably the most important part of developing AI tools for the future.
GRETCHEN BRODERSON: I've heard you both use words like trust and reliable. And we've talked about risk somewhat in this conversation, which brings to mind the word security and the fact that in this increasingly digital world, data security and privacy, of course, is always top of mind. Some might posit that AI compromises privacy and security. What's your take on that? Ryan, you want to start with this one?
RYAN MCNEILL: Yeah. I'm going to maybe flip the script a little bit. I think done responsibly, AI can actually enhance privacy and security. We believe that we're going to utilize AI to proactively analyze user behavior on employee self-service applications to determine if actions that are taken should be flagged as suspicious or fraudulent. Right now, it's pretty much a manual review. And that behavior is reactionary, whereas I believe the AI tools can help prevent these things more proactively.
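The proactive flagging Ryan describes can be illustrated with a deliberately tiny sketch. Production fraud detection uses learned models over many behavioral signals; this invented example just scores one signal, a login hour, against a user's own history and flags statistical outliers instead of waiting for a manual review.

```python
import statistics

def is_suspicious(history_hours, new_hour, threshold=2.0):
    """Flag a login hour more than `threshold` standard deviations
    away from this user's historical pattern."""
    mean = statistics.mean(history_hours)
    stdev = statistics.stdev(history_hours) or 1.0  # guard against zero spread
    return abs(new_hour - mean) / stdev > threshold

usual = [9, 10, 9, 11, 10, 9, 10]   # typical weekday login hours
print(is_suspicious(usual, 10))     # False: within the normal pattern
print(is_suspicious(usual, 3))      # True: a 3 a.m. login stands out
```

The design point is the shift from reactionary review to a per-user baseline that can raise a flag the moment behavior deviates.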
Additionally, I think I'll also go back to the source of the data that the AI tools are leveraging. If we're securing our data properly, the threat of security being compromised due to AI is significantly mitigated.
CHRIS PINC: Yeah. I think you have a really good point. I mean, the fraud detection capabilities are getting better and better every day. And people are experiencing that, I think, in the consumer world. You get a fraud detection alert on your credit card, for example. And those are quickly going to move into the HR space and, I think, add a lot of value.
I do think at the same time, there are some situations where HR professionals do need to be careful about data security and data privacy, because of some of the new capabilities that AI solutions provide, especially when it comes to the free chat bot solutions that are out there like ChatGPT and others.
Most of them, if not all of them, in their terms and conditions give themselves permission to use any information that is entered into them. So if you are using ChatGPT for work, and I encourage everybody to play around with it and explore it, you have to be really careful not to enter personally identifiable information or sensitive company-specific information, because any information that goes in can then be used in that large language model.
And the question that you type in could come out as an answer to somebody else's question. So there's a real need for caution there. The other thing is that as people use AI agents more and more and start to trust them more, they start to enter information that is potentially more sensitive.
There is a real rise, for example, in the HR space: a lot of startups are coming out with chat bots that are coaches for employees. And they start with pretty straightforward, simple things: advice on how to get ahead in your career, advice to new professionals, how to behave professionally in a business environment.
But because you can type anything in you want, people could start to type in sensitive information about relationships that they have at work that may be tense or challenges that they might be having with a coworker. And all of that private information is beyond what I think people typically share today in conversations that are recorded.
All of the information that you type in is recorded. So there's a real need for organizations, when they're thinking about AI-based solutions, to really think about the data privacy and security implications, because there is so much more sensitive data that can be potentially ingested in these systems.
And organizations might even have the best privacy policies in place. But if they don't have the right security in place and they get hacked, then those privacy policies are out the window. So really important, as you're thinking about looking at vendors in this space to really examine thoroughly their data privacy and data security policies to make sure that you're covered in case there are any potential issues that could come up.
GRETCHEN BRODERSON: Some great reminders there. Thank you, both, for tackling some really important questions, I think, around these myths surrounding AI. I think it's good to hear how AI is transforming and
enhancing work and jobs, not necessarily replacing them. And that balancing technology with human insight and judgment and creativity really continues to be crucial, even as AI becomes more pervasive.
And of course, we all need to be mindful about the data and the tools we're using and how we're using them to mitigate risk. So before we wrap, I'd love to just hear any parting advice you might offer to our listeners about how we can find the benefits in AI. Get a little more comfortable with maybe the uncomfortable. Ryan, you want to start?
RYAN MCNEILL: Yeah. I think my best advice is to use the tools. I think the best way to get comfortable with anything is to start using it. See what you like about it, see what you don't like about it, see what works and what doesn't work for you. I think about technology as a whole. Chris mentioned the internet before. I mean, no one was comfortable with the internet before they started using it.
If I think about smart speakers, probably more recent, those were not things that we used in our everyday life, and now they are somewhat crucial to, at least, my behavior: getting up in the morning and stuff like that, setting my wake-up alarm. So I think just beginning to use technology in your personal life really helps you see the benefits of how you might use it in your professional life as well.
CHRIS PINC: Yeah. I think I would agree 100%. I think that's the main point that I would emphasize as parting advice. I talked a little bit earlier about whether or not AI is going to take your job. And the best advice I heard is, AI is not going to take your job, but somebody who uses AI might.
And this is really going to be a very pervasive technology that we all need to learn how to use and get comfortable with. What's been fascinating to me is, it's so much easier than I thought to really start to experiment with AI and use it. And at a really basic level, you can do that by going into ChatGPT and asking some questions.
The next level up I would say, is, and it sounds scary, but it's really not, trying to experiment a little bit with prompt engineering. Prompt engineering sounds like a fancy term, but prompt engineering really is explaining to a chatbot what you want it to do. And there are some really great sites out there.
One is called po.ai. I went in not knowing anything at all about prompt engineering or even about computer programming. I've never written a line of code in my entire life. But you can go into these systems, and there will be a one-page or half-page set of instructions on how you might want to write some prompts. And then you can write in some prompts and quickly design a chat bot to do different things. And you can experiment with them.
Some of them are goofy and jokey and others are more serious. But the real revolution with the large language models that we're seeing is that computers can now listen and talk in natural human language. You don't have to know how to code in Python to communicate with a computer, and the computer doesn't have to communicate back to you in code either.
You can write in natural language, and the computer responds to you in natural language. So these are things that are really, really easy to do. The only obstacle is the mental obstacle of saying, wow, I could never be a prompt engineer. Well, it's actually a lot easier than you think. And the more that you go in and experiment, the more you'll start to see how some of these applications can really help you in your day-to-day work. I use AI all the time in my day-to-day job, and it helps me with all kinds of tasks. And I think that our listeners will find that to be the case as well.
GRETCHEN BRODERSON: This has been a great conversation. I'd like to take a moment to thank our guests, Ryan McNeill and Chris Pinc. So good to have you with us.
CHRIS PINC: All right. Thanks, Gretchen. It was great to be here.
RYAN MCNEILL: Thank you. Great conversation, guys.
GRETCHEN BRODERSON: And really, I'd like to thank our Benefits, with Purpose! listeners. We appreciate you joining us. We hope you'll subscribe to future episodes, and we look forward to having you back here next time.
SPEAKER: Thank you for joining us for this WTW podcast featuring the latest perspectives on the intersection of people, capital, and risk. For more information, visit the Insights section of wtwco.com.