Podcast

Ethics and AI in senior living

The Senior Advisor: Season 3, Episode 6

December 1, 2025


Join us for an important discussion on the ethical challenges facing senior living communities as they adopt AI. We'll look at two views. One focused on communities actively considering or using AI technologies, including how to evaluate vendors ethically, create ongoing oversight and ensure resident values guide decisions. The other will address concerns from senior living communities hesitant about AI, exploring their ethical worries and barriers to adoption.

Whether your facility is moving forward with AI or taking a cautious stance, this conversation will offer essential frameworks for making ethical choices that truly benefit residents.


Transcript for this episode

JASON LESANDRINI: AI is not the boogeyman. It's about us doing it right. Yes, it's wonderful technology. We've had lots of technological innovation over the last 100 years plus, whatever. It's about doing it right.

SPEAKER: You're listening to The Senior Advisor, a WTW podcast series where we'll discuss issues facing the senior living industry and explore risk management solutions, hot topics, and important trends critical to senior living operations.

RHONDA DEMENO: Welcome to the Willis Senior Advisor podcast. My name is Rhonda DeMeno. And I will be your host for today's discussion. Today, we will be discussing a very complex area for senior living, the area of artificial intelligence.

We know there are many complexities in artificial intelligence. People are hungry for information on this topic. But I thought it would be very important for us to really peel away the layers of the onion, so to speak, to really talk about some of those ethical issues behind AI.

We know that there currently aren't many lawsuits out there because artificial intelligence is still so new, though we are forecasting that, as we get more involved in AI and machine learning, ethical issues will arise from their use. So, we're going to talk a bit about that today.

I'm very honored to introduce our two guest speakers. The first speaker is Jason Lesandrini. Jason holds a PhD and is an expert in ethics, leveraging innovative techniques to build ethical capacity in leaders, teams, and cultures. He leads the department of ethics, advanced care planning, spiritual health, and language access services for Wellstar Health System.

In his role, Dr. Lesandrini provides leadership and resources to promote ethical behavior and decision making aligned with the system's missions, visions, and values. So welcome, Jason. Thank you so much for taking the time to talk to our audience.

JASON LESANDRINI: Yeah, thanks, Rhonda. I'm glad to be here.

RHONDA DEMENO: And our next guest is Dr. Tammy Winner. Tammy is an award-winning professor of writing at the University of North Alabama, a published academic author, a NASA faculty fellow, and a keynote speaker, with over three decades of experience. She has designed and delivered university-level courses in professional writing, technical editing, journal writing, and podcast scriptwriting, equipping thousands of students with tools to express their message with confidence. Welcome, Dr. Tammy. Pleased to have you.

TAMMY WINNER: Thanks for having me. It's great to be here.

RHONDA DEMENO: So, as I mentioned, we'll be talking through the ethical challenges facing senior living communities as they navigate AI adoption. Willis has done a lot of different publications on this topic. We've done a webinar on the responsible use of AI. So, if any of you out there are interested, please check our Senior Advisor podcast page or our Willis Towers Watson health care page so you can have access to any of those resources.

The other topics we're going to address: we'll really look at two different perspectives from communities, starting with how they are actively considering or have already implemented artificial intelligence technologies. And then we'll look at the other side of the coin, so to speak, ethical reservations and barriers to implementation. So, I know that there's quite a lot to unpack. So we're going to start our questions.

Our first question goes to Jason. Jason, when senior living communities are evaluating AI vendors, what ethical red flags should they watch for? And how can they dig deeper than just checking compliance?

JASON LESANDRINI: Yeah, thanks, Rhonda. Just thanks, again, to the community for being on the podcast. And thanks to my co-podcaster Tammy on here. So just, again, want to just say thank you.

So yeah, I mean, it's a good question. I think what this gets at is how we think about being proactive in the ethical evaluation of AI technology as it's coming down. There's been a number of articles and other podcasts and things that have been done that think about this. But when I think about this, I think there are a couple big things that stand out.

One that I think a lot of people have heard about is really talking about how the algorithm makes decisions. What data are they using? What is the process they're using to make decisions? And what we tend to see lots of times is vendors hiding behind the language of proprietary technology when you ask about data sources or decision-making processes. What I would say is that strikes me as a big ethical red flag. If they can't tell you how it's making the decision or what data they're using, I think that becomes problematic.

Second, I would lean on the vendor to tell you what biases they saw when the AI was being used or even when it was tested. The fact is that every data set used for training these tools has inherent biases in it. So anyone who's claiming that their system is perfectly neutral or doesn't have any biases built into it, I think, really just doesn't understand their own product or really isn't being honest with you about it.

The last couple of things I might ask about, in terms of red flags from the get-go: where did they test this? These tools are tested on populations. And we want to make sure that those populations actually reflect the residents being served by the senior living community.

And so who was involved in the development of this? Did they involve different stakeholders that might help engineer the model so that it actually represents the people that it's trying to make decisions for?

And then I ask quite frequently, what do we do when it gets things wrong? Do we always allow for a human to step in and override the agent? Lots of times, I think, organizations will pick up tools and just let the AI tool go. And I think we really need to make sure that we have some type of human oversight long term. And if a vendor gets defensive about any of these types of questions when you're questioning the ethics of what's going on, I might say keep looking for a new vendor.

RHONDA DEMENO: Well, those are really good takeaways. I know myself, I'm currently taking a course at Harvard Medical School on AI and technology. And a lot of what we're looking at is how easy it really is to go out on ChatGPT and build, say, a resident service plan. I found that. But to your point, Jason, it really does take an extra set of human eyes to go in there to ensure that there are no hallucinations and that actual data is being used to come up with a real solution that is customized to the population we're serving.

So love those responses. My next question, Tammy, then goes to you. What are the most significant barriers, like financial, cultural, or technological, that senior living communities face when considering artificial intelligence implementation? And how might these be overcome?

TAMMY WINNER: Sure. The first thing that comes to mind, Rhonda, when I think about the financial or technological barriers-- what we're seeing across the board, especially in senior living communities, is fear-based emotional resistance to AI. What I mean by that is that you have this resistance that stems from not just the residents themselves not understanding what AI is and how it can be used as a tool. But you also have the families of the residents. And then you have the staff as well. Let's break down those three areas. And hopefully, I can answer your question. But it is complicated.

From a resident's perspective, a lot of people have, as we know, a resistance to technology regardless of what it is. And AI is simply another form of technology that people resist because they don't understand it. So you have residents who may not think they need it or understand it.

And these are folks that are not in the memory care ward but are in assisted living or senior living communities for reasons other than memory care. They want to understand it. They want to adopt it. And it's being pushed toward them by the families, because the families themselves, who do understand the advantages of using AI as a tool, are finding ways to push it onto the residents.

And when I spoke-- I guess it was a year ago this week. I spoke at a conference in Virginia. And I spoke about the different ways that families are pushing AI technologies into senior living communities that the staff and C-suite executives of those communities may not even be aware of. I had C-suite executives come up to me after that talk and say, what are you talking about? You think AI is already in our senior living communities and we're not aware of it? And I said, absolutely. And I gave a couple examples of it.

So I say all that to make the point that the monster is out. It doesn't have to be a monster. It can be a tool. So it is already there. So if you as a senior living community are choosing to ignore AI until you have to address it as a risk factor or as a tool in your community, you're taking the wrong approach. It's there.

So you need to recognize it. Because if you don't recognize it, the families of your residents are going to push forward. And they're going to introduce it in some way that you may or may not understand. So it's really important that in order to care for your residents and your staff, you really need to push past the fear-based emotional resistance to the AI and put in some best practices in terms of setting up an AI framework and understanding how AI can be used as a tool in your community by your staff, by your residents, and by your residents' families.

RHONDA DEMENO: Now, I really like those responses. I think some of the best ways to ease residents into it is really clearly communicating and disclosing the ease of use so they're not so resistant or afraid of it. And to your point, what we're finding is that some families are becoming very savvy with the different technologies. And they're going to the senior living communities and saying, you're not using this technology. So you're putting my mother or father at more risk for ongoing falls or elopements, those types of things.

So there is a hidden risk if you do not adopt technology into the community. So when I'm talking to our clients, I always try to stress that too. It just has to be the right fit.

And the way we go about communicating this technology and educating, I think, really would help with ease of use and build trust. Really like that response. So Tammy, some senior living communities, though, are worried about how to go about introducing AI. And I talked about this trust among residents, families, and staff. Can you elaborate a bit more on the areas that are creating this distrust?

TAMMY WINNER: Right. So let's start with the group that I see currently showing the most resistance: staff. Staff in senior living communities are obviously worried that the AI could replace them. So the clearest way to quiet that is to be very transparent about using the AI as a tool, that the AI will never replace human touch, and that we always need a human brain next to the AI to watch what's going on, to monitor it.

So all we're doing is introducing it as a tool to make the senior living community more efficient, more effective. And I'll give an example for that. If it's used effectively and the staff understands that, it's just going to free up time with the staff to spend more time administering care.

And care is always the number one aspect of any senior living community as far as I'm concerned in terms of the research that I've done. The people want the level of care. That's what's important.

So if you introduce the AI to the staff, then the staff will then be able to explain that to the families. And the families will be able to explain it to the residents. So it'll kind of trickle down in terms of everybody understanding what the AI is supposed to do.

Essentially, you want to explain it in a way that's transparent: the AI will free up the staff from other responsibilities so that they can focus on care and the human touch, and on what the AI can do for the residents. But families will have concerns about privacy and oversight, while the staff is worried about being replaced. You need to let them know: you're not being replaced. We're just trying to free up your time to provide more care.

Families, in terms of privacy and oversight-- listen, privacy and oversight can't compare to being able to provide a higher level of care for your family member. And to some degree, their privacy has to be compromised in this way in order to provide that level of care. And then, of course, the residents themselves want that level of human touch and human communication.

Again, the issue of transparency. The AI will help us care for the residents more effectively. It does not replace human care. It's just being used as a tool. So there are some things that need to be put in place in order to create a staff that are ambassadors of trust for AI instead of skeptics.

RHONDA DEMENO: Very good. I really like that, really building a staff as ambassadors for technology adoption. Now, as I mentioned earlier, our topic today-- we're focusing on ethics. So there's a lot that goes into trust and distrust. But I'm turning this next question over to Jason.

Jason, artificial intelligence companies are always promoting what they're doing. They often talk about helping seniors. But who's actually making the ethical decisions behind these technologies? And how can communities, when they're sourcing AI, ensure those values align with the residents' needs? Can you expand on that a bit?

JASON LESANDRINI: Yeah, Rhonda, I start to think about this question in terms of who's driving the car. Who's in the passenger seat, just observing? And really, who's driving the car?

And that will depend, of course, on the vendor and the technology and what they're trying to do. But I think what we need to think about is, who built it? And what did they build it for?

So sometimes, these technologies are integrated into communities, have good connections there, and get feedback from them. Those tools tend to reflect the practices of the senior living space. And then sometimes it's not that. It's a far-off, distant company that is not connected back to that space. Or the tool was created for another space, and they saw an opportunity to market it into this one. And so we just have to get clear about that, thinking about whether these developers or companies have really spent time in the spaces of senior living.

And so the other thing we need to make sure we're thinking about is, when they built it, whatever company it is, whether they're connected to this space or not, asking: who did you talk to when you built this tool to think about our community and the unique populations that we serve? Did you involve gerontologists? Did you talk to social workers? Did you communicate with families of residents? Did you talk with people who actually work in senior living?

Did you ever include seniors themselves? I think we often miss the boat on engaging the people this is going to impact the most. Wouldn't it be nice if they were involved in the development? Because too often, technology gets developed for seniors without actually involving them in the process of development. And I think this is true of the AI and ML tools that are out there right now.

And so when I think about driving the car and a community thinking about adapting to technology, I think about, how can that community drive the car and align that path that they're on with the values of the community, with their residents, their team members? And really, that's about engaging the staff at the local level. Like Tammy said, it's about making staff aware, seeking their input on it, creating potentially even advisory-- even if it's a small group of people, an advisory board that reflects how the organization is going to think about AI and what frameworks and tools, as Tammy talked about, are they going to adopt.

And how do we make sure that those tools that are adopted represent the values of the organization and the residents? So talking to them, talking to the staff, talking to their families, asking them what's important to them. Tammy pointed out privacy and concerns about privacy. I think that's right.

I think we got to ask them, is privacy something that worries you? I think it is. It's making sure there's a human connection and then looking at how each one of these tools actually supports those values or undermines them. To Tammy's point, how does it enhance the individuals who work in these settings to provide the care that they're best at providing? And that requires that human connection.

Don't let fancy language and neat claims about empowering seniors or their families substitute for real, concrete evidence that the technology respects the values the organization is trying to live out, and respects the senior living community's residents, who can often be at their most vulnerable.

RHONDA DEMENO: Really like those comments. Jason, there's a lot of talk about-- and I know Tammy even mentioned it-- the efficiency of care. Obviously, building efficiencies for care is essential in senior living, especially with the workforce shortages. We know that. Who else wins when senior living communities adopt AI? And how should that influence implementation decisions?

JASON LESANDRINI: Yeah. So listen, let's just be honest and be transparent with ourselves. There's no doubt about it. The company or the vendor of the AI tool benefits financially. But I don't necessarily think that's a bad thing if the value exchange is fair.

So I think what we really need to be thinking about is whether residents are the primary beneficiary, or whether they're secondary to other stakeholders. How are we thinking about where those residents fit in?

And yes, there are other stakeholders who might benefit. So insurance companies might benefit from better documentation or risk prediction. Families might benefit because they might get better remote opportunities to see their loved ones. Staff might benefit, as Tammy said, from having less administrative responsibility and being able to be with people, holding hands, touching in the way that residents need.

And I think corporate owners can benefit or owners of these senior living communities can benefit from better operational cost savings. There's nothing inherently wrong with that. I think what we need to do is just we need to be honest about who's benefiting and how we're prioritizing those values.

So if some type of tool that's going to be put in place reduces liability exposure or cuts staffing costs but creates this surveillance environment, what are we trading off? We're trading operational cost savings for a sacrifice of privacy. And how does that align with who the organization is and says they are, and with the community they serve?

And so we just really need to ask these kinds of hard questions up front. Look, it's OK to adopt a tool. It's OK to adopt an AI tool if it saves money. But what is that saved money going to do? What does the operational efficiency get for us? What do we get at the end?

And I think if we're doing this type of work around who benefits and what values are getting promoted, we're becoming more honest with ourselves. And we're serving our communities in a better way. I, for one, would be more inclined to go to a community and stay in that community if I felt like my values were going to be heard, my family felt like their values were going to be heard, and there was alignment. And what better way to align than just being transparent and clear about what we're doing with it?

RHONDA DEMENO: So I think that brings us to the point where we say a community is really seriously considering a technology solution, whether it's for staffing, care, or risk mitigation for elopements or falls. Is there some type of ethics checklist or ethical evaluation for AI adoption? I really think that would give senior living community operators peace of mind and really help guide the decision making. Jason, can you expand on that a bit regarding the ethical checklist and ethical evaluation?

JASON LESANDRINI: Yeah, thanks, Rhonda. And I'm going to pick up on some of the stuff that Tammy said. And I'm going to try and tie together what I've been saying in the past.

I think about it in three phases. So there are checkpoints and checklists in each of the phases-- pre-implementation, implementation, post-implementation. And then we have to recognize that things are going to change over time. I think about people creating these ethics checklists.

And then they say things like, well, we did the checklist. We're done. And I think that's akin to saying, well, I did a background check on an employee when I hired them. And then I never evaluated their performance again.

It's not a one-and-done thing. You might be able to catch obvious problems up front. But listen, things can go awry as people work with us, as tools get implemented. And so we need to make sure that we're thinking about them in different phases.

And part of the reason is because these tools can learn and change. So the algorithm that you worked with one month ago might start showing different types of biases in six months because it's a tool that actually learns. And so we need to be clear about that.

I think the other thing we have to be conscious of when we're doing ethics checklists is, yes, they're important to think about. Pre-implementation: what data was this trained on? Implementation: is the tool working and delivering benefits? Post-implementation: what do we learn from the outcomes?

But we really need to make sure that we're thinking about this long term. Because not only do our staff change, not only do the tools change, but so do our residents. They might have different needs. They might have different vulnerabilities than when you originally set it up. You might have introduced something during COVID. And then things are different. And so there might be a different way the tool starts to think about this.

So I think ongoing evaluation is especially important because of all these changes. And just thinking about the people we take care of in senior living communities, who put their trust in us, back to both of your points, as we care for them-- they may not be willing to speak up about problems. Their family may not be willing to say something. Or you may have someone with dementia or some type of illness who doesn't have the ability to do so.

And so we need to make sure we're thinking about implementing regular structured check-ins that look for problems rather than waiting for complaints. Rhonda, we could talk probably for the whole day. I imagine Tammy could get on this bandwagon with me and talk extensively about what needs to be on that checklist.

I think what we need to be super conscious of, one, is the phases of implementation. So I would think about what unique set of questions comes up in pre-implementation-- questions we've already talked about. What data are they training it on? How does the algorithm make decisions?

Do we feel comfortable with that? What values do we think this would be promoting? During implementation, are we noticing any change in conditions? Are we monitoring the way that it's working?

I think about this in the acute care space. I was talking with a colleague of mine the other day about some AI tools that have been used in the cardiology space to measure plaque on vessels. And what they were sharing with me is that this tool learns over time. So it appears to be getting better. But what it's doing is making people complacent.

So the tool did it all-- there's the answer, click and accept-- versus, yes, there's still a human, but we don't want that human to get complacent. So how are we monitoring that during the implementation phase?

And then at the back end, it's about thinking, we've run this tool. We've now had a six-month run. Let's look at what it's doing. Let's compare the outcomes. Tammy was saying this.

We thought this tool might give some pause around privacy. Did it? Did the residents have to give up privacy? Did they give up more than what we thought they had to give up? What are we doing?

And so making sure we have clear, established, quote unquote, "ethics checklists" that happen over periods of time where we can intervene, all the way through post-implementation. And be honest with ourselves. Ultimately, at the end: is this tool living up to the values we said we wanted it to serve? And there's no easy answer to this. It's not a yes or no. It's, tell us about it. What was it?

Because at the end of the day, back to the questions that you were asking before, Rhonda, this is about who's driving this bus and who is benefiting from it. And it strikes me, the focus should be here on the residents. Yes, it will provide operational efficiency. Yes, it may help the organization. It will help the vendor. But we need to put these things in place to make sure that it really is doing that, that it really is benefiting the staff, it really is benefiting the residents. And that's what's required, is doing a checklist like this.

RHONDA DEMENO: Jason, do you have any suggestions on a resource where our listeners can find an ethics checklist? Is there a resource out there?

JASON LESANDRINI: A good question, Rhonda. So I think there are a number of organizations out there you can look to. I think the World Health Organization has published articles on it. There are a number of places. But maybe what I might tell the audience today is to think about things in those three different spaces.

So in pre-implementation, you might think of things like, is the technology respecting resident autonomy? Is it trying to make decisions for them? Is it causing any harm directly?

Does it increase burdens on residents? Does it increase burdens on staff? What data is it collecting? There are lots of different factors that we could think of in the implementation.

And then during implementation, we might look at even simple things like, what misuses of the technology are we seeing? And for most of these checklist questions, what I would say-- and I'll give you a few others to maybe help the audience-- is that I don't think they should be yes-or-no checklists, Rhonda, partly because yes or no are really easy for us to just repetitively click. I mean, we all know this. When you go buy a home or you're signing a consent somewhere, you just click through it.

Recently, I was telling you guys before, I had an accident. I needed to go to the doctor. I saw the forms. I was like click, click, click, click. I knew there were all these clicks that needed-- I just accepted yes because I wanted to get into the door. I want to go see the doctor.

There are those things that I want. And so I think it behooves us as owner operators or companies or organizations or whatever to say that we're not going to just ask for yes-no but that we're going to leave it as an open-ended question. So we might say, how is this technology respecting resident autonomy? How is it making decisions for them? So we're thinking about the content of that rather than a quick automatic response.

I think, going back to your point about what types of questions we might ask or tools we could use: during implementation, are our residents able to use it? Is the staff able to use it? How has it been designed? Does it account for things like vision or hearing impairment or language barriers? How is it impacting those spaces where patients, residents, or staff might be most vulnerable?

And then in post-implementation-- we've started talking about this-- how is it meeting our values, and can we describe that? What new issues have emerged? What is the comfort level of our residents and our staff in adopting the technology?

And I think there are different frameworks that people have created that ask these kinds of questions. And of course, the audience is more than welcome to reach out to me if they ever want to. But I think you could go to things like the World Health Organization. I think there's something called CAIS, which is an AI group. There are lots of them out there. You can Google them. But what I would ask the listener to do is really tailor them to their needs, making sure they're not focusing just on yes-no's but really keeping those questions open ended.

RHONDA DEMENO: That's so important. And I love that-- keeping those questions open ended. Because when you do a yes or no, you're just not getting to the root of what's really happening. So great feedback there, Jason.

Tammy, the next question goes to you-- and we're going to be wrapping up our conversation today; just a couple more questions here. We talked earlier about how there's a lot of hesitation around adopting AI. Senior living communities may hesitate due to ethical concerns around privacy, dignity, and the importance of human touch. How can leaders strike a balance between these ethical priorities and the potential benefits?

TAMMY WINNER: Yeah. The first thing, the most important thing, that I can share in this space, in this time frame today is to involve stakeholders early. And that means in terms of stakeholders-- once again, I circle back to what I said earlier about the residents, the staff, and the families. Those are the stakeholders.

So the simple thing that you could do at this point-- like Jason said, looking at the very beginning of this whole conversation, the preemptive step you can take to begin with-- is to create a resident technology committee and put people on that committee from the three groups I just talked about. Put residents who are able on that committee. Put family members who have the time and are willing. And put staff on that committee, so that they all have some input on shaping the process and on how they feel AI can or cannot be used within that senior living community.

And so they all feel like they have some ownership. And that suspicion or hesitation turns into ownership in the process itself. Because essentially, what you want them to do is create an AI ethics and oversight policy because that policy is going to be different for every senior living community. And that policy is going to be shaped based on the core values of that senior living community.

So the policy may be a no-AI policy at the end of the day, after those stakeholders all have input. Or it may be a policy with contingencies, or a policy with resident data privacy addendums. Most senior living communities, in order to mitigate risk, have a resident privacy policy in place.

That's going to need to be amended, or some kind of addendum will need to be added to cover AI and the ethics of AI. An AI vendor vetting checklist should come along with that, too.

Jason was talking about that at the beginning of the podcast. How are you going to vet these AI vendors? There needs to be a checklist in place. That checklist needs to be created with the help of staff, family, and residents.

And it all begins with the staff. There needs to be a staff training policy in place too. That all circles back to the individual senior living communities' AI ethics and oversight policy. So it really needs to start with the residents, the staff, and the family coming together to create a resident AI technology committee in my opinion.

RHONDA DEMENO: That is really good information. I haven't really heard that. But I think that just makes such good sense and good practice because you're involving those key stakeholders. And I know communities oftentimes don't have the bandwidth-- another committee.

But I think as we're embracing technology more and more, this is a standard that really should be considered. Great feedback, Tammy. Thank you for that.

TAMMY WINNER: Rhonda, let me just add one thing, if I may, for the C-suite executives that may be listening. They won't be surprised that it won't necessarily be the residents-- it will be the families. From what I've seen in the data that I've collected, it's the families that are pushing AI technology into senior living communities. And you were talking about bandwidth.

These places are largely understaffed. And they just don't have the time to be thinking about this. But you're getting residents' families coming in and saying, we want this. So really, truly, you could create these committees-- create them with the residents, with their families, and with staff that you free up to lead these initiatives. But the push is coming from the families.

RHONDA DEMENO: Oh, absolutely. We see it every day. We see it every day. And the consumer is becoming more savvy, with higher technology intelligence. So this is something, as we mentioned earlier, we really have to consider adopting and be very forward facing about the technology communities are using, because that is only going to help census. If consumers are comparing one community to another, that's going to appeal more to them. And I think as baby boomers become more sophisticated, they're going to have that expectation. So really good feedback there.

So one question I have-- really, a final question for Jason, and then I'm going to ask both of you to do a summary. Jason, senior living communities often lack the internal expertise that hospitals have. We know that we can't compare a senior living community to hospital infrastructure.

Sometimes, senior living communities don't have an ethics committee. They may have a chief medical director. But oftentimes, it's not to the level that a hospital has. So how should communities structure their ethical evaluation of AI tools? Who should be at the table really making these AI decisions? I think we addressed it in the earlier conversation points. But if you'd like to add some final comments on that.

JASON LESANDRINI: Yeah, I do. I do. So I think Tammy hit the nail on the head. I think that's exactly right. I think she talked about the stakeholders that need to be there. I'll just reemphasize one.

Residents and/or their families need to be there. And it doesn't mean they need to be every resident and every family member. The committee doesn't have to be that large. It can be a small group of people. But there probably need to be those three key stakeholders there.

And you're right, Rhonda, about the ethics expertise. Now, listen, I think ethics is owned by everybody within the organization. There's no doubt about it. But some of these issues are complex and require a level of sophistication or knowledge or training that someone with ethics expertise could bring.

So I think it is possible. Well, let me say what it isn't. It isn't likely that senior living communities are going to hire their own ethicists on staff to help them navigate these things. That may come to fruition someday.

But I do think there are lots of ethicists out there, or people with expertise in ethics, who are willing to work with senior living communities on a contract basis, or even in a temporary time frame, as this structure that Tammy so eloquently talked about gets set up. Getting those processes and forms and documents and policies in place is the beginning phase. Once you get that off the ground, you may not need someone on an ongoing basis.

It can be that structure that's set up with all those stakeholders at the table, and then intermittently involving someone with ethics expertise to help make sure they're seeing the things they need to see, paying attention to the things that are not being seen, and really evaluating from an ethics perspective. That's not asking for a full-time person on staff-- more of a short, episodic time frame, with a few hours here and there as they help drive this.

I think at the end of the day, it doesn't need to be complicated. It can be small workgroups with involvement of others. But I think we need to just make sure that we're giving a fair shake to senior living communities and the resources that they do need. And that's what I would ask all the operators and organizations out there and leaders out there.

Are we making sure the structures that we put in place are giving a fair shake to our communities, our families, the organization, and the vendors in terms of this broader ethical framework? And sometimes that requires temporary, part-time, or small-scale involvement of outside experts. And understand, Rhonda, in full transparency, I have a bias in that space. This is the space I sit in. So I recognize where I'm coming from.

But I do think they bring value. I've seen it in my own work with other organizations. And I've seen others in my field being able to really return back to organizations the things that they need in order to make sure they're doing the right thing by their communities.

RHONDA DEMENO: And I think three big takeaways are trust, transparency, and confidence in the technology. And by including these key stakeholders, I think we're going to see more satisfied residents and better outcomes. So really appreciate both of your input today.

Any final comments? Tammy, any final comments that you'd like to share?

TAMMY WINNER: I just was thinking about what you just said about trust, transparency, and confidence. And I agree with you. And I just want to reword it that if you start with the transparency, the transparency builds trust. And the trust builds confidence.

And at the end of the day, AI needs a human to drive it. The human is in control. The human pushes enter. The human creates the prompt. So just remember that.

When you're thinking about your own senior living community and using AI as a tool within it, remember that we are in control for now. And I say for now because I don't know what's going to happen next. Right, Jason? I mean--

JASON LESANDRINI: Terminator. Terminator, yeah.

TAMMY WINNER: Correct. But for right now, we as humans are in control. So while that is still the case, please take the initiative. Don't wait for it to happen. It's already happened.

Take the initiative to take control, to be transparent, to create that trust, to boost the confidence of the major stakeholders-- which for me, and for all of you listening, I'm sure, is at the end of the day, number one: the residents, their families, and your staff. So yeah.

RHONDA DEMENO: Thank you for that, Tammy. And Jason, any final comments?

JASON LESANDRINI: Yeah, I'll just piggyback maybe on what Tammy said. AI is not the boogeyman. It's about us doing it right. Yes, it's wonderful technology. We've had lots of technological innovation over the last 100 years plus, whatever. It's about doing it right.

And so how do we do it right? We think about the things that-- Rhonda, we've had the opportunity to talk with you and the broader community today about-- putting structures in place, having the right people at the table, focusing on the key values, the things that organizations and communities really hold near and dear to their hearts. It's just doubling down on that and making sure they have the resources available to properly evaluate these things and make sure that the right ones get implemented for the right settings. And that's ultimately what it comes down to at the end of the day. I think everybody sees the added benefit that AI can bring. It's just about doing it in the right way.

RHONDA DEMENO: Great way to summarize everything. I really want to thank both of you for sharing your time today. I'm sure our listeners really appreciated this content.

If our listeners are interested or have additional questions for our speakers today, their information will be noted on our podcast page. And again, we want to thank everyone for spending a little bit of time to talk about this very important and timely topic of ethics and sourcing your artificial intelligence. Thank you very much for attending The Senior Advisor podcast. This concludes our discussion for today. Thank you.

SPEAKER: Thank you for joining us for this WTW podcast featuring the latest perspectives on the intersection of people, capital, and risk. For more information, visit the Insights section of wtwco.com.

WTW hopes you found the general information provided in this podcast informative and helpful. The information contained herein is not intended to constitute legal or other professional advice and should not be relied upon in lieu of consultation with your own legal advisors. In the event you would like more information regarding your insurance coverage, please do not hesitate to reach out to us.

In North America, WTW offers insurance products through licensed entities, including Willis Towers Watson Northeast, Incorporated in the United States and Willis Canada, Incorporated in Canada.

Podcast host


Rhonda DeMeno
Senior Vice President Risk Services – Senior Living

Rhonda is the host of The Senior Advisor and has over 30 years of extensive senior living experience as a healthcare risk manager, regulatory compliance expert and operations leader.


Podcast guests


Jason Lesandrini
Founder/Principal, The Ethics Architect

Jason Lesandrini, PhD, is an expert in ethics, leveraging innovative techniques to build ethical capacity in leaders, teams and cultures. Dr. Lesandrini provides leadership and resources to promote ethical behavior, decision-making and conduct aligned with an organization's mission, vision and values.

As an educator, Dr. Lesandrini serves as a faculty member for the Physician Assistant Programs at Mercer University and South College and teaches undergraduates at Georgia Tech. He has also worked as an ethics resource for numerous national professional organizations including The American College of Healthcare Executives, The National Hospice and Palliative Care Organization, The American Board of Medical Specialties, and others.


Tammy Winner
Professor of Writing, University of North Alabama

Tammy Winner is an award-winning writing professor at the University of North Alabama, a published academic author, a NASA Faculty Fellow and a keynote speaker with over thirty years of experience. She has developed and taught university-level courses in professional writing, technical editing, journal writing and podcast script writing, helping thousands of students learn how to confidently express their message.

