S6 #3 Shadow AI, boardroom blind spots, and the insurance wake-up call

Guests:
Russ Fradin and Michael Levine

Russ Fradin is CEO and co-founder of Larridin, a platform providing AI Execution Intelligence to help organizations discover, govern, and optimize their AI investments. Prior to Larridin, he spent 25 years building measurement, digital media, and enterprise software companies, including serving as President of Comscore. Russ is the successful founder of Adify (Cox) and Dynamic Signal (FirstUp). He was Vice Chairman of Carbon Health and has served as a board director for CoachArt, Comscore, TubeMogul, Udemy, and other companies over the last 25 years.

Michael Levine has spent his entire career advising clients about insurance and handling insurance coverage disputes. Earlier in his career, Mike represented the insurance industry in some of the highest-stakes matters, including the property, liability, and reconstruction appraisal proceedings arising from the September 11 terrorist attack on the World Trade Center. Since 2005, Mike has exclusively helped policyholders maximize their insurance recoveries, leveraging his substantial insurance industry experience to obtain the greatest possible recoveries. Mike has recovered hundreds of millions of dollars of insurance proceeds for clients under general liability, property, directors and officers, cyber, errors and omissions, employment, environmental, and representations and warranties insurance coverages, among others.

Mike is a prolific writer and frequently serves as a faculty member and lecturer on issues concerning insurance recovery, insurer bad faith and unfair claims practices, and assessing coverage for new and emerging risks such as artificial intelligence, political risk, and supply chain disruption. Mike is also Editor-in-Chief of the ABA's insurance coverage journal, Coverage, and a board member of the Potomac Chapter of RIMS.

AI is transforming organizations at an unprecedented pace—but governance, legal frameworks, and insurance protections are struggling to keep up. In this timely episode, host Kristina Podnar dives into the growing disconnect between AI adoption and corporate oversight, exploring what it means for executive accountability, enterprise risk, and financial exposure.

Russ Fradin, CEO of Larridin, shares real-world insights into the prevalence of shadow AI, revealing that in many organizations, most AI tools in use are completely unknown to leadership. He explains why visibility, rather than fear, should be the starting point for any AI governance program, and what a “minimum viable” approach looks like for boards and executives today.

Michael Levine, a leading insurance and liability attorney at Hunton Andrews Kurth, outlines how insurers are responding to AI risk by introducing sweeping exclusions and coverage limitations—some broad enough to deny claims tied to any AI use at all. He discusses how recurring AI-related harms, regulatory uncertainty, and system-level AI failures are reshaping liability, and why executives must now treat AI oversight as a core fiduciary responsibility.

Keywords:
AI governance, shadow AI, artificial intelligence risk, executive accountability, board oversight, AI liability, insurance exclusions, digital policy, AI regulation, enterprise risk management, AI compliance, corporate governance, insurability, emerging technology risk
Season:
6
Episode number:
3
Duration:
28:58
Date Published:
December 16, 2025

[00:00:00] KRISTINA: Today we're diving into one of the most pressing issues in the corporate world right now, and one that's been all over the news lately. We're definitely talking a lot about AI adoption as it accelerates far faster than governance frameworks, structures, or regulatory guardrails can keep up. It's a whirlwind out there, I tell you.

[00:00:17] INTRO: Welcome to The Power of Digital Policy, a show that helps digital marketers, online communications directors, and others throughout the organization balance out risks and opportunities created by using digital channels. Here's your host, Kristina Podnar.

[00:00:33] KRISTINA: Over the past few months, we've seen a lot of stuff going on: major litigation over algorithmic bias, insurers quietly adding AI exclusions to enterprise policies, and certainly a lot of regulators who are publicly backing away from digital rules or rewriting them, mostly because of pressure to stimulate local AI innovation.

So if you are a CEO, a board member, a CFO, or frankly any leader watching these headlines, you're probably asking yourself things like, how exposed am I? And I know that there's nobody better to help us unpack this emerging risk landscape, because these are the two guests who really sit at the center of this unfolding story.

So Russ Fradin, CEO of Larridin, works directly with enterprises tackling governance gaps and shadow AI exposure, which I love as a topic. And Michael Levine, a partner at Hunton Andrews Kurth, is one of the leading insurance and liability attorneys advising companies on real-world AI risks.

Can't imagine having a better team with us. So Russ and Michael, welcome to the show.

[00:01:37] RUSS: Thanks, Kris. Hi. Thanks for having us.

[00:01:40] KRISTINA: All right. Here's the deal. Let's just start with the thing everyone is whispering about in boardrooms but not saying out loud, right? We know that most employees are using AI in some form, whether it's corporate systems or personal ChatGPT accounts, and leadership often has no visibility, or might be cringing. So, Russ, help us quantify a little bit: how widespread is shadow IT really when it comes to AI? What patterns are we seeing inside companies that executives consistently underestimate? What's really going on out there?

[00:02:11] RUSS: The shadow AI thing varies widely from company to company, and so I think it's just something every company's gonna make a decision about. We've seen for some of our customers, for every one tool that they know they've purchased, another two to three tools being used by employees that no one in IT particularly knows about. Now, some companies view that as a giant problem; some companies don't. Our general perspective with companies is this:

You need to understand what's actually happening at your company. What are people using? What are they using it for? And, you know, where could they do a better job? What are things you should shut down? What are things that are risky? So we don't go in and try and scare companies about tool use, but it is insane to have things being used by your employees that you don't know about. What you do with that info is up to you; there's a lot of different things you can do with it. But, you know, I think a good thing to keep in mind is that it is not at all unusual for two thirds of the AI tools being used in your company to be tools you don't know about.

[00:03:13] KRISTINA: That's certainly a scary number. Or at least it seems kind of scary to me, right? Because you can't manage what you don't measure, and certainly if there's shadow AI going on, it's really, really hard to understand that. We've seen recent lawsuits, anything from the Workday bias case to class actions around OpenAI copyright ingestion, where plaintiffs are really arguing that executives knew, or certainly should have known, better. Are we entering an era where ignorance itself is becoming a breach of duty?

[00:03:40] MICHAEL: So let me jump in and try to answer that. Yes, and this is not unique to AI. Anywhere you have a known or a likely risk of harm, you start to get to a level where you have a duty to not do things, or not create situations, that are reasonably likely or foreseeable to cause harm. And this has been an ongoing issue through product liability forever. At what point does a risk become reasonably likely to occur, and is the harm of such a type that it is severe enough that we need to take reasonable cost-avoidance measures to mitigate it? And we're at a point now with AI where we are seeing recurring harms. We are seeing extreme levels of harm, magnitudes of harm. The OpenAI lawsuit that you just mentioned, Kristina, is a great example, right? So we're there, and the question is how do we impose these guardrails or safeguards now to start to mitigate and control this liability, and, turning to insurance, make sure that if and when that liability occurs, the company is properly backstopped financially.

[00:04:55] KRISTINA: For listeners who don't live in the insurance world, Michael, can you explain a little bit about what insurers are actually excluding right now, and what does that look like in terms of liability for CFOs or board members?

[00:05:07] MICHAEL: Yeah, so it's very wide-ranging, and we are seeing on a regular basis new endorsements and exclusions start to roll out from the mainstream carriers. And I'll put aside for the moment the upstart niche AI-specific carriers, but looking at the traditional lines of coverage, general liability, directors and officers liability, those insurers are introducing limiting provisions and exclusions. In fact, we saw one well-known carrier recently introduce what they're calling an absolute AI exclusion that they are rolling out into their financial lines coverages, so directors and officers, fiduciary liability, professional liability, that purports to exclude from coverage any loss that is related in whole or in part to any use of artificial intelligence by anyone. It doesn't get much broader than that.

[00:06:06] KRISTINA: I'm wondering if there are certain scenarios where executives are personally held liable and, is that something that you think is going the wrong way or is that sort of where we're heading right now? So does it end with the corporate entity or is it going now to sort of criminal referrals and liability for executives?

[00:06:26] MICHAEL: So I haven't seen instances of criminal liability or personal liability of a management team or directors and officers. However, you can look at traditional scenarios that would lead to individual liability or even criminal liability: making known misstatements and known omissions in corporate disclosures, or taking action that would have an adverse impact on a company's stock value. Those things can rise to a level of individual liability, particularly on the disclosures, now with Sarbanes-Oxley and other statutes in place that impose personal liability on directors and officers for making glaring omissions or known omissions.

[00:07:12] KRISTINA: It seems like we're seeing this sort of tension globally, or it seems like this to me anyway, where, in the EU regulators are softening their digital rules out of mostly fear that they're losing competitive ground. And in the US it seems like we're simultaneously sort of watching this case law and insurance exclusion expand faster than the formal regulation is rolling out certainly. How does this regulatory uncertainty increase liability exposure for corporations at this moment?

[00:07:37] MICHAEL: Russ may have a different view on this, but the regulatory uncertainty, you know, call it the wild west if you will. You've got different statutes and different legislative acts that are rolling out in different jurisdictions. Oddly, many of the jurisdictions, Colorado's a great example, rolled out legislation to govern the use of AI and then immediately said, we don't think this is really ready for primetime, so before it becomes effective in 2026, we encourage the legislature to take a new look at this and make some significant changes.

A lot of this has to do with the rapid evolution and the rapid understanding of artificial intelligence. That curve is so steep right now, and it likely will be for quite a while, that things are almost obsolete as soon as they become effective. And we see this in so many ways. You know, to Russ's earlier comments about understanding what the company is doing with AI and how they're using it, I view that as a snapshot in time, because whatever analysis you do today is going to be completely changed in a month or six months. And that has a ripple effect, because you think about insurance and trying to mitigate certain risks. That's great today when you buy the insurance, but six months from now, that picture may look very, very different.

[00:09:05] RUSS: Obviously people use our tools for a lot of reasons unrelated to insurance. But on the insurance side, I can't imagine how you'd possibly run a business changing this fast. Frankly, I can't imagine how you'd run any business without understanding what tools your employees are actually using to get their work done, right? Just flatly. Look, I'm not saying someone at the company should be monitoring it every day, and I'm not saying CEOs should be monitoring it every day, but if I'm a board, how am I not getting a monthly report on the risk? Hey, what are the tools being used? What's driving a lot of productivity? By the way, there's a reason we're all excited about AI. Let's be clear, this stuff is fantastic, right? I'm sure everyone listening to this podcast uses these tools all the time. They're amazing. It is truly wonderful. But how could you possibly, as a board, have any feeling that you're doing your duty of oversight and risk management without understanding what's actually being used at the company? Right? What are they doing? What's working and delivering value? 'Cause don't forget, this stuff's expensive. So you wanna know what's being used for a bunch of reasons, not just from a risk standpoint: from a cost standpoint, from an opportunity standpoint, from an are-we-using-it-as-much-as-we-should and are-we-driving-as-much-productivity-as-we-should standpoint. That's not really the topic of this podcast, but it's one of the many reasons you do this. But on the risk side and on the potential liability side, of course companies need to understand what's actually happening inside their company with an area that's potentially driving very large, unmeasurable risk at this point.

[00:10:42] KRISTINA: Just thinking about this and grounding it in the practical, what does a minimum viable AI governance program look like to you, Russ? The sort of don't-leave-home-without-it checklist. If I'm a board member, I probably am not fluent; I don't know what to ask for. Although, you know, what you just suggested, which is... to me, at...

[00:10:59] RUSS: ..A minimum, to me what too many people do today, and one of the reasons, you know, we've done well is because I think everyone realizes this is ridiculous, is what we see from a lot of boards is they'll get a report from, , very, very smart CIOs or heads of AI on all of the AI initiatives and tools that have been purchased. And there's nothing wrong with that. You should know where you're spending your money. You know, our general perspective is first of all, from a measuring value standpoint, just because you bought a bunch of tools doesn't mean people are using them. Using them well, using 'em effectively, driving any value for your company. So number one, that's insufficient. But then number two, to my point earlier, there's so many things being used you don't know about. My general perspective by we're on a board would be, listen, I, I wanna know, it's, this is, we're not talking about all the dashboards that everybody creates that we provide for our customers.

But if I were on a board, I'd wanna know: what are the AI tools being used in our org? How often are they being used, daily and weekly and monthly? By what members? By what teams in the org? Which of the apps are authorized and unauthorized? For those that are unauthorized, are they great? Maybe we've decided to have a let-a-thousand-flowers-bloom strategy where we're gonna allow employees to experiment. That's fine. For those that are unauthorized, if they're valuable and not dangerous, great, let's bring them into the fold and make sure we have a governance plan for them. For those that are risky, great, let's shut that off. Fine, people test a lot of things; there's probably nothing wrong with that. If you have 50,000 employees in your company, it is fine that people try things, but let's shut that down. And what's our actual ongoing maintenance plan? I'd spend 10 minutes on it in every board meeting, given what's happening with AI right now.

First of all, like I said, because of all the exciting upside, but then you get into the potential huge issues of voiding your insurance if you're not doing this work. Michael and I have talked about this in the past with other folks: it's not that you're guaranteeing nothing will happen, but you have to at least be making the effort to make sure you understand what is happening in the company, and when you learn about a risk, minimizing the risk, mitigating the risk immediately. And so, you know, for me, if I were a board member at any private company or public company, I wanna understand: what are we doing in AI? What are the tools we know about? What are the tools we don't know about? For the ones we don't know about, that doesn't mean it's all North Korean hackers. It could just be a great note-taking app being used in our marketing team that hadn't made it through procurement yet. Not the end of the world, but let's just make sure we know about it.

[00:13:36] KRISTINA: I remember the very early days of the web, when we could actually walk around the office and figure out sort of the shadow IT, because everybody had a server sitting under their desk and there were cords that you would still have to step across. For folks today, how do they actually do an inventory, and do it properly, to understand where AI or shadow AI is actually being used?

[00:13:54] RUSS: When companies use our tools, one of the things they're getting, for all of the groups and for all the departments, is what are the tools being used everywhere in your org. We don't do it at the Kristina level or the Michael level; that's not the point here. I mean, look, in the early days of the web, one of the reasons it was easy is 'cause there was no wifi and there was no DSL at home, so all of the work was done at your actual desk. So I could just see what you were doing if I walked over to say hello to you. The world's obviously very different today, of course. And so my perspective with this is, that's fine. It's wonderful if you have smart, innovative employees trying things and teaching each other, and you're coming up with your process in real time. So like I said, at a minimum, someone should understand what's actually being done.

[00:14:34] KRISTINA: And when you say someone, 'cause you've mentioned the someone several times now, is there someone inside of the enterprise that you usually like to see as the central point, sort of the steward, for understanding what's happening end to end?

[00:14:47] RUSS: So I mean, our customers almost always eye either the CFO, the CIO or the head of AI that, that's who's generally the customer for larrain. Now one of the users of our tool, not, not the person we're selling to particularly, but you know, a key fan of ours, generally, heads of HR because they wanna know this is unrelated to liability. There's a lot of things about AI that are scary for workers. We are effectively doing something where we're trying to retrain the entire global workforce of knowledge workers in every industry all at once. And, not everybody in the workforce is 22 and just graduated with a CS degree. Some people are nervous and they don't wanna make a mistake and they don't wanna get fired for a whole bunch of reasons with AI tools. And so HR tends matter a lot and then legal matters quite a bit for a lot of reasons, and insurance is one of them.

[00:15:37] MICHAEL: Yeah. And Kristina, let me throw a wrinkle in here, because everything Russ is saying I totally agree with, and traditionally, even before AI, it would be the CFO or the risk manager that I would typically engage with on a risk assessment analysis. AI doesn't change things in terms of generative AI, the systems that we're running, whether it's Copilot, or CoCounsel, one of the legal platforms that we use, and myriad others, ChatGPT, et cetera, even Siri and the new enhancements in the Apple products and iPhones. On the other hand, though, you have what I would call systems AI or operations-related AI. It's the AI that's embedded in the tools the company is using, right? That's not on your desktop. It's not what the employees are being licensed to use. It's sort of the brain that's running the things the company does. If it's a manufacturing facility, it's running the robots on the assembly line. It could be a food distribution company, and it's the AI in the climate control systems and refrigeration systems that's keeping everything at either a sufficiently cool temperature or a sufficiently hot temperature.

Those systems have malfunctioned, and I think will continue to malfunction from time to time, like anything else, and the results can be catastrophic. Take an oil refinery, for example: if one of the systems monitoring the temperature of certain constituent chemicals malfunctions and it gets too warm by only a few degrees, catastrophic explosion. We see this happening with analog systems; no reason to think it won't happen with AI-driven digital systems. So that's a physical loss that results from a system completely unrelated to the generative AI and platform products that the company's using and understands well. So there's much more at stake, and I think it becomes more complicated than just the CFO or just the CIO understanding all of the ways the company is using AI and all of the risks that it's exposed to. It really becomes a matter of engaging with stakeholders throughout the company to understand how each unit of the company is using AI in what they do.

[00:18:03] KRISTINA: So are we really moving, then, towards a world where failing to supervise employee use of AI becomes sort of a legally actionable omission? Is that really what we're talking about?

[00:18:15] MICHAEL: Well, no more so or less so than failure to supervise employees about the use of any dangerous tool, right? If you have a known hazard and you fail to take reasonable safeguards, then there's potential liability; AI is no different. But it's also making sure that you understand all of the things that your company's using that incorporate AI, because that's where kind of the other types of risks and liabilities can occur: the fires, the explosions, the product line gets shut down, your distribution system fails, your supply chain. Think of it on the purchasing end: you have chemicals and materials you rely on to build widgets, and suddenly you're no longer getting your incoming material or product. You can't do your job. You can't make widgets, so you suffer substantial business income loss on the back end. And these are all things that are insured under different types of insurance. So when that root cause is discovered to be an AI failure, it brings us back to the broad exclusion that we were talking about: a traditionally covered loss now becomes a broadly excluded loss.

[00:19:23] KRISTINA: Should we maybe be thinking about, and I don't know the answer to this, I'm curious from both your perspectives, Russ and Michael, the use of AI beyond just AI systems? Is there a distinction, and if so, what type of distinction, between the use of AI versus, for example, the use of data in AI? You can have AI functioning as a system, and it's functioning fine, but it's maybe the data that I as an employee acquired and put into the algorithmic computation that's off or wrong, or the wrong type of data. Is there a distinction there? Or are we sort of just treating everything as an AI system, and if it goes awry, there's liability, full stop?

[00:20:05] MICHAEL: I was gonna say, that is a fantastic question, Kristina. It's really the jumping-off point when I go out and talk about AI: the definition of AI. We all talk about it, we've been using the term throughout this podcast, but what do we really mean by it? And would any of the three of us really know it if we saw it? I'm a lawyer, I'm not a computer scientist. I wouldn't know the first thing about developing an algorithm, let alone putting it into function in a system. So I know how to describe AI at a very high, generic level, but I would not know the details of it: whether it's an algorithm, whether that algorithm is tied to deep learning or neural nets, and the other technical aspects that really go into artificial intelligence. And when we look at insurance policies, which are contracts, they're all driven by terms, and those terms either have a plain meaning that we all know, or a defined meaning if it's a technical term.

When you start to look at the defined meanings of artificial intelligence, it very quickly gets to a level that I certainly don't understand, and I think the average person on the street doesn't understand. And so, to your point, how do we know what we're dealing with when somebody's talking about artificial intelligence?

[00:21:26] KRISTINA: And I'm wondering if that makes most board members' heads spin three times around, and some smoke come out of them. And so at that point, Russ, what would you tell a board member whose head is on fire listening to this conversation, and they're like, you know what, we're just gonna deal with AI governance later; it's really complex?

[00:21:42] RUSS: Yeah. This is why I tend not to try and scare people with, oh, shadow AI. I actually think that this is the fastest-growing area of business investment basically ever in any of our lifetimes, right? Companies are investing in these tools across all industries at a rate no one has ever seen. That is why this is growing so quickly. So every board is thinking about this; no one's gonna say, I'm gonna think about it later. And that's also why I don't come on, you know, fire-breathing about all of the risks and all of the whatever. My general perspective is, if you are a board member, and by the way, if you're a CEO, if you're a CFO, there's no possible way that you can manage the company and, like I said, ignore the risk. Let's say there was no legal risk. Of course there is, and it's substantial, we wouldn't be having this podcast otherwise, but let's say there was no legal risk: if you're a CFO or CIO, given that you're being asked to spend money at a rate that no one has ever heard of, of course you'd wanna understand. What are these tools? Who's using them? Are they being used well and effectively, and are they driving more productivity? I think about it this way: if you were a thousand-person company and someone came in tomorrow and asked for two additional headcount, you'd probably kind of say yes based on their seniority and whatever, right?

But if they came in tomorrow and asked for 4,000 additional heads, you'd have a long board meeting about that. If you were going from 1,000 to 5,000 employees, you know, what are all those people gonna do, and how are we gonna hire 'em? How are we gonna manage 'em? And with a lot of these agents, that's really what's happening, right? If you don't think of this as just some AI tool, and you think of this as incremental digital workers you're hiring into the company without real concrete plans of what they're gonna do, how we're gonna measure what they do, how we're gonna manage what they do, and how we're gonna understand the risk that they are the wrong folks or people, well, digital workers in this case, you would never do that. What we do to deliver our product is quite complicated, but the actual core message is quite simple: you should just know what's happening at your org. That's why, even with our unauthorized apps, and like I said right at the beginning, I believe in my first answer to your first question, about two thirds of the apps are unauthorized, I will tell you, most of the time, after initially seeing that list of unauthorized apps, when people really look at it: most of your employees are good folks. They're not actually trying to steal your data. They're not trying to do anything wrong. Most of the time those unauthorized apps are relatively harmless, but to Michael's point, if you ever wind up in litigation, you still need to make sure that you had controls in place to understand what was happening. Some small percent of the time you're discovering real unauthorized apps that are very dangerous. Most of the time you're discovering unauthorized note-takers, where you're going, you know, we had decided as a company to use Read AI, and yet a third of our sellers are using Otter. Is that a giant legal risk? It's a little risk, but of course it's a risk, right? 'Cause now we have our customer call notes going somewhere where we don't have a data protection agreement in place. So we'll decide, hey, sellers, no more Otter, or we'll make Otter an approved vendor, right? So you can deal with all of this, but like I said, step one is simply: what is actually happening?

The way I talk about this to people all the time is: we do not tell people if they should go on a diet; we just sell them a scale. What you do with the info, that is up to you. I cannot imagine how you'd operate in this world, with constant demand for additional investment, giant demand for productivity gains, and stories of what your competitors are doing, without understanding what's happening in your company. Where's it being used? What's it being used for? What's authorized? What's unauthorized? What's really effective? And by the way, don't forget, a lot of these tools you're buying are a waste of money. You're going to buy them because everyone's buying everything quickly, and when you do things quickly, sometimes they turn out to be a mistake. Great. What do we do about that?

[00:25:55] KRISTINA: So those, I think, are really three great things that a company can actually implement this quarter, right before the holidays hit. These are things you can start implementing that could meaningfully reduce AI risk, which I think is really helpful for folks.

[00:26:08] RUSS: And Michael will tell you, when you then wind up in a fight with your insurance company at some point, it's going to be a much better collection of facts if you had systems in place to understand what was actually happening in the first place, and discussions about whether that was good, bad, or indifferent.

[00:26:25] KRISTINA: So Michael, on the legal side and the insurance side, you know, what should executives be asking their brokers, or their counsel, or their internal teams right now? Obviously taking Russ's three points of advice, but really thinking about that legal and insurance side, anything else you would add?

[00:26:40] MICHAEL: Yeah, well, obviously: are there any AI-specific provisions being inserted into our insurance policies, whether it's an affirmative coverage, which very likely may be a sublimited coverage, or an exclusion that we need to be mindful of, or a condition we may need to be mindful of? That's always the troubling thing: new forms and new provisions are introduced, somebody doesn't catch it, and the next thing you know, it's in the policy for a couple of years and a loss happens. So the first thing: be vigilant about changes in your insurance. Then, hand in hand with Russ's assessment or inventory of what you're doing with AI, you wanna make sure that your insurance matches your risk. So as that risk profile changes, you've gotta talk to your broker about what changes we need to make, if any, to our insurance program to make sure that we're keeping up with our technology. And that becomes a continuing conversation, because the forms and the coverages are changing too. So we have to continually update.

[00:27:50] KRISTINA: This is great, Michael, Russ. Obviously a conversation that every leader needs to be having right now. AI isn't just transforming workflows, although we see a lot of that; it's also transforming risk, responsibility, and obviously executive accountability. To our listeners: if you don't know what AI tools your organization is using, you already might be exposed.

So just remember as you're listening: governance is no longer optional. It's a prerequisite for leadership, as Russ said. And certainly not to scare you; you don't need to be scared. You need to be rational, but you need to be acting, because being passive is no longer an option. So that's what I'm taking away from this conversation.

Thank you both for joining me today. And to everybody listening, stay tuned for our next episode where we continue exploring how organizations can innovate responsibly in a rapidly shifting digital landscape. Until next time, continue to do good policy work.

[00:28:38] OUTRO: Thank you for joining The Power of Digital Policy. To get access to policy checklists, detailed information on policies, and other helpful resources, head over to Powerofdigitalpolicy.com. If you get a moment, please leave a review on iTunes to help your digital colleagues find out about the podcast.
