Brian Bauer
Brian Bauer brings over two decades of experience in business transformation and advanced technology innovation. As VP of Artificial Intelligence Products at Rational Exponent, he leads the company's AI go-to-market strategies, enabling clients to generate meaningful business value through artificial intelligence. Brian has developed and patented technologies in NLP, sentiment analysis, and machine learning; his patents are frequently cited by companies such as Google, Microsoft, Sony, AT&T, and IBM. His career is defined by driving growth through innovation and delivering measurable results. He holds a Bachelor of Arts degree in Economics from Middlebury College.
In this episode of The Power of Digital Policy, host Kristina Podnar sits down with Brian Bauer, Head of Product at Rational Exponent, to unpack the next big shift in financial technology: Agentic AI, autonomous systems that reason, act, and adapt within regulated banking environments.
They explore how banks are moving beyond pilots to production-scale AI, why policy and compliance can accelerate innovation, and what it takes to build trust, transparency, and explainability into autonomous systems.
Brian shares insights from decades in banking and fintech leadership, outlining how Rational Exponent's technology translates complex regulatory language into machine-readable rules, creating safe operating "sandboxes" for AI agents. Together, Kristina and Brian explore how banks can move fast on AI without losing control.
[00:00:00] INTRO: Welcome to The Power of Digital Policy, a show that helps digital marketers, online communications directors, and others throughout the organization balance out risks and opportunities created by using digital channels. Here's your host, Kristina Podnar.
[00:00:19] KRISTINA: Today we're looking at Agentic AI in banking, and the sort of counterintuitive idea that policy and compliance, if done right, can actually speed innovation.
Citi's recent move to deploy AI agents across internal systems is a strong signal of where the industry's going. Operational efficiency, yes, but also new questions about oversight, explainability, and accountability. So to unpack this, I'm joined by Brian Bauer, head of Product at Rational Exponent, a FinTech AI company focused on turning compliance into competitive advantage.
Right there with digital policy. I tell you, Brian, welcome. It's great to have you.
[00:00:57] BRIAN: Good morning, Kristina. Happy to be here, and looking forward to the conversation. I think this is one of the biggest topics in the industry right now. AI in banking and regulated entities is probably the biggest thing since the internet, and maybe even bigger than that.
So really exciting times.
[00:01:14] KRISTINA: Take a moment and zoom out. Um, AI agents and banks aren't new, but we're seeing a step change from isolated pilots to mission critical workflows. From your vantage point, what makes this moment different?
[00:01:29] BRIAN: When we use the term AI, right? AI has been around for quite some time, so we have to be very specific about what we mean. For example, machine learning. That's been around for multiple decades. It's been used in fraud detection and anomaly detection, looking for signals. So that's not new news. What is new news is generative AI, which has been around for a few years but really came to the forefront of everyone's collective consciousness a few years ago with ChatGPT. For the first time, OpenAI brought a product to market and suddenly everybody, regardless of what you do for a job, and certainly as a student, had AI at their fingertips. And I think for most people now, AI is equal to generative AI and LLMs: ChatGPT, Gemini, and so forth. It's important to understand that, because when we say, well, agents have been around for a while, okay, what kind of agent? There are different types. At one end of the spectrum, we've got things that I would roughly equate to RPA, what we used to call robotic process automation. So it's an agent, but it doesn't have a lot of agency, and by that I mean it's not super agentic and it's not really autonomous. To be trendy, we can call it an agent, but it's really just automating people work. Now, what's new and what's really exciting is that we're bringing some agentic capabilities into this space of automation. They're obviously AI enabled at the core, but we're giving these agents some autonomy. And what that means is, as opposed to traditional decision trees where everything is logic based, if this then that, we're giving some freedom to these agents to say, hey, consider the context. Consider the domain. If you see this, spend some time reasoning through the various things that you might do. And those things that you might do are not necessarily defined in a classic-looking decision tree where everything is binary.
If it's this, then go this way. And so that concept is kind of a new generation of agent. The way we think about it is, a lot of regulated entities today, and we deal primarily with banks of all sizes, are very interested in process automation, in introducing LLMs and generative AI to produce agents that take off some of the burdensome or mundane parts of people's jobs. Those kinds of agents don't necessarily have to be autonomous or agentic. However, the way we look at it, you're setting a foundation. It's a crawl, walk, run progression, where the agents that we're producing may be limited in agentic or autonomous capabilities in the beginning, but they're creating the foundation for what comes next.
And the way we do that is actually really exciting and, and innovative.
[00:04:41] KRISTINA: So what's the biggest leap that banks have to make now to move from these clever demos right to production systems that auditors and operators trust?
[00:04:50] BRIAN: That's a really interesting question, and I think the answer, at least in my opinion and in my experience, might be a little surprising. I say that because the biggest leap is not always technical. It's not about the technology. It's not always about proving that this thing works, and I'm gonna put brackets around the word works, like works in the lab. I wrote a research paper about a year ago for a top 10 bank in the United States, and I referenced this concept of a path to production. What that meant was, hey, how do I take a pilot or a proof of concept, which are two different things, and get it from a lab into an operationalized, fully production context? And I said one of the biggest challenges you're gonna have in doing that is getting approval and agreement from the people who stand in various positions along that path to production. Someone looking through a very technical lens is gonna go, oh, well, to productionize this proof of concept, I have to look at the software development lifecycle, and it's pure technology. Let's make sure it's hardened for industrial use, and let's make sure we've got failover and the data's replicated. That's all very important. But the biggest challenge is that you've got people in very important decision-making roles who stand between your application, your proof of concept, your pilot, and use in production. Each one of those people is in a role where they have to sign off and say, I agree, this can be allowed to go to production. And what happens is you get people in traditional roles who bring traditional experience, and they may be very good at, say, compliance or risk or operations, but they are not experts in AI things. So when you're asking them, hey, will you approve this application for use in your compliance space? They are not subject matter experts in AI.
So the challenge becomes, how do you make that person comfortable? And there's a lot of work there. You've gotta meet them where they are, right? By that I mean you've gotta help them understand what you've built in terms that they understand. You have to make them comfortable that it's trustworthy, and that using this capability is prudent. Prudent is a very important word. And then you have to eventually win them over: the technology has to be explainable, it has to be transparent, it has to be trustworthy, it has to be prudent. And then you have to work with that person to get them to a position where they're like, yes, I agree, this can go forward. Depending on the size of the organization, you may have to do this 10 times, or two dozen times, with all of these people. Then you're gonna run into scenarios in some of the larger organizations where, when they're thinking about AI, they're spinning up innovation centers or centers of excellence around AI. That's a new domain. So now you're trying to get this AI-enabled technology into production, and you've got this new group over here called AI Innovation. Well, what's their role? The business is trying to bring a technology into their operation. Now, do you run a detour through the innovation center, which is brand new?
Well, who's in that center? Where did they come from? What's their background? And do I have to win over another group of people? Right. So to answer your question, I think the biggest challenges we're seeing are not necessarily the technologies. It's the people who are the approvers along this path to production.
[00:08:37] KRISTINA: Your thesis at Rational, I think, is that compliance can be a growth engine. Practically, how do you move a bank from reactive compliance, sort of checking boxes after the fact, to more of what you're talking about: proactive compliance that actually accelerates launches and doesn't add yet another cycle of review?
[00:08:57] BRIAN: This is absolutely my favorite question. It's my favorite topic. It's the thing that gets me out of bed every morning, for a few different reasons. One, we all see the headlines every day, whether they're coming from MIT or elsewhere: 95% of AI projects fail, or fail to produce ROI, right?
ROI being the key word, your return on investment. That may be true, right? And defining what an AI project is can be all over the map. But the point is, when you're thinking about any kind of ROI, whether it's gonna be driven by AI or anything else, you have to have a lot of hygiene and a lot of discipline around understanding how you define return on investment. And that begins with defining and understanding what value is. A lot of people aren't very good at that. They'll say, oh, we should use this technology or this product 'cause it'll make things better. I can't put better on my general ledger. It has to be quantified.
So I've spent a lot of time working with customers to understand how you properly identify, measure, and put formulas around value targets. Those targets generally align to what we'll call run, grow, and transform. I know this is a long-winded answer, but I'm answering your question.
So in the run space, we think about cost optimization, service quality, and risk management. In grow, it's literally growing revenue. In transform, it's new products and services, new markets, or new ways of doing business. Right? And so, when you've taken the time to properly identify the value target and come up with the KPIs and metrics to define it, that's your problem space. Then, when you start designing an AI-enabled solution, you're gonna be laser focused on targeting one or more of those well-defined value targets. Now, at Rational Exponent, one of the things that makes us different is that we come from a background of being bankers.
I was in capital markets and investment banking for over 25 years. I've been tasked as CIO and CTO with ensuring that the bank operation is running the best it possibly can. And that's important because, for the folks on our team and on our management team, particularly myself as head of product, I know what it means to run a bank. I know what the problem areas are, and I know what the opportunities are. So when I craft products and solutions, I'm not creating a generic product that goes looking for a problem. I know what the problem is, and I'm bringing a solution that's gonna meet that need. And so when we think about compliance and risk, which traditionally are cost centers, right? They're taking the revenue that the business is generating and spending it on risk and compliance systems and lots of people. So risk and compliance is a cost center. In a bank, you're either making money or you're spending money.
So I looked at that problem and said there's actually an opportunity here for us to bring a risk and compliance solution into a banking organization and present it in such a way that it's not only gonna accomplish run value targets, cost optimization and risk management, but it's gonna leap over the fence and help with revenue generation. And I said, well, how am I gonna do that? What I looked at, and this is kind of getting into the core cognitive abilities of our product, is this: in risk and compliance, we have developed a means of ingesting huge amounts of regulatory publications, all of the rules and laws coming out of regulatory bodies. And we've developed a technology to translate that super arcane and vague language into machine-readable and machine-understandable rule sets. We use those rule sets to enable a bank to discover, validate, and rationalize policies, procedures, and controls. So that's the first baby step, and that's an example of how our product helps compliance and risk.
Basically, it accelerates and improves the service quality of their task of creating policies that produce adequate coverage of business operations. That to me is just a stepping stone for what comes next. Because when I do that, now I can go to the business, and I'm literally talking about the folks who are responsible for run and transform, and I can give them a workbench. You wanna bring a new product or a service to market? Traditionally, and I'll pick on credit cards 'cause those are kind of easy to understand, you wanna bring a new credit card product to market? In the past, maybe that's taken two years. Why? It's easy to craft the new credit card idea; there are only so many variables to play with in the credit card space. The problem happens when you take this new product package and give it to compliance, and give it to risk, and give it to ops, and give it to legal, and they start reviewing what you've put together and they're like, oh, here's a problem. That takes five weeks for them to identify the problem, and then it goes to legal and they go, oh, this language isn't correct, and it takes another six months for you to get the language back, and suddenly you're in a two-year cycle of approval. The workbench we've created, you put it in the hands of the business users to say, hey, you're ideating a new product, and as you work through crafting that product, I'm giving you real-time feedback on all of the relevant regulations and all of the relevant policies that your organization may have. So while you're crafting this product, I'm giving you insight as to whether it's compliant or not, in real time. Working in this collaborative workbench, the business user can say, oh, I'd really love the interest rate on this credit card to be 73%.
Well, there's a regulation that caps that at 36% and an internal policy that caps it at 24%. Hey, Reagent is gonna help you work through that problem so that you're not wasting cycles taking a product package, giving it to legal and compliance, and then taking months to get an answer back saying you're out of compliance. We're doing this in real time. And what that means is, as a business user, I am contracting the time to approval, or the time to market, of a new product from potentially months or even years down to days. That translates into make more money sooner. And I'm doing this off of a framework that was positioned in the risk and compliance space. So it's literally a risk and compliance foundation with a workbench tool put into the hands of the business user, allowing them to generate revenue.
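Brian's 73% versus 36% versus 24% credit card example can be sketched as a toy rule check. This is purely illustrative, assuming a simplified machine-readable rule format; the data structures and function names below are invented for the sketch and are not Rational Exponent's actual workbench API.

```python
# Toy sketch of real-time compliance feedback on a product draft.
# The rule values (36% regulatory cap, 24% internal policy cap) come from
# the episode; everything else here is a hypothetical illustration.
from dataclasses import dataclass

@dataclass
class Rule:
    source: str       # where the rule was derived from
    field: str        # product attribute the rule constrains
    max_value: float  # upper bound for that attribute

RULES = [
    Rule(source="regulation (rate cap)", field="apr", max_value=36.0),
    Rule(source="internal policy", field="apr", max_value=24.0),
]

def check_product(draft: dict) -> list[str]:
    """Return a readable violation message for every rule the draft breaks."""
    violations = []
    for rule in RULES:
        value = draft.get(rule.field)
        if value is not None and value > rule.max_value:
            violations.append(
                f"{rule.field}={value}% exceeds {rule.max_value}% cap "
                f"from {rule.source}"
            )
    return violations

# The business user proposes a 73% APR and gets immediate feedback
# instead of waiting months for legal and compliance review:
for message in check_product({"apr": 73.0}):
    print(message)
```

The point of the design is the feedback loop: the same rule set that compliance maintains is evaluated on every keystroke of product ideation, so violations surface in seconds rather than review cycles.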
[00:16:05] KRISTINA: So it's interesting, because that works once you get to the deployment stage, but what I see is that a lot of institutions are getting stuck in what I call the pilot trap. Lots and lots of pilots, few deployments, so we can't even get out of the gate. What separates the banks that scale from those that stall?
[00:16:24] BRIAN: In my experience, banks that have been good at bringing any technology into, an operationalized state are gonna be better at bringing AI into their operations. You can't expect, an organization that's traditionally been very poor at innovation to suddenly change their stripes and be very good at AI innovation.
So if you've been good at other things in the past, and a thing that comes to mind might be cloud. There were first movers on cloud adoption, and then there were others who said, I don't trust cloud, I'm not going to cloud, and they still haven't gone to cloud. That may be okay, but it's a big telltale about an organization's willingness to embrace new technologies. I think the other thing that happens with all these pilots is that there are a lot of vendors out there right now saying, we've got a new technology, use our technology. They haven't necessarily identified the problem they're going after in discrete terms, they haven't identified the value target, and they haven't made it exactly clear to an organization how that value can be achieved. So the bank may say, okay, we'll pick up a pilot, we'll try this. But there's no real North Star, there's no driving force. It's kind of an idea. And we see that from some of the largest software providers in the world, right? Microsoft comes to mind, because why wouldn't it? Everybody's using Office, and then you get these new derivative products that Microsoft is promoting as AI enabled. Spinning up a pilot to use some of these things is as easy as the click of a button, right? So now everybody's running a pilot, but why? Have you defined that pilot? Is it self-contained?
Did you define the KPIs and metrics that are gonna determine if it's successful, or are you running a pilot because it was potentially free and easy, but you don't really know why you're doing it? That's one of the biggest problems. And I think when you read a study like MIT's saying 95% of these things fail to go to production, it's because they lacked direction in the first place.
[00:18:32] KRISTINA: And so what's really the litmus test from your perspective to test for the moment of applicability? Like what's the point when you say, okay, this seems viable. How do we actually know? How do we actually get to the point where, the business is clear on the business value?
[00:18:48] BRIAN: I think that's a really good question. And my answer is gonna be, let's just call it technology agnostic. What I mean by that is, where I come from in the world, being a CIO and a CTO at hedge funds and large banks, my task was always solving business problems. Even though sometimes in the biggest banks it doesn't seem that way; sometimes it feels like technology is doing its own thing and the tail's wagging the dog. But I always looked at it as, how can I help the business do better at the act of being a bank? And what that comes down to is identifying a discrete problem.
So when we think about risk and compliance, and this is true of large banks and very small banks, they all have to do the same kinds of things. That starts with understanding all of the regulations that may be relevant to that banking organization and the products and services they transact in. You've got regulatory bodies like the OCC, the Fed, et cetera. Every one of those bodies is producing masses of regulatory publications. Every regulated entity has the burden of understanding all of those regulatory publications and translating them into policies, standards, procedures, and controls that are relevant and appropriate for their bank. It doesn't matter how big or small you are, you have to do this work. And what makes it even more burdensome is that these regulations change all the time. So you can't just do this once. You have to get your house in order and get in really good shape, but then you have to monitor for change. Anytime there's a regulatory change, you have to assess the impact on your portfolio of policies, standards, procedures, and controls. That's a huge amount of work, and it's like a regressive tax on smaller banks, because it's the same burden, but it's performed by fewer people and takes a disproportionate amount of time. In the largest banks, you'll find literally tens of thousands of people working in this space. What I'm trying to do here is paint a picture: whether you're large or small, there's a problem, or an opportunity, that we have identified. When you ask people working in risk and compliance what the worst part of their job is, they will tell you it's reading regulations and having to understand the regulatory impact on my policies. And it's not just that I have to do this once; anytime it changes, I have to start over again. Read the regulatory change. Make sure I understand it.
Reexamine my portfolio of policies to determine which are impacted and what I need to do about it. That kind of churn is a massive amount of manual cognitive labor. It consumes a lot of people's time, and it's the worst part of people's jobs.
So there's an opportunity there, right? Especially when you're a smaller bank, and I'll say tier three to tier five, and you're trying to grow. You've been told, hey, the ambition of the bank is to grow, but we're not gonna grow by scaling up an army of people in risk and compliance. Think about how you can be more productive and more efficient. That's where software products like ours have been crafted around this problem and opportunity. How can we amplify, and basically be a force multiplier for, risk and compliance people, allowing them to reduce that manual labor so they can focus on elevated, more value-added activities that really engage their subject matter expertise? So that's a well-defined problem space: how do you manage the ingestion of a huge volume of regulations and translate that into the impact on policies, standards, procedures, and controls? That's one of the things our software does. So when we go to a bank and say what I just said, hey, here's the problem, everybody's nodding, going, yep, that's a big challenge for us. And we go, here's how we solve that problem using an AI-enabled solution. The answer generally is, that's amazing, that's fantastic. And they're doing the mental math already. Sometimes it's, hey, I hired a big six consulting firm last year to check all of my policies against these regulations and it cost N number of millions of dollars; I won't need to do that again. Or the CEO has told us we wanna scale to be three times bigger without scaling the workforce. How do we get the productivity and efficiency we need to make that happen? And they're thinking the Reagent software is gonna enable us to do that. So it's a problem and an opportunity that everybody understands, and our software has been crafted to solve that problem or capture that opportunity.
[00:23:27] KRISTINA: It feels a little bit, though, like chicken-or-egg to me, right? You have a software component here that can help with explainability around regulations, and arguably it's not just about regulatory transparency. I'm assuming it's also, or equally, about operator trust and customer outcomes. But I'm thinking about this from a speed-without-losing-control perspective, especially as agents start chaining tools and making recommendations faster than humans can review. How do folks lean in and trust software like yours to solve the problem when, again, it's hands off, just because we're trying to get that speed?
[00:24:05] BRIAN: Love it. If you were looking at our product roadmap right now, you would see a progression, and it starts with a few of the things I've just been talking about. We've got a product called Operations IQ, which is what's ingesting all of those regulatory publications and translating them into machine-understandable language for the benefit of creating policies, standards, procedures, and controls. Policies, standards, procedures, and controls are like words that describe rules. Okay? That's very important, because when we move away from people doing all of this manual labor to agents, particularly when those agents are agentic, a.k.a. autonomous, then in order for us to trust that those agents are only going to do what they're allowed to do, they have to be placed in an operating environment that is extremely well defined, with operating constraints, right?
The operating constraints are the rules we generated when we read all of those regulations and translated them into policies, standards, procedures, and controls. Those become the rules that bound, let's just say, the arena into which we can then place these autonomous agents. I like to envision it kind of like a sandbox, with the robots going all around, but they bump into the edges of that sandbox and they can't get out.
Well, how are those edges defined? That's what our Operations IQ product is doing: translating sort of fuzzy-language regulations, written by bureaucrats, by people, for people consumption, into not just machine-readable rules but machine-understandable rules. Because then, when we take our agentic products and put them into this space, what we're doing is saying, okay, agent, I'm not going to tell you how to do your job, but I'm gonna tell you what the required outcome is, or the required output. I'm gonna place you into this space defined by operating constraints, and I'm gonna ask you to solve for the optimized path to produce the required output, subject, of course, to the operating constraints we derived from regulations that made the pass through policy, procedure, and control. So now the agent is in a confined operating space, which is where the trustworthiness comes from. And if anybody asks, well, is this explainable? Is it transparent? The progression of how we got there, from regulation through policy, standard, procedure, and control into machine boundaries or operating constraints, that's all transparent. It's all explainable. And now the operator of these agents is saying, all right, if I'm gonna use an agent to help craft a policy document for the bank or complete a business transaction, I can trust that Reagent has consumed all of the required regulatory publications and produced this safe operating arena for the agent, and it's not going to go rogue.
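The sandbox-with-edges idea Brian describes could be sketched roughly like this. Everything here is an assumption for illustration: the constraint format, class names, and audit log are invented for the sketch, not the real Operations IQ interfaces.

```python
# Hedged sketch: bounding an autonomous agent with machine-readable
# operating constraints, each one carrying provenance back to the
# regulation or policy it was derived from (the explainability trail).

class OperatingConstraint:
    def __init__(self, name, predicate, provenance):
        self.name = name
        self.predicate = predicate    # returns True if an action is allowed
        self.provenance = provenance  # regulation/policy this rule came from

class Sandbox:
    """The 'arena' the agent runs in: every proposed action is checked first."""
    def __init__(self, constraints):
        self.constraints = constraints
        self.audit_log = []  # transparency: every decision is recorded

    def attempt(self, action: dict) -> bool:
        for c in self.constraints:
            if not c.predicate(action):
                self.audit_log.append(
                    (action, "blocked", f"{c.name} (from {c.provenance})"))
                return False
        self.audit_log.append((action, "allowed", None))
        return True

constraints = [
    OperatingConstraint(
        name="transfer_limit",
        predicate=lambda a: a.get("amount", 0) <= 10_000,
        provenance="hypothetical internal policy on unreviewed transfers"),
]
box = Sandbox(constraints)
box.attempt({"type": "transfer", "amount": 50_000})  # bumps the edge: blocked
box.attempt({"type": "transfer", "amount": 2_500})   # inside the arena: allowed
```

The design choice worth noting is that the agent never reasons about whether an action is compliant; the sandbox decides, and every decision is logged with the rule's provenance, which is what lets an auditor walk backwards from an agent's action to the regulation that constrained it.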
[00:27:24] KRISTINA: So recently I've run into several banks that have had a hard time with agents because it seems like they're on shaky foundations. A lot of issues around data quality, lineage, integration with core systems. Especially this notion of data lineage, right? One of the problems I heard last week was an organization saying, look, I know who Kristina Podnar is in my CRM or CDP, but I'm also getting Kristina P coming in from one data source and a K Podnar coming in from a different source. How do I know that's the same person as Kristina Podnar in my CDP or my CRM? And how do I understand what the latest data stamp is? Where should I actually trust the consent, and where should I not? So, thinking about your software and some of the non-negotiables that go into scaling, how do you thread a lot of this together to make sure that you're successful?
[00:28:19] BRIAN: I come from the side of the fence where I was a bank manager, responsible for what I call whole-bank operations, which gets to your question. MDM, or master data management, as a practice is a kind of hygiene. It's very expensive, it's very complicated, and it's something that often gets neglected in banks of all sizes, because banks tend to grow organically. Systems are added through mergers or acquisitions or for whatever reason, and you end up with these large portfolios of business applications, and the data's not normalized, it's not standardized. In recent years, people have been developing data lakes and data warehouses where you're meant to be extracting this data, and you go through this MDM process and you've got normalized and standardized data. That's pretty good, right? That's admirable for all of the right reasons. When we think about what we are doing, we recognize that if I were to say the hurdle you have to get over to use our software is simply normalizing, standardizing, and rationalizing all of the data across your entire bank, no one would ever be able to get started. Right? So the correct thing, I believe, and it goes back to identifying discrete value targets and identifying the problem you're trying to solve, is to start small. It's to understand: I've got a compliance group, I've got a risk group, I've got an operations group, I've got a new product group, and I'm gonna think about what they need right now to solve a problem or capture an opportunity. I understand they're part of a great big organization with lots of things in play. I can't boil the ocean all at once. So what I'm gonna do is introduce a framework, right, the Reagent framework, but I'm gonna introduce it to a small population of people who've got a well-defined problem, while making it understood that there's a foundation layer.
There's a framework that we're introducing, a chassis, if you will, that you can add to over time with compatible modules. But the point is, let's start small in a well-defined space, because, to your point, if the hurdle to get started were rationalizing the data across an entire bank, you'd never get started with something like this. So we like to start small.
[00:30:36] KRISTINA: And so if you're an executive sponsor and you're starting small, what should you measure to know that your small is ready to become big, that you're ready for production?
[00:30:45] BRIAN: Yeah, and there's a couple of ways to think about that. I don't really think of small as meaning from proof of concept or pilot to production. It's more about the size of the disruption, or the interception of current business processes that are going to change when you start using Reagent. So again, in the compliance space, if you're looking at folks who are tasked with ensuring that policies are providing adequate coverage and are up to date on a daily basis, that's a well-defined space. You've got a certain number of people in a well-defined task, and you can lay those things out. What we tend to do is work very closely, on a collaborative basis, with each bank to understand what those people are doing, and then, if we put this tool in their hands, how it's going to amplify the work they're doing. Amplify being a key word right now. I did not say displace, and I did not say eliminate these folks, because that's not the goal right now. The goal is to help them do more. I would say with less, but they're not getting less; they're actually getting these powerful, AI-enabled tools. When you begin to measure that, you really can produce KPIs and metrics. For example, if you're a mid-sized bank, you might run a portfolio of about 150 policies, give or take; everybody's different. If there's a regulatory change, some number of people have to read that regulatory change. Then they have to examine this portfolio of existing policies to determine which ones are gonna be impacted by that regulation. That work has been repeated so many times over the years that people have a general understanding that one person can review, I don't know, a handful of policies a week, something like that. So you can begin to calculate how long it's taking to complete these tasks. Then, when you're using Reagent, you can say, oh, well, this one person who used to do three or four of these a week can now do all of them.
In a week. And so that's a very easy metric, to calculate, the delta between, current state and future state. What was the return, on that investment? In the technology it's pretty straightforward.
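The throughput delta Brian describes can be sketched as a quick back-of-the-envelope calculation. All figures below are illustrative assumptions drawn loosely from the conversation (150 policies, a handful reviewed manually per week), not actual client numbers:

```python
# Hypothetical ROI sketch for the review-throughput delta described above.
# Every number here is an illustrative assumption, not real client data.

POLICIES = 150            # assumed mid-sized bank's policy portfolio
MANUAL_RATE = 4           # assumed policies one reviewer can assess per week
ASSISTED_RATE = POLICIES  # assumed: with the tool, one reviewer covers the
                          # full portfolio in a week

manual_weeks = POLICIES / MANUAL_RATE      # person-weeks per regulatory change, manual
assisted_weeks = POLICIES / ASSISTED_RATE  # person-weeks per change, tool-assisted
delta = manual_weeks - assisted_weeks

print(f"Manual effort:   {manual_weeks:.1f} person-weeks per regulatory change")
print(f"Assisted effort: {assisted_weeks:.1f} person-weeks per regulatory change")
print(f"Saved:           {delta:.1f} person-weeks per change")
```

Under these assumptions, the delta between current and future state is about 36 person-weeks of review effort per regulatory change, which is the kind of straightforward before/after metric Brian refers to.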
[00:32:59] KRISTINA: So looking ahead, Brian, because here we are in October 2025, looking into 2026: in your crystal ball, what does the next phase of agentic AI in banking look like? Does compliance remain an accelerator from your perspective, or does regulation risk slowing momentum? What do you see happening?
[00:33:18] BRIAN: Political climates in the United States, where we're mainly focused, change sometimes day to day, and that creates regulatory churn. That's the energy that drives our momentum; churn is part of the reason we are in business. So whether it's additional re-regulation or deregulation, it's that change, that sort of market volatility, if you will, that's driving the need for our product. Now, when I think about where in banks you're going to find more autonomous and agentic agents, I believe there's a step progression, and it's going to come from introducing these foundational elements like I've described: introducing a product like Reagent to do this mass translation of regulations written by people, for people, into machine-understandable language. Again, you're creating an environment into which it will become safe, and even prudent, to place these agents that are maybe doing autonomous things. So the prudent path I see to adopting agentic AI in this space starts with laying a really good foundation. Over the next year, and I know this from the clients we work with, the focus is going to be on creating these safe environments for agentic agents, so that when they start working in an autonomous way, you know they're constrained and contained and they're not going to start doing unpredictable, undefined things. We've got a lot of foundation work that has to happen.
[00:34:51] KRISTINA: And so what is the one piece of advice around that foundation? What should be the number one thing folks should be looking at next year, in terms of updating their AI governance right now, to be ready for the agentic systems at scale that you talked about?
[00:35:06] BRIAN: It's really hard to answer with one thing; I've got a few things in my head right now. One, establish the foundation like I've just described, where you're creating a well-defined operating environment for agentic and autonomous agents that is bounded by a well-constructed translation of regulations through policies, procedures, and controls, so that when you place an agent into a space, it is bounded in a way that's explainable, transparent, and proven to be trustworthy, which includes accuracy. That's the number one thing. Two, I would caution anyone and everyone against suddenly introducing an unmanaged or unorchestrated federation of agents coming in from a large number of vendors. Pretty much every software vendor out there right now is going, oh, I've got an agent for this and an agent for that. If you were to take all of those in-house, suddenly you've got this wild, roaming population of agents. Who's in control of orchestrating that federation? We've recognized that; we've got a product we call Harmony Ex, which is meant to be an orchestrator, but it's vendor-agnostic. Before you start piling in agents, you need to begin thinking about how you're going to manage them, because most sophisticated business tasks are completed through a chain of activities. And if that chain is comprised of agents, particularly autonomous or semi-agentic agents from multiple vendors, who's responsible for the chain? Because if any one of those agents in the chain is slightly off, those errors are going to compound, and what started as a small error could send you into left field by the time that chain of activity is completed. You need to think about who's got oversight, who's orchestrating these chains of vendor agents, and you need to think about that before you start buying lots of random agents from all your favorite vendors. So there are two things we should think about.
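Brian's point about errors compounding across a chain of agents can be made concrete with a simple probability sketch. The 98% per-agent accuracy figure below is an assumption chosen purely for illustration, and it treats each agent's errors as independent:

```python
# Sketch of how small per-agent error rates compound across a chain.
# The 98% per-step accuracy is an illustrative assumption; real agents'
# errors may also correlate, which this simple model ignores.

per_agent_accuracy = 0.98

for chain_length in (1, 3, 5, 10):
    # Probability the whole chain is correct = each step correct in sequence
    chain_accuracy = per_agent_accuracy ** chain_length
    print(f"{chain_length:2d} agents in the chain -> "
          f"{chain_accuracy:.1%} chance the whole chain is correct")
```

Even with each agent only "slightly off" at 2% error, a ten-agent chain under these assumptions is right only about 82% of the time, which is why orchestration and oversight of the whole chain matter, not just each agent in isolation.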
[00:37:13] KRISTINA: Brian, anytime you talk about governance and laying that foundation, it makes my heart go pitter-patter, because honestly, most folks seem to disregard it until it's too late. So I'm right there with you, and this has certainly been a clear-eyed, practical look at how compliance can power innovation, which is what we're all about here on The Power of Digital Policy. Thanks for joining us; appreciate you being with us today, and thanks to everybody for listening. If today's conversation was useful, subscribe and share it with colleagues exploring AI, compliance, and digital policy. I'm Kristina Podnar. See you next time on The Power of Digital Policy.
[00:37:47] OUTRO: Thank you for joining The Power of Digital Policy. To get access to policy checklists, detailed information on policies, and other helpful resources, head over to the power of digital policy.com. If you get a moment, please leave a review on iTunes to help your digital colleagues find out about the podcast.