S4 #7 Unlocking AI success through governance and policy adoption

Gerald Kierce

Gerald Kierce Iturrioz is the Co-Founder & CEO of Trustible – a leading technology provider of responsible AI governance. Its software platform enables AI/ML and legal teams to scale their AI Governance programs to help build trust, manage risk, and comply with AI regulations.

Gerald has spent his career at the intersection of technology and policy. Most recently, he served as Vice President & General Manager of FiscalNote (NYSE: NOTE), where he led the AI Solutions division. Gerald also served in various roles at FiscalNote, including Corporate Development, Chief of Staff to the Chairman & CEO, Product Marketing, Customer Success, and Business Development. Before FiscalNote, Gerald worked in the Professional Services practice at the Corporate Executive Board (now Gartner, NYSE: IT) and in the Communications Office of Meridian International Center. He is originally from Puerto Rico and is a graduate of the University of Dayton with degrees in Entrepreneurship and Political Science. He is also an advisor, investor, and board member of various startups and nonprofits.

The traditional "move fast and break things" approach to AI development is shifting toward a more regulated model, akin to high-risk product development like medical devices. Today, the pivotal factor driving successful AI adoption in any organization is trust. How do you facilitate this crucial transformation, harnessing the opportunities that AI has to offer today, and ensuring your organization is equipped for a more regulated AI future? Kristina talks with Gerald Kierce of Trustible on extending ethical deployment and trust-building beyond governance and compliance, ensuring foundational elements are diligently established by any brand looking to succeed with AI adoption for today and the long haul.

AI, governance, digital policies, digital policy, digital transformation, AI regulation, AI governance, AI compliance, AI ethics, AI accountability, AI transparency, AI bias mitigation, EU AI Act, AI Bill of Rights
Episode number:
Date Published:
September 5, 2023

[00:00:00] KRISTINA: Everyone's talking and writing about AI. Many are claiming to be using it. Some think there are experts in it. Billions are being spent on it. But who's really governing it?

[00:00:10] INTRO: Welcome to The Power of Digital Policy, a show that helps digital marketers, online communications directors, and others throughout the organization balance out risks and opportunities created by using digital channels. Here's your host, Kristina Podnar.

[00:00:28] KRISTINA: Welcome back to another episode of the Power of Digital Policy. Developing and implementing policies and sound practices to keep this frontier technology in line as it becomes more powerful and invasive is a monumental task. To help us make sense of the regulatory landscape and what your organization can do to balance the risks and opportunities that arise with AI, joining us today is Gerald Kierce, CEO and co-founder of Trustible. Trustible is a leading technology provider of responsible AI governance. We'll hear more about what that means in a moment, but Trustible's software platform enables AI/ML and legal teams to scale their AI governance programs to help build trust, manage risk, and comply with AI regulations. Gerald has spent his career at the intersection of technology and policy, which personally makes me excited. Most recently, he served as vice president and general manager of FiscalNote, where he led the AI Solutions division. Gerald, thanks so much for coming by today and spending time with us. It's exciting to have you here.

[00:01:25] GERALD: Thank you, Kristina. Excited to be here.

[00:01:28] KRISTINA: Before we go any further, I want to be clear about one thing and get your opinion on this. AI. We use it as shorthand, but it's not a singular thing, right? AI is many AIs, and AI can have many purposes, such as deep learning, computer vision, natural language processing, or NLP as we call it, among other purposes. People are talking a lot about ChatGPT, which, you know, we obviously can't look away from in the headlines these days. But that's really a blend of things like deep learning and machine learning, neural networks, NLP. It's a very vast field, and it's complex. And I think for a lot of folks, trying to understand the complexity can be a little bit hard. When you go home in the evening and kind of call your mom up, or you're talking to your aunt at a family reunion, how do you explain the space?

[00:02:17] GERALD: Well, it's a complicated one because it is fairly technical, but at the same time, its use is fairly well understood by people, because we've been interacting with AI for such a long time, right? We've been interacting with Netflix recommendations or Spotify recommendations. We've been interacting with social media feeds recommending stuff to us. And so in some ways we've lived, let's say, the past 10-plus years really with a notion that there are these statistical systems that can kind of recommend stuff to us, right? I think what has shifted over the last 9 to 12 months is really the accessibility of these primarily large language models, right? ChatGPT and related generative AI systems have made best-in-class models like these accessible for everyone; really, that's at the core of what it is. Any individual can now create an account with ChatGPT and access the best tools. Any developer, as opposed to years past, where maybe they needed sophisticated data science teams with PhDs and an entire data infrastructure, can now just use an API for some of these best foundational models and incorporate it directly into their business. The challenge is that, you know, one model can be used for a variety of different purposes, but the risk of those models really does depend on the use case. And I'll give you an example, and I talk to my parents about this to give them some context as to how we think about governing these systems.

You can have a summarization model that summarizes an article, or it summarizes a medical record, and they present dramatically different risks. It's the same model, but ultimately, on the medical record side, you may have access to personal health information or HIPAA-protected data. And the governing process for something like that is dramatically different than it would be for a simple article summarization tool. And so when we think about what artificial intelligence is, how we govern these systems, how we think about the value and the risks that they provide, we think about it very much at three levels: model, use case, and organization, right? And so at Trustible, we're really focused on helping organizations at the use case level, determining which specific uses of an application or model are perhaps risky or have outsized benefits, and then at the organization level, right? What are the policies and procedures that you as an organization need to put in place to make sure that you're treating this technology with the respect it deserves, shall we say.

[00:04:42] KRISTINA: That's really helpful because that was one of my questions that I was going to ask you about, really more of the breaking down of the taxonomy aspects for governance itself, because I was thinking about it more along the lines of: are you trying to align sort of the machine systems to human values? It sounds like there's a little bit of that, right? Not necessarily the human values, unless you're thinking about it from a corporate values alignment perspective. But there's the strategy, the law stuff, the regulatory aspect, the ethical dilemmas. So it sounds like a little bit of everything.

[00:05:14] GERALD: Yeah. There's a number of technical assessments, legal assessments, and ethical assessments that organizations are going through as they think about their development and deployment of this technology. A lot of attention gets placed on issues of fairness and bias and things of that nature. When we think about how we support organizations, or when we think about AI governance, we are kind of splitting it into those three levels that I described earlier: model, use case, and organization. I think there are a number of tools and solutions that are really focused at that model level to help you do assessments around fairness and bias, so that you can essentially train, or rather influence, the performance of those models to make sure that you're reducing your chances of hallucination, that you're training on the data in a particular way. That's not really where we play. And it's in part because we've noticed a gap between skills and understanding inside of organizations. So on the one hand, you have data science and AI teams that are responsible for innovating, responsible for incorporating these models and these systems into the business, but they don't necessarily have familiarity with the risk management implications of this, and beyond an elementary knowledge, perhaps nor should they. On the other hand, you have these legal and compliance professionals, these privacy teams, that are very well aware of the risk management implications of organizational processes, but perhaps less conversant in how the actual technology of AI works. And so we've seen a gap in understanding and collaboration, primarily because the systems these teams work with every day don't necessarily integrate with each other.
And so they're working in two different ecosystems, across two different language barriers, and our goal is really a future of AI that is actively managed and collaborative across all the stakeholders involved, both internally and externally. The goal of our solution, of our platform, is to really connect these technical AI teams with these non-technical, socio-technical teams, shall we say.

[00:07:11] KRISTINA: Who usually brings you into an enterprise? I'm curious because when I think about what you're talking about, right, there's such a dichotomy. It seems like a lot of the responsibility for AI is, in fact, being placed on privacy professionals or compliance professionals, but back to your point: yeah, they know a lot about privacy, or they know a lot about compliance. Hoorah. We have those people who are there, but they don't understand the technical aspects, and a lot of times they don't actually understand the business cases. Even forget about the AI and technology; just the sheer business case isn't there.

[00:07:45] GERALD: Absolutely. We see that all the time, right? I think what you're starting to see, more so with larger organizations, is these AI task forces, these AI ethics committees. They're essentially a conglomeration of teams within a company that kind of represents all of the vectors that we're starting to think about with AI: the data science and the technical elements, the business elements, the legal and compliance elements, all of these. And then, very importantly, ethicists that are part of this conversation, right? And so from our standpoint, you know, we love working with AI ethics committees, with AI task forces, that are really looking at the comprehensive suite of AI solutions: how do they weigh the benefits and the risks of these technologies, and how do they essentially empower their teams to be able to deploy them internally within their business in a responsible and trustworthy way. I think what we're starting to see, though, and I suspect over the next one to three years, let's say, is that an entirely new category of team is going to emerge to govern these systems. If we rewind back to, let's say, the privacy timeframe of maybe 2016, '17, '18, more or less right around when GDPR was coming into effect, in some ways organizations at that time were also scrambling over who owns privacy, right? And I think you started to see the privacy function emerge as a result, in many ways influenced by GDPR and some of those emerging regulations. I think it is likely that we're going to see similar behavior happening with AI governance. Whether privacy teams start owning AI governance, I think, is still to be determined. There are some pluses there, some minuses there.
I suspect that there are going to be functions that emerge that incorporate these technical and socio-technical skill sets and understandings. Whether they sit under the legal umbrella, or whether they sit under the chief technology officer's or CIO's umbrella, I think that is all still to be determined. I think every organization is trying to figure out what their maturity level of AI governance is. And so the ultimate decision maker of AI governance inside of an organization may depend on that maturity and may depend on which, you know, sector you're operating in. So regulated sectors like finance and insurance might have perhaps more of a compliance-oriented approach toward AI governance, while perhaps smaller, more AI-enabled organizations might be focused on trust and building trust with their stakeholders. And so maybe the CTO or the CIO, or even a business lead, is owning this function.

[00:10:08] KRISTINA: Yeah. So that's interesting because one of the things that obviously we've all seen this past week is the CEOs of some of the most prominent AI-focused companies voluntarily commit to being more careful about checking the systems that they roll out, including testing for problems, anything from bias and discriminatory output to cybersecurity flaws, et cetera, which implies that they have it all buttoned up. They've all figured it out, right? And then the rest of these corporations don't; that's the impression I've gotten from reading the headlines. So the select few have it all buttoned up. They've figured out how to test for the cybersecurity flaws and broader societal harm; they've actually figured this out, based on the headlines. What's your take on that? Do they really have it figured out?

[00:10:51] GERALD: Look, I think the honest answer is that no one has fully figured it out, not even the big tech companies. That said, they are very well positioned and resourced to figure it out in some respects. And so I am optimistic that the way that they are thinking about AI safety, generally, is correct. Now, there are a lot of nuances within that, right? So I think, for example, if we're to address the AI developers' commitments to the White House with respect to focusing on AI safety, I think it's a step in the right direction. I think making sure that private and public sectors are aligned is an important step forward, and I think the Biden White House is clearly putting AI front and center of its digital and technology policy. But let's also not kid ourselves; it's a PR move on both sides. I think both sides want to be seen as really addressing the issue. These commitments are reasonable. They're good. Again, they're a step in the right direction. But it's important to also know that the big tech companies aren't the entire universe of applications of AI. There are many other tools that don't actually have a seat at the table at the White House that have the ability to reach the hands of millions, if not billions, of people around the world. I haven't seen any organization or any government agency right now meaningfully address the issue of open source technologies. This concept of democratizing AI through open source is certainly a great way to incorporate amazing models into your business at low cost. But it also has the ability to lower the barrier to malicious use beyond what these companies are developing. And so I'd like to see a perspective on open source models. But again, I think big tech is generally thinking about this the right way, but there are still a lot of open questions, right? Where does the liability lie?
If these models or these systems do something inherently illegal or harmful toward a consumer or citizen, does the liability lie with the model developer, or with the company that's deploying the model and fine-tuning it for its own use? I think those are still very open-ended questions that, from a regulatory standpoint, we probably don't have an answer to. What I do believe is that these large tech companies are also probably thinking, to an extent, about how to use regulations to ultimately capture the type of market that they want to capture. And I think they're well positioned for it. We saw it in the cloud environment, where five to eight companies really dominate the cloud ecosystem. They generated a whole series of new software platforms that, you know, through SaaS, kind of permeated business around the world. I think they're also the same types of companies that are going to dominate the AI landscape and build a whole lot of applications on top of it. But I am somewhat concerned about these seven or so companies kind of dictating how AI is going to be used around the world.

[00:13:35] KRISTINA: Do you think there's an opportunity, especially looking outside of the United States, for governments to step up and maybe come up with a way to get the policy right? One that creates more of a diverse ecosystem and a level playing field, and goes beyond these big tech companies to allow other players to enter the marketplace.

[00:13:58] GERALD: Yeah. I mean, I think we're starting to see that, right? I'm seeing three models emerge around the world of how AI regulation is shaping up: the European model, the American model, and the Chinese model. So in Europe, what we're seeing is very much a focus on being proactive about regulating AI, putting it in a risk management framework where regulatory burden increases the riskier the use case is, and they define risk in particular ways. And it's very focused at the consumer protection level, right? They're really focused on civil rights and consumer protection as part of their regulation. The American model is in many ways taking the shape of U.S. policy that we're familiar with, right? It's a little bit non-interventionist. It's pro-innovation. It's, you know, encouraging self-regulation and perhaps looking to regulatory agencies or state and local governments to play a more active role in the policymaking for this. I think one of the things that we hear, particularly from large stakeholders, multijurisdictional organizations, is that more policies are actually not a good thing. It creates complexity. And so one federal policy to help us understand what we need to do around the country is actually a way to accelerate innovation, but that's a topic for a different podcast. And then we have the Chinese model. The Chinese model is quite interesting because they are, in some ways, very focused on consumer protection. They recently enacted rules around AI, particularly for generative AI used in public. They do focus on elements of bias and fairness and protection of the individual. But one very important component is that they're also focused on making sure that these generative AI systems don't actually undermine the socialist state, don't speak poorly about the Chinese government, don't have the potential to incite revolts, and really uphold the values of socialism.
And so one of the things that we're going to start to see is that the geopolitical conversation around AI is going to accelerate, right? Because each of these jurisdictions is incorporating its own definitions of fairness, its own definitions of national identity or national security. And we're going to start to see that a lot in politics. I don't think it will necessarily impact the 2024 U.S. presidential election in a big way, but I do expect future elections to really be influenced by AI and AI's role in society.

[00:16:16] KRISTINA: It's interesting because it implies that there are some boundaries within the technology world that really don't exist, right? If I ask you about AI, do we have boundaries? How do we deal with these artificial boundaries and AI? Do multinationals especially have an easier way of governing across these boundaries? And is there a reason to continue to govern in different ways within those boundaries, in your perspective?

[00:16:46] GERALD: It's a great question. From the landscape and the viewpoint that we're seeing, the same model that is used in China will not be able to be used in the U.S., and vice versa. And so what this essentially means is that multinationals, for example, operating in both jurisdictions will need different models, and each of their models will have to be audited and assessed by the regulatory agencies and third parties that ultimately ensure compliance with the way the regulations have shaped the technology. I can almost assure you that a model used in the U.S. that doesn't have baked-in weights to ensure that it complies with Chinese rules will not be allowed to be productized or commercialized in China. And so I think multinationals operating across different jurisdictions are going to have to think very carefully about the compliance requirements that are going to be necessary to operate in particular jurisdictions. And what we're starting to see is that companies need to start thinking about this now, because the cost of compliance down the road is going to be incredibly high. Retrofitting compliance requirements in two or three years is going to mean that your entire AI infrastructure was built without a compliance-by-design or responsible-by-design framework, and it's going to make, again, the retrofitting of these new processes more burdensome down the road.

[00:18:03] KRISTINA: A lot of the policy frameworks are still in flight; it's hard to build the airplane, in a way, when it actually has to be flying at the same time. But I like this idea that I think you're floating, which is by design. And I'm wondering if there's a way to start architecting, at least, and if you have to dismantle something, is it a swapping out of the living room colors, or is it like having to change the doors and the windows to a different scheme altogether?

[00:18:30] GERALD: Yeah. Yeah. It's a great question. So our perspective is that there are things that organizations can and should start doing now to prepare for this eventual world of regulations. I think, at the very minimum, you should have an AI inventory. You should know exactly where models and use cases are being used and what they're being used for. There should be a proper risk assessment, some basic documentation about these systems. At the very minimum, every regulation, every policy, every assessment is going to require some level of documentation that traditionally has not existed. Big companies haven't thought about this documentation. Small companies haven't thought about it to an extent that satisfies regulatory requirements. So what we're starting to see as companies evolve their maturity on AI governance is making sure that they have kind of the basic documentation that's going to be, in many ways, necessary for this type of work. One of the things that we enable organizations to do through our platform is to adopt the NIST AI Risk Management Framework. The National Institute of Standards and Technology is a sub-agency of the U.S. Department of Commerce that essentially focuses on creating standards for technology applications. They've become very popular for their cyber standards and frameworks, and for their privacy ones.
And at the beginning of this year, they actually released their AI Risk Management Framework, which many industry experts view as the gold standard of risk management for AI, in part because it's not necessarily burdensome in the sense of having a lot of structured recommendations or pressure points on how to build the models themselves. Rather, it's about how your organization, from a process and people standpoint, supports these systems to make sure that the risk management of them is properly categorized, properly understood, and that there's a review process across the organization that assigns accountability and transparency to the right individuals. So we've seen that the NIST AI Risk Management Framework is a great framework. It can be overburdensome for some organizations. And so what we've focused on in the platform is how to distill it into its simplest form so that you can start applying it periodically as your governance matures.

[00:20:37] KRISTINA: That makes a lot of sense. And I'm wondering about that because, as you're talking, it strikes me that you're taking a very comprehensive view of AI, which makes me really happy, because it seems to me that a lot of folks would argue that we're trying to govern AI, which is great, but as entities, as enterprises, we still haven't gotten to the point where we properly govern the core ingredients for AI to function, things like data. I was talking to a colleague of mine last night, and she told me she's been trying to move her way up in data governance now for 16 years. So enterprises have invested billions into data governance, and they have gotten maybe a little bit closer to sound data practices, but arguably we're still at a point where we don't have trustworthy data for traditional purposes. Forget AI for a moment; just think about marketing. It's still a mess out there. So how do we account for that from a governance framework perspective? Because that's a little bit of a mess.

[00:21:33] GERALD: It is. Data governance is an absolute requirement in order to have proper AI governance, because data is the fuel of AI, right? And I think one of the elements of data governance that we've seen organizations still struggle with is the structuring of this data, right? The labeling, the classification, the cleaning of this data, knowing where it is being used, knowing how it is being affected. I think what is more true now in the age of AI, and was maybe not as true in previous years, is the representation within this data of attributes like ethnicity, age, sex, gender, nationality, et cetera, which I think are going to be really important in the age of AI. I saw a statistic the other day about the probability that one of the big image generators, and I'm not going to name the name, will generate an image of a man for a given profession: 99% of the time an engineer is a man, 94% for a CEO, 87% for a politician, 83% for a lawyer. Meanwhile, when you ask it to generate an image of a teacher, only 12% are men, and for cashiers, 12% are men. And this is, in essence, a function of the training data that's being used, right? We all know this. The challenge is how you tweak these models to make sure that, you know, the right data governance, privacy, and assessments are put in place so that you can actually provide a more representative view of what the culture should look like.

[00:23:05] KRISTINA: So what's the answer?

[00:23:08] GERALD: I think we're all still figuring it out. Look, a couple of the best practices that we've seen are, obviously, diversification of your data sets; ensuring that you're consistently performing bias and fairness assessments; ensuring that the folks responsible for that are representative of the end audience or the end customer, to make sure that you're having diverse voices in the room; and then ethical discussions, right? In many ways, ethical discussions are absolutely necessary in the AI deployment life cycle. If I'm going to be completely frank, we didn't really care too much about privacy. We've kind of accepted the fact that, yeah, we use cookies to give us better advertising on Instagram or whatnot, and we sometimes view it as positive. But our medical records being used in an AI model that has the potential to diagnose, you might see both positive and negative attributes of that. And so I think the ethical considerations of how we design systems like that are going to be critically important.

[00:24:13] KRISTINA: No, that's fair enough. That strikes me really as a conversation around risk and opportunity. And we're talking a lot about risks today, but it strikes me that, especially for organizations out there, a huge risk is not moving fast enough to adopt AI in smart ways that really propel the business mission forward, right? So people are really focused on the technology. Still, at the end of the day, a business exists for a business reason, for a business purpose, and for realizing a business mission. And so what mistakes do you see right now happening in many enterprises around opportunities, like people forgoing opportunities or just not making the right choices around AI that could be beneficial?

[00:24:57] GERALD: Large organizations in general need to understand that AI is coming and it's not going anywhere. So I think acceptance is the more important path forward. One of the things that we see is also that organizations don't understand how to incorporate the risks. Maybe they know what the benefits are, but they don't actually know how to think about risks or how to manage risks. And so this concept of weighing the benefits against the risks is really important. I'll share an anecdote about one of the organizations that we're working with. It's a large financial services organization. When ChatGPT started getting rolled out, like many large regulated enterprises, they just banned the system altogether. I'm sure there are individuals using ChatGPT on their personal computers, and, you know, we all know that that stuff is happening and there's not a great way to monitor and govern it. But they're actually trying to embrace generative AI. And so one of the things that they're doing is creating a hackathon where they've identified five highly strategic use cases for the business. It's not an open hackathon the way traditional hackathons are, but they've essentially allocated five business and product leads to own this hackathon for a particular day, where each has a specific use case with identified benefits and identified risks. They'll spend 24 hours building the product, and at the end of the day, they're going to present the product and its benefits alongside its risks. We love that because it's essentially saying: hey, in a controlled environment, in a sandbox, we're going to give you permission to test out these applications in use cases that we've identified as highly strategic to the business, and we're going to build risk management on top of it, very much in consideration as we're building these systems.
And so I think that's a really great path for a lot of organizations that are perhaps apprehensive, but want to explore, want to see how this technology can value can be valuable to their businesses.

[00:26:48] KRISTINA: It also strikes me as a really great way to start to dial up the literacy around AI inside of enterprises, something that I've been hearing a lot about from folks, and people are struggling out there, especially at the executive level. I'm curious, are you seeing the same challenge out there? Because at the end of the day, we could wait a while, and eventually the problem would solve itself, I think, as some people retire and you get younger people in; the problem would just go away naturally. Or we can find a faster way to up literacy so we all understand the value of AI, govern AI properly, and make really great use of it and benefit as a result. Considering the fact that Trustible is all about governing AI, presumably you're also having to deal with the literacy issue. How are you seeing that play out? Do you have thoughts about how we amp that up a little bit faster?

[00:27:41] GERALD: Yeah, absolutely. So look, businesses that don't adopt AI are going to be left behind. So I think it is very important for organizations to have an urgent but thoughtful strategy around how they want to incorporate AI, both to improve their products, services, and outcomes, and also to make their business more efficient and effective. Literacy is absolutely the first step, right? You can't build a governance framework or process or program without ensuring that the individuals making decisions and allocating resources towards these systems understand what the benefits and the risks are and understand how these technologies work. At the very minimum, one of the things that we've seen from a lot of companies still very early in their maturity is bringing in experts, either on a consultative basis or by empowering individuals within their organizations who have passion for this: making sure they have a seat at the executive table and sharing with them an understanding of the technology and how they should be deploying it inside of the organization. Literacy is absolutely important, but literacy should be met with urgency, because every organization out there is really trying to figure out how to think about the benefits of AI, how to think about the risks, and then move very quickly. With this technology, we really don't have the patience of adoption that we've had with other technologies, right? You didn't necessarily have to create a website in 1999. You didn't necessarily have to build an app on the iPhone in 2010. I think every organization needs to have an AI strategy, and alongside it, you need to have an AI governance strategy to make sure that these systems ultimately get trusted by your stakeholders, right?
This has the potential to go wrong and you want to make sure that you're building systems that are trusted and actually adding value to your customers, stakeholders, constituents, whoever that might be.

[00:29:33] KRISTINA: Well said, well said. And where do employees fit into all of this? Because a lot of this is externally facing; we're thinking about the marketplace and how we stay competitive. But employees inside of enterprises are probably wondering, do I have a job in a year? Am I going to be replaced by an AI component? I think people are at different points in their careers. What would you advise folks right now to be doing if they're in a role, either a knowledge worker who thinks they might be replaced, or somebody who's actually managing folks and thinking, wow, I could replace people with some technology here? What should everybody be doing now, except for panic, which I don't think is a great option?

[00:30:14] GERALD: No, so I think there's like a meta argument here about what this means for society, and then what this means for individuals inside of organizations. For the individual, it's like anything in life: it's incredibly important for them to upskill, right? They have a responsibility themselves to make sure that they are learning about this technology and, where appropriate, incorporating it into their day-to-day lives. And the organization has a responsibility also to train them, to make sure that they are up to speed with the requirements of the 21st-century workforce. With respect to the efficiency conversation, the labor conversation, I actually think that's a policy problem. That's not necessarily a business problem. I think the government, public policy in general, has a responsibility to have a very strong perspective as to what role we want AI to play in the labor market and what role we do not want it to play. I've been thinking about this a lot because, you know, I'm hearing a lot of conversations, like everyone is: AI's gonna take my job, AI's gonna take my job. We don't necessarily know that to be true, but perhaps it is more true than not. But on the flip side, we actually have some incredibly important roles out there that are sitting vacant, right? Number one, in areas like healthcare and education, we have a shortage. Why can't AI actually be placed in areas where we have shortages of labor, to augment the existing workforce, to give them a better working experience with more tools that don't get them burnt out and allow them to increase their outcomes? That's an area where I think public policy should get involved. The second aspect is we have a demographic problem, at least in the U.
S., where birth rates are going down, where immigration is a little tighter, and where we have a very large baby boomer generation that is going to be exiting the workforce. And so this creates a structural challenge with respect to demographics. So I think our public policy position should also be very focused on how our demographics are ultimately going to play a role in labor, and how we make sure that we're deploying AI in highly productive environments that are actually important to our national priorities. I'm excited for all these advancements in AI that relate to sales and marketing, content generation, and optimization of existing processes, but as a country, as a society writ large, we're going to want to put investments in the areas that are most strategic for the country. I'm not seeing that just yet. I'm hopeful that over the next few years we will have a very dedicated investment in AI for national security, for education, for healthcare. I'm not seeing that just yet, but again, I'm hopeful that this is just the beginning of that conversation.

[00:32:56] KRISTINA: I think it's exciting times ahead. Certainly it seems to me that a lot of things are happening and coming together at the same time, hopefully in the right order. One thing's for sure: I'm really grateful that you're thinking about governance, because I think that's a component we've historically forgotten to think about at the right time, in the right order. And so, awesome having you with us today, Gerald; certainly looking forward to continuing to track your journey over at Trustible and following the very important AI governance journey that needs to be part of this bigger conversation as we look to governance within our corporations, within our teams, the broader governance within the country, and within the policy realm. So really appreciate your time today. Thanks for coming by.

[00:33:38] OUTRO: Thank you for joining the Power of Digital Policy. To sign up for our newsletter, get access to policy checklists, detailed information on policies, and other helpful resources, head over to powerofdigitalpolicy.com. If you get a moment, please leave a review on iTunes to help your digital colleagues find out about the podcast.