
Camille Stewart Gloster
Camille Stewart Gloster is a strategist, advisor, and executive recognized for her leadership at the intersection of technology, cybersecurity, national security, and foreign policy. She is the CEO and Founder of CAS Strategies, LLC, a strategic advisory firm helping governments, companies, and nonprofits navigate converging risks across emerging tech, cybersecurity, and digital governance. Camille served as the first Deputy National Cyber Director for Technology and Ecosystem Security at the White House, where she led national efforts on AI security, quantum readiness, and tech workforce development. She has also held senior roles at Google, Deloitte, DHS, and CrowdStrike, driving innovation in product security, trust, AI resilience, and policy strategy. Her research focuses on adversarial AI, cyber-physical threats, AI agent governance, and IP exfiltration through legal systems. Camille’s work bridges technical fluency, policy depth, and systems thinking to strengthen the resilience, integrity, and strategic alignment of digital infrastructure worldwide.
AI strategies are everywhere — roadmaps, principles, and frameworks abound. But inside real institutions, the harder question isn’t what AI can do; it’s whether the organization is structurally ready to use it responsibly. In this episode, Kristina Podnar speaks with Camille Stewart Gloster, whose experience across government and AI policy gives her a unique vantage point. They discuss how industry, more than government, is currently setting AI’s operational guardrails — a shift with significant implications for accountability, national security, labor markets, and institutional resilience.
The conversation digs into what truly blocks effective AI governance. It’s not just a skills gap, but fragmented ownership, legacy structures, unclear decision rights, and governance systems that fail to adapt. Camille describes what durable governance requires: leadership commitment at the top, clear delegation to an accountable integrator, and coordinated engagement across security, trust and safety, risk, product, and business teams. Without these foundations, organizations risk mistaking activity for readiness.
They also warn against false confidence. The absence of visible AI incidents doesn’t equal preparedness. Early design decisions — from data exposure risks to autonomous systems that are difficult to shut down — can embed long-term vulnerabilities. Rather than choosing between speed and control, the path forward is investing early in secure-by-design, privacy-by-design, and governance-by-design approaches that enable performance. For leaders deploying AI or advising those who do, this episode offers a pragmatic look at what sustainable oversight demands next.
INTRO: Welcome to The Power of Digital Policy, a show that helps digital marketers, online communications directors, and others throughout the organization balance out risks and opportunities created by using digital channels. Here's your host, Kristina Podnar.
KRISTINA: Welcome to the Power of Digital Policy, the podcast where we explore how technology, governance, and strategy collide inside real institutions and what leaders need to do before policy gaps turn into operational risk.
Today I am joined by Camille Stewart Gloster. Camille served in senior leadership roles across government and the broader policy ecosystem, working at the intersection of artificial intelligence, national strategy, and institutional readiness. She's been directly involved in shaping not just how governments think about AI, but really how the world at large thinks about AI risk, innovation, and coordination. And more recently, she's been advising organizations navigating what comes next. What I value about Camille's perspective is that [00:01:00] she understands policy not just as regulation, which a lot of people leap to right away, but she thinks about it as infrastructure: the systems, the incentives, the decision rights, really everything that determines whether governance actually works inside of your organization.
So Camille, welcome to the show. It's great to have you here and I'm looking forward to having you geek out with us.
CAMILLE: I'm so excited to be here. Thank you for the invitation and that great introduction.
KRISTINA: Well, there's no shortage of AI strategies out there. We can open up any headline these days: lots of AI roadmaps and frameworks right now, from governments, companies, global bodies. You have this unique vantage point, in my opinion. What feels real and durable in today's AI policy landscape for you? What's more rhetorical than operational?
CAMILLE: From a national level, we're not getting too much guidance, right? There's a push towards innovation that is unhindered [00:02:00] and kind of defaulting to the private sector. Right or wrong, that leaves us in a position where industry is kind of setting the pace on what guardrails look like, what our policy looks like, what our posture looks like, and how people experience AI on any given day, not to mention how their organization or their enterprise deploys it, and the guardrails that they're putting in to protect not only their internal infrastructure and employees, but their users. That's an interesting landscape to be in. It is not the norm per se. Usually you get some kind of national-level guidance and then some state guidance to complement that, and then industry operates within those bounds because the market has been set in a direction by government. So right now we are seeing the pace set by industry, particularly because of the velocity of AI, and even when or if this shifts in terms of how national policy or state-level policy shows up, it'll be a little too [00:03:00] late. We'll kind of have a foundation that is already set. And so what I see as durable is industry setting the foundation for what AI is in our professional and personal lives and how it moves through enterprises. And what I see as temporary is the pace at which government is intervening. I think that will change at a certain point; whether that's now, in a year, in three years, that remains to be seen. But those are the big shifts I see coming.
KRISTINA: One thing I keep encountering is really institutional readiness. Many organizations say they want to govern AI responsibly, but they're operating with legacy structures, a lot of unclear ownership, and also some fragmented accountability. From what you're seeing out there, what are the biggest blockers to effective AI governance right now? Do you see this as a skills issue, a leadership issue, structural, like where's it all coming from?
CAMILLE: It's a culmination of [00:04:00] issues. What I'm seeing is organizations by and large fall into two camps. One camp has the desire to move quickly, to keep pace with what they're hearing about AI and to capitalize on the innovation, which means that they don't take a moment to think about what their strategy should be, how their governance should evolve, or how their organizational structure might help or hinder their ability to mobilize around the changes that this new technology will bring to how they control or implement a technology. We've seen a number of organizations on the other side be afraid to implement AI for all the risks and the reasons that it will change everything. They kind of want to see what happens. But even that group is starting to realize that this is not a passing fad, and that AI is more like the internet than, you know, some temporal change. And so they are starting to think about how they do that. The [00:05:00] organizations that get this right take the time upfront to have an AI strategy, to rethink governance, to redesign organizational structures.
One of the places I've seen real change, and it's actually highlighted in the book I'm writing, is the intersection or convergence of security and trust and safety. Traditionally, security was seen as basically compliance, but really it is performance in this moment. But security tooling alone, security frameworks alone, cannot do the work. This is such a behavioral technology that a lot of the skill sets that used to be buried in a trust and safety organization, which not every company has, are becoming integral to successfully enabling the performance of generative AI and agentic AI systems, but also limiting the blast radius should something [00:06:00] malicious or emergent happen.
KRISTINA: That's such a great point. With all of the AI developments moving so quickly, as you mentioned and you have these governance systems, which are designed to be very deliberative, not slow, but deliberative. How do we reconcile that mismatch without defaulting to paralysis or overreach, right? One way or the other, not being ideal.
CAMILLE: There is a balance. I always tell organizations that thinking about these things upfront allows them to be built into the fabric of how the technology moves through the organization and how the organization mobilizes around its technology transformation. When you bolt on things like security, privacy, and governance, it always feels like a heavier burden 'cause you're having to restart. When you do them upfront, and when you think of them as modular and able to kind of adapt to where you are in the evolution of your organization or the maturing of your technology organization, it [00:07:00] becomes a little less cumbersome. The whole secure-by-design movement, the privacy-by-design movement: they're there for a reason, right? When you think about these things upfront, they become less burdensome, and in AI in particular, generative AI, agentic AI, and whatever comes next, they really do enable performance, these traditional security tools and functions. Thinking about privacy, those are how you constrain the behavior of an agent, constrain the behavior of a generative AI system, so that it does the thing that you wanted it to do and nothing more. So that is the investment that I encourage organizations to make: to really pause and make some upfront investments that unlock that capability, and to stop thinking about these governance structures, security, privacy, all of those things as hindrances to innovation, but really as enablers.
KRISTINA: That's such a valuable [00:08:00] perspective, especially because one of the things that I worry about is this notion of false confidence: organizations that believe they're governing AI well because they've checked the visible boxes, the things that seem apparent or on the surface. From your perspective, what are the strongest signals that an organization thinks it's more prepared than it actually is? How do people know?
CAMILLE: If your governance system isn't continuously learning, you think you're more prepared than you are. Just because you haven't experienced an incident, or noticed a drift in a system, or realized you have shadow AI, or any of the other things that can befall an organization that has deployed this capability, doesn't mean it isn't happening or starting to manifest itself. It just hasn't surfaced itself yet. And so your system should be learning from every threat-modeling incident or occurrence, every piece of threat [00:09:00] intelligence, every piece of research that impacts the systems that you have deployed. If it's not learning from every incident you do have or every discovery of emergent behavior, if that doesn't change something tangibly every time, your governance is out of date, and you're likely to wake up one day and realize the systems have drifted in a way that you didn't intend, and that the authority you delegated has now expanded in ways that you cannot measure.
I was talking to an organization the other day, and they talked about how they deployed some generative AI chatbots. It's been wildly helpful. But what they didn't realize is they hadn't done the work to really get their data in order: to think about where data lives, what access had been granted to legacy systems, all of these things. And they quickly realized that in service of doing the thing that you charged the generative AI [00:10:00] system to do, it got access to a treasure trove of data that it should not have had access to. And undoing that was much more difficult than they anticipated. Unrolling that access is a very tough proposition. So taking the time upfront to clean up your data, to understand what kind of access and authority you're giving, to really think about what you're implementing and why, and to understand the business value and the risk you're taking on, and mitigate as much of that as possible, is really important.
KRISTINA: So that brings me to the holy grail question, which is, when we talk about learning continuously, when we talk about thinking through decisions, AI decisions really get hard, right? Not theoretical, but real trade-offs involving the risks, the speed, the harms that you've been talking about, and they almost always come down, or seem to come down, to who has the authority [00:11:00] to decide. So what does good decision ownership actually look like in practice, in this model that you just described for us, with the continual learning and making sure you're really being thoughtful about decisions? How does that really work in day-to-day practice?
CAMILLE: It has to start at the top. All the way up, your CEO and your board need to be bought in on the direction of AI deployment and how it supports business operations and what that means for employees. All of the things. That vision, that direction, that delegation of authority from the CEO then needs to go to somebody who is responsible for that integration and who works very closely cross-functionally. This is a moment where product, security, compliance, privacy, enterprise risk management, and the business unit owners that are deploying agents and AI systems all have to come [00:12:00] together to make some tough calls, to really be able to assess behavior, assess drift, all of these things. And so the structure that I have seen work best is bought-in leadership that sets a very clear direction and delegates authority to an integrator, someone to lead the charge on AI: a chief AI officer, sometimes the chief information security officer, or the CIO, the chief information officer, or a chief innovation officer. Some C-suite leader who owns that integration and then convenes a body of cross-functional leadership. So there is a view across the business, and you can harness the unique capabilities that come from each of those groups so that they are less siloed. The harm doesn't live in one piece of the organization in the ways that we traditionally built out organizational structure. And so organizational structure can often be limiting, because security is looking at one set of data, trust and safety is looking at a set of [00:13:00] data, enterprise risk management is looking at a set of data. And what really needs to happen is taking the skills that each of those functions is well positioned to leverage and pulling that together to get a better view across the business, and across how agentic systems and generative AI systems are actually behaving and how outputs and outcomes are manifesting themselves.
KRISTINA: You don't have to name names, but are you seeing organizations out there that are willing to invest in the operational plumbing, so to speak? Are you seeing folks who are doing it right?
CAMILLE: Yes. Oh, and most of them after hard-fought lessons, unfortunately. They may not admit it quickly, but whether it's workforce resilience, or thinking about governance wholesale, or understanding that they have to evolve their security capability even when they've [00:14:00] had a first-in-class security and privacy capability, organizations, many of which have learned the hard way or watched their peers learn the hard way, are making the investment. Now, it's not a uniform investment. Every organization isn't like, oh yeah, I've got to pause and do this. But many have started to do that, especially the larger ones.
KRISTINA: That's great to hear. I think sometimes we also need to understand that it's a process, but there are leaders, clear leaders, from what you're saying, which is hard in a vacuum. You mentioned government hasn't really given us a lot of direction. Considering the fact that this is so heavily industry-led in terms of responsibility, governance, and do no harm, what is something that government underestimates when you think about how AI is actually deployed inside of organizations right now? And do companies actually misunderstand government at this moment and how it really works? Are they talking past each other, or are we going to get to a point, you [00:15:00] think, where we're all kind of aligned on the same page?
CAMILLE: Government doesn't always underestimate this, but in this moment it seems like, especially at a national level in the US, government is not realizing just how much these decisions being made in organizations impact things that it feels very responsible for, like national security, like international security, like economic stability, like labor markets, all kinds of things. And so that is a real disappointment. Wholesale, governments around the world I think have been reckoning with this: how do we align our values and the things we'd like to preserve to the technical realities, the pace of bureaucracy, and the nuance of each organization? And that has been a constant struggle, right? Move [00:16:00] too fast as a policymaker and you potentially limit innovation and don't get a chance to see what could be. Move too slow, and values like privacy, safety, and security are hindered for your citizens and for the institution itself. I feel like government has gotten better at that, right? There's some balance in between: understanding it, partnering with industry, that true collaboration to fashion policies and boundaries that allow for innovation but protect the things we hold dear collectively. And as we continue to get that balance right, I think it'll help move things in the right direction.
The velocity of this technology, though, makes that much more complicated, because there's no way that the bureaucracies around the world can move as fast as the technology does. But one thing I know on the other [00:17:00] side, and you've seen it in many reports, is that many of these companies would actually prefer some kind of at least outer bounds on where they should go and what direction they should face. And we're facing some real conundrums if AI is everything that we think it can be. What happens to the future of work? How are people making a livelihood? If AI agents and robots and all of these things are going to take over all of the day-to-day work, how does society orient itself around that?
And you've seen industry leaders feel ill-equipped to make decisions about that. They're like, government will figure it out. Maybe it'll be universal basic income, maybe it'll be this, maybe it'll be that. But the inaction by some governments means that those decisions aren't being made. So there will come a reckoning point where, especially on the labor front, there will be some tough decisions we have to make about where we [00:18:00] shouldn't go with the technology, how we support people in its evolution, that kind of thing. And so there are some real tensions in the policy landscape, but I think there is a desire from most parties for there to be this top line set by governments at national and international levels, and for industry then to move within that. And industry is trying; most actors are seemingly trying to be responsible. But they also are driven by profits. That is their role in the ecosystem. And so to expect that they will make every move with the best interest of society at the forefront is a tough proposition to put on an entity where that is not their mission or their goal. That's why we have government: to advocate on our behalf.
KRISTINA: Do you see a point where retrofitting governance will become meaningfully harder, where decisions made early will lock businesses into risk they can't easily unwind, or maybe even risks that they weren't [00:19:00] intending to necessarily kind of take on at this point. And if so, where do you think that point is showing up first?
CAMILLE: Yes. The example I gave earlier shows that at a small scale: unwinding just access to data across your infrastructure was a very difficult proposition for them to disentangle, for them to remove the access, 'cause once the model learned off that information, all kinds of things made that much more complicated. So as these decisions get more consequential and have an impact on an organization's ability to operate or deliver to its customers, once it impacts brand reputation or a whole host of other factors, it will be hard to disentangle. I had another client, a chief information security officer, who got a call from one of his data centers saying, we are effectively DDoSing one of [00:20:00] our clients. One of our agents is continuously sending a signal to the client, and they're frustrated. He's like, okay, we'll shut it off. And they said, we can't. They couldn't shut it off, because of the way it had been configured and the role it was playing in the data center, without it causing cascading harm. And those are the kinds of things that we will see. And unfortunately, in certain industries and maybe wholesale, that will translate into some of the worst kinds of harm, like physical harm and potentially loss of life. Imagine something like that happening in a hospital system, or something like that happening in the water sector, and it limits our ability to access fresh water. So these are the reasons that an organization should make these investments. I mean, agents and generative AI and whatever's to come [00:21:00] next have such potential to pay dividends for organizations, for all of us. But without understanding the scope, intentionally delegating authority, and learning from each new movement, whether that is an emergent behavior or the intersection of human interaction and an agent or operating system taking an intended action,
those lines will get less and less clear. No matter what the origin, without the appropriate controls in place, the ability to observe and then to tweak and continue to evolve, organizations will feel more harm than they've ever intended.
KRISTINA: I know that for quite a few folks who are probably listening today, they're nodding their head going, yes, Camille definitely agree with you. How do we get started? What do we need to know that we don't know? How do we even make it safely through this whole process? Do it well and still make money. Because like you said, most everybody's working in a [00:22:00] business and that's the role of businesses in the ecosystem. You are actually writing a book to help us get through safely to the other side. Tell us more about this book.
CAMILLE: Yes, I am writing a book because, to your point, while it is the components that I mentioned, thinking about that across your environment, when you've already got an established organizational structure and these functions that do very important work, may be a little daunting to conceptualize, it's actually not that difficult. You can evolve and mature the functions you have within your organization to adapt around a governance framework that allows you to do that continuous learning, that has adaptable detection, that thinks about accountability less from a who-is-responsible-and-liable framework and more from a how do we [00:23:00] deconstruct this as much as possible to understand what behaviors, what inputs, and what outputs we need to tweak in order to get the best possible outcomes for our organization, and have these technologies operate in the ways that we're intending.
So the book really lays out a framework for that: how an organization can mature into it. It even gives you a business case if you're worried about getting your leadership engaged, because this does truly require top-down and bottom-up engagement; you can leverage the business case chapter to have a conversation with your leadership to make this happen. And the book isn't only for risk professionals or security professionals or privacy professionals. It is for everyone who has some responsibility for agents deployed in their system. This is really focused on that evolution towards an [00:24:00] agentic system. If you're deploying generative AI right now, you should still employ this framework, because it'll have you ready as that continues to evolve. But it also is well positioned to meet you at the future, whether that's world models, or the intersection of agents and robots, or the myriad of other possibilities to come with AI. AI agents are literally a drop in the bucket of a tsunami of capability that is coming our way. And so doing this now helps you be well positioned to meet the moment and have that muscle memory on the learning and the intentional delegation of authority necessary. And this isn't human in the loop at every step, where you can only have an agent or a generative AI system take a small action. It recognizes that as this scales, you'll be using guardian agents to help you manage this, and automation [00:25:00] will be an integral part of managing this. But how do you create that compound learning such that the system learns the intentions and the limits of the organization as you build it out? So that's what this book is designed to do, and I'm really excited to get it in the hands of folks that are confused, excited, worried, and eager to get started.
KRISTINA: I hope you'll come back and announce your book to us and tell us more when it's ready for the reading stage. But I know for a lot of folks who are listening, they are all of the things you just mentioned: excited, scared, uncertain. But they're also very frustrated, and I'm going to speak for everybody now, whether that's fair or not. I do hear quite a few folks saying, you know what? We've been at this digital governance thing for decades, not years. Decades now. People are really frustrated, because yeah, it wasn't at such a large scale as AI, right? It was at the scale of social media or the [00:26:00] scale of mobile apps, but people have been fighting the good fight around, we need governance inside of our digital teams, inside of our businesses. And folks have really made their careers on this path for decades now, and sometimes it can feel like we're no further along. And here we are talking about gen AI and agents and assistant agents, and we're talking about AI, and the next leap might be XR and the immersive space and all of the quantum components that are coming. What do you say to the folks who have been at this for decades and are frustrated because they see the pattern? They've been preaching a lot of the things maybe that you're saying right now in terms of top down, bottom up, we need governance. How do we get past that frustration to really crack this nut this time and get it done right? And do you think we can get it done right, not just for AI, but for whatever's coming next?
CAMILLE: I deeply empathize with the frustration, as someone who has championed it from [00:27:00] government and has lived it in an enterprise. I recognize the challenges with truly achieving that top down, bottom up, meaning you're well resourced and you're supported and leadership is bought in and all of those things, and people are equipped and well trained. That said, I do think AI has the potential to be a moment of reckoning and a pivot that catalyzes more action from those that were a bit skeptical before. I think the harms become clearer every day, just as clear as the opportunities and the excitement, and if narrated correctly, if the business case is made, digital governance leaders have a chance to get the buy-in that they need. Now, they might not get all the resources, but that's the reality of, you know, operating within an enterprise and trying to build any kind of function. Everybody feels like [00:28:00] they need more budget. But if you can get that leadership buy-in, and the structure to train people to operate accordingly, and that cross-functional alignment, I think you can make some meaningful progress in organizations. And like I said, I think AI has the opportunity to be that catalytic moment, particularly because industry is in the lead. They recognize the need to play a position they haven't had to play organically before; they've had to play as a compliance function, like, when we meet these thresholds as set by X. Well, now you're in the driver's seat, and I think, to really unlock capability, it's becoming clearer and clearer to leaders that they've got to mitigate the risk and effectively govern this technology. At least that's my hope.
KRISTINA: I think that's a wonderful hope, and I think it's actually a wonderful call to action for all of us collectively. So Camille, thank you for joining us [00:29:00] and for continuing to push this conversation in ways that actually move the needle.
OUTRO: Thank you for joining The Power of Digital Policy. To get access to policy checklists, detailed information on policies, and other helpful resources, head over to thepowerofdigitalpolicy.com. If you get a moment, please leave a review on iTunes to help your digital colleagues find out about the podcast.