S 7 #3 Your AI strategy starts with content, not code

Guest:
Carrie Hane

Carrie Hane helps organizations move from chaos to clarity. For over 20 years, she's guided all types of organizations—from nonprofits to government agencies—to align their teams, clarify their priorities, and build systems that turn strategic vision into sustainable action.

She's known for bringing structure to complexity: facilitating alignment among competing stakeholders, designing operational frameworks that enable teams to work more effectively, and building capacity that lasts long after the engagement ends. Her approach blends strategic thinking with pragmatic execution, always focused on creating clarity that drives real impact.

As co-author of Designing Connected Content: Plan and Model Digital Products for Today and Tomorrow, Carrie has shaped how organizations think about building sustainable systems. She also teaches graduate courses in content modeling and management at FH Joanneum University of Applied Sciences in Austria, where she guides future leaders to think strategically about organizing complex information.

Carrie has spoken at conferences worldwide and worked with organizations ranging from small nonprofits to Fortune 500 companies. She holds an MA in International Affairs from The George Washington University and a BA in Political Science from Northern Michigan University.

AI is no longer just generating content—it’s retrieving, reasoning, and increasingly making decisions. And yet, many organizations are approaching it as if better prompts or better models will solve everything. In reality, AI is only as effective as the content it relies on. The more autonomous these systems become, the more they depend on content that is structured, connected, and governed with intent.

In this conversation, Kristina Podnar sits down with Carrie Hane to unpack what that actually means in practice. They explore the difference between formatting and true semantic structure, why duplication and inconsistency quietly undermine AI performance, and how weak or nonexistent content models introduce risk at scale. They also get into a question that doesn’t get asked often enough: who actually owns the content model in an organization—and what happens when no one does?

This episode is a reminder that AI doesn’t eliminate the need for structure—it exposes where it never existed. If organizations want AI to perform reliably, content can no longer be treated as a byproduct of publishing. It has to be treated as infrastructure.

Keywords:
AI strategy, structured content, content modeling, semantic structure, content governance, enterprise content, AI foundations, content as infrastructure, data governance, knowledge graphs, content operations, information architecture, AI adoption, organizational design, digital policy
Season:
7
Episode number:
3
Duration:
25:48
Date Published:
March 18, 2026

[00:00:00] INTRO: Welcome to The Power of Digital Policy, a show that helps digital marketers, online communications directors, and others throughout the organization balance out risks and opportunities created by using digital channels. Here's your host, Kristina Podnar.

[00:00:19] KRISTINA: Hi everyone. I'm glad you're here. There's enormous focus right now on generative AI: prompting, automation, scaling content creation. But if we stop and think about it, AI isn't just writing anymore, right? There's retrieval, reasoning, making decisions. Basically, AI is acting, and that raises the stakes. There's this growing belief, it seems, that AI reduces the need for structure, that machines can simply sort of figure it out, quote unquote. And I've been thinking about the fact that the opposite is really true: the more autonomous AI becomes, the more it depends on structured, well-governed content. And so as I've been mulling around this idea, I thought, you know what? Let's explore that tension, and there's nobody better out there to explore it with, I think, than Carrie Hane.

She's been helping mission-driven organizations align teams, clarify priorities, and build sustainable systems that turn vision into action. And what I think is really interesting, Carrie, is that you've had the staying power, because what you do has been relevant, and I don't wanna date you here, but it's been relevant for decades. And I think there's not a better time to really be talking about this than now. So thanks for jumping into the fray with us.

[00:01:34] CARRIE: Yeah, I'm glad to be here. And while it does date us to talk about the decades of work we have, I think it's important, because we have seen a lot of changes and a lot of evolution over the last few decades. I don't think there's been a revolution since Tim Berners-Lee actually invented the World Wide Web. That was a revolution, and it's evolved since then. And I think AI is part of that evolution more than a revolution, although its effects are potentially revolutionary.

[00:02:15] KRISTINA: And so when you look at the use of Gen AI specifically, and the fact that organizations are now managing content using Gen AI, in that context, what worries you most?

[00:02:27] CARRIE: I think that the decisions humans make are taken out of the loop. Yeah, a computer can make decisions, but it doesn't have all the background we have in our heads. It doesn't necessarily know how to relate things to each other. It knows facts. It can get the facts. It can read what you give it, but it's not necessarily asking the right questions and making the right decisions about it. This is why we talk so much about prompting and things like that. If you give it the right prompt, you can get good information. Artificial intelligence really isn't human intelligence. It's artificial. And we need to remember that computers are good for some things and humans are good for some things, and we kind of need to stay in our lanes.

[00:03:24] KRISTINA: That's a great segue into this other piece that I've been thinking about, which is Gen AI: obviously, write a prompt, get a response. But when you move from AI that writes to AI that also retrieves, that reasons, that acts, which is what you're talking about, what changes about the importance of structured content?

[00:03:43] CARRIE: Well, I think structured content gives explicit directions. It's the semantics, so we're talking about semantic structure here, not formatting structure. And I think this gets confused a lot. Formatting structure is: do we have the right headings in the right order? Do we have lists? Have we broken things up? Have we given it, for ease of reference, the right HTML tags? Some tags are semantic, like this is the body, or this is a title. But a heading is just a heading; a list item is just a list item. With semantic structure, you're saying this is a summary, this is a name, this is a date. And oh, this event is related to this other event, or to this person. You can give those relationships very explicitly, and eventually, without getting too much into the technology, build a graph with all the nodes and relationships between them so that the computer can make the right connections, and not guesses or closest matches. So when you create semantically structured content, you're defining what matters and giving it explicit meaning that the computers can then reason with, rather than making guesses, even if they're educated guesses. You're giving it instructions when you use this semantic structured content.
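
To make Carrie's distinction concrete, here is a minimal sketch of what explicit semantic structure can look like, using her summary/name/date/related-event example. The TypeScript shapes and field names below are illustrative assumptions, not anything prescribed in the episode; production models typically follow standards like schema.org, which comes up later in the conversation.

```typescript
// Formatting structure: the machine sees only generic markup and must guess meaning.
const formattedOnly = "<h2>Annual Summit</h2><p>June 3, 2026, hosted by Jane Doe</p>";

// Semantic structure: every value is labeled with what it means,
// and relationships between entities are explicit graph edges, not implications.
interface Person {
  id: string;
  name: string;
}

interface ContentEvent {
  id: string;
  name: string;            // "this is a name"
  summary: string;         // "this is a summary"
  startDate: string;       // "this is a date" (ISO 8601)
  host: Person["id"];      // explicit relationship to a Person node
  relatedEvents: string[]; // explicit event-to-event links
}

const jane: Person = { id: "person-1", name: "Jane Doe" };

const summit: ContentEvent = {
  id: "event-1",
  name: "Annual Summit",
  summary: "Our flagship gathering for members and partners.",
  startDate: "2026-06-03",
  host: jane.id,
  relatedEvents: ["event-2"], // e.g., last year's summit
};
```

With the formatted string, a system can only infer what "June 3, 2026" or "Jane Doe" refers to; with the typed structure, the date, the host, and the related events are unambiguous nodes and edges it can traverse.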

[00:05:24] KRISTINA: So what are the things that you see breaking first when that architecture is either weak or doesn't exist?

[00:05:32] CARRIE: I think accuracy, whether you're generating something or getting answers, which is a form of generation. When you don't have this semantic structure for your content, it raises the risk of duplication and redundancy, because people just create whatever they want, wherever they want. So then there's proliferation: the same content in multiple systems. And then you have implied intent rather than explicit meaning.

And together those things can clog up the machines. So for example, if you have multiple pieces of content that give different answers to the same question, how does the machine know which is correct? If one system refers to a product by one name and another uses maybe an acronym or a nickname, how does the machine know that those are the same thing? It might say we have 10 products when you really have three and you've just called each of them by three different names. How does a bot know things are related if we don't tell it? So it has to make things up, because it will provide you an answer. It will never say, I don't know. And so the more we can feed it our knowledge and make that explicit, the better, and the more accurate it can be. And there have been a few studies, case studies maybe, where teams have compared structured content versus unstructured content, the same content or data but structured and unstructured, run through the same tests, and the results are exponentially better with structured content than without it, because it can be more accurate.
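
One concrete way to "tell the machine" that differently named things are the same is to record canonical entities with explicit aliases, so a retrieval step resolves an acronym or nickname to one product instead of counting it as a new one. This is a hypothetical sketch; the product names and the Product shape are invented for illustration:

```typescript
// A canonical record with explicit aliases, so "GDS", "Global Data Suite",
// and "the Suite" resolve to one entity instead of three different products.
interface Product {
  id: string;
  canonicalName: string;
  aliases: string[];
}

const catalog: Product[] = [
  { id: "prod-1", canonicalName: "Global Data Suite", aliases: ["GDS", "the Suite"] },
];

// Resolve any mention to its canonical entity before counting or answering.
function resolveProduct(mention: string, products: Product[]): Product | undefined {
  const m = mention.trim().toLowerCase();
  return products.find(
    (p) =>
      p.canonicalName.toLowerCase() === m ||
      p.aliases.some((a) => a.toLowerCase() === m)
  );
}

console.log(resolveProduct("GDS", catalog)?.canonicalName); // "Global Data Suite"
```

Without the alias list, a system counting distinct product names in the content would report three products where there is one.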

[00:07:30] KRISTINA: If we step back for a minute and think about this big conversation that everybody's having right now around responsible AI, what I'm wondering is, how can you, or can you even, have responsible AI without structured content, based on what you just said?

[00:07:46] CARRIE: I'm not sure you can. The other part of it you mentioned is governance: it has to be well structured and well governed. You could create all the structured content and not govern it, and then you still have duplicates and outdated information. So you have to have the right corpus of content, what we're calling the corpus. And to get that right, you have to have governance. And it lowers risks. It's just less content to go through, so it takes less power to store it and retrieve it. There's a lot that goes into being responsible with AI, but I think you can't just layer it on top. To me, it's irresponsible to just layer some new AI tools on top of ungoverned, unstructured content, because it's garbage in, garbage out. What are you getting out of just adding a layer if you're not fixing, if you're not governing, that underlying pool of content?

[00:08:53] KRISTINA: So it's interesting, I was actually talking to a colleague this morning and I mentioned that I was going to be talking to you, and she said, oh yeah, content modeling is a design decision. And I thought about that for a moment, and I thought about all of the things that you talk about, or that you've mentioned to me previously, around content modeling. And this kinda goes back to your point on governance: it's not a design decision anymore, it seems, because it's about what fields exist and what has to be filled in or not, what gets translated and what doesn't, what gets surfaced, as you just said, and what doesn't, what name gets used and what doesn't. So at what point does content modeling stop being this design decision that we used to think about for so many years, when we just had flat websites? And at what point does it really become a policy decision?

[00:09:39] CARRIE: I've never thought of content modeling as a design decision, since my version of it has always been semantic, and that's never been at the design layer. It feeds into the design layer and design systems, but for me, in my world, it was never a design decision. But at what point does it become a policy decision? I think when you decide to make one, because it codifies your content, and your content is your business. Choosing to do a semantic content model, and how you do it, depends on what type of content you're creating; there are different standards for models. You could use schema.org, you could use JSON Schema. There are different schemas that you can use, and that's part of your decisioning. But just choosing to do that changes a lot about how your organization operates, because now you're creating the model by which everybody creates content. And that's a huge transformation to be able to do. So it is a policy decision.
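
For readers unfamiliar with the two standards Carrie names: schema.org provides a shared vocabulary that machines already understand, often embedded as JSON-LD, while JSON Schema defines validation rules for your own content types. Here is a minimal sketch of each; the specific fields and constraints below are illustrative choices, not recommendations from the episode:

```typescript
// schema.org expressed as JSON-LD: a shared vocabulary consuming systems already know.
const articleJsonLd = {
  "@context": "https://schema.org",
  "@type": "Article",
  headline: "Your AI strategy starts with content, not code",
  datePublished: "2026-03-18",
  author: { "@type": "Person", name: "Carrie Hane" },
};

// JSON Schema: validation rules your own systems enforce when content is created.
const articleSchema = {
  $schema: "https://json-schema.org/draft/2020-12/schema",
  type: "object",
  required: ["headline", "datePublished", "author"],
  properties: {
    headline: { type: "string", maxLength: 110 },
    datePublished: { type: "string", format: "date" },
    author: { type: "string" },
  },
};
```

The intent differs: JSON-LD tells the outside world what your content means; JSON Schema tells your own teams and systems what a valid piece of that content must contain.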

[00:10:54] KRISTINA: So when you work with organizations, who do you most frequently see as owning the content model?

[00:11:00] CARRIE: Well, it's often a web or marketing team, because even though it's not really a design decision, it has still been seen as part of the website, a website decision. And whether or not it gets proliferated out or adopted more widely depends on the organization: how big they are, how mature they are, what kind of leadership they have to make those kinds of decisions. But it can also be a technical documentation team, because they're maybe already working in data, and now they're expanding to domain modeling to get the graph relationships, and not just the document relationships. Those are mainly where I've seen it. But I hope that changes. I think AI is a lens through which people are reevaluating a lot of their business, their operations, their content, their online presence, from many different angles: AI as a consumer of content, and as a tool to help make the business more efficient.

[00:12:26] KRISTINA: For folks who don't have a clear single owner of their content model, or maybe nobody really owns the content model in the organization, what happens then? Or what should they be thinking about right now, in terms of, hey, at some point, if not already, we're going to be adopting various modes of AI, whether it's Gen AI, whether it's agentic AI, whatever's coming down the pike in your organization. So what really happens when no one owns the content model in an enterprise?

[00:12:57] CARRIE: Then it doesn't exist. Or it might as well not exist, because one person could create a model, and then what? If only they're using it, it's not an organizational content model; it's just for that one person. So I think it comes down to who's owning the different AI systems. Unfortunately, these kinds of things are never cut and dried, right? It's not like, oh, it's always the CIO, or it's always the CMO, or whoever else might play a part in senior leadership. So whoever is being given responsibility or accountability for AI, that person should also own the content model, and make sure they're connected. 'Cause you're already doing the transformation if you're moving to responsible AI use. So make that part of your transformation.

[00:13:56] KRISTINA: If I think about this more broadly, and I think about the fact that AI systems are reasoning over a really broad set of content, you said it's really our knowledge, enterprise knowledge: are content models effectively becoming system models?

[00:14:11] CARRIE: I don't think so. The way I think about content models is that they model the content for an organization, and that model can then be used in multiple systems, so you're working from the same framework. But system A would have one data model based on what it needs to do, like a CRM or an association management system where you're managing customers. They're gonna use only parts of the content model in there, and they need to manage customers a certain way. But your web CMS needs different content, and more than the CRM, and yet they need to somehow connect. If you're talking about members or certain types of customers or sponsorships, those need to have the same structure in both systems so they can talk to each other. An e-commerce system needs to have the same structure for your product as the web CMS. So all these systems also link together. And to me, the content model is what connects them, so that your content model is your organization, and then each system manages the parts it needs to manage.
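
A hypothetical sketch of that connecting role: one shared type that both a CRM and a web CMS import, each persisting only the slice it needs while agreeing on structure and on the identifier that links records. All names here are invented for illustration:

```typescript
// Shared content model: one source of truth for structure and identifiers.
interface Member {
  memberId: string;   // shared ID that lets systems talk to each other
  name: string;
  tier: "basic" | "sponsor" | "partner";
  joinedDate: string; // ISO 8601
}

// The CRM manages relationship data, using part of the shared model plus its own fields.
interface CrmMemberRecord extends Pick<Member, "memberId" | "name" | "tier"> {
  accountOwner: string;
  lastContacted: string;
}

// The web CMS publishes profiles, using another slice plus its own fields.
interface CmsMemberProfile extends Pick<Member, "memberId" | "name" | "joinedDate"> {
  bio: string;
  profileUrl: string;
}

// Because both records agree on the shared fields and identifier,
// matching them is a lookup, not a guess.
function sameMember(crm: CrmMemberRecord, cms: CmsMemberProfile): boolean {
  return crm.memberId === cms.memberId;
}
```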

[00:15:35] KRISTINA: Something that actually irks me a little bit is that I still see people talking about content strategy and data governance and AI oversight as separate, unconnected things. And it's actually concerning to me, because it seems like something that does need to be more integrated.

[00:15:50] CARRIE: Yeah, well, I think so many people think of content strategy as content marketing strategy, and that's why it's separate. Or they think of content strategy as web content strategy or UX content strategy, which is just focused on how do we make our website have the right content, rather than an enterprise-level strategy. To me, a content strategy, which people don't really have, though they do have lots of parts of one, sits just below a business strategy. It's one of the ways you fulfill your business objectives: by having certain types of content based on what you want to achieve. So I think that's why it's considered separate, 'cause it tends to be more tactical than strategic, even though it has strategy in its name, which makes people think they have something more strategic when they don't. And I hope we can use this kind of chaotic time with AI to change some of the notions that are widely accepted, like that content strategy is just content marketing or marketing strategy, and elevate it so that it's on par with AI strategy or a governance strategy, or a piece of them. It's all related, and all of those should be working towards the same goals with different objectives, because each area does something different, not better, than the others.

[00:17:46] KRISTINA: Let me ask you about some strong opinions and reflections that I've been hearing recently: where is the industry getting it wrong about AI and structured content at the moment?

[00:17:56] CARRIE: That machines will just learn, that we can just keep teaching them until they get it right. First of all, that sometimes works, but it takes longer. It's not always accurate. It takes a lot more human power to continually prompt and refine, prompt and refine, prompt and refine. I think that's one area. And the other is thinking that they already have good content, because there has never been an executive-level conversation about what content we have and what it does for us. It's down at the lower levels, where people just create content. Sometimes it's their job, sometimes it's not. And it's very siloed. So again, we get back to that governance, and the duplication and proliferation of content everywhere, because it hasn't been given enterprise-level attention in the past. So if we haven't done that before, how is AI gonna solve that? I don't know.

[00:19:21] KRISTINA: Yeah, I'm smirking, because I mentioned this to you as well, but I recently saw somebody just email HIPAA information in clear text, and it's an actual practice. And I don't know, for anybody listening out there, I still think about day camps. You remember the days, Carrie, of day camps for your kids, where they would have you fill out all these forms and send them in via email, and they had all kinds of personal information? And it seems to me like we really haven't moved that far beyond that. And here we are in the age of AI, and it's a little bit scary, right? It seems like a lot of blind spots still, because we haven't matured in some fundamental ways.

[00:19:56] CARRIE: Yeah. And again, this is a policy and human decision, but I have heard stories about people being told to experiment with AI. So what do they do? They get on ChatGPT and use it for performance evaluations, and they put all this information about their employees, or the people that they manage, into a global thing that's taking their data. They're doing it because no one has told them how this works, and that this is connected to all the other data that's already in there. So I think that is a training issue. I said a year ago, when we were just starting to think about what experimenting with AI looks like and what guardrails organizations have to give their employees, that some of these scary stories should become training. Just like, if you're in a corporation or a government, or you're somebody who works with the government, you've gotta do all kinds of cybersecurity and data security training, all of these different security trainings. I'm not working for a company right now, so I don't know if they've set up AI training, but it needs to be the same thing: these are things you get every year, and if you haven't already seen it, you need to pay attention and pass those quizzes at the end. And then of course that gets all kinds of other security and risk layers on top of it that truly are infrastructure, IT infrastructure. But I think people don't understand how AI works, so they just open up an AI and start typing, and they don't know what happens behind the scenes. We need to put more guardrails on things. People don't have to understand how AI works, but they have to know what they can do with it. So we can't just have "experiment and tell us what you learn" without the guardrails.

[00:22:07] KRISTINA: So when you say that, I tend not to be highly optimistic, maybe just because we're at the start of this journey, or because of what I'm seeing out there. And it feels a little bit like the very early days of the web, which I know dates me severely, but it really does feel like a little bit of a whoops, and I hope we don't make too many oopsies along the way. What keeps you optimistic?

[00:22:29] CARRIE: I'm not super optimistic about this. I don't know how seriously most organizations are going to take this until there's a big oops. That's what gets people to pay attention: when there's a big oops. You know, for us in the digital experience space, the rollout of healthcare.gov was a huge one. And for many years, living in the Washington, DC area and being in the digital experience community, that was top of mind: we just don't wanna be healthcare.gov. And that kind of kept people from taking on things that were too big for them. And then that kind of got forgotten, and you have to have another big oops for people to stop what they're doing and pay attention. And I don't think we've had that. We've had small oopses with AI, but I don't think there's been any big ones yet that are making people stop and slow down. So, I guess there are people out there trying to use AI responsibly to do things that matter. But I think they're fewer than the people just running with it without understanding the implications. And that's really what scares me, because there are huge implications for using AI for things it shouldn't be used for, for things that don't matter. And it's not just a company or even a government that puts out the wrong information. I mean, it's ecological and ethical, so it's a big deal. And we're running fast into the woods, and we don't know what's in the woods waiting for us.

[00:24:24] KRISTINA: That's a great way to end our conversation, Carrie. And I think you're really reinforcing something that I believe very deeply, which is that AI does not eliminate the need for structure; it really exposes where structure maybe never even existed. Which is where the oopsies and the scary part of what's in the forest come at us. So if organizations want to reason and retrieve and act responsibly, they need to understand their structured content, make sure it's not just a publishing tactic, make sure they're governing correctly, understand who's doing what in what way, and be a little bit more deliberate. Not slow, but deliberate, because it's governance infrastructure. That's what I'm walking away from our conversation thinking. So thanks for this conversation, Carrie.

[00:25:12] OUTRO: Thank you for joining The Power of Digital Policy. To get access to policy checklists, detailed information on policies, and other helpful resources, head over to powerofdigitalpolicy.com. If you get a moment, please leave a review on iTunes to help your digital colleagues find out about the podcast.
