S2 #15 AI, Natural Language and The Holy Grail of Translation

Vasco Pedro

Vasco Pedro is a co-founder and chief executive officer of Unbabel, a company that removes language barriers by blending artificial intelligence with real-time, human translations. A serial entrepreneur, Vasco has led Unbabel since 2013, taking it through Y Combinator and raising a total of $91.2 million in funding.

Translation has come a long way in the last few years, but there’s still a lot of work to do. Automated website translations are horrible, and AI’s ability to understand dialect and cultural nuances is still far from perfect. But there are ways to leverage machine learning as part of the translation process. When coupled with the right human interaction, we not only get lower costs, but translation at scale. In this episode, Unbabel’s CEO Vasco Pedro talks all things AI, natural language, and his company’s search for the holy grail of translation.

AI, artificial intelligence, natural language, machine learning, ML, translation, localization, linguistic nuances
Episode number:
Date Published:
September 23, 2021

[00:00:00] INTRO: Welcome to the Power of Digital Policy, a show that helps digital marketers, online communications directors, and others throughout the organization balance out risks and opportunities created by using digital channels. Here's your host, Kristina Podnar.

[00:00:17] Kristina Podnar, host: Hello everyone. Welcome back again to the Power of Digital Policy. Today we have with us Vasco Pedro, the co-founder and chief executive officer of Unbabel, a company that removes language barriers by blending artificial intelligence with real-time human translations. Vasco is a serial entrepreneur and has led Unbabel since 2013, taking it through Y Combinator and raising a total of over $90 million in funding. Vasco, welcome.

[00:00:45] Vasco Pedro, guest: Thank you. It's great to be here.

[00:00:47] KRISTINA: I'm excited to talk to you today because every single thing that we do in policy ultimately touches, it seems, some issue around translation, mainly because we're dealing either with branding or with taxonomies and metadata. So, no matter which way I look at this, there's always an aspect of translation and localization, and it's really exciting to talk to you. I want to dive in quickly and start talking about translation itself because, on the surface, it seems like it should be straightforward, but it's anything but simple. I'm sure you can tell us more about that; maybe tell us about the complexities, the challenges, and also the mistakes that we continually see, whether it's in translation for local marketing needs or translation for soldiers on the ground. It seems like the translation problem is almost insurmountable, but is that really true?

[00:01:40] VASCO: Well, Kristina, that's a good question. As you said, it seems kind of straightforward, right? All of this is based on a very simple premise: people speak different languages, and that's never going to change. We're never going to all speak one language. There seems to be something intrinsic to human beings and the way they organize themselves; language is such a strong component of culture and of group identity. It's one of the reasons people keep coming up with new languages, why teenagers almost reinvent the language as they grow up, as a way of distinguishing themselves and creating unity within a specific group. So, this is never going to go away. And the problem is that as companies scale and start growing into new markets, they face this problem, right? That happens across all enterprises. And it's not just companies; as you mentioned, armies, and really any kind of human interaction that spans borders and cultures, typically end up facing language challenges. It's something we've been doing since pretty much the beginning of humanity. At some point, someone was trading something with a different village and had to figure out how to communicate. So translation is probably as old as having different human languages. I think there's a seemingly straightforward way of doing it that hides a very complex aspect, which is that we do it very intuitively. Language is such a core aspect of how we think; it's integral to our intelligence. My thesis is that language is the scaffolding of thought. Think of your brain as a highly parallel machine, right?
Lots of things are happening at the same time: you're processing sound cues, visual input, maybe managing your bodily functions, thinking about things, all in the background. And at some point it crystallizes into the realization that it is you, right? I think language is the scaffolding that enables that realization, that stream of consciousness. It's so integral to us that we do it well, and so when we translate, it seems like a simple process. I've learned two languages; I hear one thing, and I can express it in the other language. It seems straightforward, but when you try to decompose it and create a scalable process for doing it at scale, with quality, it is hard. We used to have localization; for the most part, that was the area that handled this. It wasn't just translation; it was dealing with cultural differences, with things like currencies and formatting, but a big chunk of it was translation. And because it was very human-driven, it was also hiding the complexity underneath. You ended up seeing that only sophisticated, large companies had the resources to manage that complexity at scale. What's been happening now is that AI is starting to really change things. When we started Unbabel in 2013, we were at the cusp where machine translation was becoming useful to humans, in the sense that if you put the output of a machine translation engine in front of a translator, they wouldn't just believe everything, but they were like, okay, this is starting to become interesting, or at least somewhat useful. And it has come along; there's been a lot of progress. We have neural machine translation; deep learning came along and changed a lot of things.
So now AI has a bigger impact, but it's not sufficient. We still need humans very much involved in the process. And so now the shift has been: okay, we have AI doing part of it, but part of it needs to be done by humans. How do we create this hybrid model? How do you use AI to augment the capacity of humans, so you have intelligence augmentation, and vice versa, how do you have humans improve the AI, with a strong feedback loop in this hybrid model? What we see now is that in some areas it's still very much the old way of doing things. Take something as seemingly simple as a website, for example. It seems simple because most companies start with it, but it's probably one of the most complex things to translate. Not just to translate once, which is already hard enough, but to keep updated in multiple languages, because a website is a combination of so many different things. You have content in code, content in databases, content coming from social networks, your blog, and all of these platforms and content management systems need to be integrated and translated in a way that keeps up in real time. Doing this for multiple languages is cumbersome, so it ends up being a manual process and a bit of a headache for the most part. But then you have things that are now much more automated, which is where we started: identifying problems within the enterprise and asking, okay, what about customer service? What people were doing before was using human translation to enable multilingual customer support. So we started there, and we said, okay, here's an area where AI can have a huge impact. Humans are still required.
You still need to really work on that feedback loop, but it can create a new way of servicing customers across the globe and bridge something that is sometimes referred to as the customer service gap: if you were born in a country where English is not the primary language, you tend to just not have access to the same level of customer service as other countries. Obviously, that correlates to some extent with developing versus more developed countries, but it's not even that. If you look at some European countries, you'll find that the level of customer service, the accessibility in terms of hours and days of the week, et cetera, is typically less than in the US. So the point is, there is a very deep complexity in how you actually get to the point where you translate things at the quality we expect, because while it's easy for us to translate, it's also easy for us to recognize when things aren't great; we're so sensitive to language. Getting to that point is complex. What we see now, which is one of the parts I'm very excited about, is the creation of a new category that we're calling language operations, which is the evolution of what used to be localization. Localization was a more human-driven area focused primarily on marketing content and product; that's typically where you find localization teams. The evolution, from a strategic perspective, is: how do we think of this as a stack, part of our infrastructure, a language stack, so that as we expand to different markets we can really scale language-wise across the entire business, not just marketing and product, but also customer service and sales and other things.

[00:07:58] KRISTINA: You have my head swirling with all these different questions, and I'm sure it's very interesting to our listeners as well. One of the things I'm thinking about: you mentioned customer service as an excellent area, or I guess a good place to start, in terms of translations. What other parts of an enterprise are ripe? Obviously, marketing seems to me like a place that really needs translation and localization, but I'm also thinking about product. One of the interesting things in my household is that even though English isn't my first language, I'm much more fluent in it than my husband, who has a thick accent. So it's funny to watch us interacting with Siri, for example; I have a much easier time of it than he does. I'd probably have a harder time in a different language that I'm not so fluent in. But I think to myself, wow, all of these areas have the potential to improve our lives, how we work, et cetera. So what are the ripest areas, the ripest businesses at this moment, that need to be looking to AI as a solution for translation and localization?

[00:08:56] VASCO: I totally identify with your experience with Siri, because in people's minds speech recognition at this point is a solved problem. Look at Siri: we can do good speech recognition; it works. But what you see in the real world is that it degrades very rapidly when people have accents, when there are multiple speakers, when there's noise in the background, when you have real-world conditions. And once it degrades, it's a very frustrating experience not being understood. I think humans really dislike not being understood. There's something about it that's a bit like the connection with another person: when someone gets you, you're like, yeah, this is great, and when someone doesn't understand what you're saying, if someone can't hear very well, for example, we get frustrated very rapidly. When we interact with automated customer support systems, like when you call a number and there's an automated system, people get frustrated very rapidly. As soon as we feel we're not being understood by a machine, we have no patience; we're like, we're done with this, give me a human, because there are no good ways of recovering context in dialogue. So I think that's an issue. To get back to your question in terms of areas: customer service is one. There are natural areas that already existed, like product, and then you start finding sub-specializations in verticals that are particularly good. If you look at the path toward digitalization, the further along you are as a vertical or as a market, the easier it is to adopt technology in general, and translation is no exception. So what we see is that internet-first companies are prime for it.
We see it in e-commerce; travel companies nowadays are very digitally focused; gaming, obviously, is very digital as well; fintech. So even though the areas are very different, one common thread is companies that either start already at an advanced point of the digital journey or are very quickly getting there. And of course, companies like Microsoft and Google and Facebook are digital natives, so anything digital-native is a no-brainer. For companies that aren't yet using technology to automate a lot of their language operations, I think there's an opportunity there. Part of that has been that until now, we really didn't have tech-first, AI-first language operations platforms that made all this simple. Very quickly you'd get to a level of complexity where you need to create a team, get a bunch of tools, create a bunch of processes, and that starts delaying things; there was a bit of friction there that I think is also getting much better over time. We'll see Unbabel, and I'm sure other companies, coming out with platforms that enable the management of those processes inside the company in a more efficient way. We also see a lot in health and legal as verticals; they're kind of prime for disruption. Surprisingly, in legal there is very strong investment in translation, both from an e-discovery perspective and because business is more and more global: from small things like people buying houses in different markets, to people moving to spend part of their year in one place or another, to companies that span multiple countries and then, whenever there's litigation, end up requiring translation across languages. So it's a recurring thing with an immediate benefit there.
In health, we see the same, because there's a global process of digitalization of health. You see it in Europe; we're maybe getting to a point where you'd be able to outsource tests, like the evaluation of clinical tests and x-rays, to countries that are always on. You go to an emergency room, and even if there isn't an orthopedic surgeon on call locally, there's one in India who can read whatever exam you've just done. That starts to require not just translation, but the ability to enable almost real-time communication: this constant updating and maintaining of information in multiple languages. Some of it is direct communication, like emails and chat; other content is more long-lived, like FAQs and forums. But I think legal, health, and then gaming, e-commerce, travel, and tech are where we see a lot of evolution.

[00:13:22] KRISTINA: It sounds like highly structured areas, anything that is very structured and isn't as volatile, would be sort of the lowest-hanging fruit to address. But I'm wondering how big a leap it is to get to the point where we truly see AI translation without human intervention, or with minimal intervention. Is that a real thing? Even just thinking about the day-to-day changes I feel like I'm seeing: things like changes in gender conventions, like assuming a nurse is female when the nurse could be male, or maybe the nurse wants to be referred to by the pronoun they instead of he or she. How do you account for those almost real-time changes when you're dealing with an algorithm versus an individual who understands the context?

[00:14:09] VASCO: That's one of the good points, right? We believe that we're still far from having a fully automated solution; humans are a very, very important part of this formula. Now, the answer is a bit more nuanced, because what's really changing is the role of humans and the amount of effort required. For example, if you look at chat nowadays, what we do is heavy domain adaptation of different AI systems, and then we deploy that to production, because the real-time nature means you're using AI to enable you to communicate. Humans are still involved, continuously correcting things after the fact, so humans are still there, but that's probably the most automated scenario. Then you move to something like email. Email is still communication, and what you see is that about 50% of emails still require a human to go in before delivering the translation, to correct any errors you might find. Those corrections might be very minimal, maybe just some tweaks. We're able to identify, at the sentence level, which sentences we're very sure are fine and which ones we're not so sure about, where we need a human to come in. Then you go beyond that into FAQs and other more long-form content, and right now a hundred percent of it needs to go through a human. A lot of what humans do there is the more typical translate-and-review part of the process that you find at normal translation LSPs. And if you go beyond that to marketing, you start adding layers of transcreation, creativity, and cultural adaptation. The more of that, the further away we are from fully automated systems, for now. And then there's another variable, which is the language pair. If you're translating from English to Spanish, there's been so much research and data on English to Spanish.
That is probably the closest you'll get to state-of-the-art. But if you go Turkish to Italian, there aren't a lot of datasets; the underlying NLP part of the AI is not nearly as developed as for English to Spanish. Even the underlying stack to process the language becomes a hurdle to getting to that level. So what you see is these two variables, content type and language pair, driving how much human effort is required to deliver a certain level of quality, a certain speed, et cetera. Our vision is always: how do we keep reducing the amount of human effort required? Because the other side of things is that, in aggregate, the world is still translating a very, very small part of the content that needs to be translated, and part of that is because there are not nearly enough people to translate all the content that needs translating. The less effort needed per unit of translation, the more humans can do; you're amplifying them, so you're going to translate more. What we want to get to is a point where you're translating everything you need, with 95% of the effort lifted by AI in different forms. But the sheer volume of things will mean that humans are still very much involved, and you'll need more humans than you have right now to deal with it, even at that scale.

[00:17:21] KRISTINA: I'm thinking about what happens if we're outsourcing 95% of our translation to AI, which sounds great; to most organizations out there, that's almost nirvana. What does that look like from a QA perspective, especially for things that are more real-time, like customer service chat? I'm thinking about it mostly because you don't want to have blunders. There's this great opportunity in having AI help you with translation, but with that opportunity comes great risk: the risk of having something translated wrong, of upsetting a customer, or perhaps an even larger, more public faux pas. How trustworthy are the algorithms at this point? How much trust should we be putting into machine learning? And should we be worrying about things like bias, or is that really why we have the additional 5% of human involvement?

[00:18:12] VASCO: We should always be worried about bias, because bias is not a machine-specific issue, right? In fact, a lot of humans are biased, so training your customer service agents is super key. We also sometimes have unrealistic expectations that machines will be perfect when humans aren't. Right now, at the state of the art we have, people are going to make blunders. They're sometimes not going to be empathetic with customers, or English might not be their first language, so they're speaking a second language, or they're dealing with someone from a different culture, all these things. So if the expectation is that machines are only worth it when they're perfect, it will take a long time to get there, if it's possible at all, because errors occur easily. Now, there's another piece of technology here; Unbabel is a pioneer in this, and as far as I know, we're the only ones with this kind of technology in production, which is quality estimation. Quality estimation is particularly relevant because it's a neural network, in our case deep-learning based, and what it does is try to identify whether we believe the output of the machine translation is good enough, or whether we need a human involved. It does that in real time, and you can tweak the thresholds you're looking at, so you can say, actually, I want to be super conservative and only let the machine deliver when we're pretty sure this is absolutely perfect.
This is one example of handover systems, and I think that's the key. The handover system is: how do I get a machine, for example in customer service, to handle those initial interactions, which a lot of the time are repetitive and straightforward, but also have a very graceful handover to a human whenever it feels like, hey, I'm not able to handle this anymore, or I'm not very confident in what I'm saying? It's almost like using AI as triage to offload simple tasks. AI will have more impact on commoditized, repetitive tasks that are done by humans right now; the more commoditized and the more repetitive, the better. That's where you see a lot of the impact, and then you very much want humans to handle the corner cases, the exceptions, the nuance. Then the question is: how do I do the handover? How do I figure out when I need to bring in a human, in order to maximize the human's capacity to deal with more things while at the same time maximizing the AI's capacity to deal with the basic stuff? It's an area that is still early. We see a lot of benefits from it already, but we also see how much space there is ahead of us to evolve in it. And I would expect you'll see the same in dialogue systems, chatbots, things like that.
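The threshold-based handover Vasco describes can be sketched minimally as follows. This is an illustrative sketch, not Unbabel's implementation: the function names and the 0.9 default threshold are assumptions, and the quality-estimation model itself (a trained neural network in their case) is stubbed here as a numeric confidence input.

```python
def route(source: str, mt_output: str, quality_estimate: float,
          threshold: float = 0.9) -> str:
    """Decide whether a machine translation can be delivered as-is
    or must be handed over to a human translator.

    quality_estimate stands in for the output of a trained QE model
    (a confidence in [0, 1]); the threshold is tunable, so a
    conservative customer can demand near-certainty before skipping
    human review.
    """
    if quality_estimate >= threshold:
        return "deliver"        # confident: ship the MT output directly
    return "human_review"       # not confident: graceful handover to a human

print(route("Hola, ¿cómo estás?", "Hello, how are you?", 0.97))   # deliver
print(route("No me hallo aquí.", "I do not find myself here.", 0.55))  # human_review
```

Raising the threshold trades automation for safety, which is exactly the "super conservative" knob described above.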

[00:20:50] KRISTINA: Is there a need, do you think, for organizations to disclose when a customer or consumer is interacting with an AI versus a human?

[00:20:59] VASCO: That's a very good and debatable question; it's a strong debate that we have. Some of our customers absolutely don't disclose it; we're not allowed to even say that they use Unbabel. Part of that is not ill intent; it's that they feel their goal is to always deliver the same level of service to their customers, and if their customers aren't happy, it doesn't matter whether it's a machine or not, right? So they basically use this to make sure people are measuring against what they expect from humans, rather than saying, oh, I'm talking to a machine, so it's okay if it's worse. That's an interesting approach, while others are very transparent: okay, you're now talking to a machine. What we find is that nowadays humans have interacted with machines enough that they give a bit more leniency and a bit more space, but it comes with a bit more distrust: if anything goes wrong, humans immediately say, okay, I don't trust it anymore, I need to talk to a human. So we've seen both cases, and I think there are good arguments on both sides of whether you should disclose.

[00:22:02] KRISTINA: That makes a lot of sense. I do have to ask, because it makes me wonder: when we're talking about humans versus machines, what is the percentage of accuracy, and what is the certification level, of a machine versus a human? You mentioned earlier that we expect machines to be perfect even though humans are not. One of the interesting challenges in the translation world is certification around translation and translators. We have the American translator certification; the US Army and the government tend to put native speakers into categories, cat one, cat two, cat three, depending on how fluent you are. I'm curious whether you deal with that at all, or whether you have a position on whether it's possible to use AI for certifying a universal standard of translation quality?

[00:22:56] VASCO: I have some ideas on that; I have an opinion. You're tapping into one of the biggest problems in the translation industry in general, which is what consistent quality is, right? How do we define it? How do we identify it, and how do we agree on a metric for it? It hasn't been easy at all. When companies are evaluating the quality of a translation, a lot of the time we're still at the stage where they go, oh, so-and-so speaks that language, I'm going to show this to them, and they'll tell me whether it's good or not. That creates a very subjective evaluation, depending on an individual's perception of quality. Some things are obviously more clear-cut than others, but very quickly you get into nuance, where maybe someone from a particular region of a country says, oh, we don't say it this way, we'd say it this other way, but the other way is fine in a different area of the country. You get into a lot of arguments that are hard to get out of. So, a few years ago, some folks in the translation industry started trying to develop something a bit more objective, which is now becoming a bit more of a standard: MQM. MQM is a metric based on taking a source and a translation and having a linguist go through them, manually annotating errors: the types of errors and the severity of the errors. There's a score associated with every error and its severity, so at the end you get a score, right? That score gives you the quality of the document, and you can then drill down to understand what it's based on, what kinds of errors, et cetera.
A few years ago, we started developing a very scalable pipeline for doing MQM annotation at scale, which we use internally to have an objective view of the quality we're delivering to customers. We can send annotation batches at will; typically, a percentage of content automatically goes through this process. There are a lot of benefits for us in this, because we're generating a lot of annotation data, and we can then use that as a base to extrapolate, trying to score things we've never seen before by looking at the types of errors we've seen elsewhere. That starts giving you a good picture of quality, and an objective one. The struggle has really been driving this discussion toward an objective perspective where we can agree: here's an error; is it an error or not, and what type? Based on that, we can start saying, okay, here's the quality we're delivering. On top of that, we've now also launched something called the Customer Utility Analysis, because a lot of the time, when we talked to folks about MQM, it was a bit overwhelming. The feedback I sometimes got was: it's great, but it feels like I need a Ph.D. to understand this, which is kind of fair. It's very detailed, there's a lot of research behind it, and it takes a while to explain all the different aspects of quality. A lot of the time, customers just want a general sense of whether we're delivering things with the quality they expect. So the Customer Utility Analysis puts things in buckets: hey, this was excellent, good, or needs work. And then we have SLAs expressed on top of that: our expectation is that 99% of your translations will be in the excellent bucket.
Maybe half a percent will be in the good bucket, and sub half a percent below that; you can define and refine SLAs based on it. I think that's been very helpful, because it creates a very simple, graphical way to look at things: there's a dashboard where you can see, okay, this is the quality being delivered, and you can also drill down and look at the underlying MQM scores. Especially as language operations rises as a category, you'll see LangOps as the manager of the translation layer of the enterprise, able to have a stakeholder view with the Customer Utility Analysis, where maybe the folks in marketing and customer service look at that, and then LangOps can go in, drill deeper, and understand: okay, what's working, what's not? Maybe we need to retrain this particular engine, or give better instructions to the humans who work on this, or a combination of things. I think we're seeing a drive toward data-driven quality pipelines and a perspective on quality that is as objective as possible. As software and computer science come in, there's more engineering, right? There's a stronger engineering perspective on how to make this scalable. Before, a lot of people in translation came from language and linguistics, which is great, but they tend to see it more as an art. If you're translating a book, there's certainly a lot of art to it; it's not straightforward at all. But when you want to scale communication, you start needing much more objective measures that can really tell you, hey, we're making the right progress or not. And to give you a sense: with MQM, you can also score both the output of machine translation and the output of humans.
Then you can compare and see what you get from each of them. It's been interesting, because this is changing all the time; machine translation keeps getting better. But the latest numbers we had were that a general, off-the-shelf machine translation engine would get maybe a 70 to 75 MQM score. Obviously there are exceptions depending on content type, language pair, et cetera, but from an overall perspective, if you have a domain-adapted engine, with data specific to that customer and domain, you'll get to something like 85 to 90. If you have a human, you get to 95 and above. And that's ultimately the level we're always trying to deliver at, right? Whatever we deliver needs to be at that professional level, while we work in reverse: do we need a human to get to that level, or do we believe it's already there? If it's already there, great, we can deliver it; if not, let's add a human. So our philosophy is never, hey, we'll deliver whatever we have and hopefully it'll be good. It's: we need to deliver really good things, and how do we reverse-engineer and optimize so that getting there is less costly and more automated?
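The MQM scoring Vasco walks through can be sketched roughly like this. The severity weights (1, 5, and 10 points for minor, major, and critical errors) and the per-100-words normalization are common MQM conventions, not necessarily Unbabel's exact formula, and the error categories below are purely illustrative.

```python
from dataclasses import dataclass

# Illustrative MQM-style severity weights; real deployments tune these.
SEVERITY_PENALTY = {"minor": 1, "major": 5, "critical": 10}

@dataclass
class ErrorAnnotation:
    category: str  # e.g. "terminology", "fluency", "mistranslation"
    severity: str  # "minor", "major", or "critical"

def mqm_score(annotations: list, word_count: int) -> float:
    """MQM-style score out of 100: each annotated error subtracts a
    severity-weighted penalty, normalized per 100 words of text."""
    penalty = sum(SEVERITY_PENALTY[a.severity] for a in annotations)
    return max(0.0, 100.0 - penalty * 100.0 / word_count)

# A 200-word translation annotated with two minor errors and one major one:
errors = [
    ErrorAnnotation("terminology", "minor"),
    ErrorAnnotation("fluency", "minor"),
    ErrorAnnotation("mistranslation", "major"),
]
print(mqm_score(errors, word_count=200))  # 96.5
```

Because the same scoring runs over machine and human output alike, it supports exactly the comparison described above: off-the-shelf MT around 70 to 75, domain-adapted MT around 85 to 90, humans at 95 and up.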

[00:28:52] KRISTINA: I love that summary, because I'm thinking that for most teams and organizations out there, that's exactly how they should be thinking about it. This is all about risk, and it's all about opportunity, and at the end of the day, you need to decide what mix you are comfortable with. Whether your organization needs to be at that 95-plus level, or whether you're comfortable enough with 85% or 75%, is really going to determine the solution you go after and the approach you adopt. But the key needs to be having that conversation and being highly deliberate about your translation and localization goals, versus just surrendering to the way we've always done things regardless of the cost or the output; it's about getting that balance right. Well, Vasco, we're out of time, which I'm sad about, because I feel like we could continue talking with you for quite some time and never get to the depth of knowledge you have on the topic. But I certainly appreciate you joining me today; it's been a pleasure having you. And to everyone listening, thanks for spending time with us. I hope you'll tune in to the next episode, and I certainly hope, Vasco, that you'll come back and tell us more about the work you're up to, because it is a fascinating area.

[00:30:03] OUTRO: Thank you for joining the Power of Digital Policy. To sign up for our newsletter, get access to policy checklists, detailed information on policies, and other helpful resources, head over to powerofdigitalpolicy.com. If you get a moment, please leave a review on iTunes to help your digital colleagues find out about the podcast.
