S2 #16 Rebuilding trust in the Internet, one human ID at a time

Guests:
  • Namik Muduroglu, Co-Founder of humanID, an anonymous, bot-resistant authentication service for safe online communities.
  • Bastian Purrer, Co-Founder of humanID | Entrepreneur fighting fake news and the erosion of privacy | Between the US, Indonesia, and Germany

Any organization that allows human interaction on its digital platforms, including user-generated content, runs the risk of bots and fake accounts hijacking the user experience. humanID is looking to change that and turn the tide of bots and disinformation polluting social networks. In this episode, Namik Muduroglu and Bastian Purrer discuss how organizations can balance user validation and user privacy so that everyone wins.

Keywords:
humanID, humans, bots, validated users, user generated content, UGC, disinformation, truthful information, political hacking
Season:
2
Episode number:
16
Duration:
35:07
Date Published:
October 7, 2021

INTRO: [00:00:00] Welcome to the Power of Digital Policy, a show that helps digital marketers, online communications directors, and others throughout the organization balance out risks and opportunities created by using digital channels. Here's your host, Kristina Podnar.

KRISTINA PODNAR, host: [00:00:20] Welcome to the Power of Digital Policy. Again, I am so excited because I have two folks I'm going to let introduce themselves rather than having me do it. I'm going to focus on pronouncing their names correctly; we were just debating this before we started recording. So Bastian Purrer, I can say his name like that because that's how his parents would say it, or at least something close to that. And then we also have with us Namik Muduroglu. I'm going to let you each take a stab at introducing yourself rather than having me do it, because I want to get it right. So Bastian, let's start.

[00:00:53] BASTIAN PURRER, guest: Yeah, thank you so much, Kristina, for having us on. My name is Bastian; I am originally from Germany and spent the last ten-plus years running marketing for e-commerce companies across the US and Asia. I saw how bad the privacy situation has become: my job was buying data from Facebook and Google and then targeting people with these creepy ads that follow them around the internet. Then, a few years ago, I lived in Indonesia and got involved in politics there, helping one of the political parties, and I came face to face with these bot networks and fake accounts that are used to spread information, and very often misinformation, across the world, and to influence communities and websites with all kinds of interests, from commercial to political. That led me to start humanID, which we'll talk about more later. I'm now living in Los Angeles with my wife; we moved here two days ago, so it's all very fresh.

[00:02:06] NAMIK MUDUROGLU, guest: And I'm Namik; I'm originally Azerbaijani and Turkish, so a bit about myself. Because of my interest in geopolitics, growing up in a region like Azerbaijan and Turkey, with Russia right there, I watched firsthand all kinds of bot networks and misinformation, at the scale of preventing journalists from speaking. I've always been really fascinated with the individual on the internet and their rights on the internet, especially in 2021, where we're starting to see this complicated conversation about, hey, how do we really value authenticity now, with the rise of AI? These are problems I've seen firsthand in my region of the world, and that's why I started working on humanID as well.

[00:03:02] KRISTINA: Thank you both. That's really helpful in terms of understanding your background. I want to dive in and ask you both the same question. As I thought about today's conversation, I was thinking back to how the internet evolved: the internet was conceived without a way to know who and what you're connecting to. It's working as anticipated, if I think about it, but without identity in place, we seem to be limited in what we can do with it, and it's exposing us to growing dangers. You mentioned bots; you mentioned data privacy and the selling of personal data. A lot of things are different today than they were 10 or 20 years ago, and it seems to only be getting worse. Do I see that correctly from your perspective, and how big of a problem do you think we have in front of us?

[00:03:48] BASTIAN: I am quite a bit older than Namik, so I can talk about the difference from 15, 20 years ago. I remember when the biggest concern with the internet was that you don't know who you're talking to; you might be talking to a dog, as the famous cartoon put it. And now the problem is real: you don't know who you're talking to, and you might be talking to a bot that is one of hundreds of thousands pushing the same message. That is a political concern, and I know many of your listeners will have business concerns as well. Of course, you have massive hacking and the very real danger of leaking data. How you handle your data has legal implications because of new legislation, but there's also a huge business risk: 60% of companies that have a data leak go bankrupt within the years after, because of either the legal or the reputational costs. So, I think these are things that have become more global and carry bigger risks.

[00:04:49] NAMIK: So, my concern, long term, is that if we continue to be in a situation where there's unknown or very low-key bias in some of the articles being published and in what these bots are pushing, we're going to have a harder and harder time communicating with one another. We've seen this happening in Indonesia, in Turkey, and in the U.S. in 2016, and even more so in 2020. So, there are indirect roles here, but I'd say long-term my biggest concern is that effectively we can't really engage anymore: we don't agree on basic facts, and we don't trust each other. It's a big problem.

[00:05:38] KRISTINA: Right. And that's happening, or seems to be happening, across the board. There seems to be a lot of discord, like you said, the inability to communicate or understand what we're communicating. There's a lot of noise in the ecosystem, and it seems to span a broad spectrum. Politics is what comes to mind first because of elections, but beyond elections and politics, we're also dealing with climate change and the COVID vaccine and all kinds of disinformation, with individual parties gaming the system for their own self-interest. Who do you see as the biggest players? Is it corporations and enterprises and governments, with the citizen left out of that equation, or harmed to their detriment? Who are the parties really manipulating these internet channels at this point and creating such chaos?

[00:06:24] BASTIAN: It started with governments. You've seen it very clearly around the 2016 election, because it was all over the news, but the truth is, and we know this from whistleblowers from Facebook and others, that it's happening across the world, in really every country: governments in democracies use it as a tool to win voters, and in dictatorships as a tool to spread nationalism or manufacture support for the leaders. But now, I actually just talked to someone in the crypto space, a leader at one of the big crypto exchanges, who was talking about how they use bot networks to spread information, advertise their own product, probably manipulate prices. And I know it's becoming more and more common that bots are used against competitors, for example. And I think we only see the tip of the iceberg: the stuff that you read in the news, the botnets that get exposed, those are the bad ones. But there are hundreds of millions of other bots out there that we don't know about, and we can only guess what's happening right now.

[00:07:44] NAMIK: And this can be super problematic for businesses all around. Facebook gets accused of killing people, and it's not necessarily Facebook directly that's killing people, but part of their business model is tolerating all these fake accounts. A lot of these platforms have massive content moderation costs, and especially if they want to be this new generation of platform that cares about its users and is trying to create a space to communicate, they're going to have a very, very difficult time combating these massive, sophisticated networks. And you just touched on COVID: the amount of COVID misinformation is terrible, horrifying; countless people have died because they're just not hearing the news, or news that is objectively true. The same goes for climate change: there's not nearly enough information about the UN report that came out a couple of weeks ago, and meanwhile we're still disputing basic facts. That's because there are certain interests that go against those of the greater population, and they use these bot networks to have more than one voice.

[00:08:50] BASTIAN: When we talk to potential clients and companies, you see that even networks or communities with a few tens of thousands of users already see these attacks. It doesn't start immediately after launch, but once they get a little bit of traction, they suddenly see these bot networks come in. Oftentimes it's not even clear what the exact purpose is, but once a company hits a few tens of thousands of users, they usually start having these issues.

[00:09:17] KRISTINA: So, let's talk a bit about that. Bastian, from your perspective, what does this mean for companies? Where are they getting impacted? You're talking about bots coming onto their platforms; I'm assuming we're talking about user-generated content and the platforms they're creating for the exchange of information. But what are the risks to the company? What should they be looking for? What's at their front doorstep, if not already inside the barrier?

[00:09:44] BASTIAN: We see sort of two risks that seemingly are at odds with each other, but we don't believe they are. The first one is what we've talked about a lot already: the spread of misinformation or wrong information or marketing spam, whatever it is, on the platforms, which comes with high moderation costs. You're trying to delete all this stuff, and you delete someone, and two minutes later they can create a new email; it takes like two minutes, or if they're a professional bot network, they'll be much faster, and they're already back. So you keep deleting the same things from the same people over and over again, driving up your internal costs. That's one issue. And, of course, it also affects the user experience, right? We all know how unfriendly the environment in YouTube comments or on Twitter can be, and lots of people just refuse to use these platforms because of that, which affects the value of the users you're getting on the platform. The other side of what we're seeing is the lack of trust in platforms, the lack of trust that the user data provided will be kept secret. And that's completely understandable from the user's perspective: you've seen leaks of data everywhere, left and right; even the CIA has leaked data, so any database out there cannot really be trusted. So what does this mean for companies? Think of whistleblowing apps, apps for political activists, apps for any minority that has problems, apps for people with mental health issues. All these communities will see much less engagement from their users, fewer people signing up, fewer people sharing private information, because they don't trust the platform. Some of our early use cases, our first clients, are whistleblowing apps, for example, where the data provided is so crucial and so sensitive that only if it's truly anonymous will the users be willing to engage at all. So these are the two things, privacy and content moderation, that seemingly are at odds: do I collect a lot of data to prove that my users are humans, or do I not? And we are trying to be the sort of trusted identity layer that solves both issues.

[00:12:08] KRISTINA: Namik, tell me a little bit more about what you're doing, because the last time you and I chatted and geeked out on this topic, we were talking specifically about reporters and whistleblowers, but your solution at humanID is far broader reaching, or at least I think you have a vision that is far wider-reaching. Tell us a little bit more.

[00:12:27] NAMIK: So, humanID is an authentication service for the internet that's fully anonymous yet also accountable. You'd think those two don't go hand in hand, but when you look at it, they really do. As a nonprofit, we really focus on effectively saying, hey, we don't have any incentives that would mean we're going to hold onto your data of any sort. So, what we do is we verify you're a real human being, and then we completely anonymize the information, to the point that we don't have it either. We delete your PII, and we give a unique identifier to whoever's using our SSO; we also run a reCAPTCHA along the way. As a result, these companies know they have a real user, and they know only as much as the user wants to give. And there are perks all over the place: users are going to be way more willing to share information when it can't be linked back to their profiles or who they are on the internet. At the same time, you're not going to have to deal with massive bot networks, these campaigns where you can't really get rid of bad actors anymore; on Facebook, you can't get rid of the bad actors if they really want to do something and have enough resources. We use SMS, so the cost of having a phone number in the US or the UK, the two I can speak to most comfortably, is a lot; you're not going to be able to run a massive bot campaign at $20 per account, because then you get banned, and it's just not financially feasible. So that's what we're doing: we're an authentication service for the internet that's fully anonymous and accountable. We keep no data whatsoever. And best of all, we're a nonprofit; we have no incentive to go back on that word, and we're not going to change our definition of encryption so that we can skim something off the top. You can believe all of this or not: we're fully open source, so you can check it out.
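To make that flow concrete, here is a minimal sketch, in Python, of the verify-then-forget idea Namik describes: confirm a human via an SMS one-time code, hand back an opaque identifier, and never store the phone number. This is an illustration under stated assumptions, not humanID's actual open-source implementation; the names here (send_sms, AnonymousVerifier) are hypothetical.

```python
import hashlib
import hmac
import secrets

def send_sms(phone: str, message: str) -> None:
    """Stub for illustration; a real service would call an SMS gateway here."""
    print(f"SMS to {phone}: {message}")

class AnonymousVerifier:
    """Verify a user is human via an SMS one-time code, then forget the PII."""

    def __init__(self) -> None:
        self._pending: dict[str, str] = {}    # phone -> outstanding code
        self._salt = secrets.token_bytes(32)  # would be persisted securely in practice

    def start(self, phone: str) -> None:
        code = f"{secrets.randbelow(10**6):06d}"  # 6-digit one-time code
        self._pending[phone] = code
        send_sms(phone, f"Your verification code is {code}")

    def finish(self, phone: str, code: str) -> str | None:
        """Return an opaque user ID on success; the phone number is not stored."""
        if self._pending.pop(phone, None) != code:
            return None
        # One-way keyed hash: the application receives only this identifier,
        # and the phone number that produced it is discarded.
        return hmac.new(self._salt, phone.encode(), hashlib.sha256).hexdigest()
```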

[00:14:24] KRISTINA: That's helpful. As I was thinking about having this conversation with the two of you today, I was chatting with several of my colleagues, and they mentioned that within their enterprises, all of them for-profit organizations, they're looking at some sort of digital ID solution. One entity is concerned mostly because they have a quasi-social media network within the enterprise that they stood up, so they're obviously worried about having a lot of bots, or folks who shouldn't be on the platform voicing opinions, because it's a professional networking solution. The other person basically said they're concerned about having anybody be able to post anything because they're in the regulated pharma space, and the concern there, of course, comes with what people are contributing in terms of information: are we following the information from a UGC perspective and ensuring that if there's an adverse effect, we're reporting it to the government? And, as you mentioned, Bastian, this is such a big issue in terms of resource spend. So a lot of people seem to be looking for that digital ID solution, right? How do you go about choosing the perfect digital ID? Why humanID versus another solution currently?

[00:15:37] BASTIAN: So humanID is not for every use case, and we don't aim to be. If you are running an e-commerce shop, for example, users buy something and usually pay with their credit card; there's a legit reason why you want to know who the user is, and you also have much better ways of blocking abusive users via the credit card. That's a great way to solve this problem in a commercial setting. What you'd really use us for is the equivalent of an open discussion that you have offline. We are different people when we talk to our friends, when we talk to employers, when we talk in a bar or on a date, whatever it is, and we take for granted that that information is not posted on the internet for everyone the next day. humanID fits any use case where the user doesn't have to be identified with a real ID afterward, or doesn't want to be, or wants to have the choice to stop. Something important here: every time you log into a different service, we actually create a different ID. So even if our clients leaked these randomized, hash-like IDs that we provide, they could not be matched together. You're accountable, but you're accountable only to that community, and you can follow different standards in different communities, because we are not the same person on a dating profile as on a LinkedIn profile, for example.
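A small sketch of that per-service ID property: derive each identifier with a per-application secret, so the same person gets a stable ID within one community but IDs that cannot be correlated across communities. The salts and app names below are illustrative assumptions, not humanID's real scheme.

```python
import hashlib
import hmac
import secrets

# Hypothetical per-application secrets. In a real deployment these would be
# generated once and stored securely, not recreated on every process start.
APP_SALTS = {
    "dating-app": secrets.token_bytes(32),
    "forum": secrets.token_bytes(32),
}

def per_app_id(phone: str, app: str) -> str:
    """Same phone + same app -> stable ID; different app -> unlinkable ID."""
    return hmac.new(APP_SALTS[app], phone.encode(), hashlib.sha256).hexdigest()

phone = "+15551234567"
assert per_app_id(phone, "forum") == per_app_id(phone, "forum")       # stable
assert per_app_id(phone, "forum") != per_app_id(phone, "dating-app")  # unlinkable
```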

[00:17:10] NAMIK: Yeah, this is where we really are a great solution for data leaks, right? It's a common understanding that it's impossible to prevent data leaks with a hundred percent certainty; no one is going to tell you, hey, there's no chance that we will ever get hacked. It's all about mitigation. With humanID, if one platform gets hacked, my PII is not there, so I'm not worried about every single other account I've ever logged on to also being breached. There are a lot of effects here where consumers benefit tremendously and businesses get to prevent the worst-case scenario.

[00:17:56] BASTIAN: And that's a massive impact for the business. The cost of your database leaking, which can happen to every company since there's no hundred percent security, will be massively reduced. Instead of private information that you legally must keep secure, and that can be abused in a thousand different ways, there's just a random database out there; you can just reset your login keys with humanID, and you're basically safe, not much has happened. It depends a little bit on what other information you're collecting from users, but the point is, if you don't collect personally identifiable information, PII, then verification, authentication, login is no longer a reason to collect it.
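The "reset your login keys" recovery Bastian mentions can be pictured as rotating the secret used to derive the IDs: once rotated, a leaked table of identifiers no longer matches anyone. Again a hedged sketch under the same assumptions as above, not the actual mechanism.

```python
import hashlib
import hmac
import secrets

signing_key = secrets.token_bytes(32)

def user_id(phone: str) -> str:
    # Reads the current signing key, so rotating the key changes every ID.
    return hmac.new(signing_key, phone.encode(), hashlib.sha256).hexdigest()

leaked_id = user_id("+15551234567")    # suppose this derived ID leaks in a breach
signing_key = secrets.token_bytes(32)  # provider rotates the key afterwards
assert user_id("+15551234567") != leaked_id  # the leaked ID now maps to no one
```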

[00:18:44] KRISTINA: I'm wondering, because on the surface it makes a lot of sense, especially for large social media platforms like Facebook and Google: why do we not see them jumping on a solution like this? We could be in a world where we don't have bots, where we have individual humans having conversations, which is maybe the purpose of social media: open conversations, different viewpoints, and we may not agree with them, but it's an open conversation among humans. Why are we not seeing more entities jump on board and say, yes, that's what we want to have? And what does it take to get to a point where maybe we are moving in that direction? Do you have thoughts on that?

[00:19:25] NAMIK: Yeah, definitely, and it's kind of a tragic comedy of sorts. It comes down to this: Facebook is massive, and they have a lot of investor money. To stay competitive, they need to grow, and grow rapidly. At the end of the day, bot networks don't necessarily hurt these kinds of platforms, hopefully long-term they do, and we've started to see this lately, because of the community, but in the short term they can rile people up, right? It's called CIB, coordinated inauthentic behavior: a bunch of fake accounts talking about something, and they bring real people in. In the short term it makes money, especially if you are in a space where you are valued on how many users you have: fake users, real users, and everything in the middle. Facebook deleted an absurd amount of fake accounts last year, and we're not living in a world where you can believe the bots are gone; they say, hey, we had a really successful run, we tripled the ad spend, and they become larger. I was an advertiser myself, buying for big companies spending millions of dollars on ads, and the truth is that that's Facebook's product. Facebook says they have 2.7 billion or so users, but studies show, and my own experience seeing those bot numbers shows, that 30 to 50% of those users are fake accounts. Facebook publicly acknowledged that they deleted around 5 billion, 5 billion with a B, accounts last year alone. So, let's look at this from a step back: Facebook saw around 8 billion accounts and said, okay, 5 billion of those are fake. The truth is Facebook can shift this a little bit; they can get a little bit stricter or a little bit less strict, but they have a financial incentive not to be too strict, because how fast their users grew last quarter is one of the main metrics each quarter for judging the company's success, and therefore their total market cap. So Facebook has a very strong incentive not to be too strict and not to block too many users. The other thing is that Facebook actually needs this private information to run what they would call more personalized ads. If they didn't collect any personal information, they couldn't tie data together: more than 50% of the data Facebook has on us is not collected from Facebook, WhatsApp, or Instagram; it's data they buy externally from data brokers, a billion-dollar industry, and then match with our profiles. If they didn't have any personally identifiable information, they couldn't do this matching, and they would have less data on us. So there are two reasons: the way they built their business, they can't acknowledge, hey, we just lost 30% of our users, and they also can't acknowledge to their investors, hey, we're going to make less money per user now that we're targeting less specifically. Although, even if they did these two things, they would still have a massively profitable company. And that's where the disruption can come in. We do believe there will be disruption in the social media space with more focus on privacy, as we already see in other spaces: DuckDuckGo, a private search engine, has just raised over a hundred million dollars. There's disruption happening all across the place, and we will see it in the social media space too, I hope, either with humanID or inspired by humanID.
As a nonprofit, it doesn't have to be us. We just want to spread this idea.

[00:22:57] KRISTINA: And that's the thing that was impressive to me. Namik, you and I were talking about this, and you said, look, we don't really have any pride of ownership; please take the source code and go do your own thing. We believe fundamentally in the idea; we don't necessarily need to be the solution, or the only solution. I'm wondering, from a government and regulator's perspective, should governments maybe push your solution, or a solution, or a basket of solutions, to social media networks?

[00:23:26] NAMIK: Yeah. So, this is where it gets super complicated, because government identity can be very, very problematic, depending on the government, and even beyond that. We see the status quo: when a government wants data from any of the social media platforms, they will get this information, and we can see this because of the relationship they have. A great example is Open Russia, an activist group out of London: the Russian government said you need to shut down this Twitter account, and now they don't have Twitter anymore. So there are a lot of concerns, in my opinion, around government-controlled identity in the same way. Government surveillance can be very, very problematic, because not all governments are equal, right? In general, I'm not the biggest fan of a government identity, but I do believe that governments should have a much larger role in saying, hey, these are the innate rights of our citizens, and you need to respect that. And that could be an open-source identity that protects their data; that I'm a big, big fan of. It's much more concerning when we create one identity that links all the data, isn't private whatsoever, and that the government has access to, et cetera. That becomes a lot more concerning, and we see groups like Access Now with the #WhyID movement also touching on that point.

[00:24:55] BASTIAN: I think where government already comes in and has an impact is privacy legislation. We have seen the beginning of that with GDPR in Europe and CCPA in California, but these will only get stricter and will only become more common, which means there will be more work for the companies that handle PII and more issues for companies that don't comply with this legislation, creating more and more reasons to instead use something like humanID, which is a low or zero data approach.

[00:25:31] KRISTINA: Should that fundamentally be part of our human rights, if you will? If we think about the Declaration of Human Rights, privacy is a key component. Would you see somebody like the UN, or another sort of neutral, quote-unquote, broker, facilitating the requirements or pushing a solution that does allow us to be anonymous, especially in whistleblower settings or areas, like you said, where our data doesn't have to be shared to accomplish the purpose?

[00:26:03] BASTIAN: I think, realistically, the UN will not protect privacy; that's not going to happen. But when it comes to what is a human right, I think encryption needs to stay sacred. There are some movements, even in democracies, to undermine encryption, and I think that ultimately means one bad government can access anything they want on any phone. That's just a red line that shouldn't ever be crossed, and I hope the US stays strong there, because if the US stays strong, then I think we can continue building. The other thing is on the device side: Apple just crossed a huge red line by saying they will go through pictures on iPhones to scan them. Of course, that sounds good as a first step, because they're doing it for child pornography and so on, but it does cross a massive red line where suddenly the device makers, the hardware makers, could ultimately be enlisted in surveillance. Those are, for me, the two red lines that we really should be strict about.

It's interesting, and here we're leaving the humanID conversation, but the way a lot of these companies are structured, Russian citizens' data is held in Russia, right? It's the same situation with China. What's going to happen when one of these governments says, hey, we're going to look for some photos, and you kind of have to say yes? Opening this door is a Pandora's box of sorts. So, what we're going to see happening, in my sad opinion, in a lot of countries is that this child pornography scanning will result in private information on people at risk being harvested at the source. And that goes against the ethos of humanID.

[00:28:02] KRISTINA: I'm happy you brought that example up, because I was thinking about that fundamentally: we don't necessarily trust governments, but we also don't necessarily trust these large enterprises such as Apple. I'm wondering, for people who don't want to place trust in either, is there a solution, or is that where humanID comes into play? I mean, there are only so many things that you can protect and make anonymous, right? If I want to have an iPhone, unfortunately, I think I must give Apple my information, or some of my information. Is there a way to really stay anonymous anymore? Or is it the case that I really do have to trade at least some of my data to function in today's world?

[00:28:44] NAMIK: This is a puzzle. So yeah, at the moment, it's really sad, but if you want to be plugged in on the internet, it's extremely hard to be private. What we're trying to say is, hey, in web two, which is where the vast majority of users are, we need to figure out frameworks that keep users anonymous, and either build out a new model or have it adopted by these platforms. So, at the moment, it's pretty difficult. If you want to be using an iPhone, and I use an iPhone as well, you're not going to be able to be fully secure unless you do some really high-tech stuff, which is not convenient for the average user. That's why with humanID, we could have gone the route of authenticating with this or that, or doing all these crazy projects, but at the end of the day, no one's going to do that; no one's going to log in if it takes 20 minutes to prove you're a real human being. So that's why we went the SMS route. There's a very important conversation here about convenience as well as privacy, and hopefully we see a shift in user sentiment; hopefully we see consumers and customers and everybody move in that direction.

[00:30:03] KRISTINA: You're part of the solution, which I think is great, because we need to have a starting point, right? Being part of a solution is better than no solution. What are the other components we should really be vying for, or asking for?

[00:30:15] BASTIAN: I mean, private companies will build a lot of these, for example, new and better social networks and better ways to communicate generally, which hopefully can use humanID to ensure trust where it's needed. I do think there are lots of amazing other companies. For example, in the browser market, Brave is a for-profit company that disrupts the browser market with a privacy-first solution for the parts where humanID doesn't help, for example, the cookie part, the privacy problems of having cookies set in your browser; that's something Brave is working on. And Wikipedia is doing an amazing job; it's amazing how they've been fighting fake news and misinformation with the nonprofit community they have, though I'm really worried they'll be under attack from misinformation players more and more, and it's hard to imagine a world where Wikipedia doesn't exist. So those are some examples; I'm looking at both for-profit and nonprofit approaches, depending on the use case.

[00:31:19] KRISTINA: For anybody listening today who finds themselves in either a for-profit or a nonprofit, are there one or two immediate things they can go off and do? What should they be thinking about, whether they're part of a digital marketing team or a digital ops team? Are there one or two things where they can say, look, I'm going to take this back to my enterprise because, A, it makes sense to me, or, B, I think it makes sense for our end users? You've convinced me; so, what should I do?

[00:31:46] BASTIAN: I think the number one thing people underestimate until the day it happens is the danger and the cost of a data leak, which can happen for a million different reasons. There are tons of studies showing that the average cost of a data leak for a US enterprise is somewhere around 4 million dollars, and, as I said earlier, 60% of companies go bankrupt if they have a major leak. So, first, think about this now, before it's too late, and think about whether you really need the data you are saving to run your business. And the second thing is to just look at the studies: there's this myth that people don't really care about their privacy, that they only say they care, but study after study has proven this wrong, and especially since 2017 the climate in the US has massively changed. So, think about whether you could differentiate from your competitors by offering more privacy and being a first mover in this market.

[00:32:44] NAMIK: Yeah, I can talk about humanID. We're primarily a volunteer-driven organization; there have been over a hundred of us, students, recent grads, and these are also users, right? This is the next generation of users organizing, and what we see is that we really care. We're privacy-active: we want to know what's happening with our data; we want to know, do you need my PII, and who are you selling it to? So that would be my biggest takeaway: ask yourself, hey, am I completely honest with my users? Do they know how I'm using their data? And do I really need PII? It's almost a misconception that a lot of these companies need PII to make any money; it's just not true.

[00:33:34] KRISTINA: It has been great having you today and hearing more about humanID and what we need to be thinking about when it comes to privacy, authentication, and the opportunity to make the internet a better, safer place for all of us. Thanks for hanging out. If people want to hear more or learn more about you, where should they go?

[00:33:53] BASTIAN: Yeah, for anyone who might be interested in our login or our captcha: humanID can replace solutions like Login with Facebook, Login with Google, or the Google reCAPTCHA. It's extremely easy to implement, and you can find all of this at human-id.org.

[00:34:09] KRISTINA: This has been great. I just want to reiterate, because I think this is an important point: you have started this as a nonprofit. Again, you want to make a change, but this is not about money; this is about changing the landscape. So, I appreciate all the efforts you're putting in on behalf of all of us.

[00:34:28] OUTRO: Thank you for joining the Power of Digital Policy. To sign up for our newsletter, get access to policy checklists, detailed information on policies, and other helpful resources, head over to powerofdigitalpolicy.com. If you get a moment, please leave a review on iTunes to help your digital colleagues find out about the podcast.
