Bryce Goodman is a technologist, philosopher, and strategist working at the intersection of emerging technology and global challenges. He works with leading private, public, and non-profit organizations to invest in, develop, deploy, and govern artificial intelligence and other exponential technologies. He is currently Chief Strategist for Artificial Intelligence and Machine Learning at the Department of Defense's Innovation Unit, where he oversees a range of large-scale projects, including the application of AI to humanitarian assistance and disaster response, countering human trafficking and illegal fishing, and open source intelligence, and where he leads the Responsible AI Working Group. Bryce also runs programs using AI for ocean monitoring and habitat restoration with Oceankind, teaches bioethics and the ethics of exponential technologies at Singularity University, and is an advisor with The Cantellus Group, a boutique consultancy helping organizations navigate the adoption of frontier technologies. His current and past clients include KKR, TPG, Google, NEC Corporation, Deloitte, LG, Citibank, Barclays, the World Bank, Blackstone, the United Nations, NASA, the US Department of State, the CIA, and the UK Cabinet Office.
Previously, Bryce co-founded a cleantech startup and raised over $100M in funding led by Kleiner Perkins. Bryce's awards include Forbes 30 Under 30 (Energy & Industry), Harvard Business School Best New Venture, and World Economic Forum Technology Pioneer. He graduated from Deep Springs College, the University of Oxford (Clarendon Scholar; first-class honors), and Singularity University. He holds degrees in data science, philosophy, political science, and economics, and is completing a Ph.D. in AI and Philosophy at the University of Oxford.
Artificial intelligence (AI) can make the world more sustainable by optimizing how we use natural resources and helping to reduce environmental costs. It can also make us more efficient and effective. But to do the right thing for your organization and ensure sound business decisions, you must consider ethics and governance. What does that mean? Bryce Goodman, an AI ethicist and Cantellus Group advisor, joins this episode to provide perspective and considerations for all organizations.
[00:00:00] KRISTINA PODNAR, host: Preparing for today's conversation was easy. I asked ChatGPT what I should ask Bryce Goodman; I'm not sure I got good answers. We'll check it out.
[00:00:09] INTRO: Welcome to The Power of Digital Policy, a show that helps digital marketers, online communications directors, and others throughout the organization balance out risks and opportunities created by using digital channels. Here's your host, Kristina Podnar.
[00:00:26] KRISTINA: Bryce is a technologist, philosopher, and strategist working at the intersection of emerging technology and global challenges. He works with leading private, public, and nonprofit organizations to invest in, develop, deploy, and govern artificial intelligence and other exponential technologies. Bryce has done many cool things, and he's definitely been acknowledged by leading business and industry names, including Forbes, which named him to its list of 30 Under 30 for energy and industry, Harvard Business School's Best New Venture, and the World Economic Forum's Technology Pioneer. Today, I invited Bryce to help answer many AI questions we're all thinking about. Bryce, welcome to the show. Sadly, I have to say that when I asked ChatGPT for question suggestions, they weren't that exciting. So let's start with a broader question that's on everyone's mind. Should we be afraid of AI?
[00:01:19] BRYCE GOODMAN, guest: Thanks. It's great to be here. I would be curious to hear what ChatGPT had to say in response to that question, although I imagine it would be something pretty much along the lines of "No, you don't need to worry about AI. I'm a friendly large language model created by OpenAI, blah, blah, blah." But should we be worried about AI? I think it's sort of like asking whether we should be worried about electricity. There are plenty of things one can do with electricity that are scary and terrible and horrifying, and then there are lots of things that are totally banal. And with AI, I think it's similar. Where things are different is that the pace of diffusion is much more rapid, right? There's very little physical infrastructure needed to at least consume AI. We can talk more about the physical infrastructure needed to build AI, but from a consumer's point of view, AI can be pushed to you almost instantaneously. And we've seen that with the very short window of time between cutting-edge AI techniques being published in academic settings and products being spun up and delivered to consumers based on those ideas. I can't think of any other technological field that has followed that pace, where new ideas are quite literally put into the hands of lots and lots of people that quickly. So I think that's one way in which the AI technology wave is certainly different. Whether that's something to be afraid of probably boils down to your overall view of humanity and human nature, and whether we have a higher likelihood of doing positive things with that power or negative things. The only thing I can say is that the results are very likely to be quite mixed.
[00:03:27] KRISTINA: Many creators and scholars question the ethical uses of AI tools, and it's certainly harder and harder to identify AI-generated content from human-created content. Do you see a difference in terms of where we're heading with that battle?
[00:03:40] BRYCE: Yeah, I see a lot of parallels with at least some of the reactions from certain communities to what we can call generative AI, which is the use of AI that produces some sort of media as output, whether that is text, video, audio, or images. Generative AI certainly is having a moment right now, and we can trace that back to things like DALL-E and image generation, and then, of course, more recently ChatGPT and text generation. I think some of the reactions that people are having, especially on the image generation side, parallel what we saw when photography was rolled out back in the 1800s. There was this real, palpable concern that we were enabling people to access a capability that up until then was the province of a small handful of people, right? The people who could paint, and the people wealthy enough to commission someone to paint them. There was this fear that, okay, we're taking this capability and putting it out to the masses, and the way they're going to adopt it is going to be totally unpredictable and potentially catastrophic. There were some cartoons put out in newspapers depicting hordes of people obsessively staring at and interacting with this new photographic device to the exclusion of everything else. I do think there's something funny there, but not in terms of photography so much as mobile phones and the way we use the cameras on our mobile phones, where we have arguably become completely obsessed and enraptured. But anyway, the point is this: whenever there's a new technology, there's inevitably some level of hysteria around it. That doesn't mean we say, okay, this has happened before, therefore there's nothing new here. But I do think it means it's always worth taking a minute to pause and ask what exactly is novel here.
I think in the case of AI and generative AI, let's just stick with generative art for a moment, what you could say is different is that the way in which composition takes place is very abstracted from the end product. What I mean is that if you want to generate an AI image, the way you do that is by giving a bunch of input, saying, okay, do a picture of a koala bear riding a motorcycle, and I want it to be in the style of a Pixar film, and I want there to be volumetric lighting, and this and that. The skill associated with that description is very, very different from the skill that would be associated with producing that image in another way, if that makes sense. It's in no way analogous, whereas I think there's maybe more analogy between a photographer who has a really good eye for composition and a painter, because yes, part of what a painter does is mechanically put paint to paper, but a lot of what they do is come up with a concept and frame it, et cetera. So I think that is different, and it has some pretty interesting and profound implications for this technology as compared to others. Meaning, in the case of images, but then also in text (we'll get into that later), I think the mode with which we interact with these AI systems is so different from traditional modes that it may in fact lead us to completely rethink what it means to do certain activities, such as produce art or produce an article. So yeah, that to me is one of the more interesting aspects.
[00:07:45] KRISTINA: And business is certainly starting to get involved with AI at large scale; Microsoft is now in an extended partnership with OpenAI, investing multiple billions of dollars in AI. You were just talking about what it really means to create an article or a piece of art. Can we expect the not-so-loved Microsoft Office to suddenly become our new office best friend, because we now have AI embedded in a much deeper way than previously?
[00:08:18] BRYCE: I have spent quite a lot of time messing around with these programs. I think they are quite good at augmenting an idea that you already have, but the idea that we're going to just punch go and then get a corporate memo or a report or a market survey written for us is not just some ways off; it's probably not really something that, if I were Microsoft, I would even be trying to drive toward. And I say that because these tools ultimately are producing quite literally the most average output that one can produce, right? From a statistical point of view, that's what they're trying to do. They're trying to guess what is the most likely token in this sequence. If you're looking at words, it's trying to predict the most likely word to come after the previous one. That's how it builds things up at the sentence level, and then at the paragraph level it's doing the same thing. Now, of course, you can tweak this: when you put in really wild prompts, then even though it's trying to find a kind of average response, the starting space is so wild that you're going to get something wild as a result. And that's cool and fun and amusing, but where I see this being quite useful in my own work, anyway, as an academic, is doing things like literature reviews. Let's say I've read a bunch of papers and now I need to do the kind of obligatory "as so-and-so wrote in their 2004 piece." It's the part of the paper you have to include as an academic to show that, yes, you've read the things. I don't think anybody reading the paper spends much time on that section. It's a pretty perfunctory thing, and in a case like that, absolutely, let's just default to a ChatGPT answer. That's kind of the average; it's not going to have any novel ideas, but that's not the point.
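The "most likely next token" mechanism Bryce describes can be illustrated with a toy model. This is only a sketch for intuition: a bigram word counter with greedy decoding over a made-up corpus, whereas real large language models use neural networks over subword tokens; the objective of predicting the most probable continuation is the same.

```python
from collections import Counter, defaultdict

# Count word bigrams in a tiny corpus, then always emit the most likely
# next word (greedy decoding). The "most average output" effect Bryce
# mentions falls straight out of this objective.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(word, steps=4):
    out = [word]
    for _ in range(steps):
        if word not in bigrams:
            break
        # Pick the single most frequent continuation seen in training.
        word = bigrams[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # follows the most common path through the corpus
```

Real models sample from the full probability distribution rather than always taking the top word, which is the knob (temperature) that makes "wild" prompts yield wild outputs.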
However, in other cases, we're going to see really interesting things, less in the pure content creation space like Microsoft Word and more in the ability to use natural language to translate somewhat seamlessly between very different domains of inquiry. For example, maybe in the near future you'll no longer need to know how to write code in Python, and yet you'll still be able to perform very complex data analysis just by asking natural language queries. I think that would be very profound.
[00:11:08] KRISTINA: Does it also make you sad? As somebody who learned to code early on in my career, it makes me sad that people might not code in the future.
[00:11:16] BRYCE: Well, I mean, yeah. But I imagine when Python was rolled out, people were probably upset that, okay, it's a higher-level language. And I'll say, first of all, I've enjoyed coding in the way that I enjoy sorting through puzzles, but I've never been very good at it, so it's not like I'm giving up some awesome skill. I'm probably the wrong audience in some respects for the question, because I don't think I'm losing a whole ton. But I will say, kind of like what I was trying to say earlier about the analogy with photography, there is very much an art to composing queries and composing prompts, right? You hear this title of prompt engineer: the person who knows how to speak to the AI, or give input to the AI, in such a way that you get the output you want. That's not yet a trivial thing, and I don't think it will be a totally trivial thing for quite some time. It's not quite programming per se, but if you go onto these image generation sites like Midjourney, you can see what everybody's putting in for their image prompts. There is definitely a vernacular that's emerged around image generation prompts, which includes things like adding "octane render" or "volumetric lighting" or "in the style of X, Y, Z" to the prompt. So again, it's not programming in the way we traditionally think of it, but it's also not too dissimilar, I don't think.
[00:12:53] KRISTINA: It'll be interesting to see the evolution of the skillset, as you pointed out, from a human perspective. I read an interesting piece the other day that pointed out that humans will always have to be in the loop somewhere, because you can't necessarily teach AI certain things that are inherently human. So certain things will still need humans, which maybe makes us less afraid of a lot of the things we see in the headlines, in the news, et cetera. That makes a lot of sense. I'm kind of curious: we've been talking about AI in ways that everyday folks will encounter it, be aware of it, recognize it, use it, contribute to it. But you're a senior contributing scientist at the Environmental Defense Fund as well, where researchers are tackling a much bigger issue around AI: the ethical side, the bigger piece that maybe is harder to solve. What are the key themes affecting the use of AI that you're seeing in academia, business, and government? Things that might not be top of mind to all of us.
[00:14:01] BRYCE: Sure. I should caveat and say I was at EDF for a while, but I'm no longer there. I do work with a number of other environmental organizations, like The Nature Conservancy, and what I'm seeing in general in the space of AI and conservation is super exciting. A lot of the work I've been focused on is bioacoustics, which means going out into the natural environment, placing what are basically microphones, and recording natural sounds. In a terrestrial setting, this could be the sounds of birds and wind going through trees. My focus is primarily on the marine setting, so we're putting microphones underwater in coral reef areas and listening to the sounds made by various fish and shrimp, and even the sound of sand as it's being pushed through the reef. And what we've found is that when we run that data through algorithms that were developed for computer vision or other purposes, we're able to extract very interesting signals about the environment. What's so cool is that if you look at conservation biology, the work, up until very recently, was always done by conservation biologists: a professor and a couple of graduate students who, in the case of ocean-based work, are strapping on scuba gear and trying to make notes underwater, going out as often as they can. But realistically, you're talking about getting to a particular site maybe once a year, if you're lucky. By switching to this bioacoustic modality, we're able to do many more long-term site surveys. And on the backend, we're able to automate the analysis of all the data we collect, and that's completely opening up and changing our way of understanding the natural world, both from a conservation point of view and from a more fundamental science point of view. So, there's a group that I've been advising and working with called Project CETI, and they won the TED Audacious Prize a few years back.
They're led by a scientist and friend, Professor David Gruber, and their task is to decode whale language. They are working down in Dominica, which has a pod of whales that have been extensively studied, probably the most well-studied whales in the world. They have worked with scientists at Harvard to develop special robots that can attach to the whales and collect all sorts of data, and they've got bioacoustic arrays and all sorts of things. All of this data is then being fed into machine translation and large language model kinds of algorithms, and they're trying to extract all sorts of very interesting signals. There's some work the group is going to publish soon on some of the foundational work toward non-human communication and understanding in that field. All of which is to say, as you can probably tell, I'm incredibly excited about the ways we can use AI to unlock some of the fascinating phenomena in the natural world.
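The pipeline Bryce describes, reusing algorithms developed for computer vision on audio recordings, typically works by converting sound into a spectrogram, a 2-D time-frequency "image" that image models can consume. Here is a minimal sketch with NumPy; the synthetic sine-wave signal is a stand-in assumption, not real hydrophone data, and production systems would use a dedicated audio library.

```python
import numpy as np

# Convert a 1-D audio signal into a 2-D spectrogram via a short-time
# Fourier transform (STFT): slide a window along the signal, take the
# FFT magnitude of each windowed frame, and stack the frames as columns.
sample_rate = 8000
t = np.arange(0, 2.0, 1 / sample_rate)
# Synthetic stand-in for reef audio: a 440 Hz tone plus background noise.
signal = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(t.size)

def spectrogram(x, window=256, hop=128):
    frames = [x[i:i + window] * np.hanning(window)
              for i in range(0, len(x) - window, hop)]
    # Rows = frequency bins, columns = time steps: an "image" of the sound.
    return np.abs(np.fft.rfft(frames, axis=1)).T

spec = spectrogram(signal)
print(spec.shape)  # (frequency bins, time frames)
```

Once audio is in this form, a convolutional network trained for image classification can be fine-tuned to pick out fish calls or shrimp snaps the same way it would pick out objects in photographs.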
[00:17:17] KRISTINA: To me, that's fascinating, and I can understand why you're drawn to it. I'm wondering if there are parameters that you see as guidance for academia, business, and government around things like ethics. Are there certain things maybe we shouldn't be doing in that space, or a line up to which we should be utilizing that knowledge or that data? Because I can see it being so useful and relevant for many different purposes. But as you pointed out at the beginning of our conversation, it's about how you use AI and its capabilities and insights, and I can think of some use cases where it wouldn't be used for the good of humanity.
[00:18:00] BRYCE: Sure. I mean, I think there are areas where we have what we would call dual-use AI technologies, right? We can think of this in the case of AlphaFold, an algorithm developed by DeepMind that was very good at predicting protein folding, which is relevant to all sorts of pharmaceutical work. Now, that same sort of technology could potentially be used to identify not therapeutic drugs but harmful ones. In fact, I think there was a British company that did this as a proof point: they identified lots of different compounds that are theoretically far more deadly and less detectable than anything that has currently been synthesized. So yeah, that would be an example of a technology that has a very positive impact and also a potentially very negative one. How do we deal with that? How do we regulate that? In some ways, this is not a novel paradigm. Almost all new technologies carry risks, again going back to electricity, but we can also think about the automobile. And just sticking with cars for a second: tens of thousands of people die every year in the United States alone in car accidents, and yet that's something we've decided as a society is a tolerable risk in return for the convenience that cars bring. Now, that being said, we've seen huge advances in automobile safety over time. I've talked to my mom about when she was a kid and how seat belts were something you had to pay extra for, and it wasn't until a few decades ago that that shifted. Nowadays, it would be completely insane to see a car sold without a seatbelt. My hope is that with AI, we get toward a similar paradigm. Do I think we will ever be able to completely eliminate risk? Absolutely not, nor do I think that's even really the right goal. But there certainly are cases where we need to see more built-in safety mechanisms.
We can go into specific projects and talk about what those safety mechanisms are, but at a high level, that's the kind of approach to risk I hope we see: one that recognizes we need rules of the road, so to speak, but that also tries to build in safety mechanisms as and where we can.
[00:20:29] KRISTINA: An area where that's probably especially true is the military. For training, the US Army is developing things like digital twins: high-fidelity, photorealistic terrain data and a mix of augmented reality and video game graphics. Is the military's use of AI far more advanced than, for example, what we publicly see available from big tech, like Google or Meta and others?
[00:20:52] BRYCE: Is the military use of AI more advanced? No, in a word. It is undoubtedly not well distributed. Obviously, speaking only to things that are publicly accessible, you can see DARPA is an innovator and is spending billions of dollars pushing the tech envelope forward in AI, and that's great and amazing. However, I would say that, as a whole, the United States military has been relatively slow compared to certain other militaries, and definitely incredibly slow compared to the private sector, to adopt artificial intelligence in any kind of meaningful way. We are still, in many cases, having humans perform functions that not only could be automated but that, in my opinion, we have an ethical obligation to automate, simply because the rate of error when we rely on an exhausted 18-year-old who's been staring at a screen for four weeks in a bunker somewhere is just going to be unacceptable, and it has proven to be catastrophic in the past. We absolutely need safeguards in terms of what we choose to pursue at a policy level and where we decide to deploy AI, and there are some pretty firm commitments the military has made in terms of keeping humans in the loop and requiring AI to be explainable and governable and reliable. I don't think it's a case of anything goes. But, as somebody who works extensively with the military, the thing that scares me is not too much AI but, frankly, not enough of it in certain areas where I think AI could result in a much, much better outcome.
[00:22:39] KRISTINA: I always get excited when people say things like risk and opportunity and look at different technologies through that lens. You've got my vote, and I'm grinning from ear to ear. I like that concept no matter what we're looking at, whether it's the military or anything else: looking at the risks and the opportunities, not just one side of the coin or the other.
[00:22:57] BRYCE: Just before we move on: one of the headline things that people think of in terms of AI in the military, understandably, is autonomous weapons, this idea of a sort of Terminator or whatever. And I understand why people focus on that; it's certainly evocative. But it very much misses the point. The military is just this massive bureaucratic organization that has all the back-office functions of any major corporation. You could spend your entire career just focusing on how to use AI to enhance the way we do payroll, and you'd probably be able to save hundreds of millions of dollars in taxpayer funds just by doing that. So I think that's the other thing that's important to recognize. The US military is, certainly by budget, the largest organization on Earth, and there are all sorts of functions that have nothing whatsoever to do with warfare in the kinetic sense we usually think of, but where AI could and, I think, should be deployed.
[00:23:59] KRISTINA: And we can't forget that DARPA gave us the internet and GPS and some really cool things. Do you think there are other cool things coming from the space of, maybe, secret military R&D that will be applicable to our everyday public life in a good way?
[00:24:19] BRYCE: Well, I actually think the pendulum has swung the opposite way, in that the military is, and should be, looking to the private sector to lead the way in AI research and development, outside of certain very specific areas that are uniquely military. We wouldn't expect, and I don't think we'd want, private companies to be working on hypersonic weapons and things like that. But when it comes to overall budgets and talent, yes, we have some amazing talent inside the government and government-affiliated groups, but the bulk of the talent, and definitely the bulk of the money, is in the private sector. So I think one of the things the military needs to get much better at is recognizing that, and then figuring out ways to work with the private sector to adopt what's being built, which, as you point out, in many cases rests on fundamental research that was paid for with tax dollars to begin with. As an American, I'd like to see our investment returned.
[00:25:23] KRISTINA: I would love that too, and I'm hopeful that we'll continue on that trajectory. Let's find out. But I want to go back to a question that ChatGPT suggested I ask you, Bryce. It's probably the only good question it gave me: how do you stay current on developments in AI, and how do you incorporate them into your work?
[00:25:46] BRYCE: It is a good question. I use a few different methods. First of all, if you have the luxury of people who are smarter than you, that's always going to be the best source, and I am very fortunate to be surrounded by people who are far smarter than I am and who are willing to update me on what's going on. Barring that, you have access to things like Twitter, where you can definitely locate people who know more than you do. I'm pretty active there, though not posting; I hardly ever post. But I follow folks from all the major research groups you can think of: DeepMind, et cetera, my own university, Oxford, and then collaborators I've got at Berkeley and elsewhere. The thing I love about the AI community in general is that it's a fairly open, transparent community, at least up to a point, and oftentimes if you reach out to somebody who's published a paper and explain who you are and why you're interested in connecting, I've found they're willing to take the time to explain things to me, so that's been really helpful. Then there's a handful of newsletters that I really like. I'm happy to give you a list if you want to post it.
[00:27:03] KRISTINA: That would be great. I certainly appreciate the fact that I get to interact with you at The Cantellus Group and learn a lot from you, day in and day out. You've given us a wonderful list of places folks can go to learn. They can also follow you, I'm assuming, through places that are non-Twitter. Where are the best places to get in touch with you or to follow your work?
[00:27:24] BRYCE: Yeah, that's a good question. LinkedIn is definitely an easy place to get in touch with me. I am trying to post more on Twitter, so you can definitely follow me there, and at the very least, I like to repost things other people have posted that I think are worth paying attention to. So those are probably the two best places.
[00:27:45] KRISTINA: Awesome. Well, thanks so much, Bryce, for taking time out today. It's a pleasure to talk to you, and I'm excited that I had this opportunity.
[00:27:51] OUTRO: Thank you for joining The Power of Digital Policy. To sign up for our newsletter and get access to policy checklists, detailed information on policies, and other helpful resources, head over to powerofdigitalpolicy.com. If you get a moment, please leave a review on iTunes to help your digital colleagues find out about the podcast.