#13 Leading your business to responsible AI

Guest:
Shalini Trefzer
Shalini Trefzer is based in the Basel region in Switzerland. Since 2016, she has helped over half a dozen emerging AI and data companies expand their enterprise footprint in Switzerland and Europe across manufacturing, healthcare, smart grids, energy, and, to a lesser extent, financial services. Discussing use cases with business and technical decision-makers on leveraging data to create a competitive edge is a regular but exciting part of Shalini's business week. She often serves as a sounding board and negotiating partner for CEOs.

Continuous learning through close engagement with industry leaders, machine learning experts, and academics helps Shalini deliver quick results. She has mastered the art of juggling a portfolio, so you benefit from her intersectional knowledge right away when you work with her. Shalini has a B.S. in Environmental Resources Engineering. After initially working on hazardous waste management projects in civil engineering consulting, Shalini embarked on her career in technology, during which time she served as an engineering program manager for Cisco Systems' CRS-1 terabit routing product. Additionally, she worked as an operations manager in drug development at Novartis. Deeply convinced of the power of lifelong learning, Shalini is currently investing in yearly AI-related coursework at MIT. Since summer 2019, she has been the Executive Director of the think tank World in 2050, of the global affairs media network Diplomatic Courier.

It is easy to get wrapped up in a great A.I. idea and hard to build it into a solid, reliable business. It is harder still to run a trustworthy business without A.I. ethics. Shalini Trefzer provides us with ample resources and advice to get this policy area right.

Keywords:
Season:
1
Episode number:
13
Duration:
27:26
Date Published:
April 29, 2020

KRISTINA PODNAR, HOST: Welcome to another episode of The Power of Digital Policy, where I bring you subject matter expertise and ideas necessary to create sound digital policy and practices for your organization. Today I am happy to have with us Shalini Trefzer, whom I have known for many years, and we will be discussing ethics in A.I. Since 2016, Shalini has helped over half a dozen emerging A.I. and data companies expand their enterprise footprint in Switzerland and Europe across manufacturing, healthcare, smart grids, energy, and, to a lesser extent, financial services. Discussing use cases with business and technical decision-makers on leveraging data to create a competitive edge is a regular but exciting part of Shalini's business week. She often serves as a sounding board and negotiating partner for CEOs. Deeply convinced of the power of lifelong learning, Shalini is currently investing in yearly AI-related coursework at MIT. Since the summer of 2019, she has been the Executive Director of the think tank World in 2050, of the global affairs media network Diplomatic Courier. Shalini joins us from Basel, Switzerland, one of the A.I. hubs of the world, and I honestly can't think of a better person to have with us today to discuss ethics in A.I. So, Shalini, welcome.

SHALINI TREFZER, GUEST: Thank you. Thank you, Kristina. It's wonderful to be here and to be speaking with you. I must say that I'm not an expert in ethics by any means. However, I do work with a number of companies who are deploying A.I. to solve problems in several different industry verticals as well as in interdisciplinary efforts. So naturally, the topic of what doing the right thing means comes up quite a lot. And so I'm very happy to explore it and offer a couple of resources.

KRISTINA: Well, I don't know, you know, I personally kind of consider you my go-to person. I think it's hard to be an expert in everything A.I., and I think ethics is a slice of A.I. people can specialize in, but also, we have to recognize that it's a new frontier in many different ways. And so "expert" is somewhat of a relative term here. But let's just jump in. You mentioned that obviously you work with incubators. You work with startups in the A.I. space. Tell us: what role, if any, do ethics play right now in that space for businesses? Are folks thinking about ethics? What does that look like?

SHALINI: I want to say yes, and essentially I just wanted to sort of set the context of the kind of work that I'm doing. So you know, I think that's important because there is certainly a lot of difference between somebody who is pre-funding or very early stage, still sort of in an ideation phase or someone that's doing a pilot versus, you know, somebody that's got plenty of training data and they're moving forward into commercialization.

So it's a different world just by the nature of the work that each of these stages takes. I did work with a lot of early-stage companies in the beginning who were actually looking to see how they could solve narrow A.I. problems. By that, I'll offer an example. Let's say that you've noticed that there is a problem to be solved in telemedicine. You're thinking ahead to when cities are getting really huge, and maybe there's a lot of traffic involved in having to go to a doctor, and what if that telemedicine doctor could diagnose you for the common cold over the phone just by listening to your voice? Is this possible? Is this useful? Which population groups does it serve, and how can you use A.I. to do something like this? That's just one of the many different types of problems that I've seen people trying to solve. And from there, my own work has actually shifted quite a lot now to companies that are either established and scaling up, or organizations who are not necessarily developing an AI-based product but who are convening thought leadership, as well as writing grants and looking to raise funds to deploy solutions to various problems that could involve A.I. So it spans quite a lot.

KRISTINA: Your experience ranges, Shalini, from having worked with startups to now working with more established organizations and those looking for grants, many of which have an A.I. aspect involved in their product or service. Talk to us about the startup arena, specifically. Is it a struggle for startups to slow down and consider ethics? Usually startups are trying to build out an MVP so that they can get funding, so these two things seem at odds.

SHALINI: I think that it depends on who you are and what your team is capable of, you know, what kind of resources you have access to. I believe that there are certain people who are narrow problem solvers, and then there are others who are solving a wider problem, and I believe that the business world, as well as the work of enablement, are by nature big problems. They are the big view. So they're basically solving for the problem at the most interconnected level possible. And if you do that, then all the little decisions and the narrow application decisions will sort themselves out, because you've built the right framework, and I think some companies are extremely good at doing this. It doesn't mean that your A.I. has to solve every single problem for every single vertical; that's not the point. It's more about saying: here's the vertical in which I'm solving this problem, but what is the biggest, first level of this problem? What does it intersect with? If I'm increasing the efficiency of a machine, what is the highest level at which I can solve for that problem? Can I enable that workforce to learn something and improve themselves? Can I enable a machine operator to become a data operator? You know, that's just one example, but there are certain levels in setting up a framework of thinking, and if you operate at a very high level, which means that you look at the intersections within your framework and include them in your work from the start, then from that point on, you can become very fast. I think it can be a learned skill, and some people are extremely good at it just naturally. Others aren't, but that's not a reason for people not to practice and improve their ability to think at the highest level of the problem constantly.

KRISTINA: I love the fact that you're referencing a Swiss report. That's a go-to for you. It sounds like there's an excellent framework or at least a perspective that's been put out there for consideration.

SHALINI: I know that the European Commission has published its report on the impacts of automation, A.I., and robotics on the economy as well as on topics of the future, so they have that report, for instance, and they've made recommendations for European Union countries. You know, in my work I do see a difference between Europe and the U.S., and I don't know if this is being driven necessarily by any A.I.-specific topic or if it's a cultural thing, but in the U.S., I think that companies which are building algorithms or solving problems using A.I. are able to pilot a lot faster with new customers, and people are willing to try out their technologies and their applications a lot more so than in Europe or in Switzerland. The consideration of impact is done ahead of time here much more so than in the U.S. So this is one difference. Now, what happens as a practical consequence of this difference is that by the time an A.I. solution has come to a pilot in, let's say, Switzerland, a lot of these unintended consequences have at least been discussed. They've been considered, they've been given a little bit of air time or a lot of air time, and people have thought through how they would like to see successful enablement happen. I don't see that necessarily being done ahead of time in the U.S., and there's a lot of willingness to sort of build as you go along, and of course, it makes things a lot more agile, but I find that taking a little bit of time to think through the enablement jointly between the provider and the customer is just absolutely fantastic for the long term. And this is where it gives you an opportunity as well to think about ethics and to think about the social acceptance of your eventual solution. It gives you that bit of time to really put those pieces in place.

KRISTINA:  So do you essentially think that Swiss companies are at a disadvantage, for example, compared to companies in the U.S. who are focused on A.I. because they are considering the ethics upfront? Or does that sort of net itself out over the lifetime? How does that work in your perspective?

SHALINI: No, I don't think there is any kind of disadvantage, because ultimately, if you are in Switzerland and building an algorithm for anything, your local market is really small; that's eight and a half million people. So obviously it's extremely likely, and not just likely but I would say almost a given, that you will reach out to other markets. The Swiss have done business like this pretty much throughout their entire history, so I think it's no different in the field of A.I. And again, this is purely my opinion, because I don't have an A.I. company in Switzerland, but I think that the design process is done very nicely. It's well thought out, and there are a number of interdisciplinary experts who are actually available to support the companies in their development phases. It's a slightly different, very well-nurtured environment, I want to say. And I'm not saying that it isn't that way in the U.S., but I do feel things there are a lot more "let's give it a shot, let's break things, let's break it fast and keep moving forward," and you can't really do that in a smaller market. You have to be careful. You've also got your reputation to think about; you have limited resources because it is a smaller market, and so you have to use them a lot more carefully than if you had the type of venture funding that flows into the U.S.

KRISTINA: You can't see me, but I'm grinning because I'm all about balancing the risks and the opportunities, right? That's really what digital policy is. And I think you've netted that out for us really well, right? Which is, it's about balancing those two things out and…

SHALINI: ..absolutely, yeah, absolutely, because you've got to think, too. I haven't done this research, but here's my speculation: if I were to follow the life cycles of startups that are building A.I. solutions here versus, let's say, in Silicon Valley, and if I took a pool of ten startups there and ten here, all of which started in the same month of the same year, and followed their trajectories, I suspect that any given Swiss startup would have a longer lifespan than the one that started in the same month of the same year in the U.S. There are a lot of transitions in the U.S., and I think things are just more fluid there than they are in Switzerland. Here you really have to think about who you bring onto your team, what your customers want, and whether the market will support you or not. These are considerations you have to be extremely serious about here, and they have long-term consequences. So if we tie this back to the whole question of ethics, you have this sort of push to think before you build or think before you design, and I think that's actually a conducive environment for ethics.

KRISTINA: So that raises the question, I think: is there a point where it's too late for ethics? Is it almost like we've built a six-story building and we just realized that we needed to add an elevator? Can we still add the elevator in, or..

SHALINI: We have to, right..

KRISTINA: ..well, I mean, maybe you don't have to, right? I mean, can you do it? Maybe you can't. Say, you know what, you're stuck with stairs to the 6th floor forever. I mean, you know, is it ever too late to build ethics into A.I.?

SHALINI: I don't think so. I think it should be done from the beginning. It should be done in the thinking phase, and it needs to be done as part of the ultimate enablement of a customer, and I think that companies who don't do it will ultimately self-destruct, because I believe that the world itself is changing in ways that are going to make unethical applications simply unpopular. That said, I don't want to imply that if you refuse to consider it in the very beginning, you're doomed to failure. I mean, I can't predict who's doomed to failure; that's nothing I can do. What I'm trying to say is that as a society, we need to be more and more careful about ensuring that enablement happens, and as a business, we need to ensure this, because if we don't, somebody else will, and they will do a much better job at it, and the customer will go to them. So I think that both commercially as well as societally, we are in a place now where we must think these things through very carefully before running off and deploying a product.

KRISTINA: And so let me come back and ask you the same question again, because I think that we've touched upon a really important point, which is: yes, it's important to build in ethics, and it's especially important longer-term, not just because it's the right thing to do but because somebody else is going to do it, and they're going to do it better than you, and ultimately you're going to lose out. But is there a point where it is too late to build in ethics? If you haven't done it at the beginning or early on in the process, can you do it later on? Is it possible, from your perspective, for an A.I. company to not consider ethics as a startup and then build it in after they do a proof of concept? Is it just harder, or is it impossible? What are your thoughts on that?

SHALINI: I think after a proof of concept, certainly you can. You should in fact be set up in such a way that you can pivot on unintended consequences at various stages of your maturity as a company, and this is something you just have to do; that's part of the resourcing. Outline a plan for what you need: okay, here I'm doing my proof of concept, or I'm running a pilot, or I've even gone commercial, and I'm starting to notice things in a new market that I hadn't prepared for. You should be in a position to adjust to that. Now the question comes in terms of: do you have enough resources, access to resources? Because, like anything else, construction after the fact is expensive, and it will cost you time as well as money to change unless you've built yourself in a flexible way from the beginning. Can you survive that? I think this is a question every business needs to ask itself. And another thing that I've learned from what I've observed is that A.I. is, to many people, a buzzword, and a lot of people who would be better off not writing algorithms and simply providing a good product, such as a customer service platform, are out there trying to figure out how they can bring A.I. into the mix. That's also a very interesting dilemma, because what are you enabling at the end of the day? If you can enable without writing self-learning algorithms, then that's what you should do. So I think that again brings us back to this whole question of clarity. Those businesses who haven't thought through their product and have basically built a technology that cannot be modified quickly or in an affordable manner, perhaps they haven't thought the business itself through very well either. They run the risk of not being able to continue their business because of that, and I think that's a very real risk, and it's not just because of ethics.
It's a number of topics, starting from: do you even need to deploy any kind of artificial intelligence?

KRISTINA: What are the things that leaders need to be thinking about? Like, how do they know what they don't know? They might not be really thinking about all of these different aspects of ethics, or as you said, they might be excited because they have a new concept, but the concept has more than one way of getting realized. What are the things that they should be thinking about, or what are the resources in place that they can turn to that you've seen? Is there like a secret checklist out there somewhere?

SHALINI: Yes. Actually, there are a couple of extremely good resources. There's a book called Link, written by Dr. Lorien Pratt (https://www.lorienpratt.com/). Dr. Pratt is a 30-plus-year practitioner of machine learning and one of the pioneers of the commercial applications of machine learning, and she has written this book essentially about decision intelligence, and it covers a lot of topics around the unintended consequences of artificial intelligence. For me as well, this is one of the reference books I use, and I would recommend that people read Link; it's an extremely readable book. If they like, they can also go to her company's website, which is Quantellia (https://quantellia.com/), with two L's, and there Lorien has a number of links to her blogs, her writing, her TED talks, and so on. So I would recommend that as a great resource, because she's extremely involved in commercializing A.I. solutions. She also has an academic background, a teaching background, plus a scientific background. I think she brings all of this together beautifully, and this is something I would encourage business people to educate themselves with. And then the other thing is, as a business person, start with your society's standards bodies. So I go to this SATW report, for instance, because they have brought together a number of cross-disciplinary policy experts from Switzerland into the writing of this report. So I know that there's a good perspective here from academia, from the government, and from the bodies who are concerning themselves with how to standardize and regulate. Those are the keywords I look for: what is the standardization body or bodies?
And what is the regulatory body or bodies, and which kinds of reports are they participating in writing? I pick the report that's the easiest for me to read and comprehend, and I use that as a go-to; in my case, it's the report that I mentioned to you, the SATW report on the state of A.I. in Switzerland. That's a good report for me, and the E.U., as I mentioned, has its recommendations on A.I. I have access to both of those reports; I can send you the links, and you can share them if you like. But in every country, there are bodies of policymaking as well as academia which concern themselves with defining ethics within the field of A.I., with how to build some manner of standardization around this, with how to involve the societal, political, and philosophical points of view, and with figuring out how democracies can adapt to coexist with A.I. and benefit from it. So these are the bodies of knowledge that I think should be in everyone's tool kit, and you can start with your country and then expand from there as your own knowledge and comfort grows with researching and implementing these topics. This brings me to another point, which is that a lot of the reason why people don't do this kind of homework is because they can't afford it, and they may be saying to themselves: yeah, okay, fine, if I find out that there happens to be an ethics policy and implementing it will put me out of business, then what am I going to do with that? The answer to that question, as hard as it is, is: take a good look at your business plan. If you are a viable enablement company, if there really is a use for this so-called A.I. that you're developing, then you will attract investment. It's as simple as that. It cannot be that you have an amazing product that the industry needs and no one's willing to pay for it.
So if no one's willing to pay for it, you need to go back and look at your business plan, not cut ethics out. It's a very hard message, but I think it's a true message.

KRISTINA: That's actually great advice. I think that, for me at least, that hits the nail on the head, right? Which is making sure that you understand where the problem is. And I think it drives to another point, which is: it sounds like if you do the right thing, you have a good business model, and you include ethics in the base of that product or service that you're developing, there's actually really great potential from a valuation perspective. If you're looking for a buyout, or you're looking at an acquisition, you're setting yourself up for increased profitability. So it's another way of thinking about the problem that is going to get you a good end result or an exit.

SHALINI: Definitely. And for that, you know, there are a couple of places I can suggest. One is The World in 2050, the think tank of the Diplomatic Courier, and just for the sake of transparency, I'm the executive director of that think tank. So yes, I've got a stake in what I'm saying here, but I think that it's actually a very useful resource because it lays out seven major categories of problems that we are solving for in the future.

It touches upon the futures of humanity, society, travel, health, off-world civilizations, artistic visions of the future, et cetera. So it gives you a nice sweep of the sorts of clumps of areas of the future that different people are trying to build. Pick, let's say, one of those areas and then go to the WEF, the World Economic Forum, because what they've done this year is write down the connection between economic stagnation, unrest among the global public, and climate change. And if you follow through with all of the points that are discussed underneath that, you'll see that enablement and ethics are inherently built into the recommendations that they make to businesses for the year 2020 and beyond. So you, as a company, can use that as a sort of basis to say: okay, well, what does ethics look like? What are the different types of thinking around it? How can I integrate that into my business? And how do I sell it forward to my customers? Because all of the arguments that you can make can be found within the discourse of various corporate leaders and government leaders that the WEF captures on its websites. So, the World Economic Forum: it's https://www.weforum.org/. And for The World in 2050, it's https://www.cocreate.world. These are some resources that I can share off the top of my head, but I'm always talking to people, you know, as a businesswoman. I'm talking to people, talking to those that I know are working in the field of ethics, asking them questions, asking them for new research that's come out. How do I tie that into my work? How do I use that to create a unique selling point for a particular A.I. product? This is homework that we simply have to do. There are no two ways around it.

KRISTINA: Great. Well, thank you so much, Shalini, for taking the time today. We really appreciate you being with us, and what a great perspective on business ethics and A.I. I'll make sure that we post all of the resources you shared with us. And I look forward to having you back on and talking about this some more in the future.

SHALINI: Thank you so much for this opportunity, and I really wish you the best, and I hope that your podcast supports people who are, you know, in the middle of this search for how they can use A.I. to enable value in the world.
