#11 The policy of ethics

Guests:

Digital is quickly evolving and is increasingly shaped by the idea of ethics. If you are not factoring ethics into your digital policies, you are missing out on opportunities and creating risk. I look at three specific areas (cloud, data privacy, AI) where ethics questions are particularly important.

Keywords:
Season:
1
Episode number:
11
Duration:
18:50
Date Published:
April 8, 2020

Hello everyone and thanks for joining in! Today I want to talk about ethics and how ethics are influencing digital policy evolution, especially when it comes to areas such as cloud, data privacy, and artificial intelligence.

To understand how we got to where we are today, let’s take a quick look at policy through history.

Some policies, such as those governing signage and logos, are as old as history itself. Consider that early versions of logos were developed in the Middle Ages (around 1300 AD), as shops and pubs used signage to represent what they did. The first modern logo designs were created in the early 1900s, evolving alongside mass printing. Since then, we've gone on to develop all kinds of policies, from language and writing (think of the style in which content is written, whether Associated Press, New York Times, or Chicago Manual of Style for English, with other guides for other languages) to copyrights. With digital specifically, we have evolved to creating policies around search engine optimization (or SEO), taxonomies used to support content management systems, and appropriate content. More recently, those policies have been joined by policies on what ought to be done in social media (whether for official or personal use), user-generated content, disaster recovery, and content localization.

Today we find ourselves pivoting yet again in digital policy, as we (users like you and me, governments, and the industry) take the next step in maturation. We have a set of legacy policies that will continue to be relevant, such as the logo one I mentioned earlier, as well as others, including:

  • Accessibility
  • Cookies
  • Copyrights and protections
  • Data breach response
  • Email marketing/spam
  • Emergency response / business continuity
  • Online piracy
  • Domain names, certificates, email addresses, social media handles, app names
  • Hosting and content storage
  • Search engine optimization (SEO)
  • Technology identification and selection
  • Analytics and metrics collection
  • Appropriate content/prohibited content
  • Branding
  • Content ownership/management

So, while policy is not new, it is being reshaped daily by what I call the policy of ethics. Policy is evolving from the previous plateau of practices such as these to ones that incorporate ethics and accountability. No areas are more impacted these days than cloud, data privacy, and AI.

First, let’s take a look at cloud and how cloud policy is impacted by ethics.

Cloud

Cloud policy typically addresses the business requirements for individuals wishing to use cloud for hosting a digital property, selling a product, or delivering a service. In addition to ensuring security and privacy, it also addresses uptime, requirements for geographical localization of content and services, among other business-assurance concerns. The types of considerations historically factored into the cloud policy include:

  • Do cloud vendors meet the same legal and regulatory compliance requirements that the business applies to traditional, on-premises hosting?
  • Is there a clear cloud assurance policy in place and incorporated as part of all cloud vendor contracts?

But now, cloud policy is being influenced by ethics. That means altogether new considerations, including:

  • How green is your cloud? While cloud computing is generally more energy efficient and has a smaller carbon footprint than on-premises server rooms, not all clouds are created equal. Are you doing right by the environment, in an ethical way that aligns with your customers' expectations?
  • If you have highly sensitive data, is it better to use strong encryption and host on premises? An on-premises server room that implements energy-efficiency best practices can be a greener alternative to a “brown” cloud. That may result in your team asking vendors questions such as:
  • Server utilization factor: How much of the server’s total processing capacity is utilized effectively?
  • Hardware efficiency: What is the energy efficiency of the servers, data storage, and networking equipment used in the server room and data center?
  • Electricity carbon emissions factor: What is the carbon footprint of the electricity used to power the data center?
  • Power usage effectiveness (PUE): How efficient is the facility housing the servers, including cooling, power distribution, and lighting?

Should you be offsetting the carbon emissions from your cloud hosting, and if so, how will that happen? Most certainly it is a decision that needs to reach back not only to the C-suite, but to the board of directors.
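To make that conversation more tangible, here is a minimal back-of-envelope sketch (in Python) of how the metrics above, such as utilization, PUE, and the carbon intensity of the electricity supply, can combine into a rough annual emissions estimate. The function and every number in it are illustrative assumptions for comparing options, not vendor benchmarks or an official methodology.

```python
# Illustrative back-of-envelope estimate only; all figures are assumptions,
# not benchmarks or vendor data.

HOURS_PER_YEAR = 24 * 365

def annual_emissions_kg(it_power_kw: float, pue: float,
                        grid_kg_co2_per_kwh: float,
                        utilization: float) -> float:
    """Rough annual CO2 estimate for a hosting option, normalized to useful work.

    it_power_kw          -- average draw of servers, storage, and networking
    pue                  -- power usage effectiveness of the facility
    grid_kg_co2_per_kwh  -- carbon intensity of the electricity supply
    utilization          -- share of capacity doing useful work (0-1)
    """
    facility_kwh = it_power_kw * pue * HOURS_PER_YEAR
    total_kg = facility_kwh * grid_kg_co2_per_kwh
    # Dividing by utilization penalizes idle capacity, so two options can be
    # compared per unit of useful work rather than per watt plugged in.
    return total_kg / max(utilization, 0.01)

# Hypothetical comparison: a small on-premises room vs. a greener cloud region.
on_prem = annual_emissions_kg(it_power_kw=5, pue=1.9,
                              grid_kg_co2_per_kwh=0.4, utilization=0.20)
cloud = annual_emissions_kg(it_power_kw=5, pue=1.2,
                            grid_kg_co2_per_kwh=0.1, utilization=0.65)
print(f"on-prem ~ {on_prem:,.0f} kg CO2e/yr vs. cloud ~ {cloud:,.0f} kg CO2e/yr")
```

Even a crude model like this can make the offsetting question concrete enough for the C-suite and the board to weigh.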

In summary, cloud policy is being influenced by ethics, and your organization needs to take these changes into consideration. Failing to do so now may not pose an immediate risk, but in time it will shape the perception prospects and customers have of your organization and brand. That is not only a business risk but also an opportunity, if you can land on the right side of the equation.

Now, let's take a look at data privacy, how ethics is shaping the way we think about privacy, and what your organization should address to balance the risks and opportunities emerging in this arena.

Data privacy

For years now, most organizations have focused on data privacy by writing hard-to-read legalese and posting a link to it in the footer of every page. The privacy policy has been heavily focused on the organization, citing the rights of the organization to own data collected from users, and noting that by continuing to use the site rather than abandoning it, users consented to give away all of their data. The tide has been changing for a while with new regulations in place. The biggest wakeup call came with the EU's General Data Protection Regulation (GDPR), but make no mistake about it: before GDPR there were privacy laws, they just were not as strong nor as enforced as the EU regulation. Most organizations these days know well that GDPR is a thing and that the California Consumer Privacy Act (CCPA) is just another data protection requirement being imposed on businesses by a regulatory body. But there is a whole element of ethics that is starting to permeate data privacy.

As technology becomes an increasingly important part of people's lives, data ethics must be translated into sound business practices to ensure that both internal and external interests are balanced. This begins with considering the human impact from all sides of data use, the impacts on people and society, and whether those impacts are beneficial, neutral, or potentially risky. That is why organizations today are asking themselves questions like the following, and demonstrating to their customers the values behind their answers:

  • What is fair with regards to data collection?
  • What is the right thing to do?
  • What, beyond our immediate selves, ought we to be thinking about and doing, and are we adopting those practices across all channels, including web, social media, advertising, mobile applications, and voice-enabled assistants?
  • How, if at all, does our approach to data privacy differ by market?
  • Are we paying attention to all aspects of data privacy, including privacy-savvy defaults, in-product transparency, considering and documenting privacy risks and data flows, and assigning data owners accountable for data security up front and throughout the data lifecycle?

Organizations are having to contend with these policy shifts as the economy continues to globalize and more of our physical lives move online. It should come as no surprise that what started out as new marketing practices responding to an implied user push is now becoming a baseline consideration, affecting new technology and data uses beyond the common user's understanding. Data ethics and its impact on privacy is becoming a board-level topic. And while it is a costly endeavor, it is one that in the long term is a sound digital policy for most organizations, as it will drive customer loyalty, competitive advantage, and the business bottom line.

In summary, data privacy is quickly evolving to a new level of consumer, regulatory, and business awareness, heavily shaped by how we think about ethics. Yet again, these changes may not pose an immediate risk or opportunity, but this is the time to consider and make appropriate changes in your digital practices. If you choose not to act, you should at least identify the triggers that will cause you to act or reassess in the future. Not paying attention will have a greater impact than most of us can foresee.

Now, let's talk a bit about artificial intelligence, which has been around for a while but which organizations are only now commonly embracing for digital marketing and communications. Not surprisingly, this area is a keystone of the conversation when ethics arise.

AI

Have you ever considered that if AI can generate human-like output, it can also make human-like decisions? Not only can it do so, but it is doing so already. The problem is, like much of anything in digital, we are building and doing first and not stopping to think about the implications of what we are doing. As a result, we are losing sight of the moral principles by which we will be judged, by customers and future generations alike. Therefore, it is important to start factoring ethics into AI and into what you are doing with regards to products, services, marketing, and communications.

As of late, I am hearing organizations evolve (and rightfully so) from basic considerations, such as:

  • Are algorithms we are using for AI biased?
  • Have we included the right set of developers and stakeholders to ensure we uncover what we don’t know or have not thought of?

And toward more ethically focused questions. They include things like:

  • Is the AI we are developing universally good or only for some?
  • Is it good under certain contexts and not in others?
  • How will we know if what we are creating is good? What are the ways that what we are using can be exploited?
  • If the technology we are developing falls into the wrong hands, what are the worst possible outcomes? How do we mitigate against them?
  • What moral obligations, if any, do we have? How do those moral obligations align with our mission and balance out with our business objectives?

In summary, as awareness of AI bias and the prospect of societal harm increasingly enter our consciousness, the bottom line is that your organization needs to place ethics at the core of its digital policy. If you are already doing AI in an ethical way, that is great! But you need to keep asking the questions regularly, looking for new and different voices in the conversation. Ask your colleagues about ethics in the AI space and make sure to seek external input and voices as well. If you have not yet started using AI or have not factored in ethics, this is the time to think about evolving your policy and practices.

So, cloud, data privacy, and AI: three of the biggest areas of digital policy impacted by ethics today. But that is not the end of the conversation. Ethics, as we know and think about them today, will continue to evolve and influence other areas of digital policy as well.

This month, The Power of Digital Policy is focused on what I call the policy of ethics. My guests include Pernille Tranberg, an independent speaker and advisor on data democracy, data ethics, data understanding, and digital self-defense for companies, authorities, and organizations. She will share with us perspectives on what it means to incorporate ethical data practices into digital. We will also talk about ethics in AI for startups with expert Shalini Trefzer, with lots of great advice not just for startups, but also for small businesses looking for new markets and balancing ethics and growth.

As digital technology evolves, our organizations must adjust. We are already seeing a new wave of support amongst the younger generations for socially minded and more inclusive policies. This is the time to consider your stance and how you will proceed moving forward, from leveraging AI in products and services, to its use in marketing, communications, and customer support. Weigh your options and balance your risk and opportunity, in order to drive innovation and creativity within your organization and among all digital workers.

Until the next episode, be well and do good policy work.

DataEthics (politically independent ThinkDoTank) - https://dataethics.eu

Center for Digital Ethics and Policy - https://www.digitalethics.org
