I urge IT professionals and digital policy stewards alike to broaden the scope of discussions to include other situations that may require a change in digital policies, such as changes in technology. And here is how to get started!
The COVID-19 pandemic and the accompanying lockdowns happened so quickly that most companies were left to make a “Morton’s Fork” decision: They could either shut their doors until the pandemic burned out or they could have employees work from home on unsecured devices.
Obviously, most did a quick risk assessment and decided that the risk of keeping the business running from newly minted “home offices” was less than the risk of stopping work altogether. But I bet a lot of data security folks (and hopefully their digital policy stewards!) are putting their heads together, trying to figure out how to make such an abrupt shift more safely next time.
But I want to urge IT professionals and digital policy stewards alike to broaden the scope of those discussions to include other situations that may require a change in digital policies. Examples include things like natural disasters (which I addressed here) and changes in technology.
Why it’s important to review your digital policies when technology changes
Addressing this topic properly requires a shift in mindset: We’re way past the point where a company’s digital policies determine their use of technology. To paraphrase a quote from The Tony Blair Institute for Global Change, in today’s world, technology itself determines a business’s operating environment. Rather than adopting new technologies, businesses will have to adapt to them. Once upon a time, for example, Kodak thought in terms of “adopting” digital photography. But they missed the point where it ceased being optional and became necessary. Now, the focus is on adapting to digital photography. And that includes developing digital policies like what kind of images you’ll post on your social media accounts.
The important thing to remember is that, when technology creates the operating environment and businesses operate within it (rather than the other way around), any change in technology can change how a company does business. The corollary is that it’s imperative for changes in technology to trigger a review of an organization’s digital policies.
Let’s take a look at a few examples.
Employees’ personal devices
For many companies, shifting to a “work from home” model pointed out gaps in their policies about the use of personal devices. It’s hard to imagine, for example, that they all had sufficient policies in place to protect consumer privacy when employees were using their own devices to work from home. (Not to mention that simply using an employee’s router constitutes the use of a personal device.)
As personal devices become more powerful and apps extend their functionality, organizations need to give serious thought to policies addressing their use for business purposes. If an employee needs a quick answer from a vendor or customer, for example, it’s a heck of a lot easier to send them a text from your smartphone than it is to wait for your email to surface in their inbox. But does that serve the organization’s business purposes?
Digital policy implications
For each new capability, decide whether you need to create or change digital policies to regulate its use. Think broadly about personal devices: whether storing data on an employee’s personal device violates regulations (e.g., GDPR), or whether some of your secret product designs might walk out the door when an employee leaves the company.
Much like digital photography, artificial intelligence is here to stay. The question of whether businesses will use it is asked and answered; the only remaining questions are how it will be used -- and how digital policies will be developed to make sure it is used according to the organization’s intended purposes and tolerance for risk (reputational, legal, etc.).
Some uses of AI are fairly low-risk -- like using traffic and anonymized cell phone data to give a driver the fastest route to work, deciding which emails should land in a user’s spam folder, or recommending items based on previous purchases.
Other uses, however, are more high-risk -- like using AI applications to make decisions about loan or employment applications. The real danger in situations like these is implicit bias -- meaning that, if the training data is biased, the AI will learn to make biased decisions. Amazon, for example, discovered that its recruiting tool was dismissing female candidates. The problem wasn’t too hard to figure out: The AI was trained on historical data, which was heavily skewed toward men. So the AI learned that Amazon “wanted” to hire men rather than women.
Fixing the problem was harder. The data engineers re-trained the tool to ignore words primarily associated with females, but that didn’t fix the problem. Why not? Because the re-trained tool had simply learned a subtler proxy, favoring words that, in the context of job applications, were more often associated with men -- words like “took charge” or “executed.”
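To make the mechanism concrete, here is a deliberately tiny sketch of how a model trained on skewed hiring history learns to penalize proxy words even when gender is never mentioned. This is not Amazon’s actual system; the resumes, labels, and word-count scoring below are invented for illustration:

```python
from collections import Counter

# Hypothetical historical resumes (label 1 = hired). The training set is
# skewed: the hires happen to use assertive verbs like "executed".
history = [
    ("executed product launch and took charge of roadmap", 1),
    ("executed migration plan for the billing system", 1),
    ("led charity outreach and organized volunteer events", 0),
    ("organized the chess club and mentored students", 0),
]

def train(examples):
    """Count how often each word appears in hired vs. rejected resumes."""
    hired, rejected = Counter(), Counter()
    for text, label in examples:
        (hired if label else rejected).update(text.split())
    return hired, rejected

def score(text, hired, rejected):
    """Naive score: +1 per occurrence among hires, -1 per occurrence among rejections."""
    return sum(hired[w] - rejected[w] for w in text.split())

hired, rejected = train(history)
# Two identical accomplishments; only the verb choice differs.
print(score("executed a data pipeline rebuild", hired, rejected))   # positive
print(score("organized a data pipeline rebuild", hired, rejected))  # negative
```

No resume here says anything about gender, yet the scorer still splits candidates on verb choice alone -- which is exactly the proxy-word trap the Amazon engineers ran into.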
Digital policy implications
The digital policy implications of AI are practically endless. They require data scientists and digital policy experts to work together to determine not only what the organization wants AI to do, but also what any unintended consequences may be. That includes identifying every point at which unintended bias may creep in, from skewed training data to flaws in telling the AI which data points to focus on (as well as how to rank them) and which to ignore. And the answers could extend beyond the digital realm into the real world, like making sure that development teams are diverse enough to minimize the risk of unintentional bias.
The capabilities of AI will undoubtedly skyrocket over the coming years. And each new use should prompt a review of any related digital policies. That’s the only way to be sure it does everything the organization wants it to do and nothing the organization doesn’t want it to.
And what about censorship?
Just as AI determines what you see in your social media feeds, it can also be used to determine what you don’t see -- something that’s garnered a lot of attention during the current pandemic.
Right now, just about everyone is weighing in on a trending topic that, at its core, was made possible by artificial intelligence: the censorship of misleading posts about COVID-19. Twitter, Facebook, and other platforms now have artificial intelligence and machine learning in place that can identify posts that may be misleading (or totally made up) and do anything from blocking them completely to placing a variety of warning labels on them. And they’re using that technology to control which coronavirus posts people see in their social media feeds.
That’s a big issue that extends far beyond the current situation, and it should trigger some soul-searching for both brands and the digital stewards who work to protect them, such as:
- Do we believe in censorship? Is it our job to protect the public from potentially misleading or incorrect information, or should we trust them to make their own decisions? Which position best aligns with our core values and our customers’ wishes?
- Who will decide which information is labeled, blocked, etc.? And will those decisions be permanent, or will they have an expiration date?
- How good is our technology? How likely is it that it will miss some posts that should be censored or block some that shouldn’t be? Twitter, for example, programmed its algorithms to block Tweets spreading the conspiracy theory about a link between COVID-19 and 5G technology. The algorithm did its job almost too well, slapping a warning label on a number of Tweets that contained both terms but had nothing to do with the conspiracy theory. It also missed some Tweets that did contain false information, potentially leading people to believe that, because they weren’t labeled, the information was correct.
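The Twitter example above can be reduced to a toy sketch. This is not Twitter’s actual algorithm -- just a naive keyword co-occurrence rule, with invented posts -- but it shows how the same rule produces both false positives and false negatives:

```python
# Naive moderation rule: flag any post that mentions both COVID and 5G.
def flag_5g_conspiracy(post: str) -> bool:
    text = post.lower()
    return "5g" in text and ("covid" in text or "coronavirus" in text)

posts = [
    "COVID-19 is spread by 5G towers!!!",                          # true positive
    "Our 5G rollout is delayed; COVID-19 hit the supply chain.",   # false positive
    "Phone masts cause the virus, wake up people",                 # false negative
]
for p in posts:
    print(flag_5g_conspiracy(p), p)
```

The second post is an innocent business update that gets flagged anyway, while the third spreads the same conspiracy theory using different words and sails through -- the two failure modes described above, side by side.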
Censorship (even under the more socially acceptable name “moderation”) is always a touchy subject -- one that invariably triggers debates about whether the good outweighs the bad. But the more artificial intelligence is able to identify specific messages by words and context, the more brands will have to develop policies that address whether and how that capability will be used. This is a question of corporate responsibility, but it’s not one that has an easy answer, since the public is strongly divided on the topic of censorship. Developing effective digital policies will require a deep knowledge of who the organization is and who its customers are.
The gig economy and remote work
Thanks to technology that makes it easy for teams to communicate and collaborate with each other remotely, the gig economy was well established way before coronavirus started grabbing headlines and changing daily routines. As far back as 2018, the “gig economy” had grown to 60 million workers in the U.S. alone. Since that figure counts only contract workers, the number would be even higher if you included permanent employees working remotely.
Since the imposed lockdown has proven that remote work is a lot more feasible than some people thought, it’s unlikely that we’ll ever completely return to our former ways of commuting to an office, logging in to a workstation, and commuting back home. After all, as Indi Samarajiva wrote in an article on Medium, "When everyone's face is milliseconds away on a Zoom call, who cares where their butts are?"
Digital policy implications
Assuming that remote work (at least on a part-time basis) will be the new normal, digital policies related to job descriptions and the entire hiring process will need to be brought up to speed. Why include “willingness to relocate” in a job posting, for example, if it’s not really necessary? All it will do is drive the best applicants -- the ones who have plenty of other options -- away.
A bigger issue for many organizations is the security of their data -- and the specifics of their cybersecurity insurance policies, many of which exclude breaches from a device located outside of the U.S.
Therefore, while sourcing talent globally has a lot of appeal, it doesn’t come without consequences. Organizations need to decide how to handle the risks of hiring workers outside of the U.S. and then address those risks through digital policies. Some might decide to hire only workers based in the U.S. Some might be fine with hiring foreign workers if they can connect to a U.S. server in a way that data never leaves the U.S. And some might decide the benefits of sourcing globally are great enough to merit revisiting their cybersecurity policies.
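As a sketch of that middle option -- foreign workers whose sessions must route through U.S. infrastructure so data never leaves the country -- a policy gate might look something like the following. The field names and the single-country allowlist are assumptions for illustration, not a reference to any real product:

```python
# Data-residency gate: a remote session is allowed only if the worker reaches
# company data through an approved, U.S.-hosted gateway (e.g., a virtual
# desktop), regardless of where the worker themselves is located.
ALLOWED_GATEWAY_COUNTRIES = {"US"}

def session_allowed(session: dict) -> bool:
    return session.get("gateway_country") in ALLOWED_GATEWAY_COUNTRIES

# A worker in Germany connecting through a U.S. gateway is fine...
print(session_allowed({"worker_country": "DE", "gateway_country": "US"}))
# ...but a direct connection that would pull data onto a German endpoint is not.
print(session_allowed({"worker_country": "DE", "gateway_country": "DE"}))
```

The point of the sketch is that the policy decision (“data never leaves the U.S.”) has to be translated into an enforceable technical rule -- and the rule keys on where the data flows, not on where the worker sits.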
Measurements and KPIs
As technology reduces decision times and speeds up workflows, the productivity calculation will necessarily change. Just look at grocery delivery services. When they first sprang up, customers were thrilled with the thought of overnight delivery if it meant they didn’t have to make a trip to the store. As delivery services have become more efficient, however, expectations have changed. Customers now expect their groceries in hours -- overnight delivery is no longer impressive.
Digital policy implications
As specific technologies make work faster and easier, digital policy experts need to examine the downstream effects and make any needed changes, like updating the employee review process to reflect the new reality.
Vetting tech vendors
You probably have a standard process for vetting tech vendors...one that makes sure they meet all of the requirements for security, privacy, etc. But has technology outpaced your vetting process? If your organization is moving to the cloud, for instance, do your digital policies provide guidance for evaluating cloud vendors? Or are they stuck in the days when your employees were evaluating physical data centers?
Digital policy implications
As your organization purchases new technologies, make sure your practices for vetting vendors keep up.
Now for the big picture...
I’ve identified some big responsibilities and dropped them right in the laps of digital policy experts everywhere. But I’d bet that, for many of you, your organization lacks the infrastructure you need to do your jobs.
In other words, how are you supposed to know when technology changes?
Fortunately, organizational trends are skewing in your favor. IT as a “buy it and fix it” cost center is riding off into the sunset, leaving behind highly specialized professionals embedded within specific business groups. Because they understand the purpose of the systems they’re tasked with developing, they’re also more likely to anticipate any downstream implications.
And IT professionals aren’t your only sources. Tech plans have to pass through procurement and finance long before they’re implemented, so the people in those departments usually have a lot of insight into what might be just over the horizon (or not!). After all, they control the money!
So I’d start making friends with these folks, hosting monthly meetings where you supply coffee and donuts and ask about the things they’re working on. But don’t be satisfied with simple answers like “AI applications.” You’ll need to ask probing questions about what the AI application will do and if it’s an “upgrade” to a previous AI application -- or something altogether new that’s been made possible by advancing technology.
But that’s just a temporary solution. For a permanent solution, those lines of communication have to be codified into your work processes. At the very least, the tech workflow should include an item on a checklist that says, “Talk to Sally about digital policy implications.” At best, the process would be automated, and you’d be notified any time new tech was purchased, a new cloud vendor was hired, or a new project started. Or, even better, you’d be notified as soon as the request was submitted for approval!
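That last idea -- being notified the moment a request is submitted for approval -- can be sketched as a simple hook in the procurement workflow. The class and field names here are hypothetical, not part of any real procurement system:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ProcurementPipeline:
    """A toy procurement workflow that lets other teams subscribe to requests."""
    subscribers: list = field(default_factory=list)

    def subscribe(self, handler: Callable[[dict], None]) -> None:
        self.subscribers.append(handler)

    def submit_request(self, request: dict) -> None:
        # Fire the hook at submission time -- before approval or purchase --
        # so the digital policy team sees the request as early as possible.
        for handler in self.subscribers:
            handler(request)

alerts = []
pipeline = ProcurementPipeline()
pipeline.subscribe(lambda req: alerts.append(f"Policy review needed: {req['item']}"))
pipeline.submit_request({"item": "cloud vendor contract", "amount": 50000})
print(alerts)
```

However it’s implemented -- a checklist item, a ticketing-system trigger, or a hook like this -- the design goal is the same: the notification is part of the process itself, not dependent on someone remembering to talk to Sally.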
For those of you who have been down in the trenches with all of the upheaval due to COVID-19, what digital policy ramifications have I missed? What’s been keeping you up at night regarding changing technologies and digital policies? Let me know.
Want more practical advice on digital policies? Read the other articles in the “Shifts in Technology” series:
Part 1: Digital policies are no good if they’re static
Part 3: Is there risk hiding in your digital policies?
Part 4: How to set your digital policies up for continuous improvement
Photo by Kurt Cotoaga