For a long time, the EU’s General Data Protection Regulation (GDPR) was barely a blip on the radar for American companies. That changed on May 25, 2018, when many companies in the U.S. started to realize that the law’s reach extends far beyond the EU’s borders.
So far, most of the conversation has revolved around marketing, with various thought leaders offering their input on what the GDPR will mean for things like building email lists and using targeted marketing.
But there’s another aspect of the legislation that could have just as big an impact: automated decision-making. And it will affect everyone from banks and mortgage originators to insurance carriers and, yes, even marketers.
What does the GDPR say about automated decision-making?
The legislation addresses automated decision-making in Article 22. It applies to situations in which businesses use algorithms to make automated decisions that could have a significant impact on a person’s life. Examples include loan or employment applications that are approved or rejected based on data, without human intervention. The goal is to give individuals the right to know how those decisions were made and to request a human review.
The Article 29 Working Party -- the group of EU data protection regulators charged with interpreting the legislation -- has spent a lot of time trying to figure this out. That’s resulted in a lot of legalese and haggling over particular words, but the group has had a tough time pinning down what Article 22 will mean in real-world business situations.
Nonetheless, since the potential implications of Article 22 are so huge, all businesses that use automated-decision making need to be thinking about this. So even if I can’t give you rock-solid answers at this point, I want to bring you up to speed on some of the discussions that have been taking place and the questions that still remain.
Does Article 22 grant consumers affected by automated decisions particular rights, or does it prohibit such decision-making entirely?
Some people think Article 22 prohibits automated decision-making (any decision based on algorithms without human involvement) altogether.
Others, however, think that, instead of prohibiting such decision-making, Article 22 provides consumers with a way to get more information about what data was used and how their particular data led to the decision they received. They also believe it gives consumers the right to human intervention.
What does that last part mean?
Good question. One interpretation is that the requirement to provide an explanation could be satisfied merely through providing the consumer with an overview of the algorithm itself. Others think it requires an explanation in terms that can be easily understood by average individuals who don’t have a background in data analysis. That could look something like this:
“Our models use historical data to predict the likely outcome of giving a particular customer a loan. The data points we’ve found to be relevant are how often the customer has a negative checking account balance, how old they are, and whether they have children. Historical data shows that combinations of those factors tend to result in predictable outcomes, and that’s what our algorithms calculate.”
Alternatively, some think it requires an explanation of that individual’s particular decision, identifying the exact data points that determined the outcome and explaining why those data points had the impact they did. Here’s an example of that approach:

“The factor that had the biggest impact on the decision was the number of days you’ve had a negative bank balance over the past six months. The tipping point is somewhere around 30 days, depending on other factors. Your account had a negative balance on 45 days over the past six months, and that’s why your application was declined.”
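To make the contrast concrete, here’s a minimal sketch of how a per-decision explanation like the one above might be generated. It assumes a simple linear scoring model; the feature names, weights, and approval threshold are all invented for illustration and are not from any real lender’s system:

```python
# Hypothetical sketch: generating a per-decision explanation from a
# simple linear credit-scoring model. Feature names, weights, and the
# decline threshold are invented for illustration.

# Model weights: in this sketch, positive contributions push toward decline.
WEIGHTS = {
    "days_negative_balance_6mo": 0.08,
    "age": -0.01,
    "has_children": 0.2,
}
THRESHOLD = 3.0  # scores at or above this are declined

def explain_decision(applicant: dict) -> str:
    # Contribution of each feature = weight * applicant's value
    contributions = {
        name: WEIGHTS[name] * applicant[name] for name in WEIGHTS
    }
    score = sum(contributions.values())
    outcome = "declined" if score >= THRESHOLD else "approved"
    # The feature with the largest absolute contribution drove the decision
    top = max(contributions, key=lambda k: abs(contributions[k]))
    return (
        f"Your application was {outcome}. "
        f"The factor with the biggest impact was '{top}' "
        f"(your value: {applicant[top]})."
    )

print(explain_decision(
    {"days_negative_balance_6mo": 45, "age": 30, "has_children": 1}
))
# -> Your application was declined. The factor with the biggest impact
#    was 'days_negative_balance_6mo' (your value: 45).
```

Even for far more complex models, the underlying idea is the same: trace the decision back to the specific inputs that moved it, then state those inputs in plain language.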
As far as “human intervention,” there is disagreement there, too. Some experts say the law merely requires businesses to give consumers an opportunity to express their concerns to a human representative. Others take it further and believe the law gives consumers the right to appeal the automated decision and to have it made by a human instead.
What kind of decision qualifies as one that has a “significant impact” on a consumer?
This is another good question that hasn’t been thoroughly answered yet. The Article 29 Working Party says that the decision must have the potential to significantly influence the circumstances, behavior, or choices of individuals.
Clearly, that would include things like automated decisions about bank loans, credit card applications, etc. But then there’s that pesky word, “choices.” What does it mean to limit a consumer’s choices?
The Working Party has suggested that it could include things like targeted advertising, especially when using specific data points to present certain types of advertising. One example would be using things like a consistently low bank balance, job history, and ZIP code to present ads for a business that provides payday loans. That could be construed as influencing a consumer to consider such high-interest loans over other, more financially sustainable options.
The GDPR already requires businesses to get explicit consent before they can contact consumers with promotional materials. In this case, they’d likely have to get separate consent to use a consumer’s personal data to determine which offers they receive.
Does the law apply only to the personal data of specific individuals?
No. Article 22 also addresses profiling, which the GDPR defines as “any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.”
In other words, businesses can collect data from multiple sources -- everything from social media posts to the Internet of Things. They can then group people according to certain commonalities -- age, race, income level, etc. -- and, from there, make predictions about how a particular individual is likely to behave.
Article 22 treats automated decisions made by profiling no differently than those based on a specific individual’s data.
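The mechanics of profiling can be sketched in a few lines. This is a deliberately toy example, assuming invented field names and data: individuals are grouped by shared attributes, and a new person’s behavior is predicted from their group’s average.

```python
# Hypothetical sketch of profiling: group individuals by shared
# attributes, then predict a new person's behavior from the group
# average. All data and field names are invented for illustration.

from statistics import mean

population = [
    {"age_band": "25-34", "zip": "10001", "clicked_loan_ad": 1},
    {"age_band": "25-34", "zip": "10001", "clicked_loan_ad": 0},
    {"age_band": "25-34", "zip": "10001", "clicked_loan_ad": 1},
    {"age_band": "55-64", "zip": "94103", "clicked_loan_ad": 0},
]

def profile_prediction(person: dict, population: list) -> float:
    # Find everyone in the same group (same age band and ZIP code)...
    cohort = [p["clicked_loan_ad"] for p in population
              if p["age_band"] == person["age_band"]
              and p["zip"] == person["zip"]]
    # ...and predict this person's behavior from the cohort average.
    return mean(cohort)

new_person = {"age_band": "25-34", "zip": "10001"}
print(profile_prediction(new_person, population))  # cohort click rate: 2/3
```

Note that the prediction about `new_person` is made without using any behavioral data about that individual at all -- which is exactly why Article 22 treats profiling-based decisions the same as decisions based on a person’s own data.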
There are obviously still a lot of unanswered questions about how Article 22 should be interpreted and how it will be enforced. That puts a lot of organizations in the unenviable spot of having to make important business decisions without complete information.
However, I wouldn’t recommend a “wait and see” approach. There are common-sense steps you can take now that will reduce your risk and demonstrate a good-faith effort toward compliance (something that can carry a lot of weight when it comes to such sweeping legislation).
Here are some things to consider doing now:
Compile simple explanations of your processes -- preferably with visuals -- and make them easily accessible to consumers.
Provide consumers with several easy ways to contact a human representative of your company.
Identify the people who will work with consumers who have questions or objections.
Clearly identify who, if anyone, has the authority to override an automated decision.
Test your algorithms to make sure they’re working as expected and aren’t generating results that don’t make sense or that could be seen as discriminatory.
Conduct a cost/benefit analysis of changing your processes so that all decisions are reviewed by a human.
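On the algorithm-testing point above, one common screen for potentially discriminatory results is the “four-fifths” rule of thumb: the favorable-outcome rate for any group shouldn’t fall below 80% of the best-performing group’s rate. Here’s a minimal sketch, assuming invented group labels and decision data:

```python
# Hypothetical sketch: screening automated decisions for disparate
# impact using the "four-fifths" rule of thumb. Group labels and
# decision data are invented for illustration.

from collections import defaultdict

def approval_rates(decisions):
    # decisions: list of (group, approved) pairs
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def flags_disparate_impact(decisions, ratio=0.8):
    rates = approval_rates(decisions)
    best = max(rates.values())
    # Flag any group whose approval rate falls below ratio * best
    return [g for g, r in rates.items() if r < ratio * best]

sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 50 + [("B", False)] * 50)
print(flags_disparate_impact(sample))  # ['B'] -- 0.5 is below 0.8 * 0.8
```

A flag from a screen like this isn’t proof of discrimination, but it tells you which results need human scrutiny before regulators -- or consumers exercising their Article 22 rights -- find them first.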
In addition, one of the most important things you can do right now is to monitor updates on the GDPR website, the UK’s Information Commissioner’s Office, CSO, Shoosmiths, and BHBIA. Most GDPR experts expect to see a lot more clarity on these issues now that the law is in effect, so monitoring these sites will give you the insight you need to make good compliance decisions. And, if you need one-on-one guidance, I’ll be happy to help, so please get in touch.