EU's Right to Explanation: A Harmful Restriction on Artificial Intelligence

By Special Guest
Nick Wallace
January 25, 2017

Last September, a U.K. House of Commons committee concluded that it is too soon to regulate artificial intelligence (AI). Its recommendation comes too late: The EU General Data Protection Regulation (GDPR), which comes into force next year, includes a right to obtain an explanation of decisions made by algorithms and a right to opt out of some algorithmic decisions altogether. These regulations do little to help consumers, but they will slow down the development and use of AI in Europe by holding developers to a standard that is often unnecessary and infeasible.

Although the GDPR is designed to address the risk of companies using algorithms to make unfair decisions about individuals, its rules will provide little benefit because other laws already protect consumers' interests in this regard. For example, when it comes to a decision to fire a worker, laws already require an explanation, whether or not AI is involved. In other cases where no explanation is required, such as refusing a loan, there is no compelling reason to impose one based on whether the entity making the decision is a human or a machine. (Loan companies have to tell applicants what information their decisions are based on, whether they use AI or not, but they do not have to explain the logic of their reasoning.)

More importantly, those who drafted the GDPR do not seem to understand that it is often not practical, or even possible, to explain all decisions made by algorithms. For example, autonomous vehicles are controlled by a multitude of algorithms that make many kinds of decisions. It is possible to log these actions, but it would take hours of work by highly paid data scientists to render them intelligible. Often, the challenge of explaining an algorithmic decision comes not from the complexity of the algorithm, but from the difficulty of giving meaning to the data it draws on. The machine-learning tool SearchInk can predict fairly accurately whether a handwritten name was written by a man or a woman—not based on the name itself, but on the pen strokes. The software’s creators do not know why. This is not because the algorithm is a “black box,” but because they cannot make specific claims about the relationship between psychology and graphology. An algorithm can spot a correlation, but it cannot explain the underlying link because it cannot infer meaning the way a human can. AI can only imitate human semantics, as in translation software; it does not actually understand anything.

AI detractors might argue that if some algorithms cannot be explained, they should not be used. But the real problem here is not explaining algorithms; it is explaining correlation. This is hardly a new problem in science. Penicillin has been in mainstream use since the 1940s, but scientists are only now coming to understand why it has been so effective at curing infections. Nobody ever said we should leave infections untreated because we do not understand how antibiotics work. Today, AI systems can recommend drugs to treat diseases. Doctors may not understand why the software recommends a drug, any more than they understand why the drug helps; all that matters is that they use their expertise to judge the safety of the treatment and monitor how it works.

So if transparency is not the answer, what is? Monitoring behavior is a far better way to maintain accountability and control for undesirable outcomes—and again, this is true regardless of whether a decision is made by a human or an algorithm. For example, to combat bias, transparency can at most ensure that an algorithm disregards explicit ethnic markers like skin color, but that would not come close to covering all the subtle characteristics that can serve as proxies for race, which differ tremendously around the world. A better approach is to look for evidence of bias in an algorithm's decisions over time and make adjustments as necessary.

In short, policymakers should create technology-neutral rules to avoid unnecessarily distorting the market by favoring human decisions over algorithmic ones. If a decision needs an explanation, that requirement should apply regardless of whether technology is used to arrive at the decision. And if algorithms that cannot be easily explained consistently make better decisions in certain areas, then policymakers should not require an explanation. Similarly, the need for regulators to monitor outcomes is independent of how decisions are made.

Unfortunately, when the GDPR comes into force throughout the EU in May 2018, it will impose unnecessary restraints on AI and stifle many of its social and economic benefits. The regulation should be amended before that happens. And if EU policymakers refuse to revisit the issue, member states should use national legislation to establish reasonable alternatives for cases where AI cannot produce an intelligible explanation, and to provide legal authorization for the most important uses of AI, such as self-driving cars.

Nick Wallace (@NickDelNorte) is a Brussels-based senior policy analyst with the Center for Data Innovation, a data policy think tank.

Edited by Stefania Viscusi