EU's Right to Explanation: A Harmful Restriction on Artificial Intelligence

By Nick Wallace

Last September, a U.K. House of Commons committee concluded that it is too soon to regulate artificial intelligence (AI). Its recommendation comes too late: The EU General Data Protection Regulation (GDPR), which comes into force next year, includes a right to obtain an explanation of decisions made by algorithms and a right to opt out of some algorithmic decisions altogether. These rules do little to help consumers, but they will slow down the development and use of AI in Europe by holding developers to a standard that is often unnecessary and infeasible.

Although the GDPR is designed to address the risk that companies will use algorithms to make unfair decisions about individuals, its rules will provide little benefit because other laws already protect individuals' interests in this regard. For example, when it comes to firing a worker, existing laws already require an explanation, whether or not AI is used. In other cases where no explanation is required, such as refusing a loan, there is no compelling reason to demand one based on whether the entity making the decision is a human or a machine. (Loan companies have to tell applicants what information their decisions are based on, whether they use AI or not, but they do not have to explain the logic of their reasoning.)

More importantly, those who drafted the GDPR do not seem to understand that it is often not practical, or even possible, to explain every decision made by an algorithm. For example, autonomous vehicles are controlled by a multitude of algorithms that make many kinds of decisions. It is possible to log these actions, but it would take hours of work by highly paid data scientists to render the logs intelligible. Often, the challenge of explaining an algorithmic decision comes not from the complexity of the algorithm but from the difficulty of giving meaning to the data it draws on. The machine-learning tool SearchInk can predict fairly accurately whether a name was written by a man or a woman, based not on the name itself but on the pen strokes. The software’s creators do not know why. This is not because the algorithm is a “black box,” but because they cannot make specific claims about the relationship between psychology and graphology. An algorithm can spot a correlation, but it cannot explain why the correlated features are linked, because it cannot infer meaning the way a human can. AI can only imitate human semantics, as in translation software; it does not actually understand anything.

AI detractors might argue that if some algorithms cannot be explained, they should not be used. But the real problem here is not explaining algorithms; it is explaining correlation, and that is hardly a new problem in science. Penicillin has been in mainstream use since the 1940s, but scientists are only now coming to understand why it is so effective at curing infections. Nobody ever argued that we should leave infections untreated because we do not understand how antibiotics work. Today, AI systems can recommend drugs to treat diseases. Doctors may not understand why the software recommends a particular drug any better than they understand why that drug works, but what matters is that they use their expertise to judge the safety of the treatment and monitor how well it works.

So if transparency is not the answer, what is? Monitoring behavior is a far better way to maintain accountability and control for undesirable outcomes, and again, this is true regardless of whether a decision is made by a human or an algorithm. Take bias, for example: transparency can at best ensure that an algorithm disregards explicit ethnic markers such as skin color, but that would not come close to covering all of the subtle characteristics that can serve as grounds for racial discrimination, and which differ tremendously around the world. A better approach is to look for evidence of bias in outcomes over time and make adjustments as necessary, as the sketch below illustrates.
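To make that concrete, here is a minimal sketch in Python of what outcome monitoring could look like. It assumes a hypothetical decision log of (group, approved) pairs, and the 80 percent threshold is only illustrative (loosely borrowed from the U.S. "four-fifths" rule of thumb), not a standard set by the GDPR or any regulator.

```python
# Minimal sketch of outcome monitoring for bias. Assumptions: a hypothetical
# decision log of (group, approved) pairs; the 0.8 threshold is illustrative
# (the "four-fifths" rule of thumb), not a legal requirement.
from collections import defaultdict

def disparate_impact(decisions, threshold=0.8):
    """Return groups whose approval rate falls below `threshold` times the
    highest group's approval rate."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Example: audit a month of logged loan decisions, however they were made.
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False), ("B", False)]
print(disparate_impact(log))  # {'B': 0.25} -> investigate and adjust
```

The point of such a check is that it looks only at logged outcomes, so it applies just as well to decisions made by loan officers as to decisions made by algorithms.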

In short, policymakers should create technology-neutral rules to avoid unnecessarily distorting the market by favoring human decisions over algorithmic ones. If a decision warrants an explanation, it should warrant one regardless of whether technology was used to arrive at it. And if algorithms that cannot be easily explained consistently make better decisions in certain areas, then policymakers should not require an explanation. Similarly, the need for regulators to monitor outcomes is independent of how decisions are made.

Unfortunately, when the GDPR comes into force throughout the EU in May 2018, it will impose unnecessary restraints on AI and stifle many of its social and economic benefits. The regulation should be amended before that happens. And if EU policymakers refuse to revisit the issue, member states should use national legislation to establish reasonable alternatives for cases where AI cannot produce an intelligible explanation, and to provide legal authorization for the most important uses of AI, such as self-driving cars.

Nick Wallace (@NickDelNorte) is a Brussels-based senior policy analyst with the Center for Data Innovation, a data policy think tank.



