Artificial intelligence is increasingly present in our daily lives. As AI takes on a wider range of more serious responsibilities, we will have to deepen our understanding of it in order to avoid machine-generated errors of judgment. Many experts point to an explanatory function as the solution, but is it the only answer?
Author: Julia Rodrigo, IE Law School alumna and Associate at Uría Menéndez
It is often claimed that adults make around 35,000 decisions every day. All these decisions, no matter how simple or complex they are, require a prior assessment of the alternatives: What should I have for breakfast today? Should I wait until that pedestrian crosses the street, or am I going fast enough to drive through the pedestrian crossing before he gets there? Should I hire this person for this job? What is the best treatment for this patient?
Throughout history, we have relied on ourselves or other individuals to carry out decision-making processes. In all these cases, even if we have not directly carried out the assessment in question, we can always ask the person in charge to explain why a certain decision was made. As humans, we make the decision-making process understandable (at least to some extent).
This modus operandi has characterized our decision-making method since the beginning of humankind, making the role of explanations very clear: to bring certainty where there is doubt.
The new standard
However, with the arrival of artificial intelligence systems, the paradigm has changed substantially: certain relevant decisions concerning safety, health, or education are being carried out by AI systems. And the truth is that while being able to explain the reason behind a certain decision is intuitive for humans, this is not the case for AI systems. This new decision-making actor has raised many questions and instigated various challenges, inspiring impassioned debate around the role of the explanatory capacity of AI systems.
AI systems are often opaque and difficult for humans to understand. Within this context, the aim of providing explanations is to make both the information an AI system uses (its input) and the results it produces (its output) interpretable.
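To make this idea concrete, the sketch below is a minimal illustration (my own, not drawn from any of the cited documents) of what an interpretable link between input and output can look like: a transparent model whose learned weights show how each piece of input information pushes the result one way or the other. The scenario, feature names, and data are hypothetical, loosely echoing the scholarship example discussed below.

```python
# Minimal illustrative sketch: a transparent model whose weights serve as a
# rudimentary "explanation" of how inputs relate to outputs.
# All data and feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical screening data: columns are [grade_average, household_income_k_eur].
X = np.array([[9.1, 18.0],
              [6.2, 45.0],
              [8.7, 22.0],
              [5.9, 60.0]])
y = np.array([1, 0, 1, 0])  # 1 = scholarship granted, 0 = denied

model = LogisticRegression().fit(X, y)

# The learned coefficients indicate how each input feature pushes the decision,
# which is one simplified form of the explanatory function discussed in this article.
for name, weight in zip(["grade_average", "household_income_k_eur"], model.coef_[0]):
    print(f"{name}: {weight:+.4f}")
```

Modern AI systems, including the deep learning tools discussed later in this article, do not expose their reasoning this directly, which is precisely why the explanatory function is so contested.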
But why is this so relevant? AI continues to lag behind humans in common-sense reasoning[1], which has raised concerns about its potential negative effects. Examples of the harm that could be caused by AI systems include accidents involving self-driving cars and racial discrimination in the awarding of university scholarships by an AI system.
Reliable robots
The “Ethics Guidelines for Trustworthy AI”[2], published on April 8, 2019 by the Independent High-Level Expert Group on Artificial Intelligence set up by the European Commission, established that “It is important to build AI systems that are worthy of trust, since human beings will only be able to confidently and fully reap its benefits when the technology, including the processes and people behind the technology, are trustworthy”. In relation to explanations, it added that “explicability is crucial for building and maintaining users’ trust in AI systems”.
Previously, the General Data Protection Regulation[3] already envisaged under Recital 63 that “a data subject should have the right of access to personal data which have been collected concerning him or her, and to exercise that right easily and at reasonable intervals, in order to be aware of, and verify, the lawfulness of the processing”.
And in China, the “Beijing AI Principles” issued on May 29, 2019, by the Beijing Academy of Artificial Intelligence (BAAI)—an organization backed by the Chinese government—established that AI shall “be ethical […] making the system as fair as possible, reducing possible discrimination and biases, improving its transparency, explainability and predictability […]”.[4]
In the same vein, on May 22, 2019, the OECD adopted the “OECD Principles on Artificial Intelligence”[5], which promote AI that is innovative, trustworthy, respectful of human rights and democratic values, and capable of providing explanations. These principles have already been adopted by OECD members and by several countries outside the OECD.
However, the regulatory framework is still at a very early stage and there is no consensus on the role of regulation in this field. Most of the legal documents produced so far are not legally binding; they are issued as sets of principles intended to guide the creation of AI that is beneficial for humankind.
AI vs humans
Authors and institutions in favor of explanation systems for AI believe that the technology should be able to explain its decisions in the same way a human would. For example, if a human is asked how an omelet is cooked, that person can describe the ingredients and the necessary steps, detailing the process in a systematic manner. An AI system would likewise be expected to allow an objective observer to understand those steps and their consequences.
However, conscious of how difficult this is to apply to AI, the EU’s “Ethics Guidelines for Trustworthy AI”[6] qualify that “the degree to which explicability is needed is highly dependent on the context and the severity of the consequences if that output is erroneous or otherwise inaccurate”.
A price to pay?
Certain authors are radically opposed to an explanatory function for AI. David Weinberger[7], for example, points to Deep Patient, a machine learning program developed by the Mount Sinai Hospital in New York in 2015: the program can calculate the probability of patients developing certain diseases more accurately than human doctors, but no one knows why or how it comes to its conclusions. “Deep Patient simply cannot explain their predictions because in some cases they are significant[ly] more accurate than human doctors”, Weinberger explains. He reasons that “this come[s] at a price: we need to give up our insistence on always understanding our world and how things happen in it”.
There is no doubt that AI has already proved hugely valuable to humankind, and there are many more benefits on the horizon. However, it is also true that AI tools are not created with an inbuilt explanatory function and, in many cases, the information they use is not even stored for very long, eliminating the possibility of explaining any problem that may arise in the future.
Artificial intelligence ethics: Moving forward
In my opinion, our ability to interpret decisions taken by AI should only come into play on an ad hoc basis, when there is reasonable suspicion that some element of the decision-making process was inadequate in light of the results. However, this requires a prior effort to design, implement, and train AI systems with the importance of explanatory, accountability, and transparency functions in mind.
AI systems will be created and adapted to human and social needs, and their existence will be limited to benefiting our lives, in compliance with the ethical principles and established values that have evolved in our societies over centuries.
Both positions, for and against an explanatory function for AI, have contributed substantially to the state of the debate. Leaving their polarization aside, they must seek common ground in order to find a true equilibrium between this technological advancement and the granting of a framework of legal certainty. This can only be achieved through legal solutions designed with technical knowledge and sensibility.
Finally, leaving aside any claims for damages by injured persons, the explanatory function must be complemented by optimization of the AI system itself. Returning to our earlier examples, it is reasonable to expect that accidents caused by self-driving cars will, in the future, be corrected through optimization of the tool: it is in the interests of the manufacturer, the owner, and the user of a self-driving car to reduce injuries and fatalities. In contrast, optimization may not suffice when commercial interests and social benefits are not aligned, for example when an AI system discriminates against applicants for a university scholarship.
This article is the first in a series of publications on the legal implications of Artificial Intelligence.
Julia Rodrigo is an Associate at Uría Menéndez in the M&A and Private Equity practice. She has experience in a wide variety of matters, including mergers and acquisitions, corporate transactions, financing, and digital law. She studied Law and Business Administration at the University of Valencia and holds an International LLM from IE Law School. Julia is also actively involved in solving problems at the intersection of law, technology, and business.
Note: The views expressed by the author of this article are entirely personal and do not represent the position of any affiliated institution.
[1] McCarthy, John: “Programs with Common Sense”, RLE and MIT Computation Center, 1960.
[2] https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
[3] https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1528874672298&uri=CELEX%3A32016R0679
[4] https://www.baai.ac.cn/blog/beijing-ai-principles
[5] https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449#_ga=2.255659969.876738194.1560360687-218661035.1560360687
[6] https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
[7] Weinberger, David: Everyday Chaos: Technology, Complexity, and How We’re Thriving in a New World of Possibility, Harvard Business Review Press, May 2019.