Ethical AI: Are we asking the right questions?

@LawAhead

Artificial intelligence carries with it a whole host of concerns that force us to think about our own morality. If we want to discuss the real problems and the challenges that we face with the rise of AI, let’s start by asking the hard questions.

Author: Rodolfo Carpintier, Founder and Shareholder, DAD – Digital Assets Deployment

Joseph Weizenbaum, whose well-known conversational computer program, Eliza, foreshadowed the potential of artificial intelligence, was also one of AI's earliest critics. He argued that computers should never be allowed to make judgments, and that whenever emotion and judgment enter the process, humans must intervene. Ethics demands judgment, and this would make ethics the domain of humans, not AI.

Let’s take the example of Facebook’s Alice and Bob. In 2017, researchers at Facebook Artificial Intelligence Research built two chatbots programmed to discuss a deal and negotiate an agreement. The tech giant shut the program down soon after the bots, without any human intervention, drifted into a shorthand language of their own.

We can find many other examples of AI systems whose decisions cannot be traced, yet traceability is the basic guarantee of AI’s ethical behavior. We should always be able to know why an AI system is making the decisions it makes.
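As a minimal sketch of what traceability could mean in practice (all names, fields and the threshold below are invented for this illustration, not drawn from any real system), every automated decision can be made to carry its own audit trail:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class TracedDecision:
        """An automated decision bundled with the evidence behind it."""
        outcome: str
        inputs: dict
        reason: str
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    def approve_treatment(patient_id: str, risk_score: float) -> TracedDecision:
        # Hypothetical rule: approve the expensive medicine only when a
        # model's risk score crosses a threshold set by human policy.
        THRESHOLD = 0.7  # invented value, for illustration only
        approved = risk_score >= THRESHOLD
        return TracedDecision(
            outcome="approve" if approved else "refer to a physician",
            inputs={"patient_id": patient_id, "risk_score": risk_score},
            reason=f"risk_score {risk_score:.2f} "
                   f"{'meets' if approved else 'falls below'} threshold {THRESHOLD}",
        )

    # Every decision carries the inputs and rule that produced it:
    print(approve_treatment("p-001", risk_score=0.82))

However simple, a record like this lets a human auditor reconstruct, after the fact, exactly which inputs and which rule produced a given outcome.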

A computer playing chess is a computational problem, but an AI system deciding whether a person should be given a very expensive medicine is a completely different matter.

If algorithms are the new decision-makers, why are we delegating such critical decisions to machines?

In pop culture and film, artificial intelligence is often portrayed as malevolent, deciding to eradicate humanity. Yes, these films are fiction and they don’t reflect reality. But can we guarantee that machines won’t reach this decision? Let’s not forget that AI analyses big data, statistical patterns of behavior and models of past human history. With that in mind, would it be so irrational to think that robots could actually reach this conclusion?

As robots become ever more present in our daily life, the question of how to control their behavior naturally arises.

Well-known science fiction author Isaac Asimov, the writer who first got us thinking about “roboethics,” described three laws to govern the behavior of robots and prevent any such mishap. The three laws, whose strict precedence is sketched after the list, are as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.[1]
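Purely as a toy illustration (every predicate below is an invented placeholder; no real system encodes the laws this way), the three laws can be read as ordered constraints, with the First Law overriding the Second and the Second overriding the Third:

    from dataclasses import dataclass

    @dataclass
    class Action:
        # Invented placeholder predicates: deciding what actually counts
        # as "harm" or "obedience" is precisely the unsolved part.
        harms_human: bool = False
        allows_harm_by_inaction: bool = False
        disobeys_human_order: bool = False
        endangers_self: bool = False

    def permitted(a: Action) -> bool:
        # First Law: checked first, so it overrides everything below.
        if a.harms_human or a.allows_harm_by_inaction:
            return False
        # Second Law: obedience, already subordinate to the First Law.
        if a.disobeys_human_order:
            return False
        # Third Law: self-preservation, subordinate to both laws above.
        if a.endangers_self:
            return False
        return True

    # The trap: under this naive encoding, a robot ordered into danger to
    # save a human would still refuse, because nothing here lets the
    # Second Law override the Third.
    print(permitted(Action(endangers_self=True)))  # False

Even this caricature exposes the difficulty: each condition hides an undefined moral judgment, and the precedence between the laws is harder to capture than it first appears.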

But, are these laws sufficient? Do we need a set of Asimov-like laws to govern robots’ behavior as they become more advanced?

The World Economic Forum describes the nine most important ethical issues in AI, which, according to top tech companies and leading experts, are the conversations we should be having in order to navigate the nearly boundless landscape of artificial intelligence ethically:

  1. Unemployment: What happens after the end of jobs?
  2. Inequality: How do we distribute the wealth created by machines?
  3. Humanity: How do machines affect our behavior and interaction?
  4. Artificial stupidity: How can we guard against mistakes?
  5. Racist robots: How do we eliminate AI bias?
  6. Security: How do we keep AI safe from adversaries?
  7. Evil genies: How do we protect against unintended consequences?
  8. Singularity: How do we stay in control of a complex intelligent system?
  9. Robot rights: How do we define the humane treatment of AI?

We are entering a new frontier for ethics, and these nine issues should provide more than enough material to start the conversation about the ethical challenges that arise in an AI-driven world.

These are some of the questions that we must begin to answer:

  • Are we ready? Humans have been trained to work and to support themselves with the proceeds of that work. Could we reach a situation where AI robots do all the manual jobs and computers solve all the practical problems of life on Earth?
  • Bill Gates has provocatively suggested an income tax on robots. Will humans reach a point where there’s no work left for us? What would our challenges be at that point?
  • Could our robots vote for us? How many robots may each of us have? Is the wealth they produce ours?
  • Will we use robots to interact with friends? Will robotic interaction replace human contact?
  • How do we cover robotic damage? Will insurance protect us from mistakes in AI programming? Will an accident caused by our robot make us liable to third parties?
  • Will robots have a race, or hold racist views of the world? Will they react differently depending on their programmer’s political views?
  • Will we be able to cover all aspects of security in our own AI developments?
  • How do we handle cases where an AI develops its own views of problems and solutions? How do we protect ourselves if it senses that we are no longer necessary?
  • How can we control a system that has grown beyond our understanding of its complexity?
  • Can we destroy a robot simply because we do not like it? Is a robot the property of a single human, or wealth held in common by all?
  • Before giving machines a sense of morality, should humans first define morality in a way that computers can process?

In an AI-driven world, what moral framework should guide us?

Mark Robert Anderson, a professor at Edge Hill University, describes a world that has changed substantially since Asimov created his Laws of Robotics 75 years ago. AI and robotics have evolved in ways Asimov could not have foreseen at the time.

We now have sensors connected to the Internet 24 hours a day, gathering information about what we do and how we feel. In the future, intelligent sensors will monitor and report on every aspect of our daily lives. Huge amounts of data about us will be gathered, largely outside our control. The question arises: will we be able to control our own data at all?

The existing mechanisms that govern the use of our data are obsolete and have repeatedly proved unable to protect us.

The European Union has developed new data-protection laws and regulations for individuals, but these struggle to reach leading global companies based beyond its borders. Americans take a completely different approach to data security, legislating only when it becomes impossible to avoid; Europe prefers to anticipate the future and legislate ahead of the problem.

Together, the two legal positions create a practical gap that allows companies to sidestep EU law simply by basing themselves outside its borders.

These issues are only the beginning of an interesting debate that is unlikely to be settled anytime soon. This is one more reason why it’s important to engage in discussions about what is being done, from a legal perspective, about all these complex matters.

Rodolfo Carpintier, lawyer and serial entrepreneur, is founder and shareholder of Digital Assets Deployment, an incubator for tech and internet-based projects. He speaks at conferences on the evolution of online business models, the digital economy, technology, ethics and the Internet. Follow him on Twitter.

Note: The views expressed by the author of this article are entirely personal and do not represent the position of any affiliated institution.