
Who is to blame when AI goes wrong? Medieval theology can help decide

A self-driving taxi has no passengers, so it parks itself out of the way to reduce congestion and air pollution. After being hailed, the taxi pulls out to pick up its passenger, and tragically hits a pedestrian on the way.

Who or what deserves praise for the car's decision to reduce congestion and air pollution? And who or what deserves blame for the pedestrian's injuries?

One possibility is the designer or developer of the self-driving taxi. But in most cases, they could not have predicted the taxi's exact behavior. In fact, people typically turn to artificial intelligence precisely because they want a system that comes up with new or unexpected ideas and plans. If we know exactly what the system should do, we don't need AI.

Another possibility is the taxi itself: perhaps it deserves the praise and the blame. However, these kinds of AI systems are essentially deterministic: their behavior is determined by their code and the incoming sensor data, even if observers may struggle to predict that behavior. It seems strange to morally judge a machine that had no choice.

According to many modern philosophers, rational agents can be morally responsible for their actions even if those actions were completely predetermined, whether by neuroscience or by code. But most agree that a moral agent must have certain capacities that a self-driving taxi almost certainly lacks, such as the ability to shape its own values. AI systems fall in an uncomfortable middle ground between moral agents and nonmoral tools.

As a society, we thus face a conundrum: it seems that no one, and no thing, is morally responsible for the AI's actions, what philosophers call a responsibility gap. Present-day theories of moral responsibility seem ill-suited to understanding situations involving autonomous or semi-autonomous AI systems.

If current theories won't work, then perhaps we should look to the past: to centuries-old ideas that have surprising resonance today.

God and man

A similar question puzzled Christian theologians of the 13th and 14th centuries, from Thomas Aquinas to Duns Scotus to William of Ockham. How can people be responsible for their actions and their consequences if an all-knowing God created them, and presumably knew what they would do?

These medieval philosophers held that a person's decisions result from their will, operating on the products of their intellect. Broadly speaking, they understood the human intellect as the set of mental capacities that enable rational thought and learning.

The intellect is the rational, logical part of people's minds or souls. If two people are presented with the same situation and both reach the same "rational conclusion" about how to handle it, they are using the intellect. The intellect is like computer code in this way.

But the intellect does not always provide a unique answer. Often, it offers only possibilities, and the will selects among them, whether consciously or unconsciously. Will is the act of freely choosing from among the possibilities.

As a simple example, on a rainy day my intellect dictates that I should take an umbrella from my closet, but not which one. My will chooses the red umbrella rather than the green one.
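Purely as an illustration, ours rather than the medieval thinkers', here is a minimal Python sketch of the analogy the comparison to computer code invites: the "intellect" deterministically narrows a situation down to a set of admissible options, and the "will" freely picks one of them. The function names and closet contents are hypothetical.

```python
import random

def intellect(weather: str, closet: list[str]) -> list[str]:
    """The 'intellect': deterministic reasoning that narrows the options.
    On a rainy day it concludes that some umbrella should be taken,
    but it does not settle which one."""
    if weather == "rain":
        return [item for item in closet if "umbrella" in item]
    return ["no umbrella needed"]

def will(options: list[str]) -> str:
    """The 'will': a free (here, arbitrary) choice among whatever
    options the intellect has left open."""
    return random.choice(options)

closet = ["red umbrella", "green umbrella", "raincoat"]
options = intellect("rain", closet)   # ["red umbrella", "green umbrella"]
choice = will(options)                # e.g. "red umbrella"
print(options, "->", choice)
```

On this picture, moral responsibility tracks how much work each step does, which is exactly the trade-off the next paragraphs describe.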

For these medieval thinkers, moral responsibility depended on what the will and the intellect each contribute. If the intellect determines that there is only one possible course of action, then I could not have done otherwise, and so I am not morally responsible. One might even conclude that God is morally responsible, since my intellect comes from God, although the medieval theologians were very wary of attributing responsibility to God.

On the other hand, if the intellect places no constraints at all on my actions, then I bear full moral responsibility, since the will does all the work. Of course, most actions involve contributions from both intellect and will; it is usually not an either/or.

In addition, other people often constrain our choices, from parents and teachers to judges and kings, especially in the era of these medieval philosophers, which makes it even harder to assign moral responsibility.

Humans and AI

Obviously, the relationship between AI engineers and their creations is not exactly the same as that between God and humans. But as professors of philosophy and computing, we see an intriguing parallel. These old ideas can help us think through how an AI system and its creators might share moral responsibility.

AI developers are not omniscient gods, but they provide the "intellect" of an AI system by choosing and implementing its learning methods and response capacities. From the designer's point of view, this "intellect" constrains the AI's behavior but almost never completely determines it.

Most modern AI systems are designed to learn from data and can respond dynamically to their environments. The AI thus appears to have a "will" that chooses how to respond, within the bounds of its "intellect".

Users, operators, regulators, and other parties can further constrain AI systems, much as human authorities such as kings constrain people in the medieval philosophers' framework.

Who is responsible?

These centuries-old ideas map surprisingly well onto the structure of moral problems involving AI systems. So let us return to our opening questions: who or what deserves credit or blame for the self-driving taxi's benefits and harms?

The details matter. For example, if the taxi's developers explicitly write out how the taxi should behave around intersections, then its actions would be entirely due to its "intellect", and so the developers would be responsible.

But suppose the taxi encountered a situation it was not explicitly designed for, say, an intersection painted in an unusual way, or the taxi learned something from its environmental data that its developers did not anticipate. In such cases, the taxi's actions would be primarily due to its "will", because the taxi selected an unexpected option, and so the taxi would be responsible.
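To make the structure of this reasoning explicit, here is a toy Python rule; it is our illustration, not a formalism the authors propose, and the parameter names are hypothetical labels for "how much was explicitly specified" versus "how much was learned or chosen by the system".

```python
def attribute_responsibility(situation_specified: bool, behavior_learned: bool) -> str:
    """A toy attribution rule mirroring the article's reasoning.
    situation_specified: the developers explicitly wrote out how the system
        should behave in this kind of situation (its 'intellect').
    behavior_learned: the action came mainly from what the system learned
        or selected on its own (its 'will')."""
    if situation_specified and not behavior_learned:
        return "developers"   # action fully determined by the 'intellect'
    if behavior_learned and not situation_specified:
        return "AI system"    # action primarily due to the 'will'
    return "shared"           # most real cases mix both contributions

# The taxi hits a pedestrian at an oddly painted intersection that its
# developers never explicitly anticipated:
print(attribute_responsibility(situation_specified=False, behavior_learned=True))
# -> AI system
```

In practice, of course, most cases fall in the middle: responsibility is shared between the system and its creators rather than landing entirely on one side.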

And if the taxi is responsible, then what? Is the taxi company liable? Should the taxi's code be updated? Even the two of us do not agree on the full answer. But we think that a better understanding of moral responsibility is an important first step.

Medieval ideas are not only about medieval problems. These theologians can help ethicists today better understand the present-day challenge of AI systems, though we have only scratched the surface.


David Danks is a professor of data science, philosophy, and policy at the University of California, San Diego.

Mike Kirby is a professor of computer science at the University of Utah.

This article is republished from The Conversation under a Creative Commons license. Read the original article.



