There are three basic elements in the mathematical formula for teaching a code of ethics to machines, and it is not so different from the ethical mix we humans work with. Action, value and norm make up the triad that researchers use to establish limits that control the behavior of artificial intelligences.
For people, a value is equivalent to a kind of commonly accepted social norm: we know that lying is a morally reprehensible action. Norms, in turn, help formalize the idea of a value in a legal code. "Norms prohibit, just as smoking is prohibited in enclosed spaces, but a value also helps you promote good actions, such as making a donation or being kind," explains Maite López-Sánchez, AI researcher and professor at the University of Barcelona, who works on systems for introducing ethical principles into artificial intelligence.
People learn this framework, which serves to delimit our behavior, during socialization. In machines, however, everything must be translated into numbers and mathematical functions. The ultimate objective is to produce an ordering of possible actions. "In the end, machines are deeply integrated into society and end up making decisions that affect us as people. It would be desirable for those decisions to be aligned with what we understand to be correct, for them to be well integrated socially," says the researcher.
López-Sánchez turns to the most basic example to explain the need for ethical machines: "I can have an autonomous car and, if I give it the objective of taking me to work, the car will take the most efficient or fastest route. It is very clear that I want to get to work, but I don't want to run over anyone; it wouldn't be morally right." But the range of cases goes far beyond extreme scenarios. "There are many aspects to take into account to drive correctly. It's not just about not breaking the rules, but about doing things well, such as giving way to a pedestrian, keeping a safe distance or not being aggressive with the horn," adds the researcher.
Ethics in artificial intelligence also serves to promote equal treatment. “If it is a decision-making system for granting health insurance, what we want is for it to be an algorithm that has no bias, that treats all the people it evaluates in the same way,” says López-Sánchez.
In recent years, algorithmic biases of all kinds have come to light. A system developed by Amazon to select candidates for a job favored men's resumes over women's. It did so because it had been trained mostly on male resumes, and there was no way to correct this deviation. Another algorithm, in this case used by the health system in the United States, penalized Black people compared to white people of equal clinical severity: whites were assigned a higher risk and were therefore given priority in medical care.
Autonomous systems also deal with issues related to intellectual property and the use of private data. One formula for avoiding these shortcomings is to build self-imposed limits into the design of the algorithm. Ana Cuevas, professor in the area of Logic and Philosophy of Science at the University of Salamanca, defends this proactive approach: "We should not wait for things to happen in order to analyze the risks they may entail; rather, before creating an artificial intelligence system, we have to think about what type of system we want to create in order to avoid certain undesirable results."
Ethics in machine language
Introducing an ethical corpus into machines is relatively new work. The scientific community has approached it mainly from a theoretical point of view, and it is far less common to get down into the weeds and translate values into figures and moral teachings into engineering. López-Sánchez's research group, WAI, at the University of Barcelona, explores this field experimentally.
These researchers link the concepts of value and action in the design of their systems. "We have mathematical functions that tell us that, for a certain value, a certain action of the machine is considered positive or negative," says López-Sánchez. Thus, in the example of the autonomous car, smooth driving on a winding road is considered positive under the value of safety. But viewed through the prism of the value of kindness to other drivers, the vehicle might decide to increase its speed if it notices that it is holding up the pace of other cars.
In this specific case there is a conflict between values, which is resolved with a weighting: preferences are established beforehand that indicate which values prevail. The whole set consists of intertwined formulas, which must also include the norm variable. "There is another function that stipulates that a norm promotes a value," says the researcher. "And we also have functions that observe how a norm evaluates the action and how the value evaluates that same action." It is a complex system in which feedback is key.
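To make this concrete, here is a minimal sketch in Python of how such an evaluation could be wired together. The value functions, preference weights, norm penalty and driving features are invented for illustration; they are assumptions, not the WAI group's actual formulas.

```python
# Hypothetical illustration of the scheme described above: each value scores an
# action, preference weights resolve conflicts between values, and a norm adds
# its own evaluation. All names and numbers are invented for this sketch.

def safety(action):
    # Smooth, slower driving on a winding road scores higher for safety.
    return 1.0 - min(action["speed"] / 120.0, 1.0)

def kindness(action):
    # Holding up the traffic behind you scores negatively for kindness.
    return -1.0 if action["blocking_traffic"] else 0.5

# Preferences established beforehand: they indicate which value prevails.
WEIGHTS = {"safety": 0.7, "kindness": 0.3}
VALUES = {"safety": safety, "kindness": kindness}

def norm_penalty(action):
    # A norm (here, an invented speed limit) penalizes the actions it forbids.
    return -1.0 if action["speed"] > 80 else 0.0

def evaluate(action):
    """Weighted sum of the value judgments plus the norm's evaluation."""
    value_score = sum(WEIGHTS[name] * fn(action) for name, fn in VALUES.items())
    return value_score + norm_penalty(action)

# The ordering of actions: the system prefers the highest-scoring candidate.
candidates = [
    {"speed": 60, "blocking_traffic": True},   # slow but holds up other cars
    {"speed": 75, "blocking_traffic": False},  # a bit faster, lets traffic flow
]
print(max(candidates, key=evaluate))  # -> {'speed': 75, 'blocking_traffic': False}
```

In this toy case the kindness value pushes the car to speed up slightly, the safety weight keeps that increase moderate, and the norm would veto anything above the invented 80 km/h limit.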
When López-Sánchez talks about evaluating, she is referring directly to machine learning. One of the ways machines learn is through reinforcement: like people, they act well because they are rewarded and avoid acting badly because they are punished. This mechanism also works in artificial intelligence.
"Rewards are numbers. We give rewards as positive numbers and punishments as negative numbers," explains the WAI researcher. "The machines try to get as many points as possible. So the machine will try to behave if I give it positive numbers when it does things right. And if, when it misbehaves, I punish it and take points away, it will try not to do it again." As when teaching children, the scoring serves an educational purpose.
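As a toy sketch of that scoring loop, assuming invented action names and a +1/-1 reward signal (none of which come from the article), the agent ends up preferring whichever action is rewarded:

```python
# Toy reinforcement-style scoring: positive numbers reward the desired action,
# negative numbers punish the other one, and the agent learns to prefer the
# rewarded behavior. Actions and reward values are invented for illustration.
import random

random.seed(0)

actions = ["yield_to_pedestrian", "honk_aggressively"]
scores = {a: 0.0 for a in actions}   # running estimate of each action's worth
LEARNING_RATE = 0.1

def reward(action):
    # The designer's signal: +1 for the desired behavior, -1 for the other.
    return 1.0 if action == "yield_to_pedestrian" else -1.0

for _ in range(200):
    a = random.choice(actions)                             # try an action
    scores[a] += LEARNING_RATE * (reward(a) - scores[a])   # nudge its score

print(max(scores, key=scores.get))  # -> yield_to_pedestrian
```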
But there are many questions still to be resolved, starting with something as basic as deciding which values we want to put into the machines. "Ethics develop in very different ways. In some cases we will have to make utilitarian calculations to minimize risks or damages," says Professor Cuevas. "At other times we may have to use stronger deontological codes, such as establishing that a system cannot lie. Each system needs to incorporate certain values, and for this there has to be a community and social agreement."
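A hypothetical sketch of the two styles Cuevas describes: a deontological rule acts as a hard filter that removes forbidden actions outright (here, lying), and a utilitarian calculation then picks the least harmful of the remaining options. The actions and harm estimates are invented for illustration.

```python
# Deontological code as a hard constraint, utilitarian calculation as a score.
FORBIDDEN = {"lie"}  # the system simply cannot take these actions

def expected_harm(action):
    # Invented harm estimates for the utilitarian comparison.
    harms = {"lie": 0.1, "withhold_information": 0.4, "tell_truth": 0.2}
    return harms[action]

def choose(actions):
    allowed = [a for a in actions if a not in FORBIDDEN]  # apply the hard rule first
    return min(allowed, key=expected_harm)                # then minimize harm

print(choose(["lie", "withhold_information", "tell_truth"]))  # -> tell_truth
```

Note that the forbidden action has the lowest estimated harm here; the deontological filter removes it anyway, which is exactly what distinguishes it from a purely utilitarian calculation.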
In López-Sánchez's laboratory they delve into sociological studies to find values shared between people and across different cultures. They also take international documents as a reference, such as the UN's Universal Declaration of Human Rights. Even so, there will be aspects on which it is harder to reach consensus at a global level. This is what Cuevas believes: "The limitations placed on machines will have their own limits. The European Union, for example, has its way of doing things and the United States has another," she emphasizes, referring to the different regulatory approaches on each side of the Atlantic.