Psychology | Can a machine decide how we are treated? A study reveals how people feel about care robots

We judge treatment given by a machine differently from treatment given by a human, the researcher explains in an interview with Tiede magazine. There is a reason for that.

Imagine that a patient has to be given medicine against his will, but the person giving it is not a regular nurse but a care robot. How does that make you feel?

Social psychologist and docent of cognitive science Michael Laakasuo and his colleagues from the University of Helsinki studied how we morally evaluate situations where the actor is a machine instead of a human. Such situations are becoming more and more common, because new technologies based on artificial intelligence are gaining ground in many areas of society.

The study was published in the European Journal of Social Psychology. Laakasuo answers questions about the research in an interview with Tiede magazine.

Researcher Dr. Michael Laakasuo and his group investigate the moral psychology of robotics, such as how people relate to sex robots.

What was your research about in a simplified way?

We wanted to know how people form ideas about right and wrong, and how they judge or accept the events they see.

Based on previous research, humans instinctively see other humans and animals as beings with minds. A mind, in turn, is seen as a prerequisite for moral agency. We investigated how people morally interpret situations where the other party does not have a mind like ours.

We presented the test subjects with an identical story, in which either a human or a robot tries to forcibly medicate an unwilling patient. In the alternative ending, the human nurse or the robot respects the patient’s autonomy and leaves the patient unmedicated.

After reading the story, the subjects answered 20 questions in which they rated how right or wrong the described actions were. In addition, we conducted a background field study in which we discussed the situation with residents of nursing homes.

What kind of emotions do care robots evoke in people?

They are perceived as cold and distant. The thought of losing autonomy is considered frightening and distressing.

The anxiety seems to be aggravated by the fact that you cannot negotiate with a robot. You can ask a person to explain their actions and behavior, but the robot is perceived as a rigid and unstoppable force.

Was there a difference in attitude towards robot and human nurses?

In practice, the test subjects accepted respecting the patient's autonomy, i.e. acting against the chief physician's instructions, from both the human and the robot.

By contrast, compliance with the instructions, i.e. forced medication, was accepted only from the human nurse. If the robot followed the instructions and forcibly medicated the patient, the decision was disapproved of.

Later, we varied the test setup by changing details of the story. In one version, the human nurse was portrayed as incompetent. An incompetent nurse was expected to follow the doctor's instructions, so this time respecting the patient's autonomy was less accepted. In the robot's case, the change in the story did not affect the ratings.

If, according to the story, the patient was found dead in his room the next day, this had a particularly strong effect on the evaluation of the human nurse.

The human was judged equally harshly regardless of whether he had medicated the patient or not. This phenomenon is called moral luck: people bear the consequences of chance, regardless of whether they could influence the outcome themselves.

In the same situation, the robot’s decision to respect the patient’s autonomy was considered more acceptable.

We do not treat machine and human actions identically, even if the consequences are identical.

In one experiment, the subjects evaluated a situation where the entire hospital care chain was automated. In the story, an artificial intelligence system gave the order to medicate the patient, and the medication was administered either by a human or a robot.

According to the test subjects, the most morally respectable decision was the one where the nurse robot left the patient unmedicated, defying the artificial intelligence.

What can be concluded from the results?

People do not treat the decisions made by a machine and another human identically, even if their consequences are identical.

What was surprising, however, was that not all decisions made by the robot were considered inferior to human decisions: some were accepted and some were not.

So moral cognition does not rest simply on the mechanism of mind perception. If a robot were generally considered a worse decision-maker than a human, this should be reflected in all of the robot's decisions.

How could the subject be researched further?

One method is research carried out in virtual reality, where the test subject sees the situation unfold directly before their eyes through a VR headset.

Another way is to standardize mental images. In this study, we did not describe the robot in any way. If different versions of the robot were presented, we could study how the appearance of the robot affects moral judgments.

In addition, more or less information could be given about the robot's capacity for empathy. In this way, we could eliminate the role of the subject's imagination and standardize how people experience or perceive the robot.

Published in Tiede magazine 12/2022

