It has been shown that human-robot interaction triggers behaviors similar to those observed in human-human interactions. These similarities suggest common cognitive processes that depend on shared brain areas. Evidence supports that, from an early age, children tend to anthropomorphize social robots, which favors emotional bonding and prosocial behaviors toward them. However, not all social behaviors are positive; it has been found that, under certain circumstances, anthropomorphization can provoke attitudes of rejection, dehumanization and even violence. Preschoolers aged 18 to 24 months interacting with an inanimate robot often exhibit behaviors that would be considered violent if directed at humans. However, they omit this type of behavior when they interact with a social robot. What’s more, they shift the hugs that they previously directed to other toys toward the social robot; they worry about it, take care of it, protect it and, ultimately, over time treat it as another companion.
In this digital age, children are exposed to overwhelming amounts of online information, some of it unverified and increasingly generated by non-human sources, such as language models driven by artificial intelligence (AI). As children grow, the ability to evaluate the reliability of a source becomes an important skill for developing critical thinking.
The question at stake is how young children use their intelligence to decide when to learn and who to trust.
Three- to five-year-old children show selective trust based on an informant’s past accuracy, whether the informant is human or robot, according to a study published in the journal Child Development titled “Younger, not older, children trust an inaccurate human informant more than an inaccurate robot informant.” Children do not trust just anyone to teach them labels; rather, they trust those who were reliable in the past.
“We think this selectivity in social learning reflects young children’s emerging understanding of what makes a source of information good (trustworthy),” explained Li Xiaoqian, a researcher at the Singapore University of Technology and Design (SUTD), who co-authored the study with her PhD supervisor, Yow Wei Quin, professor of psychology and head of the Humanities, Arts and Social Sciences group at SUTD.
In the study, participants aged three to five from Singapore preschools such as ChildFirst, Red SchoolHouse and Safari House were divided into “younger” and “older” cohorts, below and above the median age of 4.58 years, respectively. They were paired with either a robot or a human informant, who provided accurate or inaccurate labels for objects, such as “ball” or “book.”
Children were willing to accept new information from both human and robot informants who had previously given accurate information, but not from a potentially unreliable informant.
The researchers then tested whether the identity of the informant (human or robot) and its history as a reliable informant, as well as the child’s age, influenced the child’s trust in the informant to label things correctly in the future.
Participants were presented with a single informant during the study, and their trust was measured by their willingness to accept new information from that informant. The robot informant was SoftBank Robotics’ humanoid social robot NAO, which speaks with a humanlike but robotic voice.
To keep conditions comparable, the human informant matched her movements to those of the robot. An experimenter also sat next to the participant to ask necessary questions, so that the participant did not feel pressured to agree with the informant.
The study revealed that children were willing to accept new information from both human and robot informants who had previously given accurate information, but not from a potentially unreliable informant who had made mistakes in the past, especially when the informant was a robot.
Younger children were more likely to accept information from an untrustworthy human than from an untrustworthy robot
Regarding the age effect, the authors reported that younger children were more likely to accept information from an unreliable human than from an unreliable robot, whereas older children distrusted or rejected information from an unreliable informant, whether human or robot.
“These results imply that younger and older children may have different selective trust strategies, especially in how they use informants’ trustworthiness and identity cues when deciding whom to trust. Along with other research on children’s selective trust, we show that as children grow older, they can increasingly rely on trustworthiness cues to guide their trusting behavior,” Dr. Li said.
Previous research has shown that children rely on factors such as age, familiarity, and language to determine whether an informant is reliable. It may be that younger children rely more on identity cues like these than on epistemic evidence. As they get older, children put more emphasis on “what you know” than on “who you are” when deciding whether to trust an informant.
This is the first study to ask the following questions: (1) Do children make different inferences about robots with different accuracy records? and (2) Are these inferences comparable to those of humans?
“Addressing these questions will provide a unique perspective on the development of trust and social learning among children who grow up alongside various sources of information, including social robots,” Professor Yow described.
This research has important implications for pedagogy, where robots and non-human educational tools are increasingly integrated into the classroom.
Today’s children may not perceive robots to be as trustworthy as humans if they have not interacted with robots much. However, as children become more exposed to intelligent machines, they might be inclined to view robots as intelligent and reliable sources of knowledge.
Future studies could explore the developmental theory of selective learning beyond the scope of word learning, extending to tool use, congruency of emotional expression, or episodic domains such as location learning. For now, the researchers hope their findings will be considered in the design of educational technology.
“Designers should consider the impact of perceived competence when building robots and other AI-powered educational tools for young children. Recognizing developmental changes in children’s trust in humans versus robots can guide the creation of more effective learning environments, ensuring that the use of technologies aligns with children’s developing cognitive and social needs,” Professor Yow emphasized.
Key data from the study:
1. Children show selective trust in information based on the informant’s past accuracy.
2. Younger children trust humans more than robots, even when the human has proven untrustworthy.
3. The study highlights developmental differences in children’s trust strategies toward humans and robots.
Finally, share this reflection from Aristotle: “We must not always begin with the first notion of the things that are studied, but with what can facilitate learning.”