“Humans do not simulate personality; they have one,” he says, “so I’m stuck with my personality. These things model personality.” When I tell Kosinski that it sounds as if he is describing a sociopath, he lights up. “A sociopath can put on a mask: he is not really sad, but he can pretend to be a sad person.” This chameleon-like power could turn AI into a superior con artist. One with zero remorse.
Some research psychologists have questioned Kosinski’s claims. In response to a preprint that Kosinski posted on arXiv in early 2023, a group of AI researchers wrote a paper suggesting that he was actually looking at a “Clever Hans,” a reference to the famous early-20th-century horse whose owner deceived people into believing the animal could do arithmetic and follow a calendar. “LLMs may have some reasoning ability, but it is not complete or robust like that of humans,” says Vered Shwartz, associate professor of computer science at the University of British Columbia and one of the study’s co-authors. “We did many different tests, and we certainly cannot say that the language models have the same capacity [for theory of mind] as people. And it could be that they were just cheating.”
Shwartz is referring to the fact that, since LLMs are trained on enormous bodies of text, some of it inevitably includes published academic articles discussing experiments similar to those Kosinski conducted. GPT-4 could have dug into its vast training material to find the answers. Chief AI skeptic Gary Marcus discovered that the tests Kosinski used also appeared in classic experiments that have been cited in scientific papers more than 11,000 times. It is as if the LLMs had memorized information in order to fake theory of mind. (To me, by the way, this cold-blooded shortcut to cognition, if true, is even scarier than LLMs acquiring theory of mind emergently.)
Kosinski says the work done for the latest version of the article addresses those criticisms. In addition, other recently published papers appear to reinforce his claims, including one in Nature Human Behaviour which found that GPT-3.5 and GPT-4, although not successful on every theory of mind task, “show impressive performance” on some of them and “exceed the human level” on others. In an email, lead author James Strachan, a postdoctoral researcher at the University Medical Center Hamburg-Eppendorf, does not claim that LLMs have completely mastered theory of mind, but he says his team refuted the accusation of cheating. “It appears that these abilities go beyond simply regurgitating the data used to train LLMs,” he says, adding that “it is possible to reconstruct a large amount of information about human mental states from natural language statistics.”
I am agnostic about whether LLMs will ever achieve true theory of mind. What matters is whether they behave as if they had that capacity, and they are undoubtedly on the way to doing so. Even Shwartz, who rejected some of Kosinski’s methods, admits it is possible. “If companies continue to refine language models, perhaps at some point they will have [theory of mind],” she says.
That’s why Kosinski is worth listening to, despite the harsh criticism of his work. Let’s look at the conclusion of his article: “The Theory of Mind is unlikely to be the pinnacle of what neural networks can achieve in this universe,” he writes. “We may soon be surrounded by AI systems equipped with cognitive capabilities that we humans cannot even imagine.”
Article originally published in WIRED, adapted by Manuel de León.