In recent years, artificial intelligence has made great strides, putting Alan Turing's vision of machines imitating human behavior to the test. A recent study published in the journal PNAS shows that the latest AI chatbot models, in particular the GPT-3.5-Turbo and GPT-4 versions developed by OpenAI, behave in ways surprisingly similar to humans. The researchers compared the chatbots' performance against a large database of human responses from over 108,000 subjects in more than 50 countries, using the OCEAN Big Five questionnaire to evaluate personality profiles and six behavioral games to analyze traits such as altruism, fairness, and risk aversion.
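To make the kind of comparison the researchers describe more concrete, here is a minimal sketch, not the study's actual code, of how Likert-scale answers to Big Five items from a chatbot might be averaged per trait and set against human averages. The item answers and the "human mean" values below are illustrative placeholders, not figures from the paper.

```python
# Minimal sketch of a Big Five (OCEAN) comparison.
# All answer values and human means are illustrative placeholders,
# not data from the PNAS study.

from statistics import mean

# Hypothetical chatbot answers on a 1-5 Likert scale, grouped by trait.
chatbot_answers = {
    "openness":          [4, 4, 3],
    "conscientiousness": [4, 5, 4],
    "extraversion":      [3, 3, 4],
    "agreeableness":     [5, 4, 5],
    "neuroticism":       [2, 3, 2],
}

# Illustrative human averages for the same traits (placeholder values).
human_means = {
    "openness": 3.9,
    "conscientiousness": 3.6,
    "extraversion": 3.2,
    "agreeableness": 3.7,
    "neuroticism": 2.9,
}

# Compare the chatbot's trait averages with the human averages.
for trait, answers in chatbot_answers.items():
    bot_score = mean(answers)
    gap = bot_score - human_means[trait]
    print(f"{trait:18s} chatbot={bot_score:.2f}  human={human_means[trait]:.2f}  gap={gap:+.2f}")
```

The same pattern scales to a full questionnaire: administer each item to the chatbot, average the responses per trait, and report the gap relative to the human distribution.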
The results of the study were striking. ChatGPT-4 reflected human averages across all personality dimensions, while ChatGPT-3.5 showed slight deviations. Both chatbots displayed behavioral tendencies close to human ones, particularly on extraversion and neuroticism, but with notable differences in agreeableness and openness. Even more interesting were the results in the behavioral games: ChatGPT-4 in particular performed in ways that were often indistinguishable from, or in some respects superior to, human performance, suggesting it could pass a Turing-style test in certain contexts. Both chatbots also showed a higher propensity for generosity and fairness than the human average.
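As a concrete example of how such a comparison works in one of the behavioral games, the sketch below scores generosity in a Dictator Game as the share of the endowment given to the other player. The endowment size, the chatbot's offer, and the human average offer are assumed placeholder values, not results from the paper.

```python
# Toy Dictator Game comparison: generosity = share of the endowment
# the player gives to the other participant. All numbers are
# illustrative placeholders, not results from the PNAS study.

ENDOWMENT = 100            # points the "dictator" can split

chatbot_offer = 50         # hypothetical amount the chatbot gives away
human_average_offer = 30   # hypothetical average human offer

chatbot_generosity = chatbot_offer / ENDOWMENT
human_generosity = human_average_offer / ENDOWMENT

print(f"Chatbot generosity: {chatbot_generosity:.0%}")
print(f"Human average:      {human_generosity:.0%}")
if chatbot_generosity > human_generosity:
    print("Chatbot is more generous than the human average.")
else:
    print("Chatbot is at or below the human average.")
```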
Adaptability and learning from experience were other highlights of the study: the chatbots modified their strategies based on context and previous experience, a very human behavior. These results open new horizons for the potential of AI and its integration into society. The fact that artificial intelligences can not only imitate human behaviors but also learn and adapt to different situations raises important questions about AI decision-making.