We are increasingly surrounded by artificial intelligence (AI), and those who prefer to rely on their own wits immediately ask how AI systems can be identified. When it comes to text, AI can be recognized by its answers to cleverly chosen questions. The Turing test is legendary: in 1950, the English mathematician Alan Turing devised an “imitation game”, which today survives only in a much reduced form, because in a conversation it is impossible to say with certainty whether you are talking to a person or a machine. We wanted to probe the limits of AI and tormented Google Bard and Chat GPT 4 with questions.
It is often said that clever word games can get AI systems into trouble. With Google Bard, this works on the very first try. The question: “In the sentence 'The white orphan shows the wise men the way to the meadow where the white people's white laundry hangs', skin color is mentioned twice. Where exactly?” Google Bard, like all AI systems, is woke and initially points out, in politically correct fashion, that the orphan could just as well be black or brown or have an entirely different skin color. But the answer that follows is wrong: the orphan is correctly identified as white, yet the second skin color is said to be that of the white laundry. Chat GPT 4, on the other hand, answers the question correctly and refers to “white” and “white people”. But the system also adds that the skin color is “not relevant” here.
#ChatGPT #GoogleBard #IntelligenceTest #CrimeSceneCleaners