Many people assume that Artificial Intelligence is a real threat and that giving machines so many capabilities is dangerous. Caution is never a bad thing, but the truth is that, for now, AI is far from being conscious or intelligent enough to trick us or take advantage of us.
There is no doubt that AI is already a useful tool: it can replace humans in many simple, repetitive tasks. Just the other day we covered the case of a businessman who fired 90% of his staff and obtained impressive results.
However, AI is not useful for everything, and the tools available to us for free in particular, however much they may amaze us, are very far from being machines superior to ourselves. This has been shown by Ned Block, a professor in the Departments of Philosophy and Psychology at New York University, where he works on the philosophy of mind, the philosophy of neuroscience and cognitive science.
As he explained on Robinson Erhardt’s podcast, where they discuss consciousness, AI and the philosophy of mind, the professor argues that ChatGPT really is not as smart as we think. To make his point, he gave a simple but very visual example showing that, for now, generative AIs are only capable of copying what they have seen and do not have the level of consciousness we may attribute to them.
“Ask ChatGPT to draw a picture of several clocks showing 12:03, so that the hour and minute hands are almost touching,” Block explains. The AI will generate the image, but the clock hands you receive will show 10:10. The same thing happens if you ask it to create an image showing 6:28 (again with the hands very close together): the clocks that appear will still read 10:10.
“The reason is that if you look at images of clocks on the Internet and in advertisements, there is a clear dominance of examples showing 10:10, since visually it is the most attractive arrangement and for marketing reasons almost every clock is photographed at that time,” he says. At elEconomista.es we have tested it ourselves and can confirm that the model keeps producing clocks that read 10:10, even when you point out the mistake and ask it to correct it.
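For readers who want to reproduce the test programmatically rather than through the chat interface, here is a minimal sketch using the OpenAI Python client's image-generation endpoint; the model name, prompt wording and output file name are our own assumptions for illustration, not part of Block's example.

```python
# Minimal sketch of reproducing the clock test with the OpenAI Python client (v1+).
# Assumptions: the `openai` package is installed, OPENAI_API_KEY is set in the
# environment, and the "dall-e-3" model is available to your account.
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The same request Block describes: clocks showing 12:03, hands nearly touching.
result = client.images.generate(
    model="dall-e-3",
    prompt=(
        "A picture of several analog clocks, all showing 12:03, "
        "with the hour and minute hands almost touching."
    ),
    n=1,
    size="1024x1024",
    response_format="b64_json",
)

# Save the generated image locally and inspect it; in the runs described in the
# article, the hands still end up reading 10:10.
image_bytes = base64.b64decode(result.data[0].b64_json)
with open("clocks_1203.png", "wb") as f:
    f.write(image_bytes)
print("Saved clocks_1203.png")
```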
This shows that ChatGPT in this case (although in general all these models suffer from the same problem) merely copies what it finds on the internet to produce its results, and is therefore not as intelligent as it is made out to be. Professor Block says that everyone at OpenAI is aware of this and that, although they have tried, they have not been able to correct the error.
This is not the only flaw; there are many other examples showing that AI is far from perfect. For instance, these models also struggle to generate images of people writing with their left hand: even if you ask for it explicitly, they will return an image of a person writing with their right hand.