Have you ever tried to understand how an artificial intelligence really thinks? Sounds like an interesting idea, right? Yet, OpenAI is threatening to block users who dare to ask its new AI model, nicknamed “Strawberry,” to reveal how its reasoning works. Yes, you read that right: despite the promise of an AI capable of “thinking” step by step, it seems that OpenAI doesn’t really want you to know how it does it.
But let’s take a step back. “Strawberry”, or more technically “o1-preview”, was launched with the great promise of ushering in a new era of AI reasoning, built on a technique called chain of thought. This technology, according to OpenAI, allows the machine to explain how it arrives at its answers, step by step, making the process more transparent and, hopefully, more reliable. But what if you try to dig too deep? Well, you could find yourself facing an outright ban.
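To make “chain of thought” concrete: the simplest, best-known way to elicit this kind of step-by-step reasoning from an ordinary language model is just to ask for it in the prompt. Below is a minimal illustrative sketch in Python using the official openai client; the model name, question, and prompt wording are our own choices for illustration, not OpenAI’s method (o1-preview performs this reasoning internally, without being asked).

```python
# Illustrative sketch of "chain of thought" prompting: asking a model
# to reason step by step before answering. Requires the `openai` package
# and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # an ordinary chat model, chosen for illustration
    messages=[
        {
            "role": "user",
            "content": (
                "A train leaves at 9:15 and the trip takes 2 hours and 50 minutes. "
                "When does it arrive? Let's think step by step."
            ),
        }
    ],
)

# The reply spells out the intermediate steps before the final answer.
print(response.choices[0].message.content)
```

The difference with o1-preview is that it does this reasoning on its own, behind the scenes; asking it to show those hidden steps is precisely what is now getting users flagged.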
What’s really happening?
According to several users on social media, OpenAI is issuing warnings to those who try to ask its artificial intelligence to explain its thought process. These users are receiving emails informing them that their requests have been flagged for attempting to “bypass protections.” And what would you do if a simple attempt to understand more cost you your access to the service?
An ironic paradox
The irony of the situation is quite evident. The technology was launched with the idea of improving transparency, but it now appears that OpenAI is keeping much of this reasoning behind a veil. Sure, you can still see a summary of what the AI “thinks,” but it’s filtered through a second AI model and heavily watered down.
You may be wondering: why? Well, OpenAI says this measure ensures that the AI doesn’t end up saying things that violate its safety policies. Simply put, they don’t want the AI to think out loud, so to speak, and risk exposing problematic details.
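Here is what “watered down” means in practice, sketched under one assumption: that you call o1-preview through OpenAI’s API, where (per the usage schema published alongside o1) the hidden reasoning shows up only as a token count. You pay for the reasoning tokens, but their content is never returned.

```python
# Minimal sketch: o1-preview reasons internally. The API reports *how much*
# reasoning happened (and bills for it), but never exposes its content.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": "How many r's are in 'strawberry'?"}],
)

print(response.choices[0].message.content)

usage = response.usage
print("Visible completion tokens:", usage.completion_tokens)
# Hidden reasoning: counted and billed, but its text is never returned.
print("Hidden reasoning tokens:", usage.completion_tokens_details.reasoning_tokens)
```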
But there’s more…
If you look carefully, there is also another reason behind this decision. OpenAI has openly stated that hiding the chain of thought also serves to maintain a “competitive advantage”. In other words, the less they share about how the AI arrives at its answers, the harder it will be for competitors to follow in their footsteps.
Who pays the price?
However, all this has a downside. By concentrating so much control in its own hands, OpenAI is reducing the possibility of democratizing access to the technology. Programmers and researchers, including the so-called red-teamers who work to make AI models safer through controlled hacking attempts, are finding it increasingly difficult to do their jobs.
Let’s take Simon Willison, a well-known researcher in the field of AI, as an example. He wrote a post on his blog saying he was disappointed by this decision. He explained that, for a developer, transparency is everything: knowing how an artificial intelligence arrives at a decision is fundamental to understanding if and how to improve it.
And what do you think?
Don’t you also find the idea of an artificial intelligence that “thinks” fascinating? But how comfortable would you feel knowing that the answers you get could be the result of an opaque process, hidden behind invisible barriers? This situation leaves us with an important question: how much can we trust an AI’s responses if we don’t know exactly how they are produced?
Want to know more about how these models work? Keep following our articles to go behind the scenes of the most advanced technologies of the moment and delve deeper into the world of artificial intelligence!