What happens when one of the brightest minds behind OpenAI warns that the next generation of artificial intelligence will be “autonomous” and “unpredictable”? The statements of Ilya Sutskever, former chief scientist at OpenAI, resonate at a key moment for the technology industry. After leaving the company in May 2024, he founded Safe Superintelligence Inc. (SSI), an organization committed to controlled and safe AI development.
But his departure was not smooth: the friction with Sam Altman, CEO of OpenAI, reflects the tension between commercial ambition and ethical principles in artificial intelligence.
Sutskever’s key role at OpenAI
Ilya Sutskever was one of the original founders of OpenAI, created in 2015 with the ambitious purpose of developing artificial intelligence that benefits humanity. As chief scientist, Sutskever was instrumental in the breakthroughs that propelled OpenAI to stardom, from GPT-3 to the creation of the revolutionary ChatGPT. However, his work went beyond technical capabilities: he was a staunch defender of safety in AI development.
For years, he warned about the risks of artificial superintelligence: systems capable of surpassing human cognitive abilities, autonomous and, above all, difficult to predict or control. His recent remarks at the NeurIPS conference only underscore these concerns.
Sutskever’s vision of superintelligence
At NeurIPS, Ilya Sutskever expanded on his vision of artificial superintelligence. According to him, next-generation systems will not only be more advanced, but will have completely new features:
- Real autonomy: Unlike current AIs, which have some “limited agency,” future artificial intelligences will be able to make decisions and act independently.
- Reasoning ability: Sutskever predicts that these AIs will understand complex concepts with minimal data.
- Unpredictability: As these technologies gain autonomy, their actions will become increasingly difficult to predict.
- Self-awareness: In a controversial point, the scientist believes that these AIs could become self-aware and even demand rights.
“If the end result is that AIs want to coexist with us and have rights, it is not necessarily a bad scenario,” Sutskever concluded. These statements, although controversial, open a philosophical and ethical debate about the relationship between humans and machines.
The fight with Sam Altman and the departure from OpenAI
Sutskever’s departure from OpenAI was the outcome of a series of internal tensions over how to prioritize safety amid exponential growth. In 2023, he and Jan Leike led the Superalignment team, created to investigate how to control extremely advanced artificial intelligences. However, differences with Sam Altman, CEO of the company, led to an irreconcilable conflict.
Altman, with a more commercial vision, prioritized launching products and ensuring OpenAI’s competitiveness in a market that demands constant advances. Sutskever, on the other hand, defended a more conservative position, arguing that runaway growth without safety guarantees could trigger irreversible consequences. This struggle reached its climax in May 2024, when Sutskever left the company.
Safe Superintelligence Inc.: Sutskever’s new bet
Just a month after his departure, Sutskever announced the launch of Safe Superintelligence Inc. (SSI), along with co-founders Daniel Gross and Daniel Levy. SSI’s mission is clear and compelling: ensure that superintelligent AI is safe and aligned with human interests.
In its founding statement, the company stated that:
“Our sole focus is safe superintelligence. We will advance capabilities as quickly as possible, but with safety always one step ahead.”
SSI has managed to attract the attention of investors with an initial funding round of 1 billion dollars. Operating out of Palo Alto and Tel Aviv, the company promises a distraction-free approach, no short-term commercial pressures, and an agenda entirely focused on safety.
The future of AI: between progress and caution
The conflict between Sutskever and Altman symbolizes a dilemma in the development of artificial intelligence: how far should we advance, and at what cost? While companies like OpenAI and Anthropic (with Jan Leike leading their safety team) are betting on a balance between growth and control, SSI proposes an absolute prioritization of safety.
Sutskever’s vision is not only technical, but ethical and existential. The possibility of creating an intelligence more advanced than the human mind raises fundamental questions about control, coexistence, and the very future of humanity.
Sutskever’s words at NeurIPS leave us with an open question that sounds like science fiction: can we control what we are about to create, or will we be mere spectators of its autonomy?