In the 1940s, the United States led the Manhattan Project in the hope of winning World War II through the development of the atomic bomb. What its architects did not foresee was that its legacy would lead to an arms race that kept the world on the verge of annihilation during the Cold War, decades later. Are we about to repeat history, but with artificial intelligence?
Today, the idea of a “Manhattan Project” for artificial general intelligence (AGI) resonates strongly in Washington, with Sam Altman and other leaders finding common ground with Donald Trump in the so-called ‘Stargate Project’.
The promise is clear: guarantee American supremacy in a technology that could define the future of global power. But some experts, among them Eric Schmidt, former CEO of Google, warn that this path could lead the world into an even more dangerous conflict.
The report “Superintelligence Strategy”, published by Schmidt and other industry leaders, challenges the narrative that the US must be the first to develop superintelligent AI at any cost. Instead, it proposes a defensive strategy: dissuade other powers from advancing too fast in this field and guarantee mechanisms to prevent AGI from falling into the wrong hands.
The risk of a Manhattan Project for AI
One of the key points of the report is that a unilateral push for AGI could provoke aggressive responses from China. The authors suggest that, if the US tried to monopolize the technology, Beijing could interpret this as an existential threat and respond with cyberattacks or similar measures.
The comparison with nuclear weapons is no accident. In the Cold War, the balance of power rested on mutually assured destruction (MAD): if a country attacked with nuclear weapons, the other had the ability to respond with equal force, which deterred their use. Schmidt and his co-authors apply this logic to AI with the concept of Mutual Assured AI Malfunction (MAIM), a strategy in which countries could intervene to prevent their adversaries from developing dangerous AI systems.
This implies an important shift in the usual narrative about AI. Instead of racing at full speed toward AGI, the experts suggest that the US should strengthen its capacity to respond to cyber and AI threats. This would include measures such as:
- Develop advanced cybersecurity tools to disable potentially dangerous AI projects in other countries.
- Limit adversaries' access to advanced hardware and AI models, hindering their progress in this field.
- Establish international agreements that regulate the development and use of AGI, avoiding an uncontrolled race.
Between “Doomers” and “Ostriches”: a new approach to artificial intelligence
The debate about AGI has polarized into two extreme positions:
- The “Doomers”: They believe that AGI is inherently dangerous and that the world should slow down its development to avoid catastrophes.
- The “Ostriches”: They think that AGI will bring enormous benefits and that halting its advance is nonsense.
Schmidt and his co-authors reject both positions and propose an intermediate approach: accept that AGI is inevitable, but ensure it is developed safely. To do this, they suggest that the US adopt the role of regulator and deterrent instead of simply trying to be the first to reach AGI.
This vision contrasts with the US government's approach, which in recent months has openly bet on leading the AI race. From Congress to the White House, many see AGI as a geopolitical weapon that must be controlled at all costs.
Can the US avoid a new AI cold war?
The report raises a crucial question: is it possible to prevent artificial intelligence from becoming a source of global conflict? While Schmidt, Wang and Hendrycks advocate a more prudent approach, the reality is that the AI race has already begun. China, Europe and other technological powers are investing billions in the development of advanced models.