The leading artificial intelligence (AI) companies have announced the creation of a fund dedicated to safety in the field, intended to support independent researchers who build tools to evaluate and test the most powerful models. Anthropic, Google, Microsoft and OpenAI had already created the Frontier Model Forum, an organization that is now receiving its first funding and the appointment of an executive director.
The initial financing is 10 million dollars. All of these companies command valuations and investments in the billions of dollars: Microsoft alone has invested 10 billion in OpenAI, and Anthropic is in talks to receive more than 3 billion from Google and Amazon.
The companies believe the fund fulfills part of the commitments they signed with the White House on responsible AI: “Earlier this year, members of the Forum signed voluntary AI commitments at the White House, which included a pledge to facilitate third-party discovery and reporting of vulnerabilities in our AI systems. The Forum views the AI Safety Fund as an important part of delivering on this commitment by providing funding to the external community to evaluate and better understand the most advanced systems,” says its press release.
The Forum has also appointed Chris Meserole, who comes from the Brookings Institution think tank, as executive director. “The most powerful AI models hold enormous promise for society, but to realize their potential we need to better understand how to safely develop and evaluate them. I am excited to take on that challenge with the Forum,” said Meserole.
These increasingly sophisticated models can have capabilities unknown even to the company that created them. The fund is a way to enlist outside help in understanding their possible negative impacts.