This announcement comes a month after the dismissal of Sam Altman, head of the company behind the ChatGPT interface, who was reinstated a few days later.
American media reported that members of the Board of Directors criticized Altman for favoring the rapid development of OpenAI's technology, even at the risk of potential harms caused by artificial intelligence.
The preparedness team will be led by computer science researcher Alexander Madry, who is on leave from the Massachusetts Institute of Technology (MIT), where he works as a professor, according to a message published by the academic on Monday on the X platform (formerly Twitter).
The team will be based on a framework set out in a document published Monday that defines its scope and procedures.
This new group will focus primarily on so-called "frontier" models: those currently in development whose capabilities exceed those of today's most advanced artificial intelligence programs.
"We believe that the scientific study of catastrophic risks arising from artificial intelligence is woefully inadequate," OpenAI said in the document, adding that the framework would "help fill this gap."
The team will evaluate each new model and determine the level of risk it poses in four main categories.
The first relates to cybersecurity and the model's ability to carry out large-scale cyberattacks.
The second will measure the program's propensity to help create a chemical mixture, an organism (such as a virus), or a nuclear weapon, all of which could harm humans.
The third category concerns the model's ability to persuade, that is, the extent to which it can influence human behavior.
The last category of risk concerns the model's potential autonomy, that is, whether it can escape the control of the programmers who created it.
Once the risks are identified, they will be presented to the Safety Advisory Group, a new entity that will make recommendations to Sam Altman or someone he appoints.
The head of OpenAI will then decide on possible modifications to the model to reduce the risks associated with it.
The Board of Directors will be kept informed and may overturn a decision made by management. This safeguard was granted to the directors after the dramatic episode of Sam Altman's temporary departure, which drew criticism questioning the board's legitimacy.