NEW YORK – OpenAI announced that it is establishing a safety and security committee and has begun training a new artificial intelligence model to succeed the GPT-4 system that underpins its ChatGPT chatbot.
The San Francisco-based company said in a blog post Tuesday that the committee will advise the full board on “critical safety and security decisions” for its projects and operations.
The new committee comes as debate swirls around AI safety at the company, which was thrust into the spotlight after researcher Jan Leike resigned and criticized OpenAI for letting safety “take a backseat to shiny products.”
Ilya Sutskever, OpenAI’s co-founder and chief scientist, also resigned, and the company disbanded the “superalignment” team focused on artificial intelligence risks that the two jointly led.
Leike announced Tuesday that he has joined rival AI company Anthropic, founded by former OpenAI leaders, to “continue the superalignment mission” there.
AI models are prediction systems trained on vast datasets to generate human-like text, images, videos, and conversations. Frontier models are the most powerful, cutting-edge AI systems.
The committee is made up of company insiders, including OpenAI CEO Sam Altman and board chair Bret Taylor, along with four OpenAI technical and policy experts.
It also includes board members such as Adam D’Angelo, who is the CEO of Quora, and Nicole Seligman, former general counsel of Sony.
The committee’s first task will be to evaluate and further develop OpenAI’s processes and safeguards and present its recommendations to the board within 90 days.
The company said it will publicly share the recommendations it adopts “in a manner that is consistent with safety and security.”