The OpenAI logo appears on a mobile phone in front of a screen showing a portion of the company website in this photo taken on Tuesday, Nov. 21, 2023 in New York. (AP Photo/Peter Morgan)
(NewsNation) — OpenAI announced this week that its board has the authority to reverse safety decisions made by its CEO and other company leaders, such as whether to release new AI models.
This move was laid out in a safety plan published by the artificial intelligence research organization Monday. The plan details what OpenAI is doing to protect against “catastrophic risks” posed by “increasingly powerful models.”
“The study of frontier AI risks has fallen far short of what is possible and where we need to be,” the company said on its website. “To address this gap and systematize our safety thinking, we are adopting the initial version of our Preparedness Framework.”
As part of this Preparedness Framework, OpenAI created a team that will monitor and evaluate the risks of new AI models. Risks are ranked low, medium, high, or critical across categories that include chemical, biological, radiological, and nuclear threats. Only models that score medium or lower can be deployed.
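To make that gating rule concrete, the following is a minimal Python sketch of the deployment check the plan describes. The four risk tiers match the levels the plan names, but the names RiskLevel and can_deploy are hypothetical illustrations, not OpenAI's actual tooling.

from enum import IntEnum

class RiskLevel(IntEnum):
    # The four risk tiers named in the Preparedness Framework.
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

def can_deploy(category_scores: dict[str, RiskLevel]) -> bool:
    # Hypothetical helper for illustration: a model may be deployed
    # only if every tracked risk category scores medium or lower.
    return all(score <= RiskLevel.MEDIUM for score in category_scores.values())

# Example: a single high-rated category blocks deployment.
scores = {
    "chemical": RiskLevel.LOW,
    "biological": RiskLevel.MEDIUM,
    "radiological": RiskLevel.LOW,
    "nuclear": RiskLevel.HIGH,
}
print(can_deploy(scores))  # False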
The preparedness team is tasked with making regular reports to a Safety Advisory Group. Those reports will then be sent to OpenAI’s CEO, Sam Altman, and the rest of the company’s leadership, who make the final decisions, though the Board of Directors can reverse them, the safety plan says.
A group of AI industry leaders and experts signed an open letter this past April urging a six-month pause in developing systems more powerful than OpenAI’s GPT-4, citing potential risks to society.
Those who aren’t industry insiders share those concerns: A May Reuters/Ipsos poll found more than two-thirds of Americans are worried about negative effects of AI, with 61% fearing it could threaten civilization. Another poll by Gallup showed 75% of people think AI could decrease the total number of jobs over the next decade.
Aleksander Madry, who leads the preparedness group while on leave from a faculty position at the Massachusetts Institute of Technology, told Bloomberg News he hopes other companies will use OpenAI’s guidelines.
“AI is not something that just happens to us that might be good or bad,” Madry said. “It’s something we’re shaping.”
OpenAI has had a tumultuous few weeks: Altman was fired, then quickly rehired, by the company in late November.
Reuters contributed to this story.