Leading artificial intelligence (AI) companies OpenAI and Anthropic have signed agreements with the U.S. government for their AI models to be used for research, testing and evaluation, the National Institute of Standards and Technology (NIST) announced Thursday.
Under the agreements, the U.S. AI Safety Institute will have access to both companies' major new models before and after their public release, NIST said in a release.
The deals seek to advance research on the capabilities and risks of AI, as well as the best ways to mitigate those risks. The institute also intends to give OpenAI and Anthropic feedback on possible safety improvements, per NIST.
“Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” Elizabeth Kelly, the director of the U.S. AI Safety Institute, said in a statement. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”
The agreements come as AI companies face increasing scrutiny from the government and lawmakers over the safety of their models.
The AI Safety Institute was launched within the Department of Commerce last year as part of President Biden's sweeping executive order on AI safety, risks and data privacy. Prior to leading the institute, Kelly served as an economic adviser to Biden.
OpenAI, the maker of ChatGPT, is also part of the AI Safety Institute Consortium, which operates under the umbrella of the AI Safety Institute. Other tech companies in the consortium include Microsoft, Alphabet's Google, Apple and Meta Platforms. Some government agencies and academic institutions are also part of the group.