ABBO News

OpenAI and Anthropic Sign Deals with US Government for AI Research and Testing

AI startups OpenAI and Anthropic have signed deals with the United States government for research, testing, and evaluation of their artificial intelligence models, the U.S. Artificial Intelligence Safety Institute said on Thursday.

The first-of-their-kind agreements come at a time when companies are facing regulatory scrutiny over the safe and ethical use of AI technologies.

California legislators are set to vote on a bill as soon as this week to broadly regulate how AI is developed and deployed in the state.

“Safe, trustworthy AI is crucial for the technology’s positive impact. Our collaboration with the U.S. AI Safety Institute leverages their wide expertise to rigorously test our models before widespread deployment,” said Jack Clark, Co-Founder and Head of Policy at Anthropic, backed by Amazon (NASDAQ: AMZN) and Alphabet (NASDAQ: GOOG).

Under the deals, the U.S. AI Safety Institute will have access to major new models from both OpenAI and Anthropic before and after their public release.

The agreements will also enable collaborative research to evaluate the capabilities of the AI models and the risks associated with them.

“We believe the institute has a critical role to play in defining U.S. leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on,” said Jason Kwon, chief strategy officer at ChatGPT maker OpenAI.

“These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI,” said Elizabeth Kelly, director of the U.S. AI Safety Institute.

The institute, a part of the U.S. Commerce Department’s National Institute of Standards and Technology (NIST), will also collaborate with the U.K. AI Safety Institute and provide feedback to the companies on potential safety improvements.

The U.S. AI Safety Institute was launched last year as part of an executive order by President Joe Biden’s administration to evaluate known and emerging risks of artificial intelligence models.

(Source: Reuters)

Mary Lee
Mary Lee is a freelance writer and journalist based in Toronto, Canada. She holds an M.S. degree in business and economic journalism from Columbia University’s Graduate School of Journalism in New York and a certificate in digital marketing from the University of Toronto.