OpenAI and Anthropic agree to share their models with the US AI Safety Institute
OpenAI and Anthropic have agreed to share their AI models – both before and after release – with the US AI Safety Institute. The agency, established through a 2023 executive order from President Biden, will provide safety feedback to the companies to help them improve their models. OpenAI CEO Sam Altman commented on the deal earlier this month.
The US AI Safety Institute did not mention other companies working on AI, but a Google spokesperson told Engadget that the company is in talks with the agency and will share more information when it becomes available. This week, Google began rolling out updated chatbot and image-generation models for Gemini.
“Safety is critical to driving new technological breakthroughs. With these agreements in place, we look forward to launching our technical collaboration with Anthropic and OpenAI to advance the science of AI safety,” wrote Elizabeth Kelly, director of the US AI Safety Institute, in a statement. “These agreements are just the beginning, but they are milestones as we work to help manage the future of AI responsibly.”
The US AI Safety Institute is part of the National Institute of Standards and Technology (NIST). It creates and publishes guidelines, benchmark tests and best practices for testing and evaluating potentially harmful AI systems. “As profoundly good as AI is, it has the potential to cause great harm, from AI-enabled cyberattacks on a scale beyond anything we’ve seen before to AI-developed bio-weapons that could endanger the lives of millions,” Vice President Kamala Harris said in late 2023 after the agency was established.
The first-of-their-kind agreements take the form of a Memorandum of Understanding, which is formal but non-binding. The agency will get access to each company’s major new models both before and after their public release. The agency describes the agreements as collaborative, risk-mitigating research that will evaluate capabilities and safety. The US AI Safety Institute will also collaborate with the UK AI Safety Institute.
The agreements come as federal and state regulators try to establish AI guardrails while the rapidly developing technology is still in its infancy. On Wednesday, the California State Assembly approved an AI safety bill (SB 1047) that mandates safety testing for AI models that cost more than $100 million to develop or that require a set threshold of computing power. The bill requires AI companies to build in kill switches that can shut down models if they become unwieldy or uncontrollable.
Unlike the non-binding agreement with the federal government, California’s bill would have enforcement teeth. It gives the state’s attorney general license to sue if AI developers don’t comply, especially during threat-level events. However, the bill still needs one more procedural vote – and the signature of Governor Gavin Newsom, who will have until September 30 to decide whether to give it the green light.
Update, August 29, 2024, 4:53 PM ET: This story has been updated to add a response from a Google spokesperson.