{"id":18927,"date":"2024-08-30T09:32:55","date_gmt":"2024-08-30T01:32:55","guid":{"rendered":"https:\/\/www.1ai.net\/?p=18927"},"modified":"2024-08-30T09:38:31","modified_gmt":"2024-08-30T01:38:31","slug":"openai-%e5%92%8c-anthropic-%e5%90%8c%e6%84%8f%e6%8e%a8%e5%87%ba%e6%96%b0%e6%a8%a1%e5%9e%8b%e5%89%8d%e4%ba%a4%e7%bb%99%e7%be%8e%e5%9b%bd%e6%94%bf%e5%ba%9c%e8%af%84%e4%bc%b0%e5%ae%89%e5%85%a8","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/18927.html","title":{"rendered":"OpenAI and Anthropic agree to submit new models to the US government for safety assessment before launching"},"content":{"rendered":"<p>Artificial Intelligence Companies <a href=\"https:\/\/www.1ai.net\/en\/tag\/openai\" title=\"[View articles tagged with [OpenAI]]\" target=\"_blank\" >OpenAI<\/a> and <a href=\"https:\/\/www.1ai.net\/en\/tag\/anthropic\" title=\"[View articles tagged with [Anthropic]]\" target=\"_blank\" >Anthropic<\/a> Agreed to allow<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e7%be%8e%e5%9b%bd\" title=\"_Other Organiser\" target=\"_blank\" >USA<\/a>The government is accessing major new AI models before these companies release them to help improve their security.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-18928\" title=\"3df95fa6j00sj0ca1001qd000o200i2m\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/08\/3df95fa6j00sj0ca1001qd000o200i2m.jpg\" alt=\"3df95fa6j00sj0ca1001qd000o200i2m\" width=\"866\" height=\"650\" \/><\/p>\n<p>The US AI Safety Institute announced on Thursday that<strong>The two companies have signed a memorandum of understanding with the institute, committing to provide access to the model before and after it is publicly released.<\/strong>The U.S. government said the move would help them jointly assess security risks and mitigate potential issues. 
The agency said it would work with its British counterpart to provide feedback on safety improvements.<\/p>\n<p>Jason Kwon, Chief Strategy Officer at OpenAI, expressed support for the collaboration:<\/p>\n<blockquote>\n<ul>\n<li>\u201cWe strongly support the mission of the National AI Safety Institute and look forward to working together to develop safety best practices and standards for AI models. We believe the institute plays a key role in ensuring American leadership in the responsible development of AI. We expect that through our collaboration with the institute, we can provide a framework that the world can learn from.\u201d<\/li>\n<\/ul>\n<\/blockquote>\n<p>Anthropic also said it is important to build the capacity to test AI models effectively. Jack Clark, the company\u2019s co-founder and head of policy, said:<\/p>\n<blockquote>\n<ul>\n<li>\u201cEnsuring AI is safe and trustworthy is critical to enabling the technology to have a positive impact. Through testing and collaboration like this, we can better identify and mitigate the risks posed by AI and promote responsible AI development. We are proud to be part of this important work and hope to set a new standard for the safety and trustworthiness of AI.\u201d<\/li>\n<\/ul>\n<\/blockquote>\n<p>Sharing access to AI models is a significant step as federal and state legislatures consider how to place limits on the technology without stifling innovation. On Wednesday, California lawmakers passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), which would require AI companies in California to take specific safety measures before training advanced foundation models. 
The bill has drawn opposition from AI companies including OpenAI and Anthropic, which warned that it could hurt smaller open-source developers; although it has since been amended, it is still awaiting the signature of California Governor Gavin Newsom.<\/p>\n<p>Meanwhile, the White House has been working to secure voluntary commitments from major companies on AI safety measures. Several leading AI companies have made non-binding commitments to invest in cybersecurity and research on discrimination, and to work on watermarking AI-generated content.<\/p>\n<p>Elizabeth Kelly, director of the AI Safety Institute, said in a statement that the new agreements are &quot;just the beginning, but they are an important milestone in our efforts to help responsibly govern the future of AI.&quot;<\/p>","protected":false},"excerpt":{"rendered":"<p>Artificial intelligence companies OpenAI and Anthropic have agreed to allow the U.S. government access to major new AI models before the companies release them to help improve their safety. The companies have signed a memorandum of understanding with the US AI Safety Institute, promising to provide access to models before and after they are publicly released, the institute announced Thursday. The U.S. government said the move will help them work together to assess safety risks and mitigate potential problems. The agency said it will work with its UK counterpart to provide feedback on safety improvements. 
Jason Kwon, OpenAI's chief strategy officer, expressed his support for the collaboration: \"We are very supportive of the U.S.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[320,190,236],"collection":[],"class_list":["post-18927","post","type-post","status-publish","format-standard","hentry","category-news","tag-anthropic","tag-openai","tag-236"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/18927","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=18927"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/18927\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=18927"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=18927"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=18927"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=18927"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}