{"id":7035,"date":"2024-04-03T09:34:44","date_gmt":"2024-04-03T01:34:44","guid":{"rendered":"https:\/\/www.1ai.net\/?p=7035"},"modified":"2024-04-03T09:34:44","modified_gmt":"2024-04-03T01:34:44","slug":"%e5%85%a8%e7%90%83%e9%a6%96%e4%b8%aa%e6%b6%89-ai-%e5%ae%89%e5%85%a8%e5%8f%8c%e8%be%b9%e5%8d%8f%e8%ae%ae%ef%bc%8c%e8%8b%b1%e7%be%8e%e4%b8%a4%e5%9b%bd%e5%bb%ba%e7%ab%8b%e4%ba%ba%e5%b7%a5%e6%99%ba","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/7035.html","title":{"rendered":"The world&#039;s first bilateral agreement on AI safety, the UK and the US establish a scientific partnership on artificial intelligence safety"},"content":{"rendered":"<p data-vmark=\"edd1\">On the evening of April 1, Eastern Time, the <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e7%be%8e%e5%9b%bd\" title=\"[View articles tagged [USA]]\" target=\"_blank\" >U.S.<\/a> and the <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%8b%b1%e5%9b%bd\" title=\"[View articles tagged [UK]]\" target=\"_blank\" >U.K.<\/a> signed a landmark agreement announcing the creation of an <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e4%ba%ba%e5%b7%a5%e6%99%ba%e8%83%bd\" title=\"[View articles tagged [artificial intelligence]]\" target=\"_blank\" >AI<\/a> Safety Science Partnership.<\/p>\n<p data-vmark=\"d482\">It is the world's first bilateral agreement on AI safety. The agreement makes it clear that the new AI Safety Institute, established in the UK last November, and its U.S. 
counterpart <span class=\"accentTextColor\">will exchange expertise through the secondment of researchers from both countries<\/span>.<\/p>\n<p data-vmark=\"93e8\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-7036\" title=\"b136249c-a528-4930-99b0-5f15e55448ab\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/04\/b136249c-a528-4930-99b0-5f15e55448ab.jpg\" alt=\"b136249c-a528-4930-99b0-5f15e55448ab\" width=\"640\" height=\"390\" \/><\/p>\n<p>Image source: Pixabay<\/p>\n<p data-vmark=\"e359\">In addition, <span class=\"accentTextColor\">the two organizations will develop a common approach to AI safety testing<\/span>, including the use of the same methodology and underlying infrastructure, and both parties will seek to exchange staff and share information in accordance with national laws, regulations, and agreements. The press release also states that the parties intend to conduct joint testing on a \"publicly accessible\" AI model.<\/p>\n<p data-vmark=\"f869\">With the agreement, the two countries are making good on a promise made at the AI Safety Summit last November, where 28 countries and regions, including China, the United States and the European Union, signed the Bletchley Declaration on AI. The declaration advocates a human-centered approach and calls on AI research institutions, companies and others to design, develop and use AI responsibly.<\/p>","protected":false},"excerpt":{"rendered":"<p>On the evening of April 1, Eastern Time, the United States and the United Kingdom signed a landmark AI-related agreement announcing the establishment of an AI Safety Science Partnership, the world's first bilateral agreement on AI safety. 
The agreement made it clear that the new AI Safety Institute, established in the UK last November, and its U.S. counterpart would share expertise through the secondment of researchers from both countries. In addition, the two institutions will develop a common approach to AI safety testing, including the use of the same methodology and underlying infrastructure, and will seek to exchange staff and share information in accordance with national laws, regulations, and agreements. The press release also stated that the parties intended to conduct joint testing on a \"publicly accessible\" AI model<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[204,236,552],"collection":[],"class_list":["post-7035","post","type-post","status-publish","format-standard","hentry","category-news","tag-204","tag-236","tag-552"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/7035","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=7035"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/7035\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=7035"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=7035"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=7035"},{"taxonomy":"collection","embeddabl
e":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=7035"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}