{"id":920,"date":"2023-10-27T10:59:50","date_gmt":"2023-10-27T02:59:50","guid":{"rendered":"https:\/\/www.1ai.net\/?p=920"},"modified":"2023-10-27T10:59:50","modified_gmt":"2023-10-27T02:59:50","slug":"%e8%8b%b1%e5%9b%bd%e9%a6%96%e7%9b%b8%e5%91%bc%e5%90%81%e5%9c%a8ai%e4%b8%8a%e5%af%bb%e6%b1%82%e5%b9%b3%e8%a1%a1%e8%a7%84%e5%88%99%ef%bc%8c%e5%b9%b6%e9%82%80%e8%af%b7%e4%b8%ad%e5%9b%bd%e5%8f%82%e5%8a%a0","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/920.html","title":{"rendered":"UK PM calls for balanced rules on AI, invites China to summit"},"content":{"rendered":"<p><a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%8b%b1%e5%9b%bd%e9%a6%96%e7%9b%b8\" title=\"[Sees articles with labels]\" target=\"_blank\" >British Prime Minister<\/a>Rishi Sunak in government<a href=\"https:\/\/www.1ai.net\/en\/tag\/ai%e5%ae%89%e5%85%a8%e5%b3%b0%e4%bc%9a\" title=\"_OTHER ORGANISER\" target=\"_blank\" >AI Security Summit<\/a>He gave a speech recently, stressing the need to strike a balance when dealing with the risks and potential benefits of artificial intelligence (AI).<\/p>\n<p>In a speech a week ago, he acknowledged the serious risk of AI being exploited by criminals, but stressed that he did not want to be an alarmist.<span class=\"spamTxt\">only<\/span>Those testing AI safety are the organizations that are developing it, and even they don\u2019t always fully understand how powerful their models might become. 
Those organizations cannot be relied upon to assess their own work, he said, a view shared by many in AI development.<\/p>\n<p class=\"article-content__img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-922\" title=\"202310250959057210_0\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2023\/10\/202310250959057210_0.jpg\" alt=\"202310250959057210_0\" width=\"1000\" height=\"666\" \/><\/p>\n<p>Source note: The image was generated by AI and is licensed by Midjourney<\/p>\n<p>Sunak opposed rushing to regulate AI for fear of stifling innovation. How, he asked, could meaningful laws be written for something that is not yet fully understood? Instead, he stressed that the government was building a world-leading capability to understand and assess the safety of AI models.<\/p>\n<p>So far, the UK government has taken a relatively light-touch approach to AI legislation, less strict than the EU\u2019s. It has set out its requirements for AI in a white paper, but has left it to regulators to develop AI rules within their respective remits. However, the UK&#039;s AI minister said at the AI Summit in London that any potential regulations would complement technical standards and safeguards, meaning more regulation is likely.<\/p>\n<p>Sunak said he wants to work with other countries to address AI safety issues rather than treat them as adversaries, which is why he invited countries such as China to the summit. He hopes to gather diverse voices for an in-depth discussion of AI regulation, stressing that no serious AI strategy is possible without at least trying to cooperate with the world&#039;s leading AI powers. 
Although this may not be easy, he said, it is the right thing to do.<\/p>\n<p>China has taken a stricter stance on AI regulation: Chinese AI companies must undergo security reviews by the country\u2019s data regulator before releasing new generative AI models to the public.<\/p>\n<p>Sunak said he hopes the summit will build a common understanding of the risks of AI, and that attendees will agree on the first international statement on the nature of those risks. He also hopes to establish a &quot;truly global expert group&quot;, nominated by the attending countries and organizations, to publish a &quot;state of the science of AI&quot; report. These efforts, he noted, will also depend on cooperation with the AI companies themselves: each new wave of AI will be more advanced and better trained, backed by better chips and more computing power, so the shared understanding of the risks must evolve as the risks do.<\/p>\n<p>This proposal echoes earlier calls from politicians, including UN Secretary-General Antonio Guterres, for the creation of a global oversight body for AI risks, similar to the International Atomic Energy Agency (IAEA), which oversees nuclear power plants and weapons.<\/p>\n<p>However, some industry reactions were sceptical, dismissing the idea as a \u201cpipe dream\u201d and arguing that effective regulation of this kind could take years to achieve. Instead, they argue, companies should focus on ensuring their AI is trained on trustworthy, proprietary data.<\/p>","protected":false},"excerpt":{"rendered":"<p>British Prime Minister Rishi Sunak delivered a speech ahead of the government&#039;s AI Security Summit, stressing the need to balance the risks and potential benefits of AI. 
In his speech a week ago, he acknowledged the serious risk of AI being exploited by criminals, while stressing that he did not want to be alarmist. He noted that the only ones currently testing AI safety are the organizations developing it, and that even they do not always fully understand how powerful their models might become. Those organizations, he said, cannot be relied on to assess their own work, a view many AI developers share. Source note: The image was generated by AI and is licensed by Midjourney. Sunak opposed rushing to regulate AI for fear of stifling innovation. He<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[299,298],"collection":[],"class_list":["post-920","post","type-post","status-publish","format-standard","hentry","category-news","tag-ai","tag-298"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/920","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=920"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/920\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=920"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=920"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=920"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collecti
on?post=920"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}