{"id":5418,"date":"2024-03-13T09:37:15","date_gmt":"2024-03-13T01:37:15","guid":{"rendered":"https:\/\/www.1ai.net\/?p=5418"},"modified":"2024-03-13T09:37:15","modified_gmt":"2024-03-13T01:37:15","slug":"%e6%8a%a5%e5%91%8a%e7%a7%b0%e4%ba%ba%e5%b7%a5%e6%99%ba%e8%83%bd%e4%b8%8e%e6%a0%b8%e6%ad%a6%e5%99%a8%e7%9b%b8%e6%af%94%e5%8f%af%e8%83%bd%e5%af%bc%e8%87%b4%e4%ba%ba%e7%b1%bb%e7%81%ad%e7%bb%9d","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/5418.html","title":{"rendered":"Artificial intelligence, like nuclear weapons, could lead to human extinction, report says"},"content":{"rendered":"<p>According to the New York Post, a report commissioned by the US government warns that the rapid development of <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e4%ba%ba%e5%b7%a5%e6%99%ba%e8%83%bd\" title=\"[View articles tagged with [artificial intelligence]]\" target=\"_blank\" >AI<\/a> could lead to weaponization and loss of control, and that governments need to take urgent action to avert potential human extinction. The report says the risk is similar to the introduction of <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e6%a0%b8%e6%ad%a6%e5%99%a8\" title=\"[View articles tagged with [nuclear weapons]]\" target=\"_blank\" >nuclear weapons<\/a>, posing a potential threat to the stability of global security.<\/p>\n<p class=\"article-content__img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-5419\" title=\"202306261422262392_7\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/202306261422262392_7.jpg\" alt=\"202306261422262392_7\" width=\"1000\" height=\"752\" \/><\/p>\n<p>Source: Image generated by AI, licensed via Midjourney<\/p>\n<p>The report, \u201cAn Action Plan to Improve the Safety and Security of Advanced AI,\u201d was published by Gladstone AI Inc. and obtained by TIME magazine. 
It outlines a 13-month intervention blueprint; in compiling it, the researchers spoke with more than 200 people from the U.S. and Canadian governments, major cloud service providers, AI safety organizations, and security and computing experts.<\/p>\n<p>The first steps in the action plan are to establish interim AI safety measures that can later be formalized into law and rolled out internationally. These measures include establishing a new AI agency, setting limits on AI computing power, requiring companies to obtain government licenses when they exceed certain thresholds, and considering a ban on publicly releasing the inner workings of powerful AI models, for example under open-source licenses.<\/p>\n<p>In addition, the report recommends that the government strengthen controls on the manufacture and export of AI chips to protect national security. These recommendations are intended to mitigate the national-security threats posed by growing AI capabilities, whether through weaponization or loss of control, and to curb the risks of the continued proliferation of those capabilities.<\/p>\n<p>The report concludes that governments urgently need to step in to ensure that the development of AI does not threaten global security, and to develop comprehensive international safety measures to that end.<\/p>","protected":false},"excerpt":{"rendered":"<p>A report commissioned by the U.S. government states that rapid advances in artificial intelligence could lead to threats of weaponization and loss of control, and that governments urgently need to act to avert potential human extinction, according to the New York Post. The report says the risk is similar to the introduction of nuclear weapons, with a potentially destabilizing effect on global security. 
The report, titled \"An Action Plan to Improve the Safety and Security of Advanced Artificial Intelligence,\" was released by Gladstone AI Inc. and obtained by TIME magazine. It outlines a 13-month intervention blueprint, compiled through interviews with more than 200 people from the U.S. and Canadian governments, major cloud service providers, AI safety organizations, and security and computing experts.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[204,1652],"collection":[],"class_list":["post-5418","post","type-post","status-publish","format-standard","hentry","category-news","tag-204","tag-1652"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/5418","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=5418"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/5418\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=5418"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=5418"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=5418"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=5418"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}