{"id":5551,"date":"2024-03-15T09:19:45","date_gmt":"2024-03-15T01:19:45","guid":{"rendered":"https:\/\/www.1ai.net\/?p=5551"},"modified":"2024-03-15T09:19:45","modified_gmt":"2024-03-15T01:19:45","slug":"200-%e5%90%8d%e4%b8%93%e5%ae%b6%e7%bc%96%e5%86%99%e6%8a%a5%e5%91%8a%ef%bc%9aai-%e5%8f%91%e5%b1%95%e5%8f%af%e8%83%bd%e5%af%b9%e4%ba%ba%e7%b1%bb%e6%9e%84%e6%88%90%e7%81%ad%e7%bb%9d%e7%ba%a7","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/5551.html","title":{"rendered":"200 experts write report: AI development may pose an &quot;extinction-level threat&quot; to humans"},"content":{"rendered":"<p data-vmark=\"be57\">The U.S. Department of State has commissioned a new report warning of <a href=\"https:\/\/www.1ai.net\/en\/tag\/ai\" title=\"[View articles tagged with [AI]]\" target=\"_blank\" >AI<\/a> is growing exponentially.<strong>Could pose an \"extinction-level threat\" to humanity.<\/strong><\/p>\n<p data-vmark=\"bdcf\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-5552\" title=\"5ffb89b5-2dd7-407c-81d1-6ce7709a82a5\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/5ffb89b5-2dd7-407c-81d1-6ce7709a82a5.jpg\" alt=\"5ffb89b5-2dd7-407c-81d1-6ce7709a82a5\" width=\"1024\" height=\"819\" \/><\/p>\n<p data-vmark=\"6d9d\">The report, known in its entirety as the Action Plan to Improve the Safety and Security of Advanced Artificial Intelligence, calls for the<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e7%be%8e%e5%9b%bd%e6%94%bf%e5%ba%9c\" title=\"[View articles tagged with [US Government]]\" target=\"_blank\" >US Government<\/a>Swift and decisive action must be taken to avoid the significant national security risks posed by AI. The report goes on to say that if these impediments are not implemented, AI or Artificial General Intelligence (AGI) could become \"an extinction-level threat to the human species\".<\/p>\n<p data-vmark=\"0ed8\">The report, which involved more than 200 executives from OpenAI, Meta, Google, Google DeepMind, and other major companies in the AI field, as well as government workers, recommended that the U.S. government limit and regulate the development of AI and require AI companies to train any new <a href=\"https:\/\/www.1ai.net\/en\/tag\/ai%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged with [AI models]]\" target=\"_blank\" >AI Models<\/a>All applications need to be submitted at the time.<\/p>\n<p data-vmark=\"eada\">The report also recommends that the U.S. government introduce legislation to outlaw the open-sourcing of important AI models, arguing that the information in those models could lead to \"potentially devastating consequences for global security.\"<\/p>","protected":false},"excerpt":{"rendered":"<p>The U.S. Department of State has commissioned a new report warning that AI is growing exponentially and could pose an \"extinction-level threat\" to humanity. The report, known as the Action Plan for Improving the Safety and Security of Advanced Artificial Intelligence, calls for the U.S. government to act quickly and decisively to avert the significant national security risks posed by AI. The report goes on to say that if these impediments are not implemented, AI or Artificial General Intelligence (AGI) could become an \"extinction-level threat to the human species\". The report, which involved more than 200 executives from OpenAI, Meta, Google, Google DeepMind, and other major companies in the field of artificial intelligence, as well as government staff, recommends that the U.S. 