{"id":44015,"date":"2025-09-26T11:15:50","date_gmt":"2025-09-26T03:15:50","guid":{"rendered":"https:\/\/www.1ai.net\/?p=44015"},"modified":"2025-09-26T11:15:50","modified_gmt":"2025-09-26T03:15:50","slug":"%e5%be%ae%e8%bd%af-ai-ceo-%e8%8b%8f%e8%8e%b1%e6%9b%bc%ef%bc%9a%e6%9c%aa%e6%9d%a5%e7%9a%84-ai-%e6%81%90%e5%b0%86%e9%9c%80%e8%a6%81%e5%86%9b%e4%ba%8b%e7%ba%a7%e5%b9%b2%e9%a2%84","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/44015.html","title":{"rendered":"Microsoft AI CEO Suleyman: Future AI May Require \"Military-Level Intervention\" to Control"},"content":{"rendered":"<p>On September 26th, according to a foreign media report tonight, <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%be%ae%e8%bd%af\" title=\"[View articles tagged with [Microsoft]]\" target=\"_blank\" >Microsoft<\/a> <a href=\"https:\/\/www.1ai.net\/en\/tag\/ai\" title=\"[View articles tagged with [AI]]\" target=\"_blank\" >AI<\/a> CEO Mustafa Suleyman warned that AI may become powerful enough to require military-level intervention to stop.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-44016\" title=\"814 bac61j00t36eda001nd000qm00nfp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/09\/814bac61j00t36eda001nd000qm00nfp.jpg\" alt=\"814 bac61j00t36eda001nd000qm00nfp\" width=\"958\" height=\"843\" \/><\/p>\n<p>Suleyman stressed the importance of being able to audit and regulate these complex capabilities to keep them from going out of control. He said, \u201cWe should not panic. Regulation is necessary, but it must be done <strong>at the right time and in the right way<\/strong>. If AI gains the ability to modify its own code, set its own goals, act independently, and accumulate resources, it will become an extremely powerful system that <strong>may require military-level intervention to control within 5 to 10 years<\/strong>.
&quot;<\/p>\n<p>SULAIMAN HAD PREVIOUSLY WARNED OF THE POTENTIAL DANGER TO HUMANS BY AI. HE SAID, \"WE HAVE TO..<strong>BUILD AI FOR HUMANS<\/strong>Not<strong>TURN AI INTO A MAN<\/strong>I DON'T KNOW. HE STRESSED THAT THE DESIGN OF AI FOR HUMANS IS MORE IMPORTANT THAN THE USE OF TECHNOLOGY AS AN ADULT DIGITAL PERSON. WHILE PROMOTING THE AI PARTNERSHIP PROGRAMME, HE REMAINED CONVINCED THAT WHILE ENSURING THE VALUE OF TECHNOLOGY, PROTECTION MECHANISMS SHOULD BE PUT IN PLACE TO GUARANTEE HUMAN SECURITY\u3002<\/p>\n<p>1AI learned from the reports that Google DeepMind CEO Demes Hashabis also believes that humans are \"near\" to AGI, but<strong>Society is not ready to respond<\/strong>THIS REALITY KEEPS YOU UP ALL NIGHT. THE DIRECTOR OF THE CYBERSECURITY LABORATORY AT LOUISVILLE UNIVERSITY, AND AN AI SECURITY EXPERT, ROMAN YEUNG POLSKI, CLAIMED, \"THERE'S AN A<strong>99.99999% PROBABILITY<\/strong>THE ONLY WAY TO AVOID THIS RISK IS TO STOP DEVELOPING AI FROM THE START\u3002<\/p>","protected":false},"excerpt":{"rendered":"<p>On September 26th, according to a foreign media report tonight, Microsoft AI CEO Mustafa Sulaiman warned that AI might be strong enough to require military-level intervention to stop. Sleiman stressed the importance of being able to audit and regulate these complex capabilities to prevent them from going out of control. He said: \u201cWe should not panic, regulation is necessary, but it must be done in the right way at the right time. If AI has the capability to adapt its own code, to set its own goals, to act independently and to accumulate resources, it will become an extremely powerful system that may require military-level intervention to control within 5 to 10 years.\u201d Sulaiman had previously warned of the potential danger to humans by AI. 
He said, \"We must build AI for humans, not turn AI into a human.\"<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[411,280],"collection":[],"class_list":["post-44015","post","type-post","status-publish","format-standard","hentry","category-news","tag-ai","tag-280"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/44015","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=44015"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/44015\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=44015"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=44015"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=44015"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=44015"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}