{"id":25545,"date":"2024-12-24T02:14:28","date_gmt":"2024-12-23T18:14:28","guid":{"rendered":"https:\/\/www.1ai.net\/?p=25545"},"modified":"2024-12-23T20:16:57","modified_gmt":"2024-12-23T12:16:57","slug":"%e8%b0%b7%e6%ad%8c%e5%89%8d-ceo-%e6%96%bd%e5%af%86%e7%89%b9%ef%bc%9a%e8%8b%a5-ai-%e5%bc%80%e5%a7%8b%e8%87%aa%e6%88%91%e6%94%b9%e8%bf%9b%ef%bc%8c%e6%88%91%e4%bb%ac%e5%ba%94%e8%ae%a4%e7%9c%9f","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/25545.html","title":{"rendered":"Former Google CEO Schmidt: If AI Starts Improving Itself, We Should 'Seriously Consider' Suspending It"},"content":{"rendered":"<p>Enthusiasm for AGI (1<a href=\"https:\/\/www.1ai.net\/en\/tag\/ai\" title=\"[View articles tagged with [AI]]\" target=\"_blank\" >AI<\/a> note: artificial general intelligence) continues unabated. OpenAI CEO Altman believes that AGI will be realized sooner than expected with existing hardware, while former <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%b0%b7%e6%ad%8c\" title=\"[View articles tagged with [Google]]\" target=\"_blank\" >Google<\/a> CEO Eric <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e6%96%bd%e5%af%86%e7%89%b9\" title=\"[View articles tagged with [Schmidt]]\" target=\"_blank\" >Schmidt<\/a> says that once AI begins to improve itself, we should consider suspending its further development.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-25546\" title=\"067906d5j00soy4r600kcd000v900kup\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/067906d5j00soy4r600kcd000v900kup.jpg\" alt=\"067906d5j00soy4r600kcd000v900kup\" width=\"1125\" height=\"750\" \/><\/p>\n<p>In an interview with the U.S. television network ABC News on the 17th of this month, Schmidt said: \"Eventually you may be able to tell computers, 'Learn everything, do everything.' When the system is able to improve itself, we have to think seriously about <strong>whether to unplug it<\/strong>. 
It's going to be extremely tough, and keeping the balance will be very challenging.\"<\/p>\n<p>Schmidt predicts that AI will gradually evolve from task-specific assistants into <strong>complex systems capable of independent decision-making<\/strong>. When AI reaches that point, Schmidt noted, humans should step in and consider shutting the system down. Humans will also need to <strong>ensure that the AI cannot counteract attempts to shut it down<\/strong>. \"Theoretically, we'd better have someone with a hand on the plug, so to speak.\"<\/p>\n<p>Musk said in October that AI is \"most likely to be great,\" but that <strong>there is a 10% to 20% chance it could go bad<\/strong>. He emphasized that the likelihood of AI \"going bad\" is not zero.<\/p>\n<p>Nonetheless, Schmidt is also optimistic about AI's positive potential. He said: \"Importantly, this powerful intelligence means that everyone will have a polymath-like assistant in their pocket at all times. Beyond note-taking and writing assistants, everyone will also have <strong>sages like Einstein and Leonardo da Vinci advising them<\/strong>. This will apply to everyone around the globe.\"<\/p>","protected":false},"excerpt":{"rendered":"<p>Enthusiasm for AGI (1AI note: artificial general intelligence) continues unabated. OpenAI CEO Altman believes that AGI will be realized earlier than expected with the support of existing hardware, while former Google CEO Eric Schmidt said that once AI begins to improve itself, we should consider suspending its further development. In an interview with U.S. television network ABC News on the 17th of this month, Schmidt said: \"Eventually you may be able to tell the computer, 'Learn everything, do everything.' When the system is able to improve itself, we'll have to seriously consider whether we want to unplug it. 
It's going to be extremely tough, and keeping the balance will be very challenging.\" Schmidt predicts that AI will gradually evolve from task-specific assistants to complex systems that can make independent decisions<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[411,5286,281],"collection":[],"class_list":["post-25545","post","type-post","status-publish","format-standard","hentry","category-news","tag-ai","tag-5286","tag-281"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/25545","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=25545"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/25545\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=25545"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=25545"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=25545"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=25545"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}