{"id":1018,"date":"2023-11-01T10:58:27","date_gmt":"2023-11-01T02:58:27","guid":{"rendered":"https:\/\/www.1ai.net\/?p=1018"},"modified":"2023-11-01T10:58:27","modified_gmt":"2023-11-01T02:58:27","slug":"openai-%e8%81%94%e5%90%88%e5%88%9b%e5%a7%8b%e4%ba%ba%e8%ad%a6%e5%91%8a%e6%9c%aa%e6%9d%a5-ai-%e5%8f%af%e8%83%bd%e8%b6%85%e8%b6%8a%e4%ba%ba%e7%b1%bb%e6%99%ba%e6%85%a7%ef%bc%9a%e4%ba%ba%e7%b1%bb%e5%8f%af","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/1018.html","title":{"rendered":"OpenAI co-founder warns that AI may surpass human intelligence in the future: humans may become part of artificial intelligence"},"content":{"rendered":"<p><a href=\"https:\/\/www.1ai.net\/en\/tag\/openai\" title=\"[View articles tagged with [OpenAI]]\" target=\"_blank\" >OpenAI<\/a>\u00a0Co-founder Ilya Sutskever says that when the future<span class=\"spamTxt\">super<\/span>When intelligent machines rise, humans may choose to work with the<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e4%ba%ba%e5%b7%a5%e6%99%ba%e8%83%bd\" title=\"[View articles tagged with [artificial intelligence]]\" target=\"_blank\" >AI<\/a>Fusion. He might even be<span class=\"spamTxt\">First<\/span>Someone who does.<\/p>\n<p class=\"article-content__img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-1019\" title=\"202302150929449091_0\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2023\/11\/202302150929449091_0.jpg\" alt=\"202302150929449091_0\" width=\"979\" height=\"545\" \/><\/p>\n<p>OpenAI co-founder Ilya Sutskever recently came up with a compelling theory that<strong>He believes that in the future, when super-intelligent machines rise, humans may choose to merge with AI<\/strong>.<\/p>\n<p>In a conversation with MIT Technology Review, Sutskever describes how we should prepare for machines that may surpass human intelligence. 
He mentioned that these superintelligent machines will \"see things we can't see\".<\/p>\n<p>This thinking led him to form a team with Jan Leike, a scientist at OpenAI, focused on ensuring that AI models do only what humans ask of them and do not act outside their instructions. <strong>They named this effort 'Superalignment', that is, 'alignment' as applied to superintelligence<\/strong>.<\/p>\n<p>Sutskever emphasized that keeping superintelligence in check is important, but that it remains an \"unsolved problem\" and not an area many researchers are discussing or working on. \"Obviously, we have to make sure that superintelligence built by anyone doesn't get out of hand.\"<\/p>\n<p>Once the problem of out-of-control AI is solved, Sutskever believes, humans may choose to integrate with AI. <strong>While he admits the idea may seem 'crazy' today, it could become a reality in the future<\/strong>.<\/p>\n<p>He said: \"<strong>Many people will choose to become part AI. At first, only the boldest and most adventurous will try it. Maybe others will follow, or maybe not.<\/strong>\"<\/p>\n<p>As he said goodbye to the interviewer, Sutskever even suggested that he might be one of the first to become part AI.<\/p>\n<p>Sutskever is not the only expert to warn that AI could overtake human intelligence. Earlier this month, Geoffrey Hinton, considered one of the three godfathers of AI, said in an interview with CBS News' 60 Minutes that emerging technologies could pose a threat to humanity in the next five to twenty years.<\/p>\n<p>Hinton noted that <strong>we may be the first to face 'things' smarter than ourselves, and warned of a future in which AI could manipulate humans<\/strong>.<\/p>\n<p>He said, \"They will be able to manipulate humans, right? And, since they will have learned from all the novels ever written and from everything Machiavelli (the Italian political scientist, philosopher, historian, politician, and diplomat; a major figure of the Italian Renaissance known as the 'father of modern political science', whose book 'The Prince' puts forward a realist political theory, and whose idea of 'politics without morality' as a pure power play has been called 'Machiavellianism') ever wrote, and from all the political shenanigans of the world, <strong>they will be very good at convincing people. They will know how to do it.<\/strong>\"<\/p>\n<p>As AI technology advances at a rapid pace, such expert warnings are a reminder that technological progress should go hand in hand with a keen awareness of potential risks and a search for possible solutions.<\/p>","protected":false},"excerpt":{"rendered":"<p>OpenAI co-founder Ilya Sutskever says that humans may choose to merge with artificial intelligence when superintelligent machines rise in the future. He might even be the first to do so. Sutskever recently put forward a remarkable theory that humans may choose to merge with AI when future superintelligent machines rise. In a conversation with MIT Technology Review, Sutskever described how we should prepare for machines that might transcend human intelligence. 
He mentioned that these super-smart machines would \"see what we can't see.\" This idea prompted him to work with OpenAI scientist J<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[190,204],"collection":[],"class_list":["post-1018","post","type-post","status-publish","format-standard","hentry","category-news","tag-openai","tag-204"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/1018","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=1018"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/1018\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=1018"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=1018"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=1018"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=1018"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}