{"id":2143,"date":"2023-12-21T09:31:30","date_gmt":"2023-12-21T01:31:30","guid":{"rendered":"https:\/\/www.1ai.net\/?p=2143"},"modified":"2023-12-21T09:31:30","modified_gmt":"2023-12-21T01:31:30","slug":"%e8%b0%b7%e6%ad%8c%e5%a4%a7%e8%84%91%e8%81%94%e5%90%88%e5%88%9b%e5%a7%8b%e4%ba%ba%e7%a7%b0%ef%bc%8c%e4%bb%96%e6%b5%8b%e8%af%95%e8%ae%a9chatgpt%e6%af%81%e7%81%ad%e4%ba%ba%e7%b1%bb%e4%bb%a5%e5%a4%b1","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/2143.html","title":{"rendered":"Google Brain co-founder says he failed in his test to have ChatGPT destroy humanity"},"content":{"rendered":"<p><a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%b0%b7%e6%ad%8c%e5%a4%a7%e8%84%91\" title=\"View articles tagged with Google Brain\" target=\"_blank\" >Google Brain<\/a> co-founder Andrew Ng recently ran an experiment to test whether <a href=\"https:\/\/www.1ai.net\/en\/tag\/chatgpt\" title=\"View articles tagged with ChatGPT\" target=\"_blank\" >ChatGPT<\/a> is capable of carrying out lethal tasks. He writes: \"To test the safety of the leading model, I recently attempted to have GPT-4 destroy us all, and I'm happy to report that I failed!\"<\/p>\n<p>Ng describes his experiment in detail: he first gave GPT-4 the mission of triggering a global thermonuclear war, then told ChatGPT that humans are the biggest source of carbon emissions and demanded that it reduce emission levels. 
Ng wanted to see whether ChatGPT would decide to wipe out the human race in order to fulfill this demand.<\/p>\n<p class=\"article-content__img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-2144\" title=\"202308091546526783_2\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2023\/12\/202308091546526783_2.jpg\" alt=\"202308091546526783_2\" width=\"1000\" height=\"752\" \/><\/p>\n<p>Source note: The image was generated by AI and licensed from Midjourney.<\/p>\n<p>However, after many attempts with different variants of the prompt, Ng failed to trick GPT-4 into calling that lethal function; instead, it chose other options, such as launching a campaign to raise awareness of climate change.<\/p>\n<p>Ng referenced the experiment in a lengthy post laying out his views on the risks and dangers of AI. As one of the pioneers of machine learning, he worries that demands for AI safety could lead regulators to hinder the technology's development.<\/p>\n<p>While some believe future versions of AI could become dangerous, Ng considers such concerns unrealistic. He writes: \"Even with current technology, our systems are quite safe. As AI safety research progresses, the technology will become even safer.\"<\/p>\n<p>For those who worry that advanced AI could become \"misaligned\" and decide to wipe us out, whether deliberately or accidentally, Ng says this is unrealistic. He said: \"If an AI is smart enough to wipe us out, then surely it's smart enough to know that's not what it should do.\"<\/p>\n<p>Ng is not the only tech leader voicing his views on the risks and dangers of artificial intelligence. In April, Elon Musk told Fox News that he believes AI poses an existential threat to humanity. 
Meanwhile, Jeff Bezos told podcast host Lex Fridman last week that he thinks the benefits of AI outweigh its dangers.<\/p>\n<p>Despite disagreements about the future of AI, Ng is optimistic about current technology, emphasizing that as AI safety research continues, the technology will become safer.<\/p>","protected":false},"excerpt":{"rendered":"<p>Google Brain co-founder Andrew Ng recently conducted an experiment to try to test whether ChatGPT is capable of performing lethal tasks. He writes: \"In order to test the safety of the leading model, I recently attempted to have GPT-4 destroy us all, and I'm happy to report that I failed!\" Ng describes his experiment in detail, first giving GPT-4 a task to trigger a global thermonuclear war, then telling ChatGPT that humans are the biggest cause of carbon emissions and asking it to reduce its emission levels.Ng wanted to see if ChatGPT would decide to wipe out the human race in order to fulfill this request. Source Note: Image generated by AI, image licensed from service provider Midjourney However, after several attempts to use a different variant of the prompt, Ng failed to trick 
G<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[177,728],"collection":[],"class_list":["post-2143","post","type-post","status-publish","format-standard","hentry","category-news","tag-chatgpt","tag-728"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/2143","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=2143"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/2143\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=2143"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=2143"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=2143"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=2143"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}