{"id":10624,"date":"2024-05-19T09:31:17","date_gmt":"2024-05-19T01:31:17","guid":{"rendered":"https:\/\/www.1ai.net\/?p=10624"},"modified":"2024-05-19T09:31:17","modified_gmt":"2024-05-19T01:31:17","slug":"%e7%aa%81%e5%8f%91%ef%bc%81openai%e5%86%8d%e5%a4%b1%e4%b8%80%e5%90%8d%e9%ab%98%e7%ae%a1%ef%bc%8c%e5%ae%89%e5%85%a8%e4%b8%bb%e7%ae%a1%e8%be%9e%e8%81%8c","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/10624.html","title":{"rendered":"Breaking news! OpenAI loses another senior executive, security chief resigns"},"content":{"rendered":"<p>In the early morning of May 18,<a href=\"https:\/\/www.1ai.net\/en\/tag\/openai\" title=\"[View articles tagged with [OpenAI]]\" target=\"_blank\" >OpenAI<\/a>Safety Supervisor,<span class=\"spamTxt\">super<\/span>Jan Leike, head of alignment, announced on social media that he was leaving OpenAI.<\/p>\n<p>This is the second OpenAI co-founder and chief scientist to resign after Ilya Sutskever, the co-founder and chief scientist of OpenAI, resigned on Wednesday.<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e9%ab%98%e7%ae%a1\" title=\"[Sees articles with labels]\" target=\"_blank\" >Executives<\/a>Resign.<\/p>\n<p>I believe that with the departure of these two people, many more people will leave OpenAI in the future. 
This also marks the failure of the &quot;Remove Sam Altman&quot; campaign initiated by Ilya last year.<\/p>\n<p class=\"article-content__img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-10625\" title=\"2024051808504921390\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/05\/2024051808504921390.jpg\" alt=\"2024051808504921390\" width=\"554\" height=\"216\" \/><\/p>\n<p>OpenAI CEO Sam Altman responded immediately, thanking Jan and affirming his contributions and leadership at OpenAI.<\/p>\n<p>He also said that a longer article would be published in the next few days to explain OpenAI&#039;s plans and actions on product safety.<\/p>\n<p><strong>In his resignation statement, Jan said<\/strong> that he joined OpenAI because it was the most suitable company in the world for large-model safety research.<\/p>\n<p>However, <strong>OpenAI\u2019s senior leadership seemed to have lost interest in safety alignment until the situation became irreversible<\/strong>, so it was time to leave.<\/p>\n<p>He also admitted that building AI smarter than humans is very dangerous, and that he does not want to see science-fiction movies like &quot;Terminator&quot; and &quot;I, Robot&quot; become reality, with robots going out of control and turning on humanity.<\/p>\n<p>OpenAI is shouldering a huge responsibility on behalf of all humanity as it moves toward AGI (artificial general intelligence).<\/p>\n<p>More computing power and infrastructure should be devoted to product safety, robustness, superalignment, data confidentiality, monitoring, and related areas.<\/p>\n<p>Therefore, OpenAI should be an AGI company that puts safety first, rather than blindly releasing dangerous products without safety guarantees.<\/p>\n<p>Regarding the resignations of Ilya and Jan, some asked: now that these two safety leaders have left, who will be responsible for OpenAI&#039;s product 
security in the future?<\/p>\n<p>Another netizen revealed that departing OpenAI employees must sign a &quot;resignation agreement&quot;: anyone who does not sign a lifelong non-disparagement commitment loses all the OpenAI equity they have obtained.<\/p>\n<p>It is unclear whether Jan\u2019s complaints will affect his equity.<\/p>\n<p>Although OpenAI released the multimodal large model GPT-4o this week, once again shocking the technology circle, it has also undergone major personnel changes, which may bring some uncertainty to future product releases such as GPT-5.<\/p>\n<p>On November 18, 2023, Ilya, believing that Sam&#039;s product roadmap was far too aggressive and completely disregarded product safety, launched the &quot;Remove Sam Altman&quot; campaign that shocked the global technology community.<\/p>\n<p>After more than ten days of negotiations, Sam returned to OpenAI with strong backing and resumed control.<\/p>\n<p>Now that Ilya and others have resigned, will an unshackled Sam continue to soar on the road to AGI? Let us wait and see.<\/p>\n<p><strong>About Jan Leike<\/strong><\/p>\n<p>Prior to joining OpenAI, Jan worked at Google DeepMind, where he was responsible for prototyping reinforcement learning from human feedback. 
During his time at OpenAI, he participated in the development of InstructGPT and ChatGPT and in the safety alignment of GPT-4.<\/p>\n<p>Jan has published many well-known large-model safety papers, including &quot;Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision&quot;, &quot;Self-critiquing models for assisting human evaluators&quot;, and &quot;Deep Reinforcement Learning from Human Preferences&quot;.<\/p>\n<p>In 2023, Jan was named one of the 100 most influential people in AI by Time magazine.<\/p>","protected":false},"excerpt":{"rendered":"<p>In the early morning of May 18th, Jan Leike, OpenAI's head of safety and head of Superalignment, announced on social media that he was leaving OpenAI. This is another executive departure following the resignation of OpenAI's co-founder and Chief Scientist, Ilya Sutskever, on Wednesday, and many more are expected to follow. It also marks the end of the failed \"Oust Sam Altman\" campaign started by Ilya last year. Sam Altman, CEO of OpenAI, responded immediately, thanking and recognizing Jan's contributions and leadership at OpenAI. 
He said that he will publish a longer article in the next few days to explain OpenAI's commitment to<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[190,1032],"collection":[],"class_list":["post-10624","post","type-post","status-publish","format-standard","hentry","category-news","tag-openai","tag-1032"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/10624","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=10624"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/10624\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=10624"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=10624"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=10624"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=10624"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}