{"id":19016,"date":"2024-08-31T09:07:56","date_gmt":"2024-08-31T01:07:56","guid":{"rendered":"https:\/\/www.1ai.net\/?p=19016"},"modified":"2024-08-31T09:07:56","modified_gmt":"2024-08-31T01:07:56","slug":"%e5%89%8d%e5%91%98%e5%b7%a5%e7%88%86%e6%96%99%ef%bc%8copenai-agi-%e5%ae%89%e5%85%a8%e5%9b%a2%e9%98%9f%e5%b7%b2%e6%b5%81%e5%a4%b1%e8%bf%91%e5%8d%8a%e6%88%90%e5%91%98","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/19016.html","title":{"rendered":"Former employee reveals that OpenAI's AGI safety team has lost nearly half of its members"},"content":{"rendered":"<p><a href=\"https:\/\/www.1ai.net\/en\/tag\/openai\" title=\"[View articles tagged with [OpenAI]]\" target=\"_blank\" >OpenAI<\/a> has always been committed to developing AI technologies that benefit all of humanity, yet a recent report reveals a worrying trend: nearly half of the researchers who once focused on the long-term risks of superintelligent AI have left the company.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-19017\" title=\"21770aa4j00sj25oy0011d000q400hfm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/08\/21770aa4j00sj25oy0011d000q400hfm.jpg\" alt=\"21770aa4j00sj25oy0011d000q400hfm\" width=\"940\" height=\"627\" \/><\/p>\n<p>According to Fortune, Daniel Kokotajlo, a former governance researcher at OpenAI, said that in the past few months almost half of the members of OpenAI&#039;s <a href=\"https:\/\/www.1ai.net\/en\/tag\/agi\" title=\"[View articles tagged with [AGI]]\" target=\"_blank\" >AGI<\/a> safety team have left, raising concerns about whether the company is neglecting AI safety.<\/p>\n<p>AGI safety researchers are primarily responsible for ensuring that future AGI systems do not pose an existential threat to humanity. 
However, as OpenAI focuses increasingly on products and commercialization, these departures mean that the company&#039;s safety research team is steadily shrinking.<\/p>\n<p>Kokotajlo pointed out that <strong>since 2024, OpenAI&#039;s AGI safety team has shrunk from about 30 members to about 16<\/strong>. He believes this was not an organized exodus; rather, individuals gradually lost confidence and left.<\/p>\n<p>An OpenAI spokesperson said the company is proud to provide the most capable and safest artificial intelligence systems and believes it has a scientific approach to addressing risks.<\/p>\n<p>Earlier this year, OpenAI co-founder and chief scientist Ilya Sutskever announced his resignation, and the &quot;Superalignment&quot; team he led, which was responsible for safety issues, was also disbanded.<\/p>","protected":false},"excerpt":{"rendered":"<p>OpenAI has always been committed to developing AI technologies that can benefit all of humanity, yet a recent report reveals a worrying phenomenon: nearly half of the researchers who had focused on the long-term risks of superintelligent AI have left the company. Daniel Kokotajlo, a former governance researcher at OpenAI, said that almost half of OpenAI's AGI safety team members have left the company in the past few months, according to Fortune. This raises concerns about whether the company is neglecting AI safety. AGI safety researchers are primarily responsible for ensuring that future AGI systems do not pose an existential threat to humans. 
However, as OpenAI focuses more and more on product and commercial<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[151,190],"collection":[],"class_list":["post-19016","post","type-post","status-publish","format-standard","hentry","category-news","tag-agi","tag-openai"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/19016","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=19016"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/19016\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=19016"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=19016"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=19016"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=19016"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}