{"id":11396,"date":"2024-05-27T10:56:45","date_gmt":"2024-05-27T02:56:45","guid":{"rendered":"https:\/\/www.1ai.net\/?p=11396"},"modified":"2024-05-27T10:56:45","modified_gmt":"2024-05-27T02:56:45","slug":"openai%e5%86%8d%e5%ba%a6%e6%8b%89%e5%93%8d%e5%ae%89%e5%85%a8%e8%ad%a6%e6%8a%a5%ef%bc%9a%e5%8f%88%e4%b8%80%e9%ab%98%e5%b1%82%e7%a6%bb%e8%81%8c%e6%8f%ad%e7%a4%ba%e5%b7%a8%e5%a4%a7%e9%a3%8e%e9%99%a9","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/11396.html","title":{"rendered":"OpenAI sounds security alarms again: another top executive departure reveals huge risks"},"content":{"rendered":"<p><a href=\"https:\/\/www.1ai.net\/en\/tag\/openai\" title=\"[View articles tagged with [OpenAI]]\" target=\"_blank\" >OpenAI<\/a> has recently faced a succession of <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e7%a6%bb%e8%81%8c\" title=\"[View articles tagged with [departure]]\" target=\"_blank\" >departures<\/a> from its security team, including AI strategy researcher Gretchen Krueger. She joined OpenAI in 2019, worked on GPT-4 and DALL-E 2, and led OpenAI's first company-wide \"red team\" test in 2020. Now she is one of the departing employees who have issued warnings.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-11397\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/05\/6385240291914698899182741.png\" alt=\"\" width=\"798\" height=\"262\" \/><\/p>\n<p>Krueger took to social media to express her concern that OpenAI needs to improve its decision-making processes, accountability, transparency, documentation, and strategy execution, and should take steps to mitigate the impact of its technology on social inequality, rights, and the environment. 
She mentioned that tech companies can sometimes disempower those seeking to hold them accountable by sowing division, and she is deeply concerned with preventing this.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-11398\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/05\/6385240295479133816999189.png\" alt=\"\" width=\"799\" height=\"596\" \/><\/p>\n<p>Krueger's departure is part of an internal shakeup of OpenAI's security team. Before her, OpenAI Chief Scientist Ilya Sutskever and Superalignment lead Jan Leike had also announced their departures. These exits have raised questions about OpenAI's decision-making process on security issues.<\/p>\n<p>OpenAI's security work is organized into three main teams.<\/p>\n<p>Superalignment Team: focuses on controlling a superintelligence that does not yet exist.<\/p>\n<p>Safety Systems Team: focuses on reducing misuse of existing models and products.<\/p>\n<p>Preparedness Team: maps emerging risks in frontier models.<\/p>\n<p>Although OpenAI has multiple security teams, final decision-making power over risk assessment remains with the leadership. The board of directors has the power to override decisions and currently includes experts and executives from a variety of fields.<\/p>\n<p>Krueger's departure and public comments, along with other changes to OpenAI's security team, have raised widespread concern about AI safety and corporate governance. These issues matter to people and communities now, and they shape how, and by whom, the future is planned.<\/p>","protected":false},"excerpt":{"rendered":"<p>OpenAI has recently faced a succession of departures from its security team, including AI strategy researcher Gretchen Krueger. She joined OpenAI in 2019, worked on GPT-4 and DALL-E 2, and led OpenAI's first company-wide \"red team\" test in 2020. 
Now, she's among the departing employees who have issued warnings. Krueger took to social media to express her concern that OpenAI needs to improve its decision-making processes, accountability, transparency, documentation, and strategy execution, and take steps to mitigate the impact of its technology on social inequality, rights, and the environment. She mentioned that tech companies sometimes disempower those seeking to hold them accountable by sowing division, and she<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[190,2786],"collection":[],"class_list":["post-11396","post","type-post","status-publish","format-standard","hentry","category-news","tag-openai","tag-2786"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/11396","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=11396"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/11396\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=11396"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=11396"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=11396"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=11396"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}