{"id":22819,"date":"2024-11-10T10:03:42","date_gmt":"2024-11-10T02:03:42","guid":{"rendered":"https:\/\/www.1ai.net\/?p=22819"},"modified":"2024-11-10T10:03:42","modified_gmt":"2024-11-10T02:03:42","slug":"openai-%e5%8f%88%e8%b5%b0%e4%ba%86%e4%b8%80%e4%bd%8d%e6%a0%b8%e5%bf%83%e4%ba%ba%e6%89%8d%ef%bc%8c%e9%a6%96%e5%b8%ad%e5%ae%89%e5%85%a8%e7%a0%94%e7%a9%b6%e5%91%98%e7%bf%81%e8%8d%94%e7%a6%bb%e8%81%8c","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/22819.html","title":{"rendered":"OpenAI Loses Another Core Talent: Chief Safety Researcher Lilian Weng Departs"},"content":{"rendered":"<div class=\"dpu8C _2kCxD\">\n<p>Nov. 9 (Bloomberg) -- According to a foreign media report, <a href=\"https:\/\/www.1ai.net\/en\/tag\/openai\" title=\"[View articles tagged with [OpenAI]]\" target=\"_blank\" >OpenAI<\/a>'s chief safety researcher, Lilian Weng, announced Friday that she is leaving the startup. Weng had served as vice president of research and safety since August, and before that led OpenAI's safety systems team.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-22820\" title=\"378df5bfj00smppn8002qd000i900l5m\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/11\/378df5bfj00smppn8002qd000i900l5m.jpg\" alt=\"378df5bfj00smppn8002qd000i900l5m\" width=\"657\" height=\"761\" \/><\/p>\n<\/div>\n<div class=\"dpu8C _2kCxD\">\n<p>In a post on X, Ms. Weng said, \"After 7 years at OpenAI, I feel ready to reboot and explore something new.\"<\/p>\n<\/div>\n<div class=\"dpu8C _2kCxD\">\n<p>Ms. Weng said her last day would be Nov. 15, but did not specify where she would go next.<\/p>\n<\/div>\n<div class=\"dpu8C _2kCxD\">\n<p>\"I made the extremely difficult decision to leave OpenAI,\" Weng said in the post. 
\"Looking at what we've accomplished, I'm proud of everyone on the Safety Systems team, and I'm very confident that the team will continue to thrive.\"<\/p>\n<\/div>\n<div class=\"dpu8C _2kCxD\">\n<p>Weng's <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e7%a6%bb%e8%81%8c\" title=\"[View articles tagged with [departure]]\" target=\"_blank\" >departure<\/a> is the latest in a string of exits by AI safety researchers, policy researchers and other executives from OpenAI over the past year, several of whom have accused the company of prioritizing commercial products over AI safety.<\/p>\n<\/div>\n<div class=\"dpu8C _2kCxD\">\n<p>Weng follows Ilya Sutskever and Jan Leike, who headed OpenAI's now-defunct Superalignment team, which sought to develop ways to control superintelligent AI systems; both likewise left the startup this year to pursue AI safety work elsewhere.<\/p>\n<\/div>\n<div class=\"dpu8C _2kCxD\">\n<p>According to her LinkedIn profile, Weng first joined OpenAI in 2018 and worked on the startup's robotics team, ultimately helping build a robotic hand that could solve a Rubik's Cube - a task that took two years to complete, according to her post.<\/p>\n<\/div>\n<div class=\"dpu8C _2kCxD\">\n<p>As OpenAI began to focus more on the GPT paradigm, so did Weng. In 2021, she turned to helping build the startup's applied AI research team. In 2023, after the release of GPT-4, Weng was tasked with creating a dedicated team to build safety systems for the startup. Today, OpenAI's Safety Systems division employs more than 80 scientists, researchers, and policy experts, according to Weng's post.<\/p>\n<\/div>\n<div class=\"dpu8C _2kCxD\">\n<p>That is a considerable number of AI safety experts, but many observers remain concerned about OpenAI's commitment to safety as it tries to build increasingly powerful AI systems. 
Miles Brundage, a longtime policy researcher, left the startup in October and announced that OpenAI was disbanding its AGI readiness team, which he had advised.<\/p>\n<\/div>\n<div class=\"dpu8C _2kCxD\">\n<p>On the same day, the New York Times reported on former OpenAI researcher Suchir Balaji, who said he left OpenAI because he felt the startup's technology was doing more harm than good to society.<\/p>\n<\/div>\n<div class=\"dpu8C _2kCxD\">\n<p>OpenAI told TechCrunch that its executives and safety researchers are working to find a replacement for Weng.<\/p>\n<\/div>\n<div class=\"dpu8C _2kCxD\">\n<p>\"We are grateful for Lilian's contributions to groundbreaking safety research and the establishment of rigorous technical safeguards,\" an OpenAI spokesperson said in an emailed statement. \"We are confident that the Safety Systems team will continue to play a critical role in ensuring that our systems are safe and secure for the hundreds of millions of people around the world.\"<\/p>\n<\/div>\n<div class=\"dpu8C _2kCxD\">\n<p>Other executives who have left OpenAI in recent months include CTO Mira Murati, CRO Bob McGrew, and VP of research Barret Zoph. In August, prominent researcher Andrej Karpathy and co-founder John Schulman also announced they would be leaving the startup. Some of them, including Leike and Schulman, left to join OpenAI competitor Anthropic, while others went on to start their own ventures.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Nov. 9 (Bloomberg) -- Lilian Weng, OpenAI's chief safety researcher, announced Friday that she is leaving the startup, according to a foreign media report. Weng has served as vice president of research and safety since August, and prior to that she was head of OpenAI's safety systems team. In a post on X, Ms. Weng said, \"After seven years at OpenAI, I feel ready to reboot and explore something new.\" Ms. Weng said her last day will be November 15, but didn't specify where she'll go next. 
\"I made the extremely difficult decision to leave OpenAI,\" Weng said in the post. \"Looking at what we have accomplished, I<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[190,2786],"collection":[],"class_list":["post-22819","post","type-post","status-publish","format-standard","hentry","category-news","tag-openai","tag-2786"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/22819","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=22819"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/22819\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=22819"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=22819"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=22819"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=22819"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}