{"id":10633,"date":"2024-05-19T10:16:25","date_gmt":"2024-05-19T02:16:25","guid":{"rendered":"https:\/\/www.1ai.net\/?p=10633"},"modified":"2024-05-19T10:16:38","modified_gmt":"2024-05-19T02:16:38","slug":"%e6%9c%aa%e6%9d%a5%e5%b0%86%e6%af%94%e8%bf%87%e5%8e%bb%e6%9b%b4%e5%8a%a0%e8%89%b0%e9%9a%be%ef%bc%9aopenai%e7%9a%84altman%e5%92%8cbrock%e5%9b%9e%e5%ba%94%e9%ab%98%e5%b1%82%e8%be%9e%e8%81%8c","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/10633.html","title":{"rendered":"\"The future will be tougher than the past\": OpenAI's Altman and Brock respond to top resignations"},"content":{"rendered":"<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-10634\" title=\"hero-image.fill_.size_1248x702.v1716064606\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/05\/hero-image.fill_.size_1248x702.v1716064606.jpg\" alt=\"hero-image.fill_.size_1248x702.v1716064606\" width=\"1248\" height=\"702\" \/><\/p>\n<p>This week,<a href=\"https:\/\/www.1ai.net\/en\/tag\/openai\" title=\"[View articles tagged with [OpenAI]]\" target=\"_blank\" >OpenAI<\/a>Jan Leike, co-leader of the &quot;super alignment&quot; team that oversees safety issues at the company<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%be%9e%e8%81%8c\" title=\"[Sees articles with [resignation] labels]\" target=\"_blank\" >Resign<\/a>In a post on the X platform, formerly known as Twitter, the safety leader explained his reasons for leaving OpenAI, including that he had been at odds with the company&#039;s leadership on &quot;core priorities&quot; for so long that they had reached a &quot;tipping point.&quot;<\/p>\n<p>The next day, OpenAI CEO Sam <a href=\"https:\/\/www.1ai.net\/en\/tag\/altman\" title=\"_Other Organiser\" target=\"_blank\" >Altman<\/a> and president and co-founder Greg Brockman responded to Leike&#039;s assertion that the company isn&#039;t focusing on safety.<\/p>\n<p>Among other things, Leike said that OpenAI\u2019s \u201csafety culture and processes have taken a backseat to other products\u201d in recent years and that his team had difficulty getting the resources to do its safety work.<\/p>\n<p>\u201cIt\u2019s long past time for us to take seriously the implications of AGI (artificial general intelligence),\u201d Lake wrote. \u201cWe must prioritize doing everything we can to prepare for them.\u201d<\/p>\n<p>Altman first responded to Lake\u2019s retweet on Friday, saying Lake was right and that OpenAI \u201chas a lot more to do\u201d and is \u201ccommitted to doing so.\u201d He promised a longer post would follow.<\/p>\n<p>On Saturday, Brockman posted a joint response from himself and Altman on X:<\/p>\n<p>After thanking Lake for his work, Brockman and Altman said they had received some questions after Lake&#039;s resignation. 
They shared three points, the first of which was that OpenAI has raised awareness of AGI "so that the world can be better prepared for it."

"We have repeatedly demonstrated the incredible possibilities of scaling up deep learning and analyzed their implications; called for international governance of AGI before such calls became popular; and helped pioneer the science of assessing the catastrophic risks of AI systems," they wrote.

The second point is that the company has been laying the foundation for the safe deployment of these technologies, citing the work employees did to "bring [Chat]GPT-4 to the world in a safe way." The two claim that since then (OpenAI released GPT-4 in March 2023), the company has "continuously improved model behavior and abuse monitoring based on lessons learned from deployment."

The third point? "The future will be harder than the past," they wrote. Brockman and Altman explained that OpenAI needs to keep raising the bar on its safety work as it releases new models, and cited the company's Preparedness Framework as one way to help achieve that. According to a page on OpenAI's website, the framework predicts "catastrophic risks" that could arise and seeks to mitigate them.

Brockman and Altman then went on to discuss a future in which OpenAI's models are more engaged with the world and interact with more people. They see this as a beneficial thing and believe it can be done safely, "but it requires a lot of groundwork." As such, the company may delay its release timelines so that models "reach [its] safety standards."

"We know we can't imagine every possible future scenario," they said. "So we need to have a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities."

The two leaders said OpenAI will continue to research and collaborate with governments and stakeholders on safety issues.

"There is no proven playbook for how to embark on the path to general artificial intelligence. We think empirical understanding can help point the way forward," they concluded.
"We believe there are both significant benefits to be gained and serious risks to be mitigated; we take our role here very seriously, and we carefully weigh feedback on our actions."

OpenAI chief scientist Ilya Sutskever also resigned this week, a fact that makes Leike's departure and remarks all the more fraught. "#WhatDidIlyaSee" became a trending topic on X, a marker of speculation about what the company's top leaders might know. Judging by the negative reactions to Brockman and Altman's statement today, it has not dispelled any of that speculation.

As of now, the company is moving forward with its next release: GPT-4o, the model powering ChatGPT's new voice assistant.