{"id":12427,"date":"2024-06-06T09:44:58","date_gmt":"2024-06-06T01:44:58","guid":{"rendered":"https:\/\/www.1ai.net\/?p=12427"},"modified":"2024-06-06T09:45:57","modified_gmt":"2024-06-06T01:45:57","slug":"openai-%e5%9b%9e%e5%ba%94%e5%91%98%e5%b7%a5%e6%8b%85%e5%bf%a7%ef%bc%9a%e6%94%af%e6%8c%81%e7%9b%91%e7%ae%a1%ef%bc%8c%e5%bf%85%e8%a6%81%e4%bf%9d%e9%9a%9c%e6%8e%aa%e6%96%bd%e5%88%b0%e4%bd%8d%e5%89%8d","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/12427.html","title":{"rendered":"OpenAI responds to employee concerns: supports regulation, won&#039;t release new AI technology until necessary safeguards are in place"},"content":{"rendered":"<p><a href=\"https:\/\/www.1ai.net\/en\/12257.html\/\">OpenAI and Google DeepMind employees jointly expressed concerns about the serious risks of advanced AI and the urgent need for stronger oversight<\/a>. In response, <a href=\"https:\/\/www.1ai.net\/en\/tag\/openai\" title=\"View articles tagged with OpenAI\" target=\"_blank\">OpenAI<\/a> released a statement today emphasizing its commitment to delivering powerful and safe AI systems.<\/p>\n<p>The translation of OpenAI\u2019s official statement is as follows:<\/p>\n<p>We pride ourselves on delivering the most capable and safest AI systems, and we believe we can address risks with a science-based approach.<\/p>\n<p>Given the importance of AI technology, we agree with the open letter that serious discussion is crucial to better advancing the development of AI technology.<\/p>\n<p>We will continue to engage with governments, civil society, and other communities around the world to create a harmonious AI environment.<\/p>\n<p>Effective means of overseeing AI include an anonymous integrity hotline and a Safety and Security Committee that includes board members and company safety leaders.<\/p>\n<p>OpenAI said it will not release new AI technology until necessary safeguards are in place, and reiterated its support for government regulation and 
participation in voluntary commitments on AI safety.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-12428\" title=\"d31f3538-b3c3-4f18-8811-317a475d709c-1\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/d31f3538-b3c3-4f18-8811-317a475d709c-1.jpg\" alt=\"d31f3538-b3c3-4f18-8811-317a475d709c-1\" width=\"640\" height=\"342\" \/><br \/>\nIn response to concerns about retaliation, a spokesperson confirmed that the company has released all former employees from non-disparagement agreements and removed such clauses from standard separation documents.<\/p>","protected":false},"excerpt":{"rendered":"<p>In response to a joint statement from OpenAI and Google DeepMind employees expressing concern about the risks of advanced AI and the need for stronger regulation, OpenAI today released a statement emphasizing its commitment to providing powerful and safe AI systems. Translated, OpenAI's official statement reads as follows: \"We pride ourselves on providing the most capable and safe AI systems, and are confident in our ability to address risks in a scientific manner. Given the importance of AI technology, we agree with the open letter that serious discussion is critical to better advancing AI technology. We will continue to reach out to governments, civil society and other communities around the world to work together to create a harmonious AI environment. 
Including anonymity and integrity<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[593,190],"collection":[],"class_list":["post-12427","post","type-post","status-publish","format-standard","hentry","category-news","tag-deepmind","tag-openai"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/12427","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=12427"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/12427\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=12427"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=12427"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=12427"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=12427"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}