{"id":16460,"date":"2024-07-26T09:23:04","date_gmt":"2024-07-26T01:23:04","guid":{"rendered":"https:\/\/www.1ai.net\/?p=16460"},"modified":"2024-07-26T09:24:05","modified_gmt":"2024-07-26T01:24:05","slug":"%e7%be%8e%e5%9b%bd%e5%8f%82%e8%ae%ae%e5%91%98%e8%a6%81%e6%b1%82openai%e6%8a%ab%e9%9c%b2%e5%ae%89%e5%85%a8%e6%8e%aa%e6%96%bd%e4%b8%8e%e5%b7%a5%e4%bd%9c%e6%9d%a1%e4%bb%b6","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/16460.html","title":{"rendered":"U.S. senators ask OpenAI to disclose safety practices and working conditions"},"content":{"rendered":"<p data-pm-slice=\"0 0 []\">Recently, a group<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e7%be%8e%e5%9b%bd\" title=\"_Other Organiser\" target=\"_blank\" >USA<\/a><a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%8f%82%e8%ae%ae%e5%91%98\" title=\"&quot;Look at the article with the tag.&quot;\" target=\"_blank\" >senator<\/a>Towards<a href=\"https:\/\/www.1ai.net\/en\/tag\/openai\" title=\"[View articles tagged with [OpenAI]]\" target=\"_blank\" >OpenAI<\/a>CEO Sam Altman sent a critical letter requiring him to disclose detailed information about the company&#039;s safety measures and working conditions by August 13, 2024. The request comes as media reports have revealed some of OpenAI&#039;s potential safety risks, including the departure of multiple AI safety researchers, security vulnerabilities, and employee concerns.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-16465\" title=\"get-813\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/07\/get-813.jpg\" alt=\"get-813\" width=\"1000\" height=\"752\" \/><\/div>\n<p data-track=\"9\">Source Note: The image is generated by AI, and the image is authorized by Midjourney<\/p>\n<p data-track=\"10\">The letter was initiated in response to revelations from former employees who harshly criticized OpenAI\u2019s safety measures in AI development. 
OpenAI\u2019s new AI model GPT-4o reportedly completed safety testing in just one week, and this accelerated testing approach has raised concerns among security experts. The model was also shown to be capable of generating malicious content, such as instructions for making bombs, when given simple prompts.<\/p>\n<p data-track=\"11\">In the letter, the senators emphasized that the public needs to be able to trust that OpenAI develops its systems safely. This includes the integrity of corporate governance, the rigor of security testing, the fairness of hiring practices, compliance with public commitments, and the enforcement of cybersecurity policies. They pointed out that the security commitments OpenAI made to the Biden administration must be fulfilled in concrete terms.<\/p>\n<p data-track=\"12\">In response to the senators\u2019 requests, OpenAI has released several statements through social media platforms, mentioning the recently formed Safety and Security Committee, the latest progress toward Level 5 AGI, the Preparedness Framework, and the revision of its much-criticized employee contracts. OpenAI hopes that these measures will demonstrate its improvements in safety and governance.<\/p>\n<p>&nbsp;<\/p>","protected":false},"excerpt":{"rendered":"<p>In recent days, a group of U.S. senators sent an important letter to Sam Altman, Chief Executive Officer of OpenAI, requesting that he provide detailed information on the company\u2019s safety measures and working conditions by August 13, 2024. The request comes as media reports have revealed a number of potential safety risks at OpenAI, including the departure of several AI safety researchers, security gaps, and staff concerns. Source Note: Image generated by AI and licensed through Midjourney. The letter was prompted by revelations from former employees, who strongly criticized OpenAI\u2019s safety measures in AI development. 
It was reported that OpenAI\u2019s new AI model GPT-4o had completed safety testing in just one week<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[190,3705,236],"collection":[],"class_list":["post-16460","post","type-post","status-publish","format-standard","hentry","category-news","tag-openai","tag-3705","tag-236"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/16460","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=16460"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/16460\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=16460"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=16460"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=16460"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=16460"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}