{"id":35302,"date":"2025-05-15T17:58:05","date_gmt":"2025-05-15T09:58:05","guid":{"rendered":"https:\/\/www.1ai.net\/?p=35302"},"modified":"2025-05-15T17:58:05","modified_gmt":"2025-05-15T09:58:05","slug":"openai-%e4%b8%8a%e7%ba%bf%e5%ae%89%e5%85%a8%e8%af%84%e4%bc%b0%e4%b8%ad%e5%bf%83%ef%bc%8c%e5%ae%9a%e6%9c%9f%e5%85%ac%e5%bc%80-ai%e6%a8%a1%e5%9e%8b%e8%af%84%e4%bc%b0%e7%bb%93%e6%9e%9c%e4%bb%a5%e6%8f%90","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/35302.html","title":{"rendered":"OpenAI launches a safety assessment center and will regularly publish AI model evaluation results to increase transparency"},"content":{"rendered":"<p>News on May 15th: <a href=\"https:\/\/www.1ai.net\/en\/tag\/openai\" title=\"[View articles tagged with [OpenAI]]\" target=\"_blank\" >OpenAI<\/a> announced that it will publish the results of the internal <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%ae%89%e5%85%a8%e8%af%84%e4%bc%b0\" title=\"[Sees articles with [Security Assessment] labels]\" target=\"_blank\" >safety assessments<\/a> of its AI models more frequently, in an effort to increase transparency.
The company officially launched its \"Safety Assessment Center\" page on Wednesday, designed to show how its models perform in tests for harmful content generation, jailbreaks, and hallucinations.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-35303\" title=\"09c038a9j00swarnn0016d000gl00itp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/05\/09c038a9j00swarnn0016d000gl00itp.jpg\" alt=\"09c038a9j00swarnn0016d000gl00itp\" width=\"597\" height=\"677\" \/><\/p>\n<p>OpenAI stated that <strong>the Safety Assessment Center will be used to publish model-related metrics on an ongoing basis, and the page will be updated alongside future major model releases<\/strong>. In a blog post, OpenAI wrote, \"As the science of AI evaluation continues to evolve, we are committed to sharing our progress in developing more scalable methods for evaluating model capability and safety.\" The company also emphasized that by making some of its safety assessment results publicly available, it hopes not only to give users a clearer picture of how the safety performance of OpenAI's systems has changed over time, but also to support industry-wide efforts toward transparency. OpenAI added that it may bring more assessments into the center over time.<\/p>\n<p>OpenAI has previously been criticized by some ethicists for rushing the safety testing of certain flagship models and for failing to release technical reports for others. CEO Sam Altman has also drawn controversy over allegations that he misled company executives about model safety reviews before his brief removal from his position in November 2023.<\/p>\n<p>1AI notes that just late last month, OpenAI had to roll back an update to GPT-4o, the default model for ChatGPT.
Users reported that the model responded in an overly \"sycophantic\" manner, even endorsing questionable and dangerous decisions and ideas. In response, OpenAI said it will roll out a number of fixes and improvements to prevent a similar incident from happening again, including an optional \"alpha phase\" for some models that will let certain ChatGPT users test them and provide feedback before official release.<\/p>","protected":false},"excerpt":{"rendered":"<p>May 15 (Bloomberg) -- OpenAI has announced that it will more frequently publish the results of safety assessments of its internal artificial intelligence models in an effort to increase transparency. The company officially launched a \"Safety Assessment Center\" page on Wednesday, designed to show how its models perform in tests for harmful content generation, jailbreaks, and hallucinations. OpenAI said the center will be used to publish model-related metrics on an ongoing basis and plans to update the page alongside future major model releases, writing in a blog post, \"As the science of AI evaluation continues to evolve, we are committed to sharing our progress in developing more scalable methods for evaluating model capability and safety.\" The company also emphasized that by making some of its safety assessment results publicly available here, it not only hopes to<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[167,190,6608],"collection":[],"class_list":["post-35302","post","type-post","status-publish","format-standard","hentry","category-news","tag-ai","tag-openai","tag-6608"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/35302","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=35302"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/35302\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=35302"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=35302"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=35302"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=35302"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}