OpenAI's GPT-4.1 Has No Safety Report, AI Safety Transparency Questioned Again

On Monday, April 14th, OpenAI launched the GPT-4.1 series, a new family of AI models. The company said the models outperform some of its existing models on certain tests, particularly programming benchmarks. However, unlike OpenAI's previous model releases, GPT-4.1 did not ship with the safety report (known as a system card) that typically accompanies a launch.


As of Tuesday morning, OpenAI had not published a safety report for GPT-4.1 and does not appear to have plans to do so. "GPT-4.1 is not a frontier model, so there won't be a separate system card released for it," OpenAI spokesperson Shaokyi Amdo said in a statement to TechCrunch.

Typically, AI labs publish safety reports detailing the tests they conducted internally and with third-party partners to assess a particular model's safety. These reports occasionally reveal unflattering information, such as a model's tendency to deceive humans or its capacity for dangerous persuasion. On the whole, the AI community regards these reports as good-faith efforts by labs to support independent research and red-teaming.

Over the past few months, however, several leading AI labs appear to have lowered their reporting standards, drawing strong criticism from safety researchers. Google, for example, has dragged its feet on publishing safety reports, while other labs have released reports lacking their usual detail.

OpenAI's own track record has not been spotless of late. Last December, the company drew criticism for releasing a safety report containing benchmark results for a model version that differed from the one it actually deployed in production. And last month, OpenAI published the system card for its Deep Research model weeks after that model had already launched.

Former OpenAI safety researcher Steven Adler noted that safety reports are not mandated by any law or regulation; they are published voluntarily. Yet OpenAI has repeatedly promised governments that it would increase the transparency of its models. In 2023, in a blog post ahead of the UK AI Safety Summit, OpenAI called system cards a "key part" of its approach to accountability, and in the run-up to the 2025 Paris AI Action Summit, it said system cards provide valuable insight into a model's risks.

Adler said, "System cards are the primary tool used by the AI industry for transparency and describing the content of safety tests. Today's transparency norms and commitments are ultimately voluntary, so it's up to each AI company to make its own decision on whether and when to publish a system card for a particular model."

1AI notes that GPT-4.1's missing system card arrives against a backdrop of current and former OpenAI employees raising concerns about the company's safety practices. Last week, Adler and 11 other former OpenAI employees filed a proposed amicus brief in Elon Musk's lawsuit against OpenAI, arguing that a for-profit OpenAI might cut corners on safety work. The Financial Times recently reported that, under competitive pressure, OpenAI has reduced the time and resources allocated to safety testers.

While GPT-4.1 is not the highest-performing model in OpenAI's lineup, it makes significant gains in efficiency and latency, according to Thomas Woodside, co-founder and policy analyst at the Secure AI Project. Those performance gains, he says, make a safety report all the more important: the more sophisticated a model, the higher the risk it may pose.

Many AI labs, meanwhile, have resisted efforts to write safety-reporting requirements into law. OpenAI, for example, opposed California's SB 1047, a bill that would have required many AI developers to audit their publicly released models and publish safety assessments.
