{"id":50324,"date":"2026-02-25T12:28:45","date_gmt":"2026-02-25T04:28:45","guid":{"rendered":"https:\/\/www.1ai.net\/?p=50324"},"modified":"2026-02-25T12:28:45","modified_gmt":"2026-02-25T04:28:45","slug":"ai-%e5%af%b9%e6%89%8b%e5%8f%91%e5%b1%95%e5%a4%aa%e5%bf%ab%ef%bc%8canthropic-%e6%94%be%e5%bc%83%e9%87%8d%e8%a6%81%e5%ae%89%e5%85%a8%e6%89%bf%e8%af%ba","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/50324.html","title":{"rendered":"AI, the opponent is moving too fast, Anthropic, abandoning important security commitments"},"content":{"rendered":"<p>FEBRUARY 25, ACCORDING TO TIME MAGAZINE, AI ENTERPRISE USA <a href=\"https:\/\/www.1ai.net\/en\/tag\/anthropic\" title=\"[View articles tagged with [Anthropic]]\" target=\"_blank\" >Anthropic<\/a> Great success has been achieved and self-proclaimed as the most safe company in the top AI research laboratory. But executive Anthropic told Time<strong>The company is relinquishing the core commitments of its flagship security policy\u3002<\/strong><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-50325\" title=\"895a48bfj00tazz3600ecd000v90kxp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2026\/02\/895a48bfj00tazz3600ecd000v900kxp.jpg\" alt=\"895a48bfj00tazz3600ecd000v90kxp\" width=\"1125\" height=\"753\" \/><\/p>\n<p>In 2023, Anthropic promised not to train the AI system unless it was assured in advance that adequate security measures were in place. This commitment has become a central pillar of Anthropic's \u201cResponsible Extension Policy\u201d (RSP). For many years, Anthropic high-levels have been praising this commitment as strong evidence that they are a responsible company that is able to resist market temptations and is not eager to develop potentially dangerous technologies\u3002<\/p>\n<p>BUT IN RECENT MONTHS, THE COMPANY HAS DECIDED TO OVERHAUL THE RSP\u3002<strong>THIS DECISION INCLUDES A WAIVER OF THE PREVIOUS COMMITMENT TO RELEASE AI MODELS IF APPROPRIATE RISK MITIGATION MEASURES CANNOT BE ASSURED IN ADVANCE<\/strong>.<\/p>\n<p>In an exclusive interview with The Times, Jared Kaplan, Chief Science Officer of Anthropic, said: \u201cWe believe that stopping training the AI model is not actually helping anyone. With the rapid development of AI<strong>We do not feel that it is reasonable for us to make unilateral commitments in a situation where competitors are advancing rapidly\u3002<\/strong>\u201d<\/p>\n<p>The new version of the Anthropic policy reviewed in The Times includes a commitment to be more transparent with regard to AI security risks, including additional disclosure of the performance of the Anthropic own model in security testing; a commitment to security inputs and efforts to reach or surpass the level of competitors; and a commitment by the leadership to \u201cdelay\u201d the AI development process if it considers that Anthropic is a leader in the AI competition and that there is a high probability of catastrophic risk\u3002<\/p>\n<p>In general, however, this adjustment of the RSP has significantly reduced the constraints placed on Anthropic ' s security policy. Previously, the policy explicitly prohibited companies from training models beyond a specified level without appropriate security measures\u3002<\/p>\n<p>At present, Anthropic is faced with intense competition from rivals such as OpenAI, Elon Musk and Google, who regularly publish sophisticated tools. 
At the same time, the company has been in a dispute with the U.S. Department of Defense over the use of its Claude tools. Anthropic had previously told the Pentagon that the tools must not be used for domestic surveillance or lethal autonomous operations; the Department of Defense responded with an ultimatum that it would terminate its contract with Anthropic if such uses were restricted.

According to Anthropic, the change in its safety policy was driven by the pace of AI development and the absence of federal legislation.

An Anthropic spokesperson said the adjustment helps the company contend with multiple competitors in an uneven policy environment, one in which the company must decide for itself whether its safety measures are sufficient. She also said the change in the safety commitment had nothing to do with the Pentagon negotiations.

"The policy environment has shifted toward prioritizing AI competitiveness and economic growth, while safety-oriented discussion has yet to make substantive progress at the federal level," Anthropic wrote in a blog post announcing the change, adding that it remains committed to industry-led safety standards.