{"id":47035,"date":"2025-12-06T17:17:05","date_gmt":"2025-12-06T09:17:05","guid":{"rendered":"https:\/\/www.1ai.net\/?p=47035"},"modified":"2025-12-06T17:17:05","modified_gmt":"2025-12-06T09:17:05","slug":"openai-%e6%9c%80%e5%bc%ba%e7%bc%96%e7%a8%8b%e6%a8%a1%e5%9e%8b%e7%99%bb%e5%9c%ba%ef%bc%8c%e7%9b%b4%e6%8e%a5%e5%af%b9%e6%a0%87-anthropic","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/47035.html","title":{"rendered":"OpenAI's Most Powerful Programming Model Arrives, Taking Direct Aim at Anthropic"},"content":{"rendered":"<p>On December 6, according to Neowin, <a href=\"https:\/\/www.1ai.net\/en\/tag\/openai\" title=\"[View articles tagged with [OpenAI]]\" target=\"_blank\" >OpenAI<\/a> yesterday officially launched its most powerful <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e7%bc%96%e7%a8%8b%e6%a8%a1%e5%9e%8b\" title=\"[See articles with [programmed model] labels]\" target=\"_blank\" >programming model<\/a> to date, GPT-5.1-Codex-Max, opening it up for API use at \u201cunexpectedly low prices\u201d.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-47036\" title=\"ac899833j00t6ucfa000vd000ugwm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/12\/ac899833j00t6ucfa000vd000u000gwm.jpg\" alt=\"ac899833j00t6ucfa000vd000ugwm\" width=\"1080\" height=\"608\" \/><\/p>\n<p>The new model, designed for long-running agentic programming tasks, can work continuously within a context window of more than one million tokens; internal tests show it sustaining more than 24 hours of continuous operation.<\/p>\n<p>GPT-5.1-Codex-Max outperformed the regular version on several benchmarks: 77.9% on SWE-Bench Verified, 79.9% on SWE-Lancer IC SWE, and 58.1% on Terminal-Bench 2.0.<\/p>\n<p>In contrast, the regular GPT-5.1-Codex scored 73.7%, 66.3%, and 52.8%, respectively. 
OpenAI says the model is not only faster and more token-efficient, but will also become the default Codex model.<\/p>\n<p>On pricing, GPT-5.1-Codex-Max matches GPT-5: $1.25 per million input tokens and $10 per million output tokens.<\/p>\n<p>Previously, the model was available only through channels such as the Codex CLI, IDE extension, Cloud, and Code Review, open to ChatGPT Plus, Pro, Business, Edu, and Enterprise users.<\/p>\n<p>Now, via the API, developers can call the model directly in tools such as Cursor, GitHub Copilot, and Linear.<\/p>\n<p>Notably, GPT-5.1-Codex-Max is optimized for Windows environments, unlike earlier Codex models, which mainly targeted Unix systems. This is seen as a strategic move by OpenAI to further broaden its developer base.<\/p>\n<p>On the competitive front, Claude Code, from OpenAI's main rival <a href=\"https:\/\/www.1ai.net\/en\/tag\/anthropic\" title=\"[View articles tagged with [Anthropic]]\" target=\"_blank\" >Anthropic<\/a>, has rapidly emerged as the fastest-growing SaaS product and is expected to bring in between $8 billion and $10 billion this year.<\/p>\n<p>Since GPT-5-Codex was released in August, usage of the OpenAI Codex models has grown more than tenfold, with trillions of tokens processed weekly. The industry is watching whether the new model can curb Anthropic's strong momentum in the enterprise programming market.<\/p>","protected":false},"excerpt":{"rendered":"<p>On December 6, according to Neowin, OpenAI yesterday officially launched its most powerful programming model to date, GPT-5.1-Codex-Max, opening it up for API use at \u201cunexpectedly low prices\u201d. 
The new model, designed for long-running agentic programming tasks, can work continuously within a context window of more than one million tokens; internal tests show it sustaining more than 24 hours of continuous operation. GPT-5.1-Codex-Max outperformed the regular version on several benchmarks: 77.9% on SWE-Bench Verified, 79.9% on SWE-Lancer IC SWE<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[320,190,7692],"collection":[],"class_list":["post-47035","post","type-post","status-publish","format-standard","hentry","category-news","tag-anthropic","tag-openai","tag-7692"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/47035","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=47035"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/47035\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=47035"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=47035"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=47035"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=47035"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}