{"id":45797,"date":"2025-11-09T14:34:22","date_gmt":"2025-11-09T06:34:22","guid":{"rendered":"https:\/\/www.1ai.net\/?p=45797"},"modified":"2025-11-09T14:34:22","modified_gmt":"2025-11-09T06:34:22","slug":"openai-%e6%8e%a8%e5%87%ba-gpt-5-codex-mini%ef%bc%9a%e7%bb%8f%e6%b5%8e%e9%ab%98%e6%95%88%e5%9e%8bai-%e7%bc%96%e7%a8%8b%e6%a8%a1%e5%9e%8b","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/45797.html","title":{"rendered":"OpenAI Launch GPT-5-Codex-Mini \"Economic Efficient\" AI Programming Model"},"content":{"rendered":"<p>November 9th news, September this year<a href=\"https:\/\/www.1ai.net\/en\/tag\/openai\" title=\"[View articles tagged with [OpenAI]]\" target=\"_blank\" >OpenAI<\/a> Launched <a href=\"https:\/\/www.1ai.net\/en\/tag\/gpt-5\" title=\"[SEE ARTICLES WITH [GPT-5] LABELS]\" target=\"_blank\" >GPT-5<\/a>- Codex, a GPT-5 model that optimizes the \u201cautonomous coding\u201d mission on the Codex platform, based on the GPT-5 architecture, with a significant increase in its reasoning and programming capabilities\u3002<\/p>\n<p>GPT-5-Codex is a real software engineering landscape capable of working from creating new projects, adding functionality and testing to large-scale code re-engineering\u3002<\/p>\n<p>According to Foreign Media Neowin, OpenAI published GPT-5-Codex-Mini. By definition, the model is a smaller and cheaper version of GPT-5-Codex. It has a small loss of performance compared to the original version and is available to developers<strong>About 4 times the amount of use<\/strong>I don't know. 
In the SWE-bench Verified test, GPT-5 High scored 72.8%, while <strong>GPT-5-Codex scored 74.5% and GPT-5-Codex-Mini scored 71.3%<\/strong>.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-45798\" title=\"5cdd13b6j00t5g4vc001td000xc00dsp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/11\/5cdd13b6j00t5g4vc001td000xc00dsp.jpg\" alt=\"5cdd13b6j00t5g4vc001td000xc00dsp\" width=\"1200\" height=\"496\" \/><\/p>\n<p>OpenAI recommends using GPT-5-Codex-Mini for <strong>lightweight engineering tasks, or when usage is close to the rate limit<\/strong>; once usage reaches 90%, the Codex system automatically prompts the user to switch. The Mini version is already live in the CLI and IDE extensions, with API support coming soon.\u3002<\/p>\n<p>Thanks to GPU efficiency improvements, ChatGPT Plus, Business and Edu users will see their rate limits raised by 50%, while ChatGPT Pro and Enterprise users will receive priority scheduling for faster responses.\u3002<\/p>\n<p>1AI understands that OpenAI has also optimized the Codex backend so that developers get a stable, predictable experience whenever they use it, avoiding the fluctuations previously caused by caching or traffic-routing problems.\u3002<\/p>","protected":false},"excerpt":{"rendered":"<p>According to news from November 9, in September this year OpenAI launched GPT-5-Codex, a GPT-5 model optimized for \u201cautonomous coding\u201d tasks on the Codex platform. Built on the GPT-5 architecture, it significantly improves reasoning and programming capability. GPT-5-Codex is built for real-world software engineering, capable of handling everything from creating new projects, adding features, and testing to large-scale code refactoring. According to foreign media outlet Neowin, OpenAI has released GPT-5-Codex-Mini. As the name suggests, the model is a smaller and cheaper version of GPT-5-Codex. 
It loses only a little performance compared with the original version, and developers can<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[7850,719,190],"collection":[],"class_list":["post-45797","post","type-post","status-publish","format-standard","hentry","category-news","tag-ai","tag-gpt-5","tag-openai"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/45797","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=45797"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/45797\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=45797"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=45797"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=45797"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=45797"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}