{"id":28006,"date":"2025-02-01T17:12:50","date_gmt":"2025-02-01T09:12:50","guid":{"rendered":"https:\/\/www.1ai.net\/?p=28006"},"modified":"2025-02-01T17:12:50","modified_gmt":"2025-02-01T09:12:50","slug":"openai-%e7%b4%a7%e6%80%a5%e5%8f%91%e5%b8%83-o3-mini%ef%bc%8cceo-%e9%98%bf%e5%b0%94%e7%89%b9%e6%9b%bc%e7%bd%95%e8%a7%81%e8%ae%a4%e9%94%99%e5%b9%b6%e7%a7%b0-deepseek%e9%9d%9e%e5%b8%b8%e5%a5%bd","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/28006.html","title":{"rendered":"OpenAI urgently releases o3-mini, CEO Altman admits mistake and calls DeepSeek \"very good\""},"content":{"rendered":"<p>On the morning of February 1, <a href=\"https:\/\/www.1ai.net\/en\/tag\/openai\" title=\"View articles tagged with [OpenAI]\" target=\"_blank\" >OpenAI<\/a> released the <a href=\"https:\/\/www.1ai.net\/en\/tag\/o3-mini\" title=\"View articles tagged with [o3-mini]\" target=\"_blank\" >o3-mini<\/a> model, the newest and most cost-effective model in OpenAI's reasoning family, now available in ChatGPT and the API. The model reportedly sets new SOTA results on math and coding benchmarks, with o3-mini (high) said to lead in both accuracy and calibration error.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-28007\" title=\"ad2c2bf9j00sqzyw6001fd000fa00bjp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/02\/ad2c2bf9j00sqzyw6001fd000fa00bjp.jpg\" alt=\"ad2c2bf9j00sqzyw6001fd000fa00bjp\" width=\"550\" height=\"415\" \/><\/p>\n<p>According to the report, o3-mini is 63% cheaper than OpenAI o1-mini and 93% cheaper than the full version of o1. 
o3-mini lets developers choose among high, medium, and low reasoning effort according to their needs, so the model can think more deeply on complex problems while balancing speed and accuracy.<\/p>\n<p>In what is widely seen as OpenAI's response to DeepSeek taking the world by storm over the past week, OpenAI co-founder and CEO Sam Altman revealed in an online Q&amp;A following the o3-mini release: \"<strong>A full version of o3 is coming in the next few weeks.<\/strong>\"<\/p>\n<p>Sam Altman also shared his thoughts on DeepSeek: \"It [DeepSeek] is really a very good model, and OpenAI will develop better models, but we won't be able to maintain as big a lead as we have in previous years.\"<\/p>\n<p>Altman made a rare admission of error and said OpenAI is discussing a new open-source strategy. \"I personally think <strong>we're on the wrong side of this issue and need to come up with a different open-source strategy<\/strong>; not everyone at OpenAI holds that view, and it's not our highest priority at the moment,\" Altman said.<\/p>\n<p>Altman also revealed that an update to OpenAI's Advanced Voice Mode is coming soon, <strong>and it may be named GPT-5 rather than GPT-5o<\/strong>, though there is no specific timetable yet.<\/p>","protected":false},"excerpt":{"rendered":"<p>On the morning of February 1, OpenAI released the o3-mini model, the latest and most cost-effective model in its reasoning series, now available in ChatGPT and the API. The model reportedly sets new SOTA results on math and coding benchmarks, with o3-mini (high) leading in accuracy and calibration error. o3-mini is reportedly 63% cheaper than OpenAI o1-mini and 93% cheaper than the full version of o1. 
Developers can choose among three reasoning-effort levels (high, medium, and low) according to their needs, allowing o3-mini to handle complex problems.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[5634,190],"collection":[],"class_list":["post-28006","post","type-post","status-publish","format-standard","hentry","category-news","tag-o3-mini","tag-openai"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/28006","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=28006"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/28006\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=28006"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=28006"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=28006"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=28006"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}