{"id":28677,"date":"2025-02-12T20:45:57","date_gmt":"2025-02-12T12:45:57","guid":{"rendered":"https:\/\/www.1ai.net\/?p=28677"},"modified":"2025-02-12T20:45:57","modified_gmt":"2025-02-12T12:45:57","slug":"openai-%e6%9c%80%e6%96%b0%e8%ae%ba%e6%96%87%ef%bc%9ao3-%e5%9c%a8-ioi-2024-%e4%b8%a5%e6%a0%bc%e8%a7%84%e5%88%99%e4%b8%8b%e6%8b%bf%e5%88%b0-395-64-%e5%88%86%e8%be%be%e6%88%90%e9%87%91%e7%89%8c%e6%88%90","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/28677.html","title":{"rendered":"OpenAI's latest paper: o3 achieves gold medal with 395.64 points under the strict rules of IOI 2024"},"content":{"rendered":"<p>Feb. 12 evening news -- Under the influence of Chinese AI companies, <a href=\"https:\/\/www.1ai.net\/en\/tag\/openai\" title=\"[View articles tagged with OpenAI]\" target=\"_blank\" >OpenAI<\/a> has opened up the secrets of its o-series reinforcement learning.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-28678\" title=\"930efbb9j00srkm3q008ed000fa00mvp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/02\/930efbb9j00srkm3q008ed000fa00mvp.jpg\" alt=\"930efbb9j00srkm3q008ed000fa00mvp\" width=\"550\" height=\"823\" \/><\/p>\n<p>Today (February 12), OpenAI released a research <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%ae%ba%e6%96%87\" title=\"[See articles tagged with paper]\" target=\"_blank\" >paper<\/a> on the use of reasoning models in competitive programming, \"Competitive Programming with Large Reasoning Models\", which presents the results of OpenAI's three reasoning models, o1, o1-ioi, and o3, at the IOI (International Olympiad in Informatics) and on CodeForces (the world's leading online programming competition).<\/p>\n<p>The paper shows that in IOI 2024, o3 scored 395.64 points under strict rules, reaching gold-medal level, and performed on par with elite human competitors on CodeForces.<\/p>\n<p>The paper also mentions that China's DeepSeek-R1 and Kimi k1.5 have shown through independent 
research that model performance on mathematical problem solving and programming challenges can be significantly improved using the chain-of-thought (CoT) learning approach. R1 and k1.5 are new reasoning models released by DeepSeek and Kimi, respectively, on January 20.<\/p>\n<p>Using competitive programming as a benchmark, the paper compares general-purpose reasoning models with systems optimized for specific domains, examining how reinforcement learning (RL) improves large language models on complex coding and reasoning tasks. The findings show that scaling up reinforcement-learning training compute and test-time compute can significantly improve model performance, approaching that of the world's top human competitors, and that these models will unlock new applications for AI in science, coding, math, and other fields.<\/p>","protected":false},"excerpt":{"rendered":"<p>February 12 evening news: under the influence of Chinese AI companies, OpenAI has disclosed the secrets of its o-series reinforcement learning. Today (February 12), OpenAI released a research paper on the application of reasoning models in competitive programming, \"Competitive Programming with Large Reasoning Models\", which presents the results of OpenAI's three reasoning models, o1, o1-ioi, and o3, at the IOI (International Olympiad in Informatics) and on CodeForces (the world's leading online programming competition). 
According to the paper, in IOI 2024 o3 scored 395.64 points under strict rules, reaching gold-medal level.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[190,911],"collection":[],"class_list":["post-28677","post","type-post","status-publish","format-standard","hentry","category-news","tag-openai","tag-911"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/28677","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=28677"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/28677\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=28677"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=28677"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=28677"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=28677"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}