{"id":44209,"date":"2025-09-30T12:04:28","date_gmt":"2025-09-30T04:04:28","guid":{"rendered":"https:\/\/www.1ai.net\/?p=44209"},"modified":"2025-09-30T12:04:28","modified_gmt":"2025-09-30T04:04:28","slug":"%e8%9a%82%e8%9a%81%e9%9b%86%e5%9b%a2%e5%bc%80%e6%ba%90%e5%85%a8%e7%90%83%e9%a6%96%e4%b8%aa%e4%b8%87%e4%ba%bf%e5%8f%82%e6%95%b0%e6%8e%a8%e7%90%86%e5%a4%a7%e6%a8%a1%e5%9e%8b-ring-1t-preview%ef%bc%8c","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/44209.html","title":{"rendered":"Ant Group open-sources Ring-1T-preview: code generation surpasses GPT-5"},"content":{"rendered":"<p>News of September 30: <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%9a%82%e8%9a%81%e9%9b%86%e5%9b%a2\" title=\"[Sees articles with labels]\" target=\"_blank\" >Ant Group<\/a> announced early this morning that it has <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%bc%80%e6%ba%90\" title=\"[View articles tagged with [open source]]\" target=\"_blank\" >open-sourced<\/a> Ring-1T-preview, a natural-language <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e6%8e%a8%e7%90%86%e5%a4%a7%e6%a8%a1%e5%9e%8b\" title=\"Look at the article that contains the labels\" target=\"_blank\" >reasoning large model<\/a> and the world's first open-source trillion-parameter reasoning model.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-44210\" title=\"e2daf61j00t3dvae0025d000u8p\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/09\/e2daff61j00t3dvae0025d000u000f8p.jpg\" alt=\"e2daf61j00t3dvae0025d000u8p\" width=\"1080\" height=\"548\" \/><\/p>\n<p>According to the official announcement, Ring-1T-preview is a preview version of Ring-1T, a trillion-parameter reasoning model. Although only a preview release, it already shows excellent natural-language reasoning: in the AIME 25 test, Ring-1T-preview scored 92.6 points, surpassing all known open-source models as well as Gemini 2.5 Pro, and approaching the 94.6 points of GPT-5 (without tools). 
In the CodeForces test, the model scored 94.69 points, surpassing GPT-5 and demonstrating strong code-generation capability. In addition, Ring-1T-preview ranks first among open-source models on benchmarks such as LiveCodeBench and ARC-AGI-v1.<\/p>\n<p>1AI has learned that the team also tested Ring-1T-preview on the IMO 25 (International Mathematical Olympiad 2025) problems: the model solved Problem 3 in a single attempt, and for Problems 1, 2, 4, and 5 it produced partially correct answers in one pass.<\/p>\n<p>The Ant team said it had performed post-training on the 1T language base of the Ling 2.0 family to maximize this trillion-scale base model's potential for natural-language reasoning. The official version of Ring-1T is currently still in training.<\/p>","protected":false},"excerpt":{"rendered":"<p>On September 30, Ant Group announced early this morning that it has open-sourced its self-developed Ring-1T-preview, a natural-language reasoning large model and the world's first open-source trillion-parameter reasoning model. According to the official announcement, Ring-1T-preview is a preview version of Ring-1T, a trillion-parameter reasoning model. Although only a preview release, it already shows excellent natural-language reasoning: in the AIME 25 test, Ring-1T-preview scored 92.6 points, surpassing all known open-source models as well as Gemini 2.5 Pro, and approaching the 94.6 points of GPT-5 (without tools). 
At Cod<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[219,6260,1030],"collection":[],"class_list":["post-44209","post","type-post","status-publish","format-standard","hentry","category-news","tag-219","tag-6260","tag-1030"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/44209","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=44209"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/44209\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=44209"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=44209"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=44209"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=44209"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}