On the morning of September 30th, Ant Group announced the open-source release of Ring-1T-preview, a natural-language reasoning large model and the world's first open-source reasoning model at the trillion-parameter scale.

According to official information, Ring-1T-preview is a preview version of Ring-1T, a trillion-parameter reasoning model. Although only a preview, its natural-language reasoning performance is strong. On the AIME 25 benchmark, Ring-1T-preview scored 92.6, exceeding all known open-source models as well as Gemini 2.5 Pro, and approaching the 94.6 achieved by GPT-5 (without tools). On the CodeForces benchmark, the model scored 94.69, surpassing GPT-5 and demonstrating strong code-generation capability. Ring-1T-preview also ranks first among open-source models on benchmarks such as LiveCodeBench and ARC-AGI-v1.
1AI has learned that the team also tested Ring-1T-preview on IMO 2025 (International Mathematical Olympiad) problems: the model solved Problem 3 in a single attempt, and produced partially correct answers to Problems 1, 2, 4, and 5, also in single attempts.
The Ant Group team said it has been post-training the 1T language base from the Ling 2.0 family to maximize the natural-language reasoning potential of this trillion-scale base model. The official release version of Ring-1T is currently still in training.