{"id":44455,"date":"2025-10-10T12:11:16","date_gmt":"2025-10-10T04:11:16","guid":{"rendered":"https:\/\/www.1ai.net\/?p=44455"},"modified":"2025-10-10T12:11:16","modified_gmt":"2025-10-10T04:11:16","slug":"%e7%99%be%e7%81%b5%e5%8f%91%e5%b8%83%e4%b8%87%e4%ba%bf%e5%8f%82%e6%95%b0%e6%97%97%e8%88%b0%e6%a8%a1%e5%9e%8b%ef%bc%8c%e9%ab%98%e6%95%88%e6%8e%a8%e7%90%86%e4%b8%8e%e8%b7%a8%e6%a8%a1%e6%80%81%e8%83%bd","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/44455.html","title":{"rendered":"Billion-billion flagship models of parameters are released, and efficient reasoning and cross-model capabilities are fully upgraded"},"content":{"rendered":"<p>Yesterday.<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e7%99%be%e7%81%b5%e5%a4%a7%e6%a8%a1%e5%9e%8b\" title=\"[Sees articles with labels]\" target=\"_blank\" >Pepsi model<\/a>The team officially launched the first flagship non-thinking model of the Ling 2.0 series - <a href=\"https:\/\/www.1ai.net\/en\/tag\/ling-1t\" title=\"[See articles with [Ling-1T] label]\" target=\"_blank\" >Ling-1T<\/a>.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-44456\" title=\"742963dfj00t3we9p0062d000u1m\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/10\/742963dfj00t3we9p0062d000u000l1m.jpg\" alt=\"742963dfj00t3we9p0062d000u1m\" width=\"1080\" height=\"757\" \/><\/p>\n<p>It was described that the model was based on the Ling 2.0 architecture and had a total parameter size of 1T, each token activated about 50B parameters and completed pre-training on high-quality language above 20T token to support the highest 128K context window\u3002<\/p>\n<p>According to the official presentation, Ling-1T has demonstrated a leading advantage over multiple open and closed-source flagship models in a number of difficult reference tests, including code generation, software development, competitive mathematics, and logical reasoning\u3002<\/p>\n<p>In addition, Ling-1T displays a strong ability to 
generalize in agentic tool calling: even without large-scale trajectory data, it reaches a tool-call accuracy of roughly 70% after fine-tuning on only a small amount of instruction data. The team stated that these capabilities form a key foundation for general intelligence.<\/p>\n<p>Ling-1T is now available on HuggingFace, ModelScope, and GitHub for developers at home and abroad to download.<\/p>\n<p>HuggingFace: https:\/\/huggingface.co\/inclusionAI\/Ling-1T<\/p>\n<p>\ud83d\udc7e ModelScope: https:\/\/modelscope.cn\/models\/inclusionAI\/Ling-1T<\/p>\n<p>\ud83d\udcbb GitHub: https:\/\/github.com\/inclusionAI\/Ling-V2<\/p>","protected":false},"excerpt":{"rendered":"<p>Yesterday, the Bailing model team officially launched the first flagship non-thinking model of the Ling 2.0 series -- Ling-1T. According to the announcement, the model is built on the Ling 2.0 architecture with a total of 1T parameters, of which roughly 50B are activated per token; it was pre-trained on more than 20T tokens of high-quality data and supports a context window of up to 128K. Officially reported results show that Ling-1T leads multiple open- and closed-source flagship models on a range of challenging benchmarks such as code generation, software engineering, competition mathematics, and logical reasoning. 
In addition, Ling-1T has demonstrated strong generalization in agentic tool use<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[7712,7711],"collection":[],"class_list":["post-44455","post","type-post","status-publish","format-standard","hentry","category-news","tag-ling-1t","tag-7711"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/44455","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=44455"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/44455\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=44455"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=44455"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=44455"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=44455"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}