{"id":39728,"date":"2025-07-19T13:44:30","date_gmt":"2025-07-19T05:44:30","guid":{"rendered":"https:\/\/www.1ai.net\/?p=39728"},"modified":"2025-07-19T13:44:30","modified_gmt":"2025-07-19T05:44:30","slug":"%e8%b6%85%e8%bf%87-deepseek-r1%ef%bc%8ckimi-k2-%e6%8b%bf%e4%b8%8b%e5%bc%80%e6%ba%90%e6%a8%a1%e5%9e%8b%e7%ac%ac%e4%b8%80","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/39728.html","title":{"rendered":"Kimi K2 Takes First Place in Open Source Modeling Over DeepSeek R1"},"content":{"rendered":"<p>July 19, 2025 - LMArena, the authoritative large-model leaderboard, has published its latest rankings for the recently released <a href=\"https:\/\/www.1ai.net\/en\/tag\/kimi\" title=\"[View articles tagged with [Kimi]]\" target=\"_blank\" >Kimi<\/a> K2, which surpassed <a href=\"https:\/\/www.1ai.net\/en\/tag\/deepseek\" title=\"[View articles tagged with [DeepSeek]]\" target=\"_blank\" >DeepSeek<\/a> R1 to take first place among <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%bc%80%e6%ba%90%e6%a8%a1%e5%9e%8b\" title=\"[See articles with [open source model] labels]\" target=\"_blank\" >open source models<\/a>.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-39729\" title=\"2518f528j00szmt3f0022d000u000mtm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/07\/2518f528j00szmt3f0022d000u000mtm.jpg\" alt=\"2518f528j00szmt3f0022d000u000mtm\" width=\"1080\" height=\"821\" \/><\/p>\n<p>LMArena stated that <strong>Kimi K2 earned fifth place on the overall LMArena leaderboard on the strength of its performance and 3,000 community votes.<\/strong><\/p>\n<p>Notably, Kimi K2 and DeepSeek R1 are the only two Chinese models in LMArena's top 10, while the number of Chinese models in the global top 20 rises to seven, with models such as MiniMax M1 and Qwen3-235B also on the list.<\/p>\n<p>Kimi K2 was released and open-sourced last week, billed as an \"MoE-architecture base model with superb code and Agent capabilities\". Officially, K2 has 1T total parameters, 32B activated parameters, a 128K context length, and supports tool calls, networked search, and more.<\/p>\n<p>In benchmark tests such as SWE-bench Verified, Tau2, and AceBench, Kimi K2 reportedly achieved SOTA scores among open source models, demonstrating its leading capabilities in code, Agent, and mathematical reasoning tasks.<\/p>\n<p>In addition, Liu Shaowei, a member of the Kimi K2 development team, recently answered the question \"Does Kimi K2 adopt the DeepSeek V3 architecture?\" on Zhihu. He said: <strong>\"It does inherit the structure of DeepSeek V3, but adjusts the structural parameters to fit Kimi's model.\"<\/strong> He added that the V3 architecture is very simple and easy to implement, and that it fit the project's development cost budget, so the team chose to inherit the V3 architecture in its entirety.<\/p>","protected":false},"excerpt":{"rendered":"<p>July 19, 2025 - LMArena, the leading large-model leaderboard, has released its latest rankings, with the recently released Kimi K2 taking the top spot among open source models, surpassing DeepSeek R1. According to LMArena, Kimi K2 took fifth place in LMArena's overall ranking based on its performance and 3,000 community votes. It's worth noting that Kimi K2 and DeepSeek R1 are the only two Chinese models in the top 10 of the LMArena charts, but the number of Chinese models in the global top 20 rises to seven, with models such as MiniMax M1 and Qwen3-235B on the list. 
Kim<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[3606,1814,862],"collection":[],"class_list":["post-39728","post","type-post","status-publish","format-standard","hentry","category-news","tag-deepseek","tag-kimi","tag-862"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/39728","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=39728"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/39728\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=39728"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=39728"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=39728"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=39728"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}