{"id":27450,"date":"2025-01-21T10:52:48","date_gmt":"2025-01-21T02:52:48","guid":{"rendered":"https:\/\/www.1ai.net\/?p=27450"},"modified":"2025-01-21T10:52:48","modified_gmt":"2025-01-21T02:52:48","slug":"%e6%9c%88%e4%b9%8b%e6%9a%97%e9%9d%a2%e5%8f%91%e5%b8%83-kimi-k1-5-%e5%a4%9a%e6%a8%a1%e6%80%81%e6%80%9d%e8%80%83%e6%a8%a1%e5%9e%8b%ef%bc%8c%e5%ae%9e%e7%8e%b0-sota-%e7%ba%a7%e5%a4%9a%e6%a8%a1%e6%80%81","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/27450.html","title":{"rendered":"Dark Side of the Moon Releases Kimi k1.5 Multimodal Thinking Model for SOTA-Level Multimodal Reasoning Capabilities"},"content":{"rendered":"<p><a href=\"https:\/\/www.1ai.net\/en\/tag\/%e6%9c%88%e4%b9%8b%e6%9a%97%e9%9d%a2\" title=\"[Sees articles with labels]\" target=\"_blank\" >Dark Side of the Moon<\/a>Launch announced on January 20 <a href=\"https:\/\/www.1ai.net\/en\/tag\/kimi\" title=\"[View articles tagged with [Kimi]]\" target=\"_blank\" >Kimi<\/a> New SOTA model --k1.5 <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%a4%9a%e6%a8%a1%e6%80%81\" title=\"[View articles tagged with [multimodal]]\" target=\"_blank\" >Multimodality<\/a>thinking model, which achieves SOTA (state-of-the-art) level multimodal reasoning and generalized reasoning capabilities.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-27451\" title=\"0c46b0d0j00sqf3z9003zd000u000hkp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/01\/0c46b0d0j00sqf3z9003zd000u000hkp.jpg\" alt=\"0c46b0d0j00sqf3z9003zd000u000hkp\" width=\"1080\" height=\"632\" \/><\/p>\n<p>Officially, in short-CoT mode, the Kimi k1.5's<strong>Math, code, visual multimodal and generalized abilities<\/strong>Significantly outperforms the short-thinking SOTA models GPT-4o and Claude 3.5 Sonnet globally, leading the way up to<strong>\u00a0<\/strong><strong>550%<\/strong>.<\/p>\n<p>In long-CoT mode, Kimi k1.5's math, code, and multimodal reasoning abilities, the<strong>It is also at the level 
of the long-thinking SOTA model OpenAI o1 official version.<\/strong>.<\/p>\n<p>It is stated that there are k1.5 model design and training<strong>Long context extensions, improved policy optimization, clean framework, multimodal capabilities<\/strong>and other key elements. The model specializes in<strong>inference in depth<\/strong>It can assist in \"unlocking more and more difficult things\", dealing with difficult code problems, math problems, and work problems.<\/p>\n<p>Note: k1.5 Multimodal Thinking Model of the<strong>Preview versions will be grayed out one by one<\/strong>\u00a0Kimi.com website and the latest version of the Kimi Smart Assistant App.<\/p>","protected":false},"excerpt":{"rendered":"<p>Dark Side of the Moon announced on January 20 the launch of Kimi's new SOTA model -- k1.5 multimodal thinking model, which realizes SOTA (state-of-the-art) level multimodal reasoning and generalized reasoning capabilities. Officially, in short-CoT mode, Kimi k1.5's math, code, and visual multimodal and generalized capabilities dramatically outperform those of the short-thinking SOTA models GPT-4o and Claude 3.5 Sonnet globally, leading to 550%. 
In long-CoT mode, Kimi k1.5's math, code, and multimodal reasoning capabilities also reach the level of the long-thinking SOTA model<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[1814,592,1168],"collection":[],"class_list":["post-27450","post","type-post","status-publish","format-standard","hentry","category-news","tag-kimi","tag-592","tag-1168"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/27450","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=27450"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/27450\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=27450"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=27450"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=27450"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=27450"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}