{"id":27521,"date":"2025-01-21T20:38:22","date_gmt":"2025-01-21T12:38:22","guid":{"rendered":"https:\/\/www.1ai.net\/?p=27521"},"modified":"2025-01-21T20:38:22","modified_gmt":"2025-01-21T12:38:22","slug":"%e5%a4%a7%e6%a8%a1%e5%9e%8b%e5%b8%ae%e4%bd%a0%e5%86%99%e5%b0%8f%e8%af%b4%ef%bc%8c%e9%98%b6%e8%b7%83%e6%98%9f%e8%be%b0%e6%8e%a8%e5%87%ba-step-2%e9%ab%98%e6%80%a7%e4%bb%b7%e6%af%94%e7%89%88","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/27521.html","title":{"rendered":"Big models help you write novels, Step Star launches Step-2 \"Cost-effective Edition\" and \"Literary Master Edition\"."},"content":{"rendered":"<p>January 21st.<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e9%98%b6%e8%b7%83%e6%98%9f%e8%be%b0\" title=\"[View articles tagged with [Step Star]]\" target=\"_blank\" >Step Star<\/a>Two new models in the Step-2 series of language models were launched yesterday -- Step-2 mini, a smaller participant size and more cost-effective model, and Step Literature Master, a model specifically for the content creation field.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-27522\" title=\"a1ac6ea1j00sqfv33001vd000ak009np\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/01\/a1ac6ea1j00sqfv33001vd000ak009np.jpg\" alt=\"a1ac6ea1j00sqfv33001vd000ak009np\" width=\"380\" height=\"347\" \/><\/p>\n<p>1AI learned from the official introduction that Step-2 mini and Trillion Parameters<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%a4%a7%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged with [large models]]\" target=\"_blank\" >Large Model<\/a> Compared with Step-2, it retains its modeling performance above 80% with a parameter count around 3%.<\/p>\n<p>Meanwhile, Step-2 mini\u00a0<strong>Faster generation speeds and excellent value for money<\/strong>The average initial word latency of Step-2 mini is only 0.17 seconds with 4000 tokens. 
With a 4,000-token input, Step-2 mini's average first-token latency is only 0.17 seconds. At present, the Step-2 mini API can already be called on Step Star's open platform, priced at 1 yuan per million input tokens and 2 yuan per million output tokens.<\/p>\n<p>Step-2 mini adopts MFA (Multi-matrix Factorization Attention), a new attention mechanism architecture independently developed by Step Star, along with its variant MFA-Key-Reuse. Compared with the commonly used MHA (Multi-Head Attention) architecture, it saves nearly 94% of KV cache overhead, delivers faster inference, and significantly reduces inference cost.<\/p>\n<p>According to the official introduction, Step-2 Literary Master Edition is a model developed specifically for the creation of written content. Building on Step-2's extensive knowledge base and strong fine-grained control over text, it\u00a0<strong>features more robust content creation capabilities<\/strong>. Step-2 Literary Master Edition seeks to address the over-alignment of language models on the market, which leads to \"false and empty\" content lacking novelty and genuine feeling.<\/p>","protected":false},"excerpt":{"rendered":"<p>January 21st news: two new models in the Step-2 series of language models came online yesterday -- Step-2 mini, with a smaller parameter count and higher cost-effectiveness, and a model for content creation. 1AI learned from the official introduction that, compared with the trillion-parameter large model Step-2, Step-2 mini retains more than 80% of the model's performance with only about 3% of the parameters. Meanwhile, Step-2 mini offers faster generation and excellent cost-effectiveness. With a 4,000-token input, Step-2 mini's average first-token latency is only 0.17 seconds. 
For now, you can<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[216,1893],"collection":[],"class_list":["post-27521","post","type-post","status-publish","format-standard","hentry","category-news","tag-216","tag-1893"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/27521","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=27521"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/27521\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=27521"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=27521"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=27521"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=27521"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}