{"id":15878,"date":"2024-07-19T08:58:45","date_gmt":"2024-07-19T00:58:45","guid":{"rendered":"https:\/\/www.1ai.net\/?p=15878"},"modified":"2024-07-19T08:58:45","modified_gmt":"2024-07-19T00:58:45","slug":"%e7%a7%91%e5%a4%a7%e8%ae%af%e9%a3%9e%e6%98%9f%e7%81%ab-spark-pro-128k-%e5%a4%a7%e6%a8%a1%e5%9e%8b%e5%bc%80%e6%94%be%e8%b0%83%e7%94%a8%ef%bc%8c%e6%9c%80%e4%bd%8e-0-21-%e5%85%83-%e4%b8%87-tokens","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/15878.html","title":{"rendered":"iFlytek Spark Pro-128K large model is now available for use, with a minimum price of 0.21 yuan per 10,000 tokens"},"content":{"rendered":"<p><a href=\"https:\/\/www.1ai.net\/en\/tag\/%e7%a7%91%e5%a4%a7%e8%ae%af%e9%a3%9e\" title=\"[Sees articles with tags]\" target=\"_blank\" >iFLYTEK<\/a>Announced that iFlytek Spark <a href=\"https:\/\/www.1ai.net\/en\/tag\/api\" title=\"_OTHER ORGANISER\" target=\"_blank\" >API<\/a> Officially released the long context version - Spark Pro-128K <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%a4%a7%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged with [large models]]\" target=\"_blank\" >Large Model<\/a>, with the lowest price being 0.21 yuan\/10,000 tokens.<\/p>\n<p>It is reported that the conversation between users and large models is usually considered to be short-term memory. Once the length of the conversation exceeds its context carrying capacity, the excess part may be forgotten by the model.<\/p>\n<p>Different from traditional text processing models, long text models have more accurate text understanding and generation capabilities and stronger cross-domain migration capabilities. They can understand and generate more information at one time. They are suitable for tasks such as complex conversations, long content creation, and detailed data analysis, and can improve the boundaries of the model&#039;s problem solving.<\/p>\n<p>On June 27, iFlytek Spark V4.0 was released, with a completely new upgrade in long text capabilities. 
In addition, it introduced the industry&#039;s first content-traceability feature to address hallucination in long-document question answering. After Spark answers a question, it explains why it answered that way and which source passages it referenced, so users who lack time to read the full text and are unsure of an answer&#039;s credibility need only check the cited sources.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-15879\" title=\"b119b096-8ea0-4cf1-8410-c5fae25023ae\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/07\/b119b096-8ea0-4cf1-8410-c5fae25023ae.png\" alt=\"b119b096-8ea0-4cf1-8410-c5fae25023ae\" width=\"1080\" height=\"226\" \/><\/p>\n<p>Spark Pro-128K, the Spark model with the longest supported context, is now open to developers via API at 0.21~0.30 yuan per 10,000 tokens, and individual users can claim 2 million tokens of service for free.<\/p>","protected":false},"excerpt":{"rendered":"<p>iFLYTEK announced that the iFlytek Spark API has officially opened its long-context version, the Spark Pro-128K large model, priced as low as 0.21 yuan per 10,000 tokens. A conversation between a user and a large model is generally treated as short-term memory: once the conversation exceeds the model's context window, the excess may be forgotten by the model. Unlike traditional text-processing models, long-text models offer more accurate text understanding and generation and stronger cross-domain transfer; they can take in and produce more information at once, suiting tasks such as complex conversations, long-form content creation, and detailed data analysis, and extending the boundary of problems the model can solve. 
On June 27, iFlytek Spark V4.0 was released.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[1033,216,1116],"collection":[],"class_list":["post-15878","post","type-post","status-publish","format-standard","hentry","category-news","tag-api","tag-216","tag-1116"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/15878","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=15878"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/15878\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=15878"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=15878"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=15878"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=15878"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}