{"id":17909,"date":"2024-08-14T09:41:59","date_gmt":"2024-08-14T01:41:59","guid":{"rendered":"https:\/\/www.1ai.net\/?p=17909"},"modified":"2024-08-14T09:41:59","modified_gmt":"2024-08-14T01:41:59","slug":"%e6%99%ba%e8%b0%b1ai%ef%bc%9aglm-4-long-api%e4%b8%8a%e7%ba%bf-%e8%be%93%e5%85%a5%e3%80%81%e8%be%93%e5%87%ba%e4%bb%b7%e6%a0%bc0-001%e5%85%83-%e5%8d%83tokens","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/17909.html","title":{"rendered":"Zhipu AI: GLM-4-Long API is launched with input and output price of 0.001 yuan\/thousand tokens"},"content":{"rendered":"<p><a href=\"https:\/\/www.1ai.net\/en\/tag\/%e6%99%ba%e8%b0%b1ai\" title=\"[See articles tagged Zhipu AI]\" target=\"_blank\" >Zhipu AI<\/a> announced that GLM-4-Long, an LLM that supports ultra-long context lengths, has been launched on the open platform bigmodel.cn. The model is designed for processing ultra-long texts and can read the equivalent of two copies of &quot;Dream of the Red Chamber&quot; or 125 papers at a time. It is widely used in scenarios such as translating long documents, analyzing financial reports holistically, extracting key information, and building chatbots with ultra-long memory.<\/p>\n<p>GLM-4-Long has a significant price advantage, with input and output prices as low as 0.001 yuan\/thousand tokens, providing an economical and efficient solution for enterprises and developers. 
Through successive technology iterations, the model has pursued industry-leading context capabilities, evolving from an initial 2K context to the current 1M context length and incorporating a large body of research on long-text processing.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-17910\" title=\"859cfd2fj00si6pzn000cd000fq008hm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/08\/859cfd2fj00si6pzn000cd000fq008hm.jpg\" alt=\"859cfd2fj00si6pzn000cd000fq008hm\" width=\"566\" height=\"305\" \/><\/p>\n<p>In the &quot;needle in a haystack&quot; test, GLM-4-Long retrieved information without loss, demonstrating strong performance at a context length of 1M. It also performed well in practical application tests such as financial report reading, paper summarization, and novel reading, accurately extracting and analyzing key information.<\/p>\n<p>GLM-4-Long brings significant advantages to enterprises, including deeper conversation understanding, complex document processing, more coherent content generation, and stronger data analysis capabilities. These capabilities are particularly important in areas such as customer service, law, finance, scientific research, marketing, advertising, and big data analysis.<\/p>\n<p><strong>Interface documentation:<\/strong><\/p>\n<p>https:\/\/bigmodel.cn\/dev\/api#glm-4<\/p>\n<p><strong>Experience Center:<\/strong><\/p>\n<p>https:\/\/bigmodel.cn\/console\/trialcenter<\/p>","protected":false},"excerpt":{"rendered":"<p>Zhipu AI announced that GLM-4-Long, an LLM that supports ultra-long context lengths, is now available on the open platform bigmodel.cn. 
Designed for processing ultra-long text, the model can read text equivalent to two copies of Dream of the Red Chamber or 125 papers at a time, and is widely used in scenarios such as translating long documents, analyzing financial reports holistically, extracting key information, and building chatbots with ultra-long memory. GLM-4-Long has a significant price advantage, with input and output prices as low as 0.001 yuan\/thousand tokens, providing a cost-effective solution for enterprises and developers. The model has continuously pursued leading context capabilities in technology iterations, evolving from the initial 2K context to the current 1M context length, integrating a large number of<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[379],"collection":[],"class_list":["post-17909","post","type-post","status-publish","format-standard","hentry","category-news","tag-ai"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/17909","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=17909"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/17909\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=17909"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=17909"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=17909"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=17909"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}