{"id":27290,"date":"2025-01-18T11:25:55","date_gmt":"2025-01-18T03:25:55","guid":{"rendered":"https:\/\/www.1ai.net\/?p=27290"},"modified":"2025-01-18T11:25:55","modified_gmt":"2025-01-18T03:25:55","slug":"mistral-ai-%e6%97%97%e4%b8%8b-codestral-%e6%a8%a1%e5%9e%8b%e8%8e%b7-25-01-%e6%9b%b4%e6%96%b0%ef%bc%9a%e6%94%af%e6%8c%81%e8%b6%85-80-%e7%a7%8d%e7%bc%96%e7%a8%8b%e8%af%ad%e8%a8%80%e3%80%81%e4%b8%8a","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/27290.html","title":{"rendered":"Mistral AI's Codestral Model Gets 25.01 Update: Support for Over 80 Programming Languages, Context Length Increased to 256,000 Tokens"},"content":{"rendered":"<p>Recently, <a href=\"https:\/\/www.1ai.net\/en\/tag\/mistral-ai\" title=\"[See articles with [Mistral AI] label]\" target=\"_blank\" >Mistral AI<\/a>\u00a0announced the release of version 25.01 of its Codestral programming <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e6%a8%a1%e5%9e%8b\" title=\"_Other Organiser\" target=\"_blank\" >model<\/a>, with officials emphasizing major improvements in context-length handling and code-completion efficiency.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-27291\" title=\"40cb1303j00sq9lh8004ld000v900ckp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/01\/40cb1303j00sq9lh8004ld000v900ckp.jpg\" alt=\"40cb1303j00sq9lh8004ld000v900ckp\" width=\"1125\" height=\"452\" \/><\/p>\n<p>Specifically, Codestral 25.01\u00a0<strong>increases the model's supported context length to 256,000 tokens<\/strong>, which is said to handle large-scale projects and complex code-generation needs effectively.<strong> In addition, the new version supports more than 80 programming languages, including mainstream languages such as Python, Java, and JavaScript.<\/strong> The model can also generate accurate code in application scenarios such as SQL and Bash. 
Tests show that the model achieves an average accuracy of 71.4% in cross-language HumanEval tests.<\/p>\n<p><a href=\"https:\/\/www.1ai.net\/en\/tag\/mistral\" title=\"[See article with [Mistral] label]\" target=\"_blank\" >Mistral<\/a> AI claims that Codestral 25.01 set several benchmark records in the Fill-In-the-Middle (FIM) task, notably achieving an average Pass@1 rate of 95.3% in FIM tests, demonstrating the model's strength in single-line code generation.<\/p>","protected":false},"excerpt":{"rendered":"<p>Recently, Mistral AI announced the release of version 25.01 of its Codestral programming model, with officials emphasizing major improvements in context-length handling and code-completion efficiency. Specifically, Codestral 25.01 increases the model's supported context length to 256,000 tokens, which is said to handle large-scale projects and complex code generation effectively. In addition, the new version supports more than 80 programming languages, covering mainstream languages such as Python, Java, and JavaScript, and can accurately generate code for applications such as SQL and Bash. 
Tests show that the model achieves an average accuracy of 71.4% in cross-language HumanEval tests.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[559,1375,1489],"collection":[],"class_list":["post-27290","post","type-post","status-publish","format-standard","hentry","category-news","tag-mistral","tag-mistral-ai","tag-1489"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/27290","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=27290"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/27290\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=27290"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=27290"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=27290"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=27290"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}