Mistral's new model Codestral Mamba is faster and can process text twice as long as GPT-4o

2024-07-18

Recently, French AI startup Mistral released a new coding model, Codestral Mamba. The model is not only fast but can also handle longer code, helping programmers and developers work more efficiently.
Mistral has already built a strong reputation in open-source AI, and Codestral Mamba is an eye-catching addition.

(Image: https://www.1ai.net/wp-content/uploads/2024/07/get-539.jpg)

Codestral Mamba is based on a new architecture called "Mamba", which is more efficient than the traditional transformer architecture. The design lets the model return results faster on complex tasks, and it can handle input text up to 256,000 tokens long.

Mistral tested the model on inputs twice as long as OpenAI's GPT-4o can accept (GPT-4o's context window is 128,000 tokens by comparison). Codestral Mamba will be available for free on Mistral's la Plateforme API.

In testing, Codestral Mamba performed well on programming tasks, surpassing many competitors, including open-source models such as CodeLlama and DeepSeek. The model is particularly well suited to local coding projects, giving developers a smoother coding experience.

Alongside Codestral Mamba, Mistral also launched another model, Mathstral, which focuses on mathematical reasoning and scientific exploration. It is designed to help users solve complex mathematical problems and is especially suited to work in STEM fields. Mathstral is released under the open-source Apache 2.0 license, so users can use and modify it freely.

Mistral's progress stems not only from technological breakthroughs but also from the funding it has attracted. Mistral recently raised $640 million at a valuation of nearly $6 billion, with investment from major companies such as Microsoft and IBM.
It is foreseeable that Mistral will continue to play an important role in the AI field.
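Since the article notes that Codestral Mamba will be available for free on Mistral's la Plateforme API, a minimal sketch of calling it over HTTP may be useful. This assumes the standard chat-completions endpoint at `api.mistral.ai/v1/chat/completions` and uses `open-codestral-mamba` as the model id; both should be confirmed against Mistral's official API documentation before use.

```python
import json
import os
import urllib.request

# Assumed endpoint and model id; verify against Mistral's API docs.
API_URL = "https://api.mistral.ai/v1/chat/completions"
MODEL = "open-codestral-mamba"


def build_payload(prompt: str, model: str = MODEL) -> dict:
    """Build a chat-completions request body for la Plateforme."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def complete(prompt: str) -> str:
    """Send the prompt and return the model's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Requires a valid MISTRAL_API_KEY in the environment.
    print(complete("Write a Python function that reverses a string."))
```

The helper separates payload construction from the network call, so the request shape can be inspected or adapted (e.g. adding `temperature` or `max_tokens`) without touching the transport code.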