{"id":30743,"date":"2025-03-15T11:27:49","date_gmt":"2025-03-15T03:27:49","guid":{"rendered":"https:\/\/www.1ai.net\/?p=30743"},"modified":"2025-03-15T11:27:49","modified_gmt":"2025-03-15T03:27:49","slug":"%e5%8a%a0%e6%8b%bf%e5%a4%a7%e5%88%9d%e5%88%9b%e5%85%ac%e5%8f%b8%e6%8e%a8%e5%87%ba-command-a-%e8%bd%bb%e9%87%8f%e7%ba%a7-ai-%e6%a8%a1%e5%9e%8b%ef%bc%8c%e5%8f%b7%e7%a7%b0%e4%bb%85%e9%9c%80%e4%b8%a4","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/30743.html","title":{"rendered":"Canadian Startup Launches Command A Lightweight AI Model, Claims to Require Only Two NVIDIA A100 \/ H100 GPUs for Deployment"},"content":{"rendered":"<p><a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%8a%a0%e6%8b%bf%e5%a4%a7\" title=\"[See articles with [Canada] labels]\" target=\"_blank\" >Canada<\/a> AI <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%88%9d%e5%88%9b%e5%85%ac%e5%8f%b8\" title=\"[Sees articles with labels]\" target=\"_blank\" >Startups<\/a> Cohere has released a new product called \"<a href=\"https:\/\/www.1ai.net\/en\/tag\/command-a\" title=\"_Other Organiser\" target=\"_blank\" >Command A<\/a>&quot;of <a href=\"https:\/\/www.1ai.net\/en\/tag\/ai%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged with [AI models]]\" target=\"_blank\" >AI Models<\/a>The model focuses on lightweight applications and claims to require only two NVIDIA A100 or H100 GPUs for easy deployment, claiming that \"performance is comparable to GPT-4o\" and that it achieves \"maximum performance with minimum hardware\".<\/p>\n<p><img decoding=\"async\" id=\"netease1742009186715\" contenteditable=\"false\" src=\"http:\/\/dingyue.ws.126.net\/2025\/0315\/823b5f0ej00st5ax0001ud000v900qep.jpg\" \/><\/p>\n<p>Cohere said Command A is designed specifically for small and medium-sized business environments.<strong>It supports 256k context lengths and 23 languages.<\/strong>For comparison, other competitors' \"similar models\" require 32 GPUs to deploy.<\/p>\n<p>In the performance test, 
the company says <strong>Command A can output up to 156 tokens per second<\/strong>. Command A also performs strongly in benchmarks covering instruction following, SQL, agents, and tool-use tasks.<\/p>\n<p>Citing this performance data, Cohere argues that \"oversized\" large language models can suffer from serious output latency; for users who simply want to reach the right answer quickly, Command A is a good choice.<\/p>\n<p>Cohere has published Command A on the Hugging Face platform (<a href=\"https:\/\/huggingface.co\/CohereForAI\/c4ai-command-a-03-2025?ref=cohere-ai.ghost.io\">click here to visit<\/a>), open for academic use, and plans to make it available on other cloud service platforms in the future.<\/p>","protected":false},"excerpt":{"rendered":"<p>Canadian AI startup Cohere has released an AI model called \"Command A\". The model focuses on lightweight applications, can be easily deployed with just two NVIDIA A100 or H100 GPUs, and claims \"performance comparable to GPT-4o\" with \"maximum performance with minimum hardware\". Designed specifically for SMB environments, Command A supports a 256k context length and 23 languages; by comparison, competitors' \"comparable models\" require 32 GPUs to deploy.
In performance tests, Command A could output up to 156 tokens per second, according to the company.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[167,5982,309,2896],"collection":[],"class_list":["post-30743","post","type-post","status-publish","format-standard","hentry","category-news","tag-ai","tag-command-a","tag-309","tag-2896"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/30743","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=30743"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/30743\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=30743"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=30743"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=30743"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=30743"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}