{"id":30415,"date":"2025-03-10T16:26:11","date_gmt":"2025-03-10T08:26:11","guid":{"rendered":"https:\/\/www.1ai.net\/?p=30415"},"modified":"2025-03-10T16:26:11","modified_gmt":"2025-03-10T08:26:11","slug":"%e9%b8%bf%e6%b5%b7%e9%a6%96%e4%b8%aa%e5%a4%a7%e8%af%ad%e8%a8%80%e6%a8%a1%e5%9e%8b-foxbrain-%e5%8f%91%e5%b8%83%ef%bc%9a%e5%85%b7%e5%a4%87%e6%8e%a8%e7%90%86%e8%83%bd%e5%8a%9b%ef%bc%8c%e6%9c%aa%e6%9d%a5","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/30415.html","title":{"rendered":"Hon Hai's First Big Language Model FoxBrain Released: Reasoning Capabilities, Future Plans for Partial Open Source"},"content":{"rendered":"<p>March 10 (Bloomberg) -- According to Reuters.<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e9%b8%bf%e6%b5%b7\" title=\"[Sees articles with labels]\" target=\"_blank\" >Hon Hai Precision Industry Company, Taiwan technology company<\/a>today announced the launch of the first<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%a4%a7%e8%af%ad%e8%a8%80%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged with [large language model]]\" target=\"_blank\" >Large Language Model<\/a>\u201c<a href=\"https:\/\/www.1ai.net\/en\/tag\/foxbrain\" title=\"[See articles with [Foxbrain] label]\" target=\"_blank\" >FoxBrain<\/a>\" and plans to use the technology to optimize manufacturing and supply chain management.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-30416\" title=\"295efd57j00sswf9y008pd000np00rep\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/03\/295efd57j00sswf9y008pd000np00rep.jpg\" alt=\"295efd57j00sswf9y008pd000np00rep\" width=\"853\" height=\"986\" \/><\/p>\n<p>Hon Hai said in a statement that FoxBrain is operated by the\u00a0<strong>120 NVIDIA H100 GPUs<\/strong><strong>\u00a0<\/strong>The training is completed with a training cycle of about four weeks. 
Hon Hai assembles Apple's iPhone, builds NVIDIA's AI servers, and is the world's largest contract electronics manufacturer.<\/p>\n<p>The model is based on Meta's Llama 3.1 architecture and has been specifically optimized for <strong>Traditional Chinese and local language styles<\/strong>. Hon Hai says it is the region's first large language model with reasoning capabilities. The company claims that although FoxBrain's performance is <strong>slightly inferior to DeepSeek's distillation model<\/strong>, it is <strong>close to world-leading levels<\/strong> overall.<\/p>\n<p>FoxBrain is primarily intended for internal use and supports <strong>data analysis, decision support, document collaboration, mathematical operations, reasoning and problem solving, and code generation<\/strong>.<\/p>\n<p>Hon Hai plans to work with technology companies to expand the model's applications and to partially <strong>open-source<\/strong> it, driving artificial intelligence in manufacturing, supply chain management and intelligent decision-making.<\/p>\n<p>NVIDIA also supported FoxBrain's training by providing computing power through its supercomputer \"Taipei-1\" in Kaohsiung, as well as technical guidance during the training process.<\/p>\n<p>Note: \"Taipei-1\" is the largest supercomputer in the region, operated by NVIDIA in Kaohsiung.<\/p>","protected":false},"excerpt":{"rendered":"<p>March 10 news: According to Reuters, Hon Hai today announced the launch of its first large language model, \"FoxBrain\", and plans to use the technology to optimize manufacturing and supply chain management. Hon Hai said in a statement that FoxBrain was trained on 120 NVIDIA H100 GPUs, with a training cycle of about four weeks. Hon Hai assembles Apple's iPhone, builds NVIDIA's AI servers, and is the world's largest contract electronics manufacturer. 
The model is based on Meta's Llama 3.1 architecture and has been specifically optimized for Traditional Chinese and local language styles. Hon Hai says it is the first local large language model with reasoning capabilities. It says that although FoxBrain is slightly less powerful than<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[5939,706,5938],"collection":[],"class_list":["post-30415","post","type-post","status-publish","format-standard","hentry","category-news","tag-foxbrain","tag-706","tag-5938"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/30415","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=30415"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/30415\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=30415"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=30415"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=30415"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=30415"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}