{"id":22246,"date":"2024-10-30T09:48:55","date_gmt":"2024-10-30T01:48:55","guid":{"rendered":"https:\/\/www.1ai.net\/?p=22246"},"modified":"2024-10-30T09:48:55","modified_gmt":"2024-10-30T01:48:55","slug":"%e8%b0%b7%e6%ad%8c%e5%8f%91%e5%b8%83%e6%97%a5%e8%af%ad%e7%89%88gemma-ai%e6%a8%a1%e5%9e%8b%ef%bc%8c%e4%bb%8520%e4%ba%bf%e5%8f%82%e6%95%b0%e3%80%81%e7%a7%bb%e5%8a%a8%e8%ae%be%e5%a4%87%e4%b9%9f%e8%83%bd","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/22246.html","title":{"rendered":"Google releases Japanese-language version of Gemma AI model: just 2 billion parameters, runs easily on mobile devices!"},"content":{"rendered":"<p>At the <a href=\"https:\/\/www.1ai.net\/en\/tag\/gemma\" title=\"View articles tagged with Gemma\" target=\"_blank\">Gemma<\/a> Developer Day recently held in Tokyo, <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%b0%b7%e6%ad%8c\" title=\"View articles tagged with Google\" target=\"_blank\">Google<\/a> officially launched a new Japanese-language version of the Gemma AI model. The model's performance is comparable to GPT-3.5, yet it has a mere 2 billion parameters, making it small enough to run on mobile devices.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-22247\" title=\"71e64ffaj00sm5boa000wd000rs00fgm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/10\/71e64ffaj00sm5boa000wd000rs00fgm.jpg\" alt=\"71e64ffaj00sm5boa000wd000rs00fgm\" width=\"1000\" height=\"556\" \/><\/p>\n<p>The Gemma model in this release excels at Japanese processing while maintaining its capabilities in English. This is especially important for small models, which can suffer from \"catastrophic forgetting\" when fine-tuned for a new language, where newly learned knowledge overwrites previously learned information. 
But Gemma successfully overcame this challenge, demonstrating strong language processing capabilities in both languages.<\/p>\n<p>What's more, Google has immediately released the model's weights, training materials, and examples through platforms such as Kaggle and Hugging Face to help developers get started faster. This means developers can easily run the model locally, which opens up more possibilities, especially for edge computing applications.<\/p>\n<p>To encourage more international developers, Google has also launched a contest called \"Unlocking Global Communication with Gemma\" with $150,000 in prizes, designed to help developers adapt Gemma models to local languages. Projects are already underway in Arabic, Vietnamese, and Zulu. In India, developers are working on the \"Navarasa\" project, which plans to optimize the model to support 12 Indian languages, while another team is working on fine-tuning it to support Korean dialects.<\/p>\n<p>The Gemma 2 family of models was introduced to achieve higher performance with fewer parameters. Compared with similar models from other companies, such as Meta, Gemma 2 performs just as well, and in some cases the 2-billion-parameter Gemma 2 is able to outperform models with 70 billion parameters, such as LLaMA-2.<\/p>\n<p>Developers and researchers can access the Gemma-2-2B model and other Gemma models for free through Hugging Face, Google AI Studio, and Google Colab, and can also find them in the Vertex AI Model Garden.<\/p>\n<p>Official Portal: https:\/\/aistudio.google.com\/app\/prompts\/new_chat?model=gemma-2-2b-it<\/p>\n<p>Hugging Face: https:\/\/huggingface.co\/google<\/p>\n<p>Google Colab: https:\/\/ai.google.dev\/gemma\/docs\/keras_inference?hl=de<\/p>","protected":false},"excerpt":{"rendered":"<p>At the recent Gemma Developer Day in Tokyo, Google officially launched a new Japanese version of the Gemma AI model. 
The model's performance rivals that of GPT-3.5, but it has a mere 2 billion parameters, making it very small and suitable for running on mobile devices. The Gemma model in this release excels in Japanese language processing while maintaining its capabilities in English. This is especially important for small models, which can face the problem of \"catastrophic forgetting\" when fine-tuned for a new language, i.e., newly learned knowledge overwrites previously learned information. But Gemma has managed to overcome this problem, demonstrating strong language processing capabilities. What's more, Google has also provided a new example of this through the Kaggl<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[167,1289,281],"collection":[],"class_list":["post-22246","post","type-post","status-publish","format-standard","hentry","category-news","tag-ai","tag-gemma","tag-281"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/22246","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=22246"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/22246\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=22246"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=22246"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=22246"},{"taxonomy":"collection","embeddable":
true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=22246"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}