{"id":48698,"date":"2026-01-14T12:17:45","date_gmt":"2026-01-14T04:17:45","guid":{"rendered":"https:\/\/www.1ai.net\/?p=48698"},"modified":"2026-01-14T12:17:45","modified_gmt":"2026-01-14T04:17:45","slug":"nano-banana-pro-%e6%96%b0%e5%af%b9%e6%89%8b%ef%bc%8c%e6%99%ba%e8%b0%b1%e8%81%94%e5%90%88%e5%8d%8e%e4%b8%ba%e5%bc%80%e6%ba%90%e9%a6%96%e4%b8%aa%e5%9b%bd%e4%ba%a7%e8%8a%af%e7%89%87%e8%ae%ad%e7%bb%83","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/48698.html","title":{"rendered":"A New Rival to Nano Banana Pro: Zhipu and Huawei Open-Source the First Multimodal SOTA Model Fully Trained on Domestic Chips"},"content":{"rendered":"<p>January 14 news: <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e6%99%ba%e8%b0%b1\" title=\"[View articles tagged with Zhipu]\" target=\"_blank\" >Zhipu<\/a> today announced that, jointly with <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%8d%8e%e4%b8%ba\" title=\"[View articles tagged with Huawei]\" target=\"_blank\" >Huawei<\/a>, it has <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%bc%80%e6%ba%90\" title=\"[View articles tagged with open source]\" target=\"_blank\" >open-sourced<\/a> its new-generation <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%9b%be%e5%83%8f%e7%94%9f%e6%88%90%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged with image generation model]\" target=\"_blank\" >image generation model<\/a>\u00a0<strong>GLM-Image<\/strong>. The model completes the entire pipeline from data processing to training on Atlas 800T A2 hardware and the MindSpore AI framework, making it <strong>the first SOTA multimodal model to complete full training on a domestically produced chip<\/strong>.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-48699\" title=\"55c15af4j00t8u6k30090d000v9009hp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2026\/01\/55c15af4j00t8u6k30090d000v9009hp.jpg\" alt=\"55c15af4j00t8u6k30090d000v9009hp\" width=\"1125\" height=\"341\" \/><\/p>\n<p>GLM-Image adopts a self-developed \"autoregressive + diffusion decoder\" hybrid architecture, which <strong>achieves joint modeling of image generation and language models<\/strong>.<\/p>\n<p>Per 1AI, GLM-Image's core highlights are as follows:<\/p>\n<ul>\n<li>Structural innovation oriented toward \"cognitive generation\": the \"autoregressive + diffusion decoder\" hybrid structure balances global instruction understanding with local detail, <strong>tackling knowledge-intensive scenes such as posters, PPT slides and science illustrations<\/strong>, a step toward the new generation of \u201cknowledge + reasoning\u201d cognitive generation models represented by Nano Banana Pro.<\/li>\n<li>The first SOTA model fully trained on a domestically produced chip: the model's autoregressive backbone is built on Huawei Ascend Atlas 800T A2 hardware and the MindSpore AI framework, covering the full pipeline from data preprocessing to large-scale training and validating the feasibility of training frontier models on a domestic compute stack.<\/li>\n<li>Open-source SOTA in text rendering: it ranks first on the CVTG-2K (complex visual text generation) and LongText-Bench (long-text rendering) leaderboards, and <strong>is especially strong at Chinese text generation<\/strong>.<\/li>\n<li>Cost-effectiveness and speed optimization: in API mode, <strong>generating one image costs $0.1<\/strong>; a speed-optimized version is coming soon.<\/li>\n<\/ul>\n<p>According to Zhipu, GLM-Image adapts to multiple resolutions: with an improved tokenizer strategy, it natively supports generating images at arbitrary aspect ratios from 1024 x 1024 up to 2048 x 2048 without retraining.<\/p>\n<p>On authoritative text-rendering leaderboards, GLM-Image reaches the <strong>open-source SOTA level<\/strong>.<\/p>\n<p>GLM-Image performs as follows on real-world complex graphic tasks:<\/p>\n<p>Scene I: Science illustrations<\/p>\n<p>GLM-Image is particularly good at drawing flowcharts and science illustrations with complex logic and narrative.<\/p>\n<p>Scene II: Multi-panel comics<\/p>\n<p>When generating multi-panel images such as comic strips and comics, GLM-Image keeps style and subject consistent and guarantees the accuracy of text across panels.<\/p>\n<p>Scene III: Social media graphic covers<\/p>\n<p>GLM-Image can create complex composite images such as social media covers and illustrated content, giving creators more freedom.<\/p>\n<p>Scene IV: Business posters<\/p>\n<p>GLM-Image can generate design-rich, text-embedded holiday posters and commercial promotional images.<\/p>\n<p>Scene V: Photorealistic photography<\/p>\n<p>Beyond text rendering, GLM-Image is also very good at generating photorealistic portraits, pets, landscapes and still-life subjects of all kinds.<\/p>\n<p>Per 1AI, the GLM-Image demo and open-source addresses are as follows:<\/p>\n<ul>\n<li>Online demo: https:\/\/bigmodel.cn\/trialcenter\/modeltrial\/image<\/li>\n<li>API access: https:\/\/docs.bigmodel.cn\/cn\/guide\/models\/image-gender\/glm-image<\/li>\n<li>GitHub: https:\/\/github.com\/zai-org\/GLM-Image<\/li>\n<li>Hugging Face: https:\/\/huggingface.co\/zai-org\/GLM-Image<\/li>\n<li>ZhipuAI\/GLM-Image<\/li>\n<\/ul>","protected":false},"excerpt":{"rendered":"<p>January 14 news: Zhipu today announced that, jointly with Huawei, it has open-sourced its new-generation image generation model GLM-Image, which completes the full pipeline from data to training on Atlas 800T A2 hardware and the MindSpore AI framework, making it the first SOTA multimodal model to complete full training on a domestic chip. GLM-Image achieves joint modeling of image generation and language models through a self-developed \"autoregressive + diffusion decoder\" hybrid architecture. 
Per 1AI, GLM-Image's core highlights are as follows: structural innovation and technological exploration toward \u201ccognitive generation\u201d, using an \u201cautoregressive + diffusion decoder\u201d hybrid structure that balances global instruction understanding with local detail, overcoming<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[1117,4881,219,2680],"collection":[],"class_list":["post-48698","post","type-post","status-publish","format-standard","hentry","category-news","tag-1117","tag-4881","tag-219","tag-2680"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/48698","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=48698"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/48698\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=48698"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=48698"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=48698"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=48698"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}