{"id":32933,"date":"2025-04-14T10:37:53","date_gmt":"2025-04-14T02:37:53","guid":{"rendered":"https:\/\/www.1ai.net\/?p=32933"},"modified":"2025-04-14T10:37:53","modified_gmt":"2025-04-14T02:37:53","slug":"%e8%b0%b7%e6%ad%8c%e8%ae%a1%e5%88%92%e8%9e%8d%e5%90%88-gemini-%e4%b8%8e-veo-%e6%a8%a1%e5%9e%8b%ef%bc%8c%e6%89%93%e9%80%a0%e5%85%a8%e8%83%bd-ai%e5%8a%a9%e6%89%8b","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/32933.html","title":{"rendered":"Google plans to fuse Gemini and Veo models to create an all-in-one AI assistant"},"content":{"rendered":"<p><a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%b0%b7%e6%ad%8c\" title=\"[View articles tagged with [Google]]\" target=\"_blank\" >Google<\/a> DeepMind CEO Demis Hassabis revealed on Possible, a podcast co-hosted by LinkedIn co-founder Reid Hoffman, that Google plans to fuse its <a href=\"https:\/\/www.1ai.net\/en\/tag\/gemini\" title=\"[View articles tagged with [Gemini]]\" target=\"_blank\" >Gemini<\/a> AI models with its <a href=\"https:\/\/www.1ai.net\/en\/tag\/veo\" title=\"[View articles tagged with [Veo]]\" target=\"_blank\" >Veo<\/a> video generation models to enhance Gemini's understanding of the physical world.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-32934\" title=\"26f23815j00suoslu005cd000o800d5p\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/04\/26f23815j00suoslu005cd000o800d5p.jpg\" alt=\"26f23815j00suoslu005cd000o800d5p\" width=\"872\" height=\"473\" \/><\/p>\n<p>\"We've built Gemini, the base model, as a multimodal model from the beginning,\" Hassabis said, <strong>\"because we have a vision of building a universal digital assistant that can actually help you in the real world.\"<\/strong><\/p>\n<p>At present, <strong>the entire AI industry is gradually moving toward \"all-purpose\" models<\/strong> capable of understanding and integrating multiple media forms. 
Google's latest Gemini model can generate not only images and text but also audio, while OpenAI's default model in ChatGPT can now create images, including Hayao Miyazaki-style artwork. Amazon has also announced plans to release an \"any-to-any\" model later this year.<\/p>\n<p>According to 1AI, these \"all-purpose\" models require large amounts of training data, including images, video, audio, and text. Hassabis hinted that Veo's video data comes primarily from Google's YouTube platform, saying, \"By watching tons of YouTube videos, Veo 2 is able to understand the physical laws of the world.\" Google previously told TechCrunch that its models may be trained on \"some\" YouTube content under a deal with YouTube creators. The company reportedly expanded parts of its terms of service last year to allow more data to be used to train its AI models.<\/p>","protected":false},"excerpt":{"rendered":"<p>Google DeepMind CEO Demis Hassabis revealed on the podcast Possible, co-hosted by LinkedIn co-founder Reid Hoffman, that Google plans to fuse its Gemini AI model with the Veo video generation model as a way to improve Gemini's understanding of the physical world. 
Hassabis said, \"We've been building this base model of Gemini as a multimodal model from the beginning because we have a vision of building a universal digital assistant that can actually help you in the real world.\" Currently, the AI industry as a whole is moving toward \"all-purpose\" models that are<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[448,436,2597,281],"collection":[],"class_list":["post-32933","post","type-post","status-publish","format-standard","hentry","category-news","tag-ai","tag-gemini","tag-veo","tag-281"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/32933","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=32933"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/32933\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=32933"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=32933"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=32933"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=32933"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}