{"id":30183,"date":"2025-03-06T18:23:36","date_gmt":"2025-03-06T10:23:36","guid":{"rendered":"https:\/\/www.1ai.net\/?p=30183"},"modified":"2025-03-06T18:23:36","modified_gmt":"2025-03-06T10:23:36","slug":"%e8%85%be%e8%ae%af%e6%b7%b7%e5%85%83%e5%8f%91%e5%b8%83%e5%b9%b6%e5%bc%80%e6%ba%90%e5%9b%be%e7%94%9f%e8%a7%86%e9%a2%91%e6%a8%a1%e5%9e%8b%ef%bc%9a%e5%8f%af%e7%94%9f%e6%88%90-5-%e7%a7%92%e7%9f%ad","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/30183.html","title":{"rendered":"Tencent Hunyuan releases and open-sources image-to-video model: can generate 5-second short videos with automatic background sound effects"},"content":{"rendered":"<p>March 6 news - 1AI learned from the official <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%85%be%e8%ae%af%e6%b7%b7%e5%85%83\" title=\"[View articles tagged with Tencent Hunyuan]\" target=\"_blank\" >Tencent Hunyuan<\/a> WeChat account that <strong>Tencent Hunyuan has released an <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%9b%be%e7%94%9f%e8%a7%86%e9%a2%91%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged with image-to-video model]\" target=\"_blank\" >image-to-video model<\/a> and made it <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%bc%80%e6%ba%90\" title=\"[View articles tagged with open source]\" target=\"_blank\" >open source<\/a>, while also launching lip-syncing and motion-driven features and supporting the generation of background sound effects and 2K high-quality video.<\/strong><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-30184\" title=\"dc0432fdj00ssp63d00jnd000u000flp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/03\/dc0432fdj00ssp63d00jnd000u000flp.jpg\" alt=\"dc0432fdj00ssp63d00jnd000u000flp\" width=\"1080\" height=\"561\" \/><\/p>\n<p>With the image-to-video capability, users only need to upload a picture and briefly describe how they want the picture to move, how the camera should be directed, and so on, and Hunyuan can make the picture move according to the 
requirements.<strong>It turns the picture into a 5-second short video with automatically matched background sound effects.<\/strong> In addition, by uploading a picture of a character and entering the text or audio you wish to \"lip-sync\", the character in the picture can \"talk\" or \"sing\"; using the \"motion-driven\" capability, you can also generate a dance video of the same character with one click.<\/p>\n<p>Currently, users can experience it on the official Hunyuan AI Video website (<a href=\"https:\/\/www.1ai.net\/en\/26196.html\/\">https:\/\/www.1ai.net\/26196.html<\/a>), and enterprises and developers can apply to use the API on Tencent Cloud.<\/p>\n<p>The open-source image-to-video model is a continuation of Hunyuan's open-source work on its text-to-video model, keeping the same total of 13 billion model parameters.<strong>The model is suitable for many types of characters and scenes, including realistic video production, anime characters and even CGI character production.<\/strong><\/p>\n<p>The open-source release includes model weights, inference code and LoRA training code, allowing developers to train proprietary LoRA and other derived models based on Hunyuan. 
Currently, it can be downloaded from GitHub, Hugging Face and other mainstream developer communities.<\/p>\n<p>The Hunyuan open-source technical report discloses that the Hunyuan video generation model is flexibly scalable, with image-to-video and text-to-video generation pre-trained on the same dataset.<strong>While maintaining ultra-realistic picture quality, smooth rendering of large-scale motion, and native camera switching, the model can capture rich visual and semantic information and combine multiple input conditions such as image, text, audio and pose to achieve multi-dimensional control over the generated video<\/strong>.<\/p>\n<p>At present, the Hunyuan open-source model series fully covers text, image, video, 3D generation and other modalities, and has earned the attention of developers and more than 23,000 stars on GitHub.<\/p>\n<p><strong>Attachment: Hunyuan image-to-video open-source links\u00a0<\/strong><\/p>\n<p><strong>Github:<\/strong>https:\/\/github.com\/Tencent\/HunyuanVideo-I2V<\/p>\n<p><strong>Huggingface:<\/strong>https:\/\/huggingface.co\/tencent\/HunyuanVideo-I2V<\/p>","protected":false},"excerpt":{"rendered":"<p>March 6 news, 1AI learned from the official Tencent Hunyuan WeChat account that Tencent Hunyuan released an image-to-video model and open-sourced it, while also launching lip-syncing and motion-driven features and supporting the generation of background sound effects and 2K high-quality video. With the image-to-video capability, users only need to upload a picture and briefly describe how they want the picture to move, how the camera should be directed, and so on, and Hunyuan can make the picture move accordingly and turn it into a 5-second short video with automatically matched background sound effects. 
In addition, by uploading a picture of a character and entering the text or audio you want to \"lip-sync\", the character in the picture can \"talk\" or \"sing\"; using the \"motion-driven\" capability, you can also generate a dance video of the same character with one click. Currently, users can access the official website of Hunyuan AI Video (https:\/\/www).<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[1711,219,2657],"collection":[],"class_list":["post-30183","post","type-post","status-publish","format-standard","hentry","category-news","tag-1711","tag-219","tag-2657"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/30183","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=30183"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/30183\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=30183"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=30183"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=30183"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=30183"}],"curies":[{"name":"wp","href":"
https:\/\/api.w.org\/{rel}","templated":true}]}}