{"id":24349,"date":"2024-12-04T02:53:50","date_gmt":"2024-12-03T18:53:50","guid":{"rendered":"https:\/\/www.1ai.net\/?p=24349"},"modified":"2024-12-03T21:55:09","modified_gmt":"2024-12-03T13:55:09","slug":"%e8%85%be%e8%ae%af%e6%b7%b7%e5%85%83%e5%a4%a7%e6%a8%a1%e5%9e%8b%e4%b8%8a%e7%ba%bf%e5%b9%b6%e5%bc%80%e6%ba%90%e6%96%87%e7%94%9f%e8%a7%86%e9%a2%91%e8%83%bd%e5%8a%9b%ef%bc%9a%e6%94%af%e6%8c%81%e4%b8%ad","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/24349.html","title":{"rendered":"Tencent Hunyuan large model launches and open-sources text-to-video capability: supports bilingual Chinese and English input, 13 billion parameters"},"content":{"rendered":"<p><a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%85%be%e8%ae%af\" title=\"[View articles tagged [Tencent]]\" target=\"_blank\" >Tencent<\/a>\u00a0announced on December 3 that the <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e6%b7%b7%e5%85%83%e5%a4%a7%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged [Hunyuan Large Model]]\" target=\"_blank\" >Hunyuan Large Model<\/a> has launched and open-sourced its text-to-video capability, with 13 billion parameters and support for <strong>bilingual Chinese and English input<\/strong>.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24350\" title=\"f647eed0j00snx7yw00dzd000r600udp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/f647eed0j00snx7yw00dzd000r600udp.jpg\" alt=\"f647eed0j00snx7yw00dzd000r600udp\" width=\"978\" height=\"1093\" \/><\/p>\n<p>Officials claim that Tencent's Hunyuan video generation model can produce <strong>\"ultra-realistic\" high-quality video<\/strong> whose frames do not easily distort; in mirror or reflective scenes, the motion in the reflection stays fully synchronized with the world outside it, and <strong>the reflections of light and shadow largely follow the laws of physics<\/strong>.<\/p>\n<p>According to the introduction, the Tencent Hunyuan 
video generation model <strong>uses the DiT architecture<\/strong> and <strong>adopts a new generation of text encoders to improve semantic compliance<\/strong>, allowing it to better handle depictions of multiple subjects and to render instructions and imagery in finer detail.<\/p>\n<p>In the \"Tencent Yuanbao App\", go to \"AI Applications\" and select \"AI Video\" to apply for a trial.<\/p>\n<p>Tencent said that the open-source release includes the complete model weights, inference code, model algorithms, and more, <strong>free for corporate and individual developers to use and to build ecosystem plugins on<\/strong>.<\/p>","protected":false},"excerpt":{"rendered":"<p>Tencent announced on December 3 that the Hunyuan large model has launched and open-sourced its text-to-video capability, with 13 billion parameters and support for bilingual Chinese and English input. Officials claim the Hunyuan video generation model can produce \"ultra-realistic\" high-quality video whose frames do not easily distort; in mirror or reflective scenes, the motion in the reflection stays fully synchronized with the world outside it, and the reflections of light and shadow largely follow the laws of physics. According to the introduction, the model uses the DiT architecture and adopts a new generation of text encoders to improve semantic compliance, better handling depictions of multiple subjects and rendering instructions and imagery in finer detail. In the \"Tencent Yuanbao App\", go to \"AI Applications\", select \"AI Video\", and apply for a trial. 
Tencent said that the open-source release includes the model<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[322,323],"collection":[],"class_list":["post-24349","post","type-post","status-publish","format-standard","hentry","category-news","tag-322","tag-323"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/24349","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=24349"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/24349\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=24349"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=24349"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=24349"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=24349"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}