{"id":12646,"date":"2024-06-08T09:18:37","date_gmt":"2024-06-08T01:18:37","guid":{"rendered":"https:\/\/www.1ai.net\/?p=12646"},"modified":"2024-06-08T09:18:37","modified_gmt":"2024-06-08T01:18:37","slug":"%e8%85%be%e8%ae%af%e8%81%94%e5%90%88%e4%b8%ad%e5%b1%b1%e5%a4%a7%e5%ad%a6%e3%80%81%e6%b8%af%e7%a7%91%e5%a4%a7%e6%8e%a8%e5%87%ba%e5%9b%be%e7%94%9f%e8%a7%86%e9%a2%91%e6%a8%a1%e5%9e%8bfollow-your","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/12646.html","title":{"rendered":"Tencent, Sun Yat-sen University and Hong Kong University of Science and Technology jointly launched the image-generated video model &quot;Follow-Your-Pose-v2&quot;"},"content":{"rendered":"<p data-vmark=\"0f1a\"><a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%85%be%e8%ae%af\" title=\"[View articles tagged with [Tencent]]\" target=\"_blank\" >Tencent<\/a>The Hunyuan team, Sun Yat-sen University and the Hong Kong University of Science and Technology jointly launched a new<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%9b%be%e7%94%9f%e8%a7%86%e9%a2%91%e6%a8%a1%e5%9e%8b\" title=\"[See articles with tags]\" target=\"_blank\" >Image video model<\/a>&quot;Follow-Your-Pose-v2&quot;, the related results have been published on arxiv (attached <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2406.03035\" target=\"_blank\" rel=\"noopener\"><span class=\"link-text-start-with-http\">DOI:10.48550\/arXiv.2406.03035<\/span><\/a>).<\/p>\n<p data-vmark=\"65d4\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-12647\" title=\"659b2cb9-db60-4290-8adb-b7ac7cb7dee8\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/659b2cb9-db60-4290-8adb-b7ac7cb7dee8.png\" alt=\"659b2cb9-db60-4290-8adb-b7ac7cb7dee8\" width=\"727\" height=\"928\" \/><\/p>\n<p data-vmark=\"769f\">According to reports, &quot;Follow-Your-Pose-v2&quot; only needs to input a picture of a person and an action video, and it can make the person in the picture move along with the action in the video, and the generated video can be up to 10 seconds long.<\/p>\n<p data-vmark=\"f49a\">Compared with the previously launched model, &quot;Follow-Your-Pose-v2&quot; can support multi-person video action generation with less inference time.<\/p>\n<p data-vmark=\"3cd5\">In addition, the model has strong generalization capabilities and can generate high-quality videos regardless of the age and clothing of the input character, how cluttered the background is, or how complex the movements in the action video are.<\/p>\n<p data-vmark=\"943f\">Tencent has released an acceleration library for the Tencent Hunyuan Text Generator open source model (Hunyuan DiT), claiming to greatly improve reasoning efficiency and shorten the image generation time by 75%.<\/p>\n<p data-vmark=\"a809\">Officials said that the usage threshold of the Hunyuan DiT model has also been greatly lowered, and users can use the Tencent Hunyuan Wenshengtu model capabilities based on the ComfyUI graphical interface.<\/p>","protected":false},"excerpt":{"rendered":"<p>Tencent hybrid team jointly with Sun Yat-sen University, the Hong Kong University of Science and Technology jointly introduced a new graphical video model \"Follow-Your-Pose-v2\", the relevant results have been published in arxiv (with DOI: 10.48550\/arXiv.2406.03035). 