{"id":43649,"date":"2025-09-20T14:10:28","date_gmt":"2025-09-20T06:10:28","guid":{"rendered":"https:\/\/www.1ai.net\/?p=43649"},"modified":"2025-09-20T14:33:23","modified_gmt":"2025-09-20T06:33:23","slug":"%e4%b8%8a%e4%bc%a0%e4%b8%80%e5%bc%a0%e5%9b%be%e3%80%81%e4%b8%bb%e6%bc%94%e4%bb%bb%e4%bd%95%e8%a7%86%e9%a2%91%ef%bc%8c%e6%80%a7%e8%83%bd%e6%9c%80%e5%bc%ba%e5%8a%a8%e4%bd%9c%e7%94%9f%e6%88%90","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/43649.html","title":{"rendered":"Upload a map, lead any video, \"Model for the most powerful action\" Alithongyan Wan2.2-Animate Open Source"},"content":{"rendered":"<p>September 20th, Ali<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e9%80%9a%e4%b9%89%e4%b8%87%e7%9b%b8\" title=\"_Other Organiser\" target=\"_blank\" >Tongyi Wanxiang<\/a>all-new<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%8a%a8%e4%bd%9c%e7%94%9f%e6%88%90%e6%a8%a1%e5%9e%8b\" title=\"[Sees articles with [action generation model] labels]\" target=\"_blank\" >Action Generation Model<\/a>\u00a0<strong>Wan2.2-Animate<\/strong>\u00a0formal<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%bc%80%e6%ba%90\" title=\"[View articles tagged with [open source]]\" target=\"_blank\" >Open Source<\/a>I don't know. 
The model can drive real people, anime characters and animal photos, and can be applied to scenarios such as short-video creation, dance-template generation and animation production.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-43651\" title=\"10309d1fj00t2vift00bd000ukjp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/09\/10309d1fj00t2vift00bbd000u000kjp.jpg\" alt=\"10309d1fj00t2vift00bd000ukjp\" width=\"1080\" height=\"739\" \/><\/p>\n<p>Wan2.2-Animate is a comprehensive upgrade of the team's earlier popular open-source Animate Anyone model. It not only improves significantly on metrics such as character consistency and generation quality, but also supports both a motion-imitation mode and a character-replacement mode:<\/p>\n<ul>\n<li><strong>Motion imitation<\/strong>: given a character image and a reference video, the model transfers the motion and expressions of the character in the video onto the character in the image, bringing the still image to life;<\/li>\n<li><strong>Character replacement<\/strong>: while preserving the original video's motion, expressions and environment, the character in the video is replaced with the character in the image.<\/li>\n<\/ul>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-43650\" title=\"d2110fb0j00t2viet005ld000fs008wm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/09\/d2110fb0j00t2viet005ld000fs008wm.jpg\" alt=\"d2110fb0j00t2viet005ld000fs008wm\" width=\"568\" height=\"320\" \/><\/p>\n<p>For this release, <strong>the Tongyi team built a large-scale human video dataset covering speech, facial expressions and body movement<\/strong>, and performed post-training on top of the Tongyi Wanxiang video generation base model.<\/p>\n<p>Wan2.2-Animate normalizes character, environment and motion information into a unified representation format, allowing a single model to serve both inference modes. For body movement and facial expressions it uses skeleton signals and implicit features respectively, combined with a motion retargeting module, to achieve precise replication of motion and expression. For replacement mode, the team also designed an independent relighting LoRA to ensure the inserted character blends seamlessly with the lighting and color tone of the scene.<\/p>\n<p>Evaluation results show that Wan2.2-Animate surpasses open-source models such as StableAnimator and LivePortrait on key metrics including video generation quality, subject consistency and perceptual loss, making it <strong>the most capable action generation model currently available<\/strong>. In human subjective evaluations, Wan2.2-Animate even outperforms closed-source models represented by Runway Act-Two.<\/p>\n<p>As of now, users can download the model and code from GitHub, HuggingFace and the ModelScope community, call the API on Alibaba Cloud's Bailian platform, or try it directly on the Tongyi Wanxiang website. 1AI lists the open-source addresses as follows:<\/p>\n<ul>\n<li>https:\/\/github.com\/Wan-Video\/Wan2.2<\/li>\n<li>https:\/\/modelscope.cn\/models\/Wan-AI\/Wan2.2-Animate-14B<\/li>\n<li>https:\/\/huggingface.co\/Wan-AI\/Wan2.2-Animate-14B<\/li>\n<\/ul>","protected":false},"excerpt":{"rendered":"<p>On September 20th, Ali's Tongyi Wanxiang officially open-sourced its all-new action generation model Wan2.2-Animate. The model can drive real people, anime characters and animal photos, and can be applied to scenarios such as short-video creation, dance-template generation and animation production. 
Wan2.2-Animate is a comprehensive upgrade of the team's earlier popular open-source Animate Anyone model. It not only improves significantly on metrics such as character consistency and generation quality, but also supports both motion imitation and character replacement: Motion imitation: given a character image and a reference video, the model transfers the motion and expressions of the character in the video onto the character in the image, bringing the still image to life; Character replacement: while preserving the original video's motion<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[7638,219,621],"collection":[],"class_list":["post-43649","post","type-post","status-publish","format-standard","hentry","category-news","tag-7638","tag-219","tag-621"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/43649","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=43649"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/43649\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=43649"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=43649"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=43649"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=43649"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}