{"id":43717,"date":"2025-09-22T12:22:16","date_gmt":"2025-09-22T04:22:16","guid":{"rendered":"https:\/\/www.1ai.net\/?p=43717"},"modified":"2025-09-22T12:22:16","modified_gmt":"2025-09-22T04:22:16","slug":"%e4%b8%8a%e4%bc%a0%e4%b8%80%e5%bc%a0%e5%9b%be%e5%a4%8d%e5%88%bb%e4%bb%bb%e4%bd%95%e8%a7%86%e9%a2%91%e5%8a%a8%e4%bd%9c%e8%bf%98%e8%83%bd%e6%8d%a2%e5%8f%82%e8%80%83%e8%a7%86%e9%a2%91%e7%9a%84%e8%a7%92","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/43717.html","title":{"rendered":"Upload one image to replicate any video's motion, and even swap the reference video's character: a Wan2.2-Animate local ComfyUI tutorial"},"content":{"rendered":"<p><a href=\"https:\/\/www.1ai.net\/en\/tag\/%e9%80%9a%e4%b9%89%e4%b8%87%e7%9b%b8\" title=\"_Other Organiser\" target=\"_blank\" >Tongyi Wanxiang<\/a> has released a new motion-generation model, <a href=\"https:\/\/www.1ai.net\/en\/tag\/wan2-2-animate\" title=\"[See articles with [Wan2.2-Animate] labels]\" target=\"_blank\" >Wan2.2-Animate<\/a>, which supports both motion imitation and character replacement. A <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%b7%a5%e4%bd%9c%e6%b5%81\" title=\"_Other Organiser\" target=\"_blank\" >workflow<\/a> for it is now available, so let's see how to run it locally.<\/p>\n<p>Here is the full workflow:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-43720\" title=\"92a44a0bj00t2z2ad00fd000u000ksm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/09\/92a44a0bj00t2z2ad00fdd000u000ksm.jpg\" alt=\"92a44a0bj00t2z2ad00fd000u000ksm\" width=\"1080\" height=\"748\" \/><\/p>\n<p>First, update ComfyUI and KJNodes to their latest versions; we will need the latest dynamic mask node from KJNodes.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-43718\" title=\"4c5c282bj00t2z2ad00axd000lc00nym\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/09\/4c5c282bj00t2z2ad00axd000lc00nym.jpg\" alt=\"4c5c282bj00t2z2ad00axd000lc00nym\" width=\"768\" 
height=\"862\" \/><\/p>\n<p>The node includes a hint on how to use this mask.<\/p>\n<p>In short, red marks the area to exclude and green marks the area to keep. If the masked area is not accurate, you can manually add identification points with Shift+left-click, or drag the existing points.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-43721\" title=\"4c2361e4j00t2z2ab0017d000dc00bcm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/09\/4c2361e4j00t2z2ab0017d000dc00bcm.jpg\" alt=\"4c2361e4j00t2z2ab0017d000dc00bcm\" width=\"480\" height=\"408\" \/><\/p>\n<p>KJNodes then runs the identified person (the one to be replaced) through SAM2 to separate them from the rest of the video, turning that region into a rectangular mosaic mask.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-43719\" title=\"c1473972j00t2z2ac0034d000u0009bm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/09\/c1473972j00t2z2ac0034d000u0009bm.jpg\" alt=\"c1473972j00t2z2ac0034d000u0009bm\" width=\"1080\" height=\"335\" \/><\/p>\n<p>To guard against an inaccurate mask area, a mask-expansion step is added after the mask, with a default expansion of 10 pixels; if the generated character video is inaccurate, try changing the expansion value.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-43722\" title=\"0ff2e0d5j00t2z2ab000fd000be005xm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/09\/0ff2e0d5j00t2z2ab000fd000be005xm.jpg\" alt=\"0ff2e0d5j00t2z2ab000fd000be005xm\" width=\"410\" height=\"213\" \/><\/p>\n<p>A resize node is connected after the reference-photo node.<\/p>\n<p>The aspect ratio of the uploaded reference picture should match that of the reference video as closely as possible; otherwise the picture will be stretched and the generated character will look odd.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-43724\" title=\"0fde8266j00t2z2ac0066id000is00ggm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/09\/0fde8266j00t2z2ac006id000is00ggm.jpg\" alt=\"0fde8266j00t2z2ac0066id000is00ggm\" width=\"676\" height=\"592\" \/><\/p>\n<p>Here is what happens when the aspect ratio of the uploaded reference picture does not match: the character is stretched and distorted.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-43726\" title=\"0879e7fcj00t2z2ad00mmd000pk00pum\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/09\/0879e7fcj00t2z2ad00mmd000pk00pum.jpg\" alt=\"0879e7fcj00t2z2ad00mmd000pk00pum\" width=\"920\" height=\"930\" \/><\/p>\n<p>Model Area<\/p>\n<p>The main point here is to load the Wan22Animate model, the WanAnimate relighting LoRA, and the lightx2v acceleration LoRA. If you want faster generation, the model can be replaced with the FP8 version; the other settings need little attention.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-43725\" title=\"0d29c7d9j00t2z2ac005yd000u000bqm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/09\/0d29c7d9j00t2z2ac005yd000u000bqm.jpg\" alt=\"0d29c7d9j00t2z2ac005yd000u000bqm\" width=\"1080\" height=\"422\" \/><\/p>\n<p>With this set to 40, running at 720p takes about 24 GB of VRAM, and at 480p around 15 GB.<\/p>\n<p>If your VRAM is small, you can lower the video resolution.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-43727\" title=\"4eef41e6j00t2z2ac001qd000jg00g2m\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/09\/4eef41e6j00t2z2ac001qd000jg00g2m.jpg\" alt=\"4eef41e6j00t2z2ac001qd000jg00g2m\" width=\"700\" height=\"578\" \/><\/p>\n<p>The most important node in the entire workflow is WanVideo AnimateEmbeds.<\/p>\n<p>As can be seen from this node, in addition to the 
VAE and the CLIP vision encoder, it also connects our uploaded reference picture, as well as the pose images, face images, background images, and mask images extracted from our uploaded reference video.<\/p>\n<p>These connections are optional: we can connect all of them, or only some of them, to achieve different effects.<\/p>\n<p>There is another way to play with it: connect only a single face, pose, or background reference to get different functions.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-43723\" title=\"408d6b61j00t2z2ac002kd000gz00nam\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/09\/408d6b61j00t2z2ac002kd000gz00nam.jpg\" alt=\"408d6b61j00t2z2ac002kd000gz00nam\" width=\"611\" height=\"838\" \/><\/p>\n<p>The face input is used to capture the facial changes of the person in the reference video.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-43729\" title=\"3ed46309j00t2z2ad0088d000u000g2m\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/09\/3ed46309j00t2z2ad0088d000u000g2m.jpg\" alt=\"3ed46309j00t2z2ad0088d000u000g2m\" width=\"1080\" height=\"578\" \/><\/p>\n<p>Facial reference for non-human characters<\/p>\n<p>If we use a non-human picture to generate a video, for example a monkey picture plus a video of a woman singing, we can still generate a video of the monkey singing.<\/p>\n<p>But the monkey's mouth comes out as a human mouth.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-43732\" title=\"24e1a7aej00t2z2ae00ukd000sslm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/09\/24e1a7aej00t2z2ae00ukd000ls00slm.jpg\" alt=\"24e1a7aej00t2z2ae00ukd000sslm\" width=\"784\" height=\"1029\" \/><\/p>\n<p>So how do we keep the monkey's face?<\/p>\n<p>We can modify the original workflow slightly by replacing the face-image input with a blank image (matching the size and total frame count of the original).<\/p>\n<p>The generated video then no longer references the face in the original video, but the face in our reference picture.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-43728\" title=\"5e60ea3ej00t2z2ad004id000s200ikm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/09\/5e60ea3ej00t2z2ad004id000s200ikm.jpg\" alt=\"5e60ea3ej00t2z2ad004id000s200ikm\" width=\"1010\" height=\"668\" \/><\/p>\n<p>This way we get a video of the monkey singing: the face follows the monkey, and the pose follows the original video.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-43730\" title=\"bfadb768j00t2z2ad00kxd000j200llm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/09\/bfadb768j00t2z2ad00kxd000j200llm.jpg\" alt=\"bfadb768j00t2z2ad00kxd000j200llm\" width=\"686\" height=\"777\" \/><\/p>\n<p>Character replacement workflow example:<\/p>\n<p>As long as the WanVideo AnimateEmbeds node is fully connected, the resulting video follows the character replacement workflow.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-43731\" title=\"91387959j00t2z2ad004hd000n00lkm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/09\/91387959j00t2z2ad004hd000nn00lkm.jpg\" alt=\"91387959j00t2z2ad004hd000n00lkm\" width=\"851\" height=\"776\" \/><\/p>\n<p>Here is the video produced by the default workflow: both the person's face and the movement are replicated.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-43733\" title=\"da6a40cj00t2z2dz007id000ik00dsp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/09\/da6aa40cj00t2z2dz007id000ik00dsp.jpg\" alt=\"da6a40cj00t2z2dz007id000ik00dsp\" width=\"668\" height=\"496\" \/><\/p>\n<p>Motion reference video workflow example:<\/p>\n<p>Disconnecting the background and mask inputs gives the motion 
reference video workflow.<\/p>\n<p>After these two inputs are disconnected, the generated video no longer takes the background images and masks from the reference video; the background is generated by sampling instead.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-43736\" title=\"75c3003aj00t2z2ac002rd000gk00l2m\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/09\/75c3003aj00t2z2ac002rd000gk00l2m.jpg\" alt=\"75c3003aj00t2z2ac002rd000gk00l2m\" width=\"596\" height=\"758\" \/><\/p>\n<p>In the figure above, the left is the reference picture and the right is the reference video. The generated video references only the motion of the reference video, while the background and character follow the reference image.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-43735\" title=\"3a861645j00t2z2ad00c2d000u09om\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/09\/3a861645j00t2z2ad00c2d000u0009om.jpg\" alt=\"3a861645j00t2z2ad00c2d000u09om\" width=\"1080\" height=\"348\" \/><\/p>\n<p>The resulting video is as follows:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-43734\" title=\"d72b3065j00t2z2en008md000il00a1p\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/09\/d72b3065j00t2z2en008md000il00a1p.jpg\" alt=\"d72b3065j00t2z2en008md000il00a1p\" width=\"669\" height=\"361\" \/><\/p>\n<p>Below are the model download addresses and the project page addresses:<\/p>\n<p>Model download addresses:<\/p>\n<p>relighting-lora:<\/p>\n<p>https:\/\/huggingface.co\/Kijai\/WanVideo_comfy\/blob\/main\/WanAnimate_relight_lora_fp16.safetensors<\/p>\n<p>Kijai main model:<\/p>\n<p>https:\/\/huggingface.co\/Kijai\/WanVideo_comfy_fp8_scaled\/tree\/main\/Wan22Animate<\/p>\n<p>Kijai GGUF model:<\/p>\n<p>https:\/\/huggingface.co\/Kijai\/WanVideo_comfy_GGUF\/tree\/main\/Wan22Animate<\/p>\n<p>Comfy-Org repackaged version:<\/p>\n<p>https:\/\/huggingface.co\/Comfy-Org\/Wan_2.2_<a href=\"https:\/\/www.1ai.net\/en\/tag\/comfyui\" title=\"_Other Organiser\" target=\"_blank\" >ComfyUI<\/a>_Repackaged\/blob\/main\/split_files\/diffusion_models\/wan2.2_animate_14B_bf16.safetensors<\/p>\n<p>KJNodes address:<\/p>\n<p>https:\/\/github.com\/kijai\/ComfyUI-KJNodes<\/p>\n<p>KJ WanVideoWrapper address:<\/p>\n<p>https:\/\/github.com\/kijai\/ComfyUI-WanVideoWrapper<\/p>","protected":false},"excerpt":{"rendered":"<p>Tongyi Wanxiang has released a new motion-generation model, Wan2.2-Animate, which supports both motion imitation and character replacement. A workflow for it has now been released, so let's see how to use it locally. Here is the full workflow: First, update ComfyUI and KJNodes to their latest versions; we will need the latest dynamic mask node from KJNodes. The node includes a hint on how to use this mask. In short, red marks the area to exclude and green marks the area to keep. If the masked area is not accurate, you can manually add identification points with Shift+left-click, or drag the existing points. 
KJnodes goes through Sam2 again<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[144],"tags":[1989,4749,7641,5145,621],"collection":[],"class_list":{"0":"post-43717","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"hentry","6":"category-baike","7":"tag-comfyui","9":"tag-wan2-2-animate","10":"tag-5145","11":"tag-621"},"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/43717","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=43717"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/43717\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=43717"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=43717"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=43717"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=43717"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}