{"id":5925,"date":"2024-03-21T09:50:48","date_gmt":"2024-03-21T01:50:48","guid":{"rendered":"https:\/\/www.1ai.net\/?p=5925"},"modified":"2024-03-21T09:50:57","modified_gmt":"2024-03-21T01:50:57","slug":"%e5%ad%97%e8%8a%82%e5%8f%91%e5%b8%83animatediff-lightning%e6%a8%a1%e5%9e%8b-4%e6%ad%a5%e6%8e%a8%e7%90%86%e5%b0%b1%e8%83%bd%e7%94%9f%e6%88%90%e9%ab%98%e8%b4%a8%e9%87%8f%e8%a7%86%e9%a2%91","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/5925.html","title":{"rendered":"ByteDance releases AnimateDiff-Lightning, a model that can generate high-quality videos in just 4 inference steps"},"content":{"rendered":"<p>Recently, <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%ad%97%e8%8a%82\" title=\"[See articles with [byte] labels]\" target=\"_blank\" >ByteDance<\/a> released a new model called <a href=\"https:\/\/www.1ai.net\/en\/tag\/animatediff\" title=\"_Other Organiser\" target=\"_blank\" >AnimateDiff<\/a>-Lightning, which delivers impressive performance in video generation. With only 4-8 inference steps, it can produce high-quality videos, a major technological breakthrough for the video production industry.<\/p>\n<p class=\"article-content__img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-5926\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/6384653152429496007675814.jpg\" alt=\"\" width=\"735\" height=\"626\" \/><\/p>\n<p>Paper: https:\/\/arxiv.org\/html\/2403.12706v1<\/p>\n<p>The AnimateDiff-Lightning model also works very well with ControlNet, which means that existing <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%a7%86%e9%a2%91%e8%bd%ac%e7%bb%98\" title=\"[Sees articles with tags]\" target=\"_blank\" >video restyling<\/a> workflows can be upgraded. 
At the same time, ByteDance has also released a corresponding ComfyUI workflow, an open-source implementation that makes the AnimateDiff-Lightning model even easier to put to use.<\/p>\n<p>The AnimateDiff-Lightning model is distilled from AnimateDiff SD1.5 v2 and is available in 1-step, 2-step, 4-step and 8-step distilled versions. Among them, the 2-step, 4-step and 8-step models generate very good results, giving video producers more choices and possibilities.<\/p>\n<p>When using the AnimateDiff-Lightning model, ByteDance also recommends pairing it with a motion <a href=\"https:\/\/www.1ai.net\/en\/tag\/lora\" title=\"_Other Organiser\" target=\"_blank\" >LoRA<\/a>, which produces a stronger motion effect. To avoid watermark artifacts, it is recommended to keep the motion LoRA strength between 0.7 and 0.8.<\/p>\n<p>Overall, the AnimateDiff-Lightning model released by ByteDance brings new possibilities to the video production industry with its powerful video generation capabilities, while providing video producers with more options and convenience.<\/p>","protected":false},"excerpt":{"rendered":"<p>Recently, ByteDance released a model called AnimateDiff-Lightning, which delivers impressive performance in video generation. With only 4-8 inference steps, it can generate high-quality videos, a major technological breakthrough for the video production industry. Paper: https:\/\/arxiv.org\/html\/2403.12706v1 The AnimateDiff-Lightning model also works very well with ControlNet, which means that existing video restyling workflows can be upgraded. 
At the same time, ByteDance has also released a corresponding ComfyUI workflow, an open-source implementation that makes the AnimateDiff-Li<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[1798,185,1532,1799],"collection":[],"class_list":["post-5925","post","type-post","status-publish","format-standard","hentry","category-news","tag-animatediff","tag-lora","tag-1532","tag-1799"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/5925","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=5925"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/5925\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=5925"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=5925"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=5925"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=5925"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}