{"id":30110,"date":"2025-03-05T16:18:10","date_gmt":"2025-03-05T08:18:10","guid":{"rendered":"https:\/\/www.1ai.net\/?p=30110"},"modified":"2025-03-05T16:18:10","modified_gmt":"2025-03-05T08:18:10","slug":"%e5%8d%b3%e6%a2%a6ai-%e4%b8%8a%e7%ba%bf-%e5%8a%a8%e4%bd%9c%e6%a8%a1%e4%bb%bf-%e5%8a%9f%e8%83%bd%ef%bc%9a%e7%85%a7%e7%89%87-%e5%8f%82%e8%80%83%e8%a7%86%e9%a2%91%e5%8d%b3%e5%8f%af","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/30110.html","title":{"rendered":"Jimeng AI launches \"Motion Imitation\" feature: a photo plus a reference video makes a character move"},"content":{"rendered":"<p>March 5 news - 1AI has learned from <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%ad%97%e8%8a%82%e8%b7%b3%e5%8a%a8\" title=\"[View articles tagged with [ByteDance]]\" target=\"_blank\" >ByteDance<\/a> that <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%8d%b3%e6%a2%a6ai\" title=\"[View articles tagged with [Jimeng AI]]\" target=\"_blank\" >Jimeng AI<\/a> launched its \"Motion Imitation\" feature today. Users enter through the \"Digital Human\" portal and simply upload <strong>a character photo and a reference video<\/strong> to generate a dynamic video in which the character in the photo imitates the movements of the person in the reference video, with <strong>one-to-one reproduction of facial expressions<\/strong>.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-30111\" title=\"7177c155j00ssn5of004rd000ic00qup\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/03\/7177c155j00ssn5of004rd000ic00qup.jpg\" alt=\"7177c155j00ssn5of004rd000ic00qup\" width=\"660\" height=\"966\" \/><\/p>\n<p>The feature supports <strong>portrait, half-body and full-body<\/strong> framings and is powered by the ByteDance Intelligent Creation digital human team. 
The team uses a hybrid explicit and implicit feature-driven approach that synchronously reproduces <strong>body movements and facial expressions across these framings.<\/strong> For facial expression control, its self-developed face motion tokenizer accurately captures expression details from the driving video, enhancing the vividness of the generated video.<\/p>\n<p>Jimeng AI currently offers <strong>3 action templates<\/strong> and also lets users upload their own local files; videos can be up to 30 seconds long. \"Motion Imitation\" is reportedly live on both the Jimeng AI app and web versions, and the platform will run security audits on the video content and add an <strong>\"AI-generated\" watermark<\/strong> to output videos.<\/p>","protected":false},"excerpt":{"rendered":"<p>March 5 news - 1AI has learned from ByteDance that Jimeng AI launched its \"Motion Imitation\" feature today. Users enter through the \"Digital Human\" portal and simply upload a character photo and a reference video to generate a dynamic video in which the character in the photo imitates the movements of the person in the reference video, with one-to-one reproduction of facial expressions. The feature supports portrait, half-body and full-body framings and is powered by the ByteDance Intelligent Creation digital human team. 
The team uses a hybrid explicit and implicit feature-driven approach to synchronously reproduce body movements and facial expressions across these framings; for facial expression control, its self-developed face motion tokenizer accurately captures expression details from the driving video, improving the vividness of the generated video.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[3402,548],"collection":[],"class_list":["post-30110","post","type-post","status-publish","format-standard","hentry","category-news","tag-ai","tag-548"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/30110","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=30110"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/30110\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=30110"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=30110"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=30110"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=30110"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}