{"id":43889,"date":"2025-09-25T12:31:59","date_gmt":"2025-09-25T04:31:59","guid":{"rendered":"https:\/\/www.1ai.net\/?p=43889"},"modified":"2025-09-25T12:31:59","modified_gmt":"2025-09-25T04:31:59","slug":"%e8%8b%b1%e4%bc%9f%e8%be%be%e5%bc%80%e6%ba%90-audio2face-%e6%a8%a1%e5%9e%8b%ef%bc%9aai-%e5%ae%9e%e6%97%b6%e7%94%9f%e6%88%90%e9%9d%a2%e9%83%a8%e5%8a%a8%e7%94%bb%ef%bc%8c%e5%a4%9a%e8%af%ad%e8%a8%80","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/43889.html","title":{"rendered":"Nvidia open-sources Audio2Face model: AI generates facial animation in real time with multilingual lip sync"},"content":{"rendered":"<p>News on September 25th: <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%8b%b1%e4%bc%9f%e8%be%be\" title=\"View articles with this tag\" target=\"_blank\" >Nvidia<\/a> published a blog post yesterday, September 24th, announcing the <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%bc%80%e6%ba%90\" title=\"View articles with this tag\" target=\"_blank\" >open-source<\/a> release of its generative AI animation model <a href=\"https:\/\/www.1ai.net\/en\/tag\/audio2face\" title=\"View articles with this tag\" target=\"_blank\" >Audio2Face<\/a>. The release covers the models, software development kits (SDKs), and a complete training framework, <strong>in hopes of accelerating the development of AI-powered virtual characters in games and 3D applications.<\/strong><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-43890\" title=\"fb2261dj00t34n8903qd000go09em\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/09\/fb22261dj00t34n89003qd000go009em.jpg\" alt=\"fb2261dj00t34n8903qd000go09em\" width=\"600\" height=\"338\" \/><\/p>\n<p>By analyzing acoustic features of the speech 
in the audio, the technology drives a virtual character's facial movements in real time, producing accurate lip sync and natural emotional expression. It can be widely applied in areas such as games, video production, and customer service.<\/p>\n<p>Audio2Face supports two operating modes: offline rendering of pre-recorded audio and real-time streaming for dynamic AI characters. According to the blog post cited by 1AI, Nvidia's open-source release includes the following core components:<\/p>\n<ul>\n<li>Audio2Face SDK<\/li>\n<li>Autodesk Maya plugin (v2.0) with local execution<\/li>\n<li>Unreal Engine 5 plugin (v2.5) for UE 5.5 and above<\/li>\n<li>Audio2Face regression model (v2.2)<\/li>\n<li>Audio2Face diffusion model (v3.0)<\/li>\n<li>An open-source training framework that lets developers fine-tune the models on their own data for specific application scenarios.<\/li>\n<\/ul>\n<p>The technology has already seen wide industry adoption. Game developer Survios has integrated Audio2Face into Alien: Rogue Incursion Evolution, greatly simplifying its lip-sync and facial-capture pipeline.<\/p>\n<p>The Farm 51 has likewise used it in Chernobylite 2: Exclusion Zone, generating detailed facial animation directly from audio, which saves considerable production time and enhances character realism and immersion. The studio's director of innovation, Wojciech Pazdur, called it a \u201crevolutionary breakthrough\u201d.<\/p>","protected":false},"excerpt":{"rendered":"<p>News on September 25th: Nvidia published a blog post yesterday, September 24th, announcing the open-source release of its generative AI animation model Audio2Face, which covers the models, software development kits (SDKs), and a complete training framework, in hopes of accelerating the development of AI-powered virtual characters in games and 3D applications. By analyzing acoustic features of the speech 
in the audio, the technology drives a virtual character's facial movements in real time, producing accurate lip sync and natural emotional expression. It can be widely applied in areas such as games, video production, and customer service. Audio2Face supports two operating modes: offline rendering of pre-recorded audio and real-time streaming for dynamic AI characters. According to the blog post cited by 1AI, Nvidia's open-source release includes several core components<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[7656,219,239],"collection":[],"class_list":["post-43889","post","type-post","status-publish","format-standard","hentry","category-news","tag-audio2face","tag-219","tag-239"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/43889","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=43889"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/43889\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=43889"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=43889"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=43889"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=43889"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}