Meituan releases and open-sources SOTA-class virtual human video generation model LongCat-Video-Avatar

Meituan's LongCat team has officially released and open-sourced LongCat-Video-Avatar, a SOTA-class virtual human video generation model. Built on the LongCat-Video base model, it carries forward the core "one model for multiple tasks" design, natively supporting core capabilities such as Audio-Text-to-Video (AT2V), Audio-Text-Image-to-Video (ATI2V), and video continuation, while fully upgrading the underlying architecture to achieve breakthroughs along three dimensions: motion integrity, long-video stability, and identity consistency.
