June 10 (Bloomberg) - ByteDance's Volcano Engine announced on its official website today that it will release an all-new Doubao video generation model on June 11.

According to the announcement, the new Doubao video generation model offers a number of "hardcore capabilities", which 1AI summarizes as follows:
- Supports seamless multi-shot narrative: through an efficient model architecture, multimodal positional encoding, and unified multi-task modeling, the model maintains a consistent, stable representation across multiple camera shots.
- Supports multiple actions and flexible camera movement: trained on a rich variety of scenes, subjects, and behaviors, the model responds more accurately to users' fine-grained instructions and smoothly generates complex video content with multiple subjects, multiple actions, and varied camera movements.
- Supports stable motion and realistic aesthetics: picture and subject dynamics are more natural and better structured, with a lower failure rate, and the model can generate video in different styles according to the prompt, such as realistic, animated, cinematic, and advertising.