Jimeng's Digital Human 1.5 is now fully online.
- What's new: the digital human now supports role performance. Previously you could only attach a character to the audio; now the character can perform movements in the video that match the audio content.
You can control the character's emotions, movements, and camera moves. Put simply, Digital Human 1.5 no longer just puts a talking head on an image; it generates a full performance video.
The video can make the character sing while also following action prompts: switching scenes, moving around, adding hand gestures.
As you can imagine, Digital Human 1.5 brings new ways to play with AI music videos and more.
In this article, I'll use my AI MV as an example and break the process down step by step.
Generate a Set of Images
First, the right way to open it is to pair it with the currently popular image 4.0 model: feed in a reference image and automatically generate a series of consistent shots.
For example, take the previously generated MV key frame and expand the scene: a young woman singing in a recording studio.

Generate a set of images with image 4.0 using this prompt:
- "This girl is a singer. She sings a complete song in the recording studio, recording an album. Make the scenes emotive, with the camera switching 10 times."

The operation is simple: upload the reference image, enter the prompt above, and click Send.
Careful readers will have noticed that the image 4.0 model already supports 4K ultra-HD output.
Once you have a set of images you like, save them to your computer.
Generate a Digital Human
Open Jimeng's web version. On the creation page you'll see a "Digital Human" entry, and when you open it you'll find the newly added "Action Description" field.

How to operate it:
1. Upload the character image on the left.
2. Upload audio: there are two modes. You can either pick a voice and type in the text, or upload an audio file, i.e. the music you made.
Uploading an audio file:

Selecting a voice and entering text:

3. Action description: enter the image-to-video prompt here, typically the shot type, camera movement, and character actions.
4. Model: version 1.5 offers three models: Master, Fast, and Basic. The Master model consumes more points.
5. Specify the speaking character: if the uploaded image contains several characters, you can switch which one speaks under "Speaking Character".
A note: someone in my video's comment section asked how Jimeng can produce a digital-human video longer than 15 seconds. My approach: I used the image 4.0 model to generate eight images and cut my music into matching segments.
Because the images were generated directly with image 4.0, when I assembled the video I found the character's face was not consistent across shots, so I hit a small pitfall and had to fine-tune the images.
Digital humans made with the Master model move more naturally and look better; the downside is the higher point cost.
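Since each digital-human clip tops out at about 15 seconds, the song has to be cut into segments of at most that length, one per image. A minimal sketch of that segmentation math (the 15-second cap and eight-clip count come from the text above; the helper name is my own):

```python
def split_into_segments(total_seconds: float, max_len: float = 15.0):
    """Return (start, end) pairs covering the whole song,
    each segment no longer than max_len seconds."""
    segments = []
    start = 0.0
    while start < total_seconds:
        end = min(start + max_len, total_seconds)
        segments.append((start, end))
        start = end
    return segments

# A 2-minute song splits into 8 clips of 15 s each:
print(split_into_segments(120))
```

You can then cut the audio at these timestamps in any editor, or with a tool like ffmpeg, and feed each segment to one digital-human generation.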
Image-to-Video Prompt Words
The most important question is how to write the video prompt: how to control the camera feel, the character's movements, and the transitions.
Here's a simple image-to-video prompt template for you:
- "Use this image as the first frame and write a prompt for a 5-second video. Use concrete verbs. The text should cover shot type, perspective, camera movement, scene content, and character expression, in no more than 200 words. Finally, condense the generated prompt into a single paragraph."

This template only gets you a baseline video; making it genuinely good still takes your own thought and creativity.
Lyrics and Music
As for the lyrics and music: I used DeepSeek to generate the lyrics, then turned them into a song.
Open DeepSeek, switch on the reasoning model, and throw the following prompt at the AI:
- "Write me a love song like 'Misplaced Time': a deeply affecting portrait of the love between the male and female leads, the kind of song young people like."

After a moment of thinking, it returned a complete set of lyrics:
- "Searching for your tenderness
- The reflection of the light in the café
- Is the secret settling at the bottom of my coffee cup
- A ring of your fingerprints around the rim of the cup
- Yet I can't touch you
- I'm sorry"
If you're satisfied with the lyrics, take them to an AI music tool such as Suno. This article focuses on the digital human, so I won't go into detail on that step.
Synthesizing the Video
The last step is to import the generated digital-human music clips into a video editor (I use Jianying, i.e. CapCut) for post-processing.
Reorder the footage, then use the Smart Subtitles feature to recognize the lyrics automatically.
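If you prefer stitching the reordered clips on the command line instead of in an editor, ffmpeg's concat demuxer can join them without re-encoding. A small sketch that builds the required list file (the clip file names are hypothetical placeholders):

```python
from pathlib import Path

def build_concat_list(clip_names, list_path="clips.txt"):
    """Write an ffmpeg concat-demuxer list file and return its text."""
    text = "".join(f"file '{name}'\n" for name in clip_names)
    Path(list_path).write_text(text)
    return text

# Eight clips, in the order they should play:
clips = [f"clip{i}.mp4" for i in range(1, 9)]
build_concat_list(clips)
# Then join them losslessly:
#   ffmpeg -f concat -safe 0 -i clips.txt -c copy final_mv.mp4
```

Subtitles would still be added afterwards, e.g. in the editor's Smart Subtitles step described above.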

And that's it: a lively AI singing video is done. Isn't it amazing? The whole thing, from images to music to video, was created with AI.
That's all for today.