{"id":27325,"date":"2025-01-21T09:58:06","date_gmt":"2025-01-21T01:58:06","guid":{"rendered":"https:\/\/www.1ai.net\/?p=27325"},"modified":"2025-01-18T15:12:34","modified_gmt":"2025-01-18T07:12:34","slug":"%e6%89%8b%e6%8a%8a%e6%89%8b%e6%95%99%e4%bd%a0%e7%94%a8%e5%8f%af%e7%81%b5ai%ef%bc%8c%e5%8f%af%e7%81%b5ai%e8%a7%86%e9%a2%91%e7%94%9f%e6%88%90%e6%95%99%e7%a8%8b","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/27325.html","title":{"rendered":"A hands-on guide to Kling AI: the Kling AI video generation tutorial"},"content":{"rendered":"<p>About Kling: Kling is a large model for video generation developed by Kuaishou's large-model team. It currently supports text-to-video, image-to-video, video extension, camera-movement control, first-and-last-frame and other capabilities, letting users create artistic videos easily and efficiently.<\/p>\n<p>Platform Links:<a href=\"https:\/\/www.1ai.net\/en\/12558.html\/\">https:\/\/www.1ai.net\/12558.html<\/a><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-27326\" title=\"0503c734j00sq9vft000vd000p400dep\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/01\/0503c734j00sq9vft000vd000p400dep.jpg\" alt=\"0503c734j00sq9vft000vd000p400dep\" width=\"904\" height=\"482\" \/><\/p>\n<p><strong>Basic Functions<\/strong><\/p>\n<p><strong>Text-to-Video<\/strong><\/p>\n<p>Enter a piece of text, and Kling's large model generates a 5s or 10s video from the description, turning the text into a moving picture. Two generation modes are currently supported, \"Standard\" and \"High Quality\": Standard generates faster, while High Quality offers better picture quality. 
Kling also supports three aspect ratios, 16:9, 9:16 and 1:1, to meet diverse video creation needs.<\/p>\n<p>We know that the Prompt, as the most important interactive language for a text-to-video model, directly determines the video content the model returns, so how to use an effective Prompt for video creation is something every creator wants to know and learn. Kling, as a new-generation AI video model, keeps being refined and updated, and its potential still needs to be explored together so that everyone can get more out of AI video; we have prepared a prompt formula for your reference:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-27327\" title=\"82879de0j00sq9vgo002cd000qv009up\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/01\/82879de0j00sq9vgo002cd000qv009up.jpg\" alt=\"82879de0j00sq9vgo002cd000qv009up\" width=\"967\" height=\"354\" \/><\/p>\n<p>The core components of the above formula are the Subject, the Motion and the Scene, which are the simplest and most basic units for describing a video frame. When we want to describe the subject and the scene in more detail, we only need to list several descriptive phrases while keeping the elements we want to appear in the Prompt complete, and Kling will expand the prompt according to our description to generate a video that meets expectations.<\/p>\n<p>For example, starting from \u201cA giant cat reads a book in a caf\u00e9\u201d, we can add details of the subject and the scene: 
\u201cA giant cat wearing black-framed glasses reads a book in a caf\u00e9, a book and a cup of steaming coffee on the table, next to the caf\u00e9 window\u201d, so that the generated picture becomes more specific. If we also want to add some camera language and a lighting atmosphere, we can further try \u201cDepth-of-field shot, blurred background, ambient lighting, a giant cat wearing black-framed glasses reads a book in a caf\u00e9, a book and a cup of steaming coffee on the table, next to the caf\u00e9 window, cinematic color grading\u201d, so that the generated video feels more polished and is more likely to exceed expectations.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-27328\" title=\"3b0531fej00sq9vvr0099d000qz007kp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/01\/3b0531fej00sq9vvr0099d000qz007kp.jpg\" alt=\"3b0531fej00sq9vvr0099d000qz007kp\" width=\"971\" height=\"272\" \/><\/p>\n<p>The point of the formula is to help us better describe the desired video image; we can also let our imagination run wild without being limited by the formula and communicate with Kling freely and boldly, and there may be even more surprising results!<\/p>\n<p><strong>Some usage tips:<\/strong><\/p>\n<p>Use simple words and sentence structures, and avoid overly complex language;<\/p>\n<p>Keep the content of the scene simple, something that can play out within 5s to 10s;<\/p>\n<p>Words such as \"Oriental mood, China, Asia\" make it easier to generate Chinese-style scenes and Chinese people;<\/p>\n<p>Current video models are not sensitive to numbers; for a prompt such as \"10 puppies on the beach\", it is hard to keep the count consistent;<\/p>\n<p>Split-frame scenes can be described with wording such as \"four grids: spring, summer, autumn and winter\";<\/p>\n<p>At this stage it is still difficult to generate complex physical motion, such as the bouncing of balls and overhead 
throws.<\/p>\n<p><strong>Image-to-Video<\/strong><\/p>\n<p>Upload a picture, and Kling's large model generates a 5s or 10s video from its understanding of the image, turning the picture into moving footage; upload a picture together with a text description, and the model generates a video from the text's account of the picture. The \"Standard\" and \"High Quality\" generation modes and the 16:9, 9:16 and 1:1 aspect ratios are likewise supported, to meet diverse video creation needs.<\/p>\n<p>Image-to-video is the feature creators currently use most, because, from the standpoint of video production, it is more controllable: a creator can first prepare a well-composed image with a text-to-image tool and then animate it, which significantly lowers the cost and the threshold of professional video creation. From the standpoint of creative play, Kling gives users another creative canvas, using text to control the motion of the subjects in a picture, such as the recently popular trend of bringing old photos to life and hugging one's younger self. As a creative tool, Kling offers unlimited possibilities for realizing ideas.<\/p>\n<p>For image-to-video, controlling the motion of the subject in the image is central, and we provide the following formula for reference:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-27329\" title=\"cf9dfd83j00sq9vmi000fd000r0004fp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/01\/cf9dfd83j00sq9vmi000fd000r0004fp.jpg\" alt=\"cf9dfd83j00sq9vmi000fd000r0004fp\" width=\"972\" height=\"159\" \/><\/p>\n<p>The core components of the above formula are the 
subject and the motion. Unlike text-to-video, an image-to-video input already contains the scene, so we only need to describe the subject in the image and the motion we want it to perform; if several motions of several subjects are involved, simply list them in order, and Kling will expand the prompt according to our description and its understanding of the image to generate a video that meets expectations.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-27330\" title=\"71abbb33j00sq9vwn004ed000qz0055p\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/01\/71abbb33j00sq9vwn004ed000qz0055p.jpg\" alt=\"71abbb33j00sq9vwn004ed000qz0055p\" width=\"971\" height=\"185\" \/><\/p>\n<p>If we want the Mona Lisa in the painting to put on sunglasses, the model has difficulty understanding us when we enter only \u201csunglasses\u201d, so the video is more likely to be generated from the model's own judgment: when it decides the input is a painting, it tends to produce a camera-move \u201cexhibition\u201d effect, which is also why pictures like this easily turn into near-static videos (avoid uploading pictures with a frame). 
Therefore, we need to describe \"subject + motion\" so that the model understands the instruction, for example \u201cthe Mona Lisa raises her hands and puts on sunglasses\u201d.<\/p>\n<p>Similarly, the formula is meant to help everyone make better use of the image-to-video capability and improve the hit rate of video generations; more creative uses still need to be explored together, so communicate with Kling freely and boldly!<\/p>\n<p><strong>Some tips:<\/strong><\/p>\n<p>Use simple words and sentence structures, and avoid overly complex language;<\/p>\n<p>Keep motion consistent with the laws of physics, and describe motion that could plausibly occur in the picture;<\/p>\n<p>A description that differs greatly from the picture may cause the shot to cut away;<\/p>\n<p>At this stage it is still difficult to generate complex physical motion, such as the bouncing of balls and overhead throws.<\/p>\n<p><strong>Video Extension<\/strong><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-27331\" title=\"e36d748bj00sq9vr6008jd000qx007bp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/01\/e36d748bj00sq9vr6008jd000qx007bp.jpg\" alt=\"e36d748bj00sq9vr6008jd000qx007bp\" width=\"969\" height=\"263\" \/><\/p>\n<p>An AI-generated video can be extended by 4~5 seconds at a time, supports repeated extension (up to 3 minutes in total), and the extension can be steered by fine-tuning the prompt.<\/p>\n<p>The video extension function sits in the tab at the lower-left corner after a video is generated, and offers two modes, \"Auto Extension\" and \"Custom Creative Extension\". \"Auto Extension\" means no Prompt needs to be entered: the model continues the video based on its own understanding of it. 
\"Custom Creative Extension\" lets the user steer the extended video with text; here the Prompt needs to relate to the original video, stating the original video's \"subject + motion\", so that the extended video is less likely to fall apart. We provide the following formula for reference.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-27332\" title=\"c108dedap00sq9vpk000cd000qy003sp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/01\/c108dedap00sq9vpk000cd000qy003sp.png\" alt=\"c108dedap00sq9vpk000cd000qy003sp\" width=\"970\" height=\"136\" \/><\/p>\n<p><strong>Some tips:<\/strong><\/p>\n<p>The Prompt in \"Custom Creative Extension\" needs to stay consistent with the subject of the original video; irrelevant text may cause the shot to cut away;<\/p>\n<p>Extension involves some randomness and may require several attempts to generate a video that meets expectations.<\/p>","protected":false},"excerpt":{"rendered":"<p>About Kling: Kling is a large model for video generation developed by Kuaishou's large-model team. It now supports text-to-video, image-to-video, video extension, camera-movement control, first-and-last-frame and other capabilities, allowing users to easily and efficiently create artistic videos. Platform Link: https:\/\/www.1ai.net\/12558.html Basic Functions Text-to-Video Enter a piece of text, and the large model generates a 5s or 10s video from the description, turning the text into a moving picture. 
It supports two generation modes, \"Standard\" and \"High Quality\": Standard generates faster, while High Quality offers better picture quality; Kling also supports three aspect ratios, 16:9, 9:16 and 1:1, to meet diverse video creation needs.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[149,144],"tags":[981,3676],"collection":[],"class_list":["post-27325","post","type-post","status-publish","format-standard","hentry","category-jiaocheng","category-baike","tag-ai"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/27325","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=27325"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/27325\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=27325"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=27325"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=27325"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=27325"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}