{"id":11377,"date":"2024-05-27T10:19:54","date_gmt":"2024-05-27T02:19:54","guid":{"rendered":"https:\/\/www.1ai.net\/?p=11377"},"modified":"2024-05-27T10:19:54","modified_gmt":"2024-05-27T02:19:54","slug":"stable-diffusion%e6%95%99%e7%a8%8b%ef%bc%8c%e5%ae%9e%e7%8e%b0%e9%bb%8f%e5%9c%9f%e6%bb%a4%e9%95%9c%ef%bc%88%e7%89%b9%e6%95%88%ef%bc%89","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/11377.html","title":{"rendered":"Stable Diffusion tutorial, realize clay filter (special effect)"},"content":{"rendered":"<p data-pm-slice=\"0 0 []\">Hello everyone, I have introduced several articles about Stable diffusion before, including its installation, introduction to Wensheng pictures, and model sorting. Today I will introduce you to the actual combat of Stable diffusion, which is also the most popular clay filter (special effect).<\/p>\n<p data-track=\"102\">1. Large model selection<\/p>\n<p data-track=\"103\">As mentioned in the previous article on model sorting, the large model (base mold) of SD is very important and determines the style of the output.<\/p>\n<p data-track=\"105\">And the pictures that we usually use to generate clay filters are real people, so here we use real-system models: LEOSM's HelloWorld XL<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-11378\" title=\"get-615\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/05\/get-615.jpg\" alt=\"get-615\" width=\"1080\" height=\"491\" \/><\/div>\n<p data-track=\"107\">2. 
LoRA model selection<\/p>\n<p data-track=\"108\">After selecting the large model, you also need to choose a LoRA model: the base model above can only draw characters as realistically as possible, and on its own it struggles to draw realistic characters in a clay style.<\/p>\n<p data-track=\"110\">Without LoRA, the prompt is:<\/p>\n<pre><code>clay,1man,black hair,<\/code><\/pre>\n<p data-track=\"113\">The generated images:<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-11379\" title=\"get-616\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/05\/get-616.jpg\" alt=\"get-616\" width=\"1080\" height=\"458\" \/><\/div>\n<p data-track=\"115\">That said, you can sometimes get a slightly better result without LoRA, like this:<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-11380\" title=\"get-617\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/05\/get-617.jpg\" alt=\"get-617\" width=\"1020\" height=\"765\" \/><\/div>\n<p data-track=\"116\">Note: this picture was also generated with image-to-image<\/p>\n<p data-track=\"118\">The LoRA model can be found on Civitai by searching for the keyword <strong>clay<\/strong>.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-11381\" title=\"get-618\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/05\/get-618.jpg\" alt=\"get-618\" width=\"1080\" height=\"482\" \/><\/div>\n<p data-track=\"120\">Here I chose the LoRA model CLAYMATE - Claymation Style for SDXL.<\/p>\n<p data-track=\"122\">3. Selecting the reference image<\/p>\n<p data-track=\"123\">Since this time we want to add a clay style to an existing picture, we use image-to-image (img2img), where SD needs a picture as a reference. 
Although we haven't covered image-to-image before, using it is generally not much different from text-to-image.<\/p>\n<p data-track=\"125\">4. Writing the prompts<\/p>\n<p data-track=\"126\">My reference image this time is fairly simple, a picture of Sam Altman of OpenAI, so my positive prompt is also fairly simple:<\/p>\n<pre><code>clay,1man,black hair,<\/code><\/pre>\n<p data-track=\"129\">Here clay is the trigger word for the LoRA. How do you find it? It's very simple: just open the corresponding LoRA model page (the trigger-word area is shown automatically).<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-11382\" title=\"get-619\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/05\/get-619.jpg\" alt=\"get-619\" width=\"1080\" height=\"615\" \/><\/div>\n<p data-track=\"131\">Negative prompt (adjust as needed):<\/p>\n<pre><code>lowres,bad anatomy,((bad hands)),(worst quality:2),(low quality:2),sketches,bad hands,text,error,missing fingers,<\/code><\/pre>\n<p data-track=\"134\">5. Sampling method<\/p>\n<p data-track=\"135\">If, like me, you chose the LEOSAM's HelloWorld XL model, the preferred sampling method is Euler a, because the model's page description says it is tuned for Euler a; but you can also try DPM++ 2M Karras, which I did as well.<\/p>\n<p data-track=\"137\">6. Iteration steps<\/p>\n<p data-track=\"138\">In my tests, 25-30 steps are generally most suitable.<\/p>\n<p data-track=\"140\">7. Image size<\/p>\n<p data-track=\"141\">Since reference images come in different sizes, a fixed output size may not match the original image. It is therefore generally recommended to use the same size (or at least the same aspect ratio) as the original image. Here is a little trick. 
Click here to keep the width and height consistent with the reference image.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-11383\" title=\"get-620\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/05\/get-620.jpg\" alt=\"get-620\" width=\"1080\" height=\"198\" \/><\/div>\n<p data-track=\"143\">8. CFG scale<\/p>\n<p data-track=\"144\">This has been discussed before: it determines how strongly the prompt influences (weights) the drawing. I chose 7.<\/p>\n<p data-track=\"146\">9. Denoising strength<\/p>\n<p data-track=\"147\">This parameter is very important and does not exist in text-to-image. It controls how far the generated image may deviate from the reference image, on a range of 0-1: the smaller the value, the closer to the original image; the larger the value, the farther from it. 0.5-0.7 is usually a good choice.<\/p>\n<p data-track=\"149\">Those are the models and key parameters needed to generate the clay filter. Here are some of the results I generated:<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-11385\" title=\"get-622\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/05\/get-622.jpg\" alt=\"get-622\" width=\"1080\" height=\"409\" \/><\/div>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-11384\" title=\"get-621\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/05\/get-621.jpg\" alt=\"get-621\" width=\"1080\" height=\"409\" \/><\/div>\n<p data-track=\"151\">What do you think of the results of this clay filter?<\/p>","protected":false},"excerpt":{"rendered":"<p>Hello everyone, I have covered Stable Diffusion in several previous articles, including its installation, an introduction to text-to-image generation, and an overview of models. Today is a hands-on Stable Diffusion project: the currently popular clay filter. 
1. Large model selection: as mentioned in the previous article on models, the large model in SD is very important and determines the style of the output, and the pictures we usually apply clay filters to are of real people, so here we use a photorealistic model: LEOSAM's HelloWorld XL. 2. LoRA model selection: after that, the LoRA model<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[149,144],"tags":[197,198],"collection":[262],"class_list":{"0":"post-11377","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"hentry","6":"category-jiaocheng","7":"category-baike","8":"tag-stable-diffusion","10":"collection-stablediffusion"},"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/11377","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=11377"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/11377\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=11377"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=11377"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=11377"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=11377"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}