{"id":14539,"date":"2024-07-02T09:43:12","date_gmt":"2024-07-02T01:43:12","guid":{"rendered":"https:\/\/www.1ai.net\/?p=14539"},"modified":"2024-07-02T09:43:53","modified_gmt":"2024-07-02T01:43:53","slug":"%e7%9c%9f%e4%ba%ba%e7%9a%84%e5%9b%be%e7%89%87%e8%bd%ac%e6%bc%ab%e7%94%bb%e6%95%88%e6%9e%9c%e6%98%af%e5%a6%82%e4%bd%95%e5%88%b6%e4%bd%9c%e7%9a%84%ef%bc%9f%e7%94%a8stable-diffusion%e5%ae%9e%e7%8e%b0","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/14539.html","title":{"rendered":"How is the comic effect made from a real person&#039;s photo? Achieving live-action comic adaptation with Stable Diffusion"},"content":{"rendered":"<p data-pm-slice=\"0 0 []\">So-called <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e7%9c%9f%e4%ba%ba%e6%bc%ab%e6%94%b9\" title=\"View articles tagged [live-action comic adaptation]\" target=\"_blank\" >live-action comic adaptation<\/a> means generating a new anime-style (two-dimensional) image from a photo of a real person. In AI painting, this is a very common application scenario. Several production methods for live-action comic adaptation were shared in the earlier advanced series, but those approaches were relatively simple: it was difficult to keep the real photo and the resulting anime image consistent in character clothing, background elements, colors, and so on. Today I will share a production method that is basically good enough for commercial delivery. 
Let&#039;s first look at the finished results of live-action comic adaptation.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-14544\" title=\"get-82\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/07\/get-82.jpg\" alt=\"get-82\" width=\"1080\" height=\"541\" \/><\/div>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-14541\" title=\"get-79\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/07\/get-79.jpg\" alt=\"get-79\" width=\"1080\" height=\"541\" \/><\/div>\n<p data-track=\"146\"><strong>1. How to produce a live-action comic adaptation<\/strong><\/p>\n<p data-track=\"147\">Next, we will use the real-life photo below as an example to walk through the production process.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-14540\" title=\"get-78\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/07\/get-78.jpg\" alt=\"get-78\" width=\"1024\" height=\"1536\" \/><\/div>\n<p data-track=\"148\"><strong>[Step 1]: Choosing the large model<\/strong><\/p>\n<p data-track=\"149\">Live-action comic adaptation needs to generate an anime-style image, so the large model must be an anime-style (two-dimensional) model.<\/p>\n<p data-track=\"150\">The model used here is AWPainting.<\/p>\n<p data-track=\"151\">Model download address <strong>(a network disk link is also provided at the end of the article)<\/strong><\/p>\n<blockquote class=\"pgc-blockquote-abstract\">\n<p data-track=\"152\">https:\/\/www.liblib.art\/modelinfo\/1fd281cf6bcf01b95033c03b471d8fd8<\/p>\n<\/blockquote>\n<p data-track=\"153\"><strong>[Step 2]: Writing the prompts<\/strong><\/p>\n<p data-track=\"154\">We use the WD1.4 tagger to extract prompt tags from the image.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full 
wp-image-14542\" title=\"get-80\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/07\/get-80.jpg\" alt=\"get-80\" width=\"1080\" height=\"425\" \/><\/div>\n<p data-track=\"155\">The prompt tags obtained through the WD1.4 tagger extension are:<\/p>\n<p data-track=\"156\">1girl, jewelry, solo, bracelet, hair ornament, hair flower, braid, flower, earrings, long hair, necklace, navy, looking at viewer, realistic, breasts, strapless, skirt, outdoors, midriff, blurry, bare shoulders, brown hair, single braid, blurry background, tube top, medium breasts, parted lips, crop top, brown eyes, pink skirt, black hair<\/p>\n<p data-track=\"157\">Because we are generating an anime-style image from a real photo, these tags need to be reviewed. It is recommended to translate them with translation software and check them one by one, removing any tags that do not fit the goal. For example, the tag realistic must be removed. Tags inferred from photos sometimes also include freckles, mole, and other keywords that detract from the face; these can be removed as well.<\/p>\n<p data-track=\"158\"><strong>Positive prompt<\/strong><\/p>\n<blockquote class=\"pgc-blockquote-abstract\">\n<p data-track=\"159\"><strong>Prompt<\/strong>\uff1a1girl, jewelry, solo, bracelet, hair ornament, hair flower, braid, flower, earrings, long hair, necklace, navy, looking at viewer, breasts, strapless, skirt, outdoors, midriff, blurry, bare shoulders, brown hair, single braid, blurry background, tube top, medium breasts, parted lips, crop top, brown eyes, pink skirt, black hair<\/p>\n<\/blockquote>\n<p data-track=\"161\"><strong>Negative prompt<\/strong><\/p>\n<blockquote class=\"pgc-blockquote-abstract\">\n<p data-track=\"162\">ng_deepnegative_v1_75t,(badhandv4:1.2),(worst quality:2),(low quality:2),(normal quality:2),lowres,bad anatomy,(bad hands),((monochrome)),((grayscale)),watermark,moles,many fingers,(broken hands),nsfw,<\/p>\n<\/blockquote>\n<p data-track=\"163\">Related parameter settings:<\/p>\n<ul>\n<li data-track=\"164\">Sampler: DPM++ 2M Karras<\/li>\n<li data-track=\"165\">Sampling steps: 30<\/li>\n<li data-track=\"166\">Image width and height: 512*768<\/li>\n<li data-track=\"167\">CFG Scale: 7<\/li>\n<\/ul>\n<p data-track=\"168\"><strong>[Step 3]: ControlNet settings<\/strong><\/p>\n<p data-track=\"169\">Here we use three ControlNet units.<\/p>\n<p data-track=\"170\"><strong>ControlNet Unit 1<\/strong>: the Depth control model keeps the spatial depth of the scene elements consistent<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-14543\" title=\"get-81\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/07\/get-81.jpg\" alt=\"get-81\" width=\"860\" height=\"992\" \/><\/div>\n<p data-track=\"171\">The relevant parameter settings are as follows:<\/p>\n<ul>\n<li data-track=\"172\">Control type: Select \"Depth\"<\/li>\n<li data-track=\"173\">Preprocessor: depth_zoe<\/li>\n<li data-track=\"174\">Model: control_v11f1p_sd15_depth<\/li>\n<li data-track=\"175\">Control weight: 1<\/li>\n<li data-track=\"176\">Starting Control Step: 0<\/li>\n<li data-track=\"177\">Ending Control Step: 1<\/li>\n<\/ul>\n<p data-track=\"179\"><strong>ControlNet Unit 2<\/strong>: the Lineart control model keeps the lines of the image consistent<\/p>\n<div class=\"pgc-img\"><img 
loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-14545\" title=\"get-83\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/07\/get-83.jpg\" alt=\"get-83\" width=\"857\" height=\"985\" \/><\/div>\n<p data-track=\"180\">The relevant parameter settings are as follows:<\/p>\n<ul>\n<li data-track=\"181\">Control type: Select \"Lineart\"<\/li>\n<li data-track=\"182\">Preprocessor: lineart_realistic (extracts lines from real-life photos)<\/li>\n<li data-track=\"183\">Model: control_v11p_sd15_lineart<\/li>\n<li data-track=\"184\">Control weight: 0.6 (real faces and anime faces do not match exactly, so the ControlNet weight is reduced appropriately here)<\/li>\n<li data-track=\"185\">Starting Control Step: 0<\/li>\n<li data-track=\"186\">Ending Control Step: 1<\/li>\n<\/ul>\n<p data-track=\"188\"><strong>ControlNet Unit 3<\/strong>: the Tile control model keeps the colors of the image consistent<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-14546\" title=\"get-84\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/07\/get-84.jpg\" alt=\"get-84\" width=\"855\" height=\"1014\" \/><\/div>\n<p data-track=\"189\">The relevant parameter settings are as follows:<\/p>\n<ul>\n<li data-track=\"190\">Control type: Select \"Tile\/Blur\"<\/li>\n<li data-track=\"191\">Preprocessor: tile_colorfix (fixes the colors of the image tile by tile)<\/li>\n<li data-track=\"192\">Model: control_v11f1e_sd15_tile<\/li>\n<li data-track=\"193\">Control weight: 1<\/li>\n<li data-track=\"194\">Starting Control Step: 0<\/li>\n<li data-track=\"195\">Ending Control Step: 1<\/li>\n<\/ul>\n<p data-track=\"197\"><strong>[Step 4]: Image generation<\/strong><\/p>\n<p data-track=\"198\">Click the [Generate] button, and let\u2019s look at the final result.<\/p>\n<p data-track=\"199\"><strong>Original real person 
picture<\/strong><\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-14547\" title=\"get-85\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/07\/get-85.jpg\" alt=\"get-85\" width=\"1024\" height=\"1536\" \/><\/div>\n<p data-track=\"200\"><strong>Generated two-dimensional image<\/strong><\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-14548\" title=\"get-86\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/07\/get-86.jpg\" alt=\"get-86\" width=\"1024\" height=\"1536\" \/><\/div>\n<p data-track=\"201\"><strong>2. Additional notes<\/strong><\/p>\n<p data-track=\"202\">(1) For the large model, you can try anime models of various styles.<\/p>\n<p data-track=\"203\">(2) Three ControlNet control models are used in this example:<\/p>\n<ul>\n<li data-track=\"204\">The Depth control model controls the depth of the image<\/li>\n<li data-track=\"205\">The Lineart control model controls the overall lines of the image<\/li>\n<li data-track=\"206\">The Tile model controls the colors of the image<\/li>\n<\/ul>\n<p data-track=\"207\">Note that the Tile model also tends to alter image details: in this example the background of the anime image consists of leaves, and the Tile model moderately enriches that background detail.<\/p>\n<p data-track=\"208\">(3) This method still has room for improvement in some details. For example, the style and color of the subject\u2019s earrings do not fully match the original.<\/p>\n<p data-track=\"209\">Okay, that\u2019s all for today\u2019s sharing. 
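The tag cleanup described in [Step 2] can be sketched in code: take the comma-separated WD1.4 tagger output, drop realism-related tags such as realistic, freckles, and mole, and rejoin the rest as the positive prompt. The blocklist contents and the helper name `clean_tags` below are illustrative assumptions, not part of the original workflow:

```python
# Sketch of the Step 2 tag cleanup: filter the WD1.4 tagger output before
# using it as the positive prompt.

# Raw tag string as produced by the WD1.4 tagger for the example photo.
WD14_OUTPUT = (
    "1girl, jewelry, solo, bracelet, hair ornament, hair flower, braid, "
    "flower, earrings, long hair, necklace, navy, looking at viewer, "
    "realistic, breasts, strapless, skirt, outdoors, midriff, blurry, "
    "bare shoulders, brown hair, single braid, blurry background, tube top, "
    "medium breasts, parted lips, crop top, brown eyes, pink skirt, black hair"
)

# Tags that fight the anime style or hurt the face, per the article
# (an assumed, non-exhaustive blocklist).
BLOCKLIST = {"realistic", "freckles", "mole"}

def clean_tags(raw: str, blocklist: set[str]) -> str:
    """Split a comma-separated tag string, drop blocklisted tags, rejoin."""
    tags = [t.strip() for t in raw.split(",") if t.strip()]
    kept = [t for t in tags if t not in blocklist]
    return ", ".join(kept)

positive_prompt = clean_tags(WD14_OUTPUT, BLOCKLIST)
print(positive_prompt)  # "realistic" is dropped; the other tags are unchanged
```

In practice the cleaned string is pasted into the WebUI positive prompt box; the blocklist can be extended with any other tags that clash with the anime style.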
I hope that what I shared today will be helpful to you.<\/p>\n<p data-track=\"210\">The model has been uploaded to the network disk; anyone interested can grab it!<\/p>\n<p data-track=\"211\">https:\/\/pan.quark.cn\/s\/b3df771404e2<\/p>\n<p>&nbsp;<\/p>","protected":false},"excerpt":{"rendered":"<p>So-called live-action comic adaptation means generating a new anime-style image from a photo of a real person. In AI painting, this is a very common application scenario. Several production methods were shared in the earlier advanced series, but those approaches were relatively simple: it was difficult to keep the real photo and the resulting anime image consistent in character clothing, background elements, colors, and so on. This article shares a production method that is basically good enough for commercial delivery. [Step 1]: Choosing the large model. Live-action comic adaptation needs to generate an anime-style image, so an anime-style large model must be chosen.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[144],"tags":[197,198,493,1601,3304],"collection":[302,262],"class_list":{"0":"post-14539","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"hentry","6":"category-baike","7":"tag-stable-diffusion","10":"tag-1601","11":"tag-3304","12":"collection-prompt","13":"collection-stablediffusion"},"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/14539","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=14539"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/14539\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=14539"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=14539"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=14539"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=14539"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}