{"id":29362,"date":"2025-02-28T09:24:37","date_gmt":"2025-02-28T01:24:37","guid":{"rendered":"https:\/\/www.1ai.net\/?p=29362"},"modified":"2025-02-21T21:31:51","modified_gmt":"2025-02-21T13:31:51","slug":"stable-diffusion%e6%80%8e%e4%b9%88%e7%94%a8%ef%bc%9fstable-diffusion-controlnet%e7%95%8c%e9%9d%a2%e5%8f%82%e6%95%b0%e8%af%a6%e8%a7%a3","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/29362.html","title":{"rendered":"How does Stable Diffusion work? Stable Diffusion Plugin ControlNet Interface Parameters Explained"},"content":{"rendered":"<p>In this section we'll dive into the basic parameters of the <a href=\"https:\/\/www.1ai.net\/en\/tag\/stable-diffusion\" title=\"_Other Organiser\" target=\"_blank\" >Stable Diffusion<\/a> plug-in ControlNet, which is the core of business applications and a cornerstone of Stable Diffusion. We will first look at ControlNet as a whole and then explain its general parameters in detail.<\/p>\n<p>I. Why do you need ControlNet?<\/p>\n<p>Before learning ControlNet, we need to understand what pain points it solves in Stable Diffusion. The original version of Stable Diffusion constrains the image only in broad dimensions and therefore lacks the ability to generate images accurately. ControlNet is designed to solve this problem: it can flexibly and freely add constraints from multiple perspectives, improving the controllability of image generation.<\/p>\n<p>II. ControlNet workflow<\/p>\n<p>The workflow of ControlNet consists of four steps: inputting images, extracting features, understanding features, and generating results. 
Understanding this process helps us better grasp the parameter settings of ControlNet.<\/p>\n<p>1.\u00a0<strong>Input image<\/strong>: Provide a reference image and tell ControlNet to extract features from it.<\/p>\n<p>2.\u00a0<strong>Extract features<\/strong>: Select a preprocessor to extract the desired features from the image.<\/p>\n<p>3.\u00a0<strong>Understand features<\/strong>: The ControlNet model helps Stable Diffusion understand the extracted feature map.<\/p>\n<p>4.\u00a0<strong>Generate results<\/strong>: Generate the final image based on the understood feature map.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-29363\" title=\"c2d0d5c5j00ss1bz200bgd000u000esm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/02\/c2d0d5c5j00ss1bz200bgd000u000esm.jpg\" alt=\"c2d0d5c5j00ss1bz200bgd000u000esm\" width=\"1080\" height=\"532\" \/><\/p>\n<p>III. Common parameters of ControlNet<\/p>\n<p>1. Number of control units<\/p>\n<p>The ControlNet plug-in allows us to control the image from multiple dimensions. 
By adjusting the number of control units, we can constrain the image from multiple angles at the same time.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-29364\" title=\"1fada494j00ss1bz200bed000u000esm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/02\/1fada494j00ss1bz200bed000u000esm.jpg\" alt=\"1fada494j00ss1bz200bed000u000esm\" width=\"1080\" height=\"532\" \/><\/p>\n<p>2. Single image and batch processing<\/p>\n<p>ControlNet supports both single-image upload and batch processing; the latter lets us process a large number of images at once, which greatly improves work efficiency.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-29365\" title=\"ca39681fj00ss1bz1001nd000pg009tm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/02\/ca39681fj00ss1bz1001nd000pg009tm.jpg\" alt=\"ca39681fj00ss1bz1001nd000pg009tm\" width=\"916\" height=\"353\" \/><\/p>\n<p>3. Pixel Perfect mode<\/p>\n<p>Pixel Perfect mode automatically calculates the optimal resolution, simplifying the preprocessor resolution setting and ensuring both the quality of the feature map and the accuracy of the generated images.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-29366\" title=\"377b1841j00ss1bz2005wd000pr00egm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/02\/377b1841j00ss1bz2005wd000pr00egm.jpg\" alt=\"377b1841j00ss1bz2005wd000pr00egm\" width=\"927\" height=\"520\" \/><\/p>\n<p>4. Allow preview<\/p>\n<p>The Allow Preview option lets us visualize the extracted feature map so that we can make adjustments.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-29367\" title=\"137be712j00ss1bz20089d000pz00nzm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/02\/137be712j00ss1bz20089d000pz00nzm.jpg\" alt=\"137be712j00ss1bz20089d000pz00nzm\" width=\"935\" height=\"863\" \/><\/p>\n<p>5. 
Control types, preprocessors and models<\/p>\n<p>ControlNet extracts image features with a preprocessor and, with the help of a model, makes them understandable to Stable Diffusion. The control type categorizes the preprocessors and models and pairs them one to one, reducing the difficulty of use.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-29368\" title=\"77b49278j00ss1bz20082d000p700nym\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/02\/77b49278j00ss1bz20082d000p700nym.jpg\" alt=\"77b49278j00ss1bz20082d000p700nym\" width=\"907\" height=\"862\" \/><\/p>\n<p>6. Control weight, guidance start timing and guidance end timing<\/p>\n<p>The control weight determines how strongly ControlNet influences the generated result. The guidance start timing and guidance end timing are tied to the number of sampling steps and determine at which point ControlNet begins and stops intervening during the generation process.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-29369\" title=\"c4b3a727j00ss1bz2003fd000q300h5m\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/02\/c4b3a727j00ss1bz2003fd000q300h5m.jpg\" alt=\"c4b3a727j00ss1bz2003fd000q300h5m\" width=\"939\" height=\"617\" \/><\/p>\n<p>7. Control mode<\/p>\n<p>The control mode determines which has the greater influence on the generated result: the prompt or ControlNet.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-29370\" title=\"65c1f515j00ss1bz10015d000pd007wm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/02\/65c1f515j00ss1bz10015d000pd007wm.jpg\" alt=\"65c1f515j00ss1bz10015d000pd007wm\" width=\"913\" height=\"284\" \/><\/p>\n<p>8. 
Resize mode<\/p>\n<p>The resize mode provides different strategies for handling cases where the resolution of the uploaded image does not match the resolution in the generation settings.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-29371\" title=\"686d6ddaj00ss1bz1000td000pe005um\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/02\/686d6ddaj00ss1bz1000td000pe005um.jpg\" alt=\"686d6ddaj00ss1bz1000td000pe005um\" width=\"914\" height=\"210\" \/><\/p>\n<p>IV. Conclusion<\/p>\n<p>In this section we have mastered the basic parameters and workflow of ControlNet, laying a solid foundation for subsequent in-depth study. This knowledge will help us control image generation more flexibly and accurately in real projects.<\/p>\n<p>I hope this article has helped you better understand the basic parameters of the ControlNet plugin. We'll see you in the next section!<\/p>","protected":false},"excerpt":{"rendered":"<p>In this section, we will take a closer look at the basic parameters of the ControlNet plug-in, which is the core of business applications and a cornerstone of Stable Diffusion. We will get to know ControlNet as a whole and explain its general parameters in detail. Why do we need ControlNet? Before learning ControlNet, we need to understand what pain points of Stable Diffusion it solves. The original version of Stable Diffusion constrains the image only in broad dimensions and lacks the ability to generate images accurately. 
controlNet's<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[149,144],"tags":[2328,197,198],"collection":[262],"class_list":{"0":"post-29362","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"hentry","6":"category-jiaocheng","7":"category-baike","8":"tag-ai","9":"tag-stable-diffusion","11":"collection-stablediffusion"},"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/29362","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=29362"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/29362\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=29362"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=29362"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=29362"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=29362"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}