{"id":24596,"date":"2024-12-06T09:46:34","date_gmt":"2024-12-06T01:46:34","guid":{"rendered":"https:\/\/www.1ai.net\/?p=24596"},"modified":"2024-12-06T09:46:34","modified_gmt":"2024-12-06T01:46:34","slug":"comfyui%e7%9a%84%e5%b7%a5%e4%bd%9c%e6%b5%81%e5%85%a5%e9%97%a8%e6%8c%87%e5%8d%97%ef%bc%8c%e5%b0%8f%e7%99%bd%e4%b9%9f%e8%83%bd%e7%9c%8b%e6%87%82%e7%9a%84%e5%90%84%e8%8a%82%e7%82%b9%e5%8a%9f%e8%83%bd","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/24596.html","title":{"rendered":"ComfyUI's work into the door guide, even a small white man can understand the function of each node interpretation"},"content":{"rendered":"<p>Today we're going to learn<a href=\"https:\/\/www.1ai.net\/en\/tag\/comfyui\" title=\"_Other Organiser\" target=\"_blank\" >ComfyUI<\/a>of<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%b7%a5%e4%bd%9c%e6%b5%81\" title=\"_Other Organiser\" target=\"_blank\" >Workflow<\/a>. The best tutorials for learning a new thing are always the examples provided on the official website.<\/p>\n<p>Here I'll illustrate with an example from the official website, followed by an example.<\/p>\n<p>This will allow us to get up to speed quickly and go further.<\/p>\n<p>Official Tutorial: https:\/\/comfyanonymous.github.io\/ComfyUI_examples\/2_pass_txt2img\/<\/p>\n<p>Download the text: https:\/\/pan.quark.cn\/s\/46a899c45618<\/p>\n<p><strong>Example of official website<\/strong><\/p>\n<p><em>Tips: For the workflow just loaded in, we don't know whether all the local models exist, so the best practice is to click Run on the right side directly after loading to see what big model files are missing locally, and then download them according to the model file names in the prompt. 
Once you have the model name and download address, it is recommended to use a dedicated download tool to fetch the models; the download speed of the ComfyUI Model Manager is too slow!<\/em><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24597\" title=\"116cf7efj00so1tuk000wd000u000a6p\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/116cf7efj00so1tuk000wd000u000a6p.jpg\" alt=\"116cf7efj00so1tuk000wd000u000a6p\" width=\"1080\" height=\"366\" \/><\/p>\n<p>Note: For those who don't have ComfyUI yet, you can read my post from yesterday.<\/p>\n<h3><a href=\"https:\/\/www.1ai.net\/en\/24510.html\/\">Learn ComfyUI from the ground up with the ComfyUI Getting Started Tutorial!<\/a><\/h3>\n<p><strong>Workflow functionality:<\/strong>This workflow first generates an image from text (text-to-image), then upscales it, and finally saves the processed image;<\/p>\n<p><strong>Load this workflow:<\/strong>You can drag this image onto the page and the workflow will be loaded in.<\/p>\n<p><strong>Reading order:<\/strong>These workflows are read the same way we read a book: left to right, top to bottom.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24598\" title=\"6d891b9aj00so1tv5000rd000u0009bp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/6d891b9aj00so1tv5000rd000u0009bp.jpg\" alt=\"6d891b9aj00so1tv5000rd000u0009bp\" width=\"1080\" height=\"335\" \/><\/p>\n<p><strong>Process Description:<\/strong><\/p>\n<p>1. Nodes 1, 2, 3, and 5 mainly supply node 4 with the basic input data it needs, so that node 4 can generate a Latent;<\/p>\n<p>2. Node 4 processes those basic inputs to generate a Latent;<\/p>\n<p>3. Nodes 7 and 8, and nodes 10 and 11, convert a Latent into a viewable image and then save it;<\/p>\n<p>4. Node 6 upscales the Latent from node 4 and still outputs a Latent;<\/p>\n<p>5. 
Node 9 processes the Latent from node 6, together with the other base input data, to finally generate a new Latent for node 10 to process;<\/p>\n<p><strong>Overall:<\/strong>This workflow generates images from the prompts and then upscales them. It is like a restaurant: the chef does the cooking while a crowd of assistants preps ingredients for him, and the chef's finished dishes are handed to the waiter;<\/p>\n<p><strong>What each node does:<\/strong><\/p>\n<p>Node 1: Checkpoint Loader<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24599\" title=\"fe1133b3j00so1tvl0004d0007r003kp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/fe1133b3j00so1tvl0004d0007r003kp.jpg\" alt=\"fe1133b3j00so1tvl0004d0007r003kp\" width=\"279\" height=\"128\" \/><\/p>\n<p>This node is used for<strong>loading model files<\/strong>; this workflow contains the \"v2-1_768-ema-pruned.ckpt\" model file by default.<\/p>\n<p>It provides subsequent nodes with the<strong>Model<\/strong>parameter.<\/p>\n<p>VAE: this output is used later to convert between images and Latents;<\/p>\n<p>Node 2, Node 3: CLIP text encoder<\/p>\n<p>These are two identical nodes used to fill in the prompts; the positive and negative prompts you often hear about are set up through these nodes.<\/p>\n<p>Node 4: K Sampler<\/p>\n<p>The core component: it receives all the input data and processes it into what will become the image.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24600\" title=\"1092aa41j00so1tvu000cd0008b0082p\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/1092aa41j00so1tvu000cd0008b0082p.jpg\" alt=\"1092aa41j00so1tvu000cd0008b0082p\" width=\"299\" height=\"290\" \/><\/p>\n<p>Node 5: Empty Latent<\/p>\n<p>A blank canvas: it sets the size and number of images and gives the K sampler an empty Latent to fill;<\/p>\n<p>Node 6: Latent 
Upscale<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24601\" title=\"e1bf3e60j00so1tw40005d0007p003up\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/e1bf3e60j00so1tw40005d0007p003up.jpg\" alt=\"e1bf3e60j00so1tw40005d0007p003up\" width=\"277\" height=\"138\" \/><\/p>\n<p>This node sets the size of the upscaled image, the scaling method, and whether to crop, and passes the processed Latent on to the subsequent node;<\/p>\n<p>Node 7, Node 10: VAE Decode<\/p>\n<p>These receive a VAE and a Latent and convert them into a real image<\/p>\n<p>Node 8, Node 11: Save image<\/p>\n<p>These save the images.<\/p>\n<p>Node 9: K Sampler<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24602\" title=\"52d3869bj00so1twh000cd0008y0077p\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/52d3869bj00so1twh000cd0008y0077p.jpg\" alt=\"52d3869bj00so1twh000cd0008y0077p\" width=\"322\" height=\"259\" \/><\/p>\n<p>It receives the model provided by the Checkpoint Loader, the Latent provided by the previous node, and the positive and negative prompts, and after processing generates the upscaled Latent for subsequent nodes;<\/p>\n<p><strong>Drawing inferences, part 1<\/strong><\/p>\n<p>The previous example used text to generate a Latent, upscaled that Latent, and then processed it with a K sampler to get a new Latent. But what if we want to upscale an existing image? What should we do?<\/p>\n<p><strong>My thought process is as follows:<\/strong><\/p>\n<p>I take a similar approach: load an existing image into the workflow, convert the image to a Latent, upscale that Latent, then process it with the K sampler; the rest is basically the same.<\/p>\n<p>Steps:<\/p>\n<p>1. 
Load image<\/p>\n<p>Double-click the left mouse button on a blank area of the canvas and enter Load Image in the popup box<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24603\" title=\"8ec95b8fj00so1twy000yd000u000d8p\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/8ec95b8fj00so1twy000yd000u000d8p.jpg\" alt=\"8ec95b8fj00so1twy000yd000u000d8p\" width=\"1080\" height=\"476\" \/><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24604\" title=\"07e7b103j00so1tx5000ed0008v009kp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/07e7b103j00so1tx5000ed0008v009kp.jpg\" alt=\"07e7b103j00so1tx5000ed0008v009kp\" width=\"319\" height=\"344\" \/><\/p>\n<p>2. Next node:<\/p>\n<p>If we ever don't know what to do next, we can work from the current output and our overall plan. Here the output is Image: when we drag a line out with the left mouse button, ComfyUI will show us which nodes we can use.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24605\" title=\"3a3ea3bbj00so1txb000id000d4008sp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/3a3ea3bbj00so1txb000id000d4008sp.jpg\" alt=\"3a3ea3bbj00so1txb000id000d4008sp\" width=\"472\" height=\"316\" \/><\/p>\n<p>Here we choose VAE Encode<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24606\" title=\"aaaf4447j00so1txi000fd000f1008up\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/aaaf4447j00so1txi000fd000f1008up.jpg\" alt=\"aaaf4447j00so1txi000fd000f1008up\" width=\"541\" height=\"318\" \/><\/p>\n<p>3. 
VAE Encode<\/p>\n<p>VAE Encode has a VAE input and a Latent output. At this point we have a Latent, but it hasn't been upscaled yet.<\/p>\n<p>So next we drag a line from the Latent output and add an Upscale Latent node.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24607\" title=\"fbdd8ef4j00so1txp000kd000jl007op\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/fbdd8ef4j00so1txp000kd000jl007op.jpg\" alt=\"fbdd8ef4j00so1txp000kd000jl007op\" width=\"705\" height=\"276\" \/><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24608\" title=\"ae27b994j00so1txv000ld000nm007mp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/ae27b994j00so1txv000ld000nm007mp.jpg\" alt=\"ae27b994j00so1txv000ld000nm007mp\" width=\"850\" height=\"274\" \/><\/p>\n<p>Here, I've set the width and height to be larger than the size of the original image.<\/p>\n<p>4. Next node<\/p>\n<p>Referring to the earlier example from the official website, after the Latent upscale the next step is the K sampler, so I followed suit and added all the subsequent nodes<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24609\" title=\"b4e01ee4j00so1ty3000gd000u0004up\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/b4e01ee4j00so1ty3000gd000u0004up.jpg\" alt=\"b4e01ee4j00so1ty3000gd000u0004up\" width=\"1080\" height=\"174\" \/><\/p>\n<p>5. Completing the other nodes<\/p>\n<p>Here we find that the VAE Encode, K Sampler, VAE Decode, and positive and negative conditioning nodes all have some input parameters that are not yet connected, so we still have to complete them. 
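<\/p>
<p><em>For reference, this same graph can also be written down in ComfyUI's API (JSON) format, the format produced by the Save (API Format) option. The sketch below is illustrative only: the node ids, prompt text, and file names are made up, but the class names match ComfyUI's built-in nodes, and each connection is a pair of source node id and output index:<\/em><\/p>

```python
# Illustrative sketch of the image-upscale workflow in ComfyUI's
# API (JSON) format. Node ids, prompt text, and file names are
# examples; connections are [source_node_id, output_index] pairs.
prompt = {
    '1': {'class_type': 'CheckpointLoaderSimple',
          'inputs': {'ckpt_name': 'v2-1_768-ema-pruned.ckpt'}},
    '2': {'class_type': 'LoadImage', 'inputs': {'image': 'input.png'}},
    # Encode the loaded image into a Latent using the checkpoint's VAE
    # (output index 2 of node 1).
    '3': {'class_type': 'VAEEncode',
          'inputs': {'pixels': ['2', 0], 'vae': ['1', 2]}},
    '4': {'class_type': 'LatentUpscale',
          'inputs': {'samples': ['3', 0], 'upscale_method': 'nearest-exact',
                     'width': 1536, 'height': 1536, 'crop': 'disabled'}},
    '5': {'class_type': 'CLIPTextEncode',
          'inputs': {'text': 'a detailed photo', 'clip': ['1', 1]}},
    '6': {'class_type': 'CLIPTextEncode',
          'inputs': {'text': 'blurry, low quality', 'clip': ['1', 1]}},
    # Re-sample the upscaled Latent; a low denoise keeps the result
    # close to the original image.
    '7': {'class_type': 'KSampler',
          'inputs': {'model': ['1', 0], 'positive': ['5', 0],
                     'negative': ['6', 0], 'latent_image': ['4', 0],
                     'seed': 0, 'steps': 20, 'cfg': 8.0,
                     'sampler_name': 'euler', 'scheduler': 'normal',
                     'denoise': 0.5}},
    '8': {'class_type': 'VAEDecode',
          'inputs': {'samples': ['7', 0], 'vae': ['1', 2]}},
    '9': {'class_type': 'SaveImage',
          'inputs': {'images': ['8', 0], 'filename_prefix': 'upscaled'}},
}
```

<p>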
Referring to the earlier example on the official website, we add the other nodes needed so that we can connect all these unconnected inputs.<\/p>\n<p>Similarly, I drag a line out from the VAE input here; after releasing the left mouse button, ComfyUI prompts me with the nodes I can use, and I select the target node from the list.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24610\" title=\"7ee42ae6j00so1tyc000hd000u0005jp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/7ee42ae6j00so1tyc000hd000u0005jp.jpg\" alt=\"7ee42ae6j00so1tyc000hd000u0005jp\" width=\"1080\" height=\"199\" \/><\/p>\n<p>Then we keep dragging out lines and connect all the unconnected parameters, eventually getting the following workflow. Note that the model file in the \"Checkpoint Loader (Simple)\" here is the same as in the official website's example.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24611\" title=\"68f9923cj00so1tyk000qd000u0007jp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/68f9923cj00so1tyk000qd000u0007jp.jpg\" alt=\"68f9923cj00so1tyk000qd000u0007jp\" width=\"1080\" height=\"271\" \/><\/p>\n<p>6. 
Execute and see the effect<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24612\" title=\"2731eb4bj00so1tys000sd000u0007wp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/2731eb4bj00so1tys000sd000u0007wp.jpg\" alt=\"2731eb4bj00so1tys000sd000u0007wp\" width=\"1080\" height=\"284\" \/><\/p>\n<p>The result was a big disappointment: the output is nothing like our original image. So what's going on here?<\/p>\n<p>According to the official website, the denoise parameter in the K sampler has a big influence on this. We change this denoise parameter to the same value used by the K sampler in the official website's example and try again.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24613\" title=\"cb9f1fb1j00so1tyz000td000u00078p\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/cb9f1fb1j00so1tyz000td000u00078p.jpg\" alt=\"cb9f1fb1j00so1tyz000td000u00078p\" width=\"1080\" height=\"260\" \/><\/p>\n<p>It works great: now we get an enlarged image that closely resembles the original.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24614\" title=\"8d7b3081j00so1tz6001jd000nk00fmp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/8d7b3081j00so1tz6001jd000nk00fmp.jpg\" alt=\"8d7b3081j00so1tz6001jd000nk00fmp\" width=\"848\" height=\"562\" \/><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24615\" title=\"69d85e63j00so1tzc003id000u000s4p\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/69d85e63j00so1tzc003id000u000s4p.jpg\" alt=\"69d85e63j00so1tzc003id000u000s4p\" width=\"1080\" height=\"1012\" \/><\/p>\n<p><strong>Drawing inferences, part 2<\/strong><\/p>\n<p>Building our own new workflow just now has given me a lot of confidence, so are there other ways to implement this kind of image 
enlargement? Let's keep trying.<\/p>\n<p><strong>My thoughts:<\/strong><\/p>\n<p>Is there a dedicated model for enlarging an image? Let's try one and see.<\/p>\n<p><strong>1. Search for image and model nodes<\/strong><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24616\" title=\"f499a54ej00so1tzr000vd000u000dnp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/f499a54ej00so1tzr000vd000u000dnp.jpg\" alt=\"f499a54ej00so1tzr000vd000u000dnp\" width=\"1080\" height=\"491\" \/><\/p>\n<p>We found a node that upscales the image using a model.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24617\" title=\"7ab5ea8dj00so1tzx0004d0008w003lp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/7ab5ea8dj00so1tzx0004d0008w003lp.jpg\" alt=\"7ab5ea8dj00so1tzx0004d0008w003lp\" width=\"320\" height=\"129\" \/><\/p>\n<p><strong>2. Next node<\/strong><\/p>\n<p>For this node, we reuse the approach from the previous example: drag a line out from each input and output parameter, then select the appropriate node.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24618\" title=\"86e7f66dj00so1u07000sd000u0007sp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/86e7f66dj00so1u07000sd000u0007sp.jpg\" alt=\"86e7f66dj00so1u07000sd000u0007sp\" width=\"1080\" height=\"280\" \/><\/p>\n<p>The \"Upscale Image (Using Model)\" node's image input connects directly to the loaded image<\/p>\n<p><strong>3. 
The upscale model<\/strong><\/p>\n<p>Next, press the left mouse button and drag out a line to see which nodes can be selected.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24620\" title=\"930a5c56j00so1u0d000td000jc009up\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/930a5c56j00so1u0d000td000jc009up.jpg\" alt=\"930a5c56j00so1u0d000td000jc009up\" width=\"696\" height=\"354\" \/><\/p>\n<p>Select the Upscale Model Loader<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24619\" title=\"de8d86d7j00so1u0o0018d000o700ebp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/de8d86d7j00so1u0o0018d000o700ebp.jpg\" alt=\"de8d86d7j00so1u0o0018d000o700ebp\" width=\"871\" height=\"515\" \/><\/p>\n<p>This loader uses the real_esrgan_x2plus.pth model file.<\/p>\n<p><strong>4. Complete all the connections<\/strong><\/p>\n<p>After adding the loader, the reworked workflow is basically finished; we then connect all the remaining lines in the new workflow, as shown in the figure:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24621\" title=\"4421cf0fj00so1u42008sd000u0009rp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/4421cf0fj00so1u42008sd000u0009rp.jpg\" alt=\"4421cf0fj00so1u42008sd000u0009rp\" width=\"1080\" height=\"351\" \/><\/p>\n<p><strong>5. 
View the effect<\/strong><\/p>\n<p>Here we execute the workflow directly to see how the final enlarged image looks.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24622\" title=\"2f0a2a67j00so1u1b001cd000u000fbp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/2f0a2a67j00so1u1b001cd000u000fbp.jpg\" alt=\"2f0a2a67j00so1u1b001cd000u000fbp\" width=\"1080\" height=\"551\" \/><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24623\" title=\"c356c20dj00so1u1g001ld000n000fqp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/c356c20dj00so1u1g001ld000n000fqp.jpg\" alt=\"c356c20dj00so1u1g001ld000n000fqp\" width=\"828\" height=\"566\" \/><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24624\" title=\"6d6d26baj00so1u1m002td000u000rwp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/6d6d26baj00so1u1m002td000u000rwp.jpg\" alt=\"6d6d26baj00so1u1m002td000u000rwp\" width=\"1080\" height=\"1004\" \/><\/p>\n<p>Note: For those of us who are just learning ComfyUI, let's put aside the effect of enlarged images for now and focus on how to build a new workflow first. 
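<\/p>
<p><em>As a side note (a sketch under assumptions, not part of the original tutorial): once a workflow has been exported in API format, it can also be queued over ComfyUI's local HTTP interface, which by default listens on port 8188 and accepts a POST to \/prompt:<\/em><\/p>

```python
# Hedged sketch: queue a workflow graph (API format) on a locally
# running ComfyUI server. Assumes the default address 127.0.0.1:8188.
import json
import urllib.request

def build_prompt_request(graph: dict,
                         host: str = '127.0.0.1',
                         port: int = 8188) -> urllib.request.Request:
    # The server expects a JSON body of the form {'prompt': graph}.
    body = json.dumps({'prompt': graph}).encode('utf-8')
    return urllib.request.Request(
        'http://%s:%d/prompt' % (host, port), data=body,
        headers={'Content-Type': 'application/json'})

def queue_prompt(graph: dict) -> dict:
    # Sending the request returns a JSON reply with the queued prompt's
    # id; this part requires a running ComfyUI instance.
    with urllib.request.urlopen(build_prompt_request(graph)) as resp:
        return json.loads(resp.read())
```

<p>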
After you are familiar with building workflows, you can work on improving the quality of the results.<\/p>\n<p>This article uses the following workflow:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24625\" title=\"ebbcdb05j00so1u1u001ad000u000gtp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/ebbcdb05j00so1u1u001ad000u000gtp.jpg\" alt=\"ebbcdb05j00so1u1u001ad000u000gtp\" width=\"1080\" height=\"605\" \/><\/p>\n<p>Note: The workflow can be exported as an image; right-click and use the export option to save it as a png carrying the workflow metadata, so you can share it with more people.<\/p>\n<p><strong>Finally<\/strong><\/p>\n<p>Through the official website's examples we have mastered text-to-image generation and image enlargement. Then, by extending those examples to enlarge existing images, we mastered the process of building a new workflow.<\/p>","protected":false},"excerpt":{"rendered":"<p>Today we are going to learn about ComfyUI's workflow. The best tutorials for learning something new are always the examples provided on the official website. Here I'll walk through an example from the official website and then extend it. This will allow us to get up to speed quickly and go further. 
Official Tutorial: https:\/\/comfyanonymous.github.io\/ComfyUI_examples\/2_pass_txt2img\/ Download: https:\/\/pan.quark.cn\/s\/46a899c45618 Official Example Tip: For a workflow that has just been loaded in, we don't know whether the local models exist, so the best practice is to load it and then directly click the<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[149,144],"tags":[1989,4749,5145],"collection":[],"class_list":{"0":"post-24596","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"hentry","6":"category-jiaocheng","7":"category-baike","8":"tag-comfyui","10":"tag-5145"},"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/24596","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=24596"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/24596\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=24596"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=24596"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=24596"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=24596"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}