{"id":6769,"date":"2024-03-31T09:26:34","date_gmt":"2024-03-31T01:26:34","guid":{"rendered":"https:\/\/www.1ai.net\/?p=6769"},"modified":"2024-03-31T09:26:34","modified_gmt":"2024-03-31T01:26:34","slug":"%e5%bc%ba%e5%a4%a7%e7%9a%84ai%e7%94%9f%e5%9b%be%e5%b7%a5%e5%85%b7comfyui%ef%bc%8ccomfyui%e4%bb%8b%e7%bb%8d%e8%af%a6%e7%bb%86%e9%83%a8%e7%bd%b2%e6%95%99%e7%a8%8b%e5%92%8c%e4%bd%bf%e7%94%a8","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/6769.html","title":{"rendered":"ComfyUI, a powerful AI image creation tool, introduces detailed deployment tutorials and usage of ComfyUI"},"content":{"rendered":"<h1 class=\"pgc-h-arrow-right\" spellcheck=\"false\" data-track=\"1\" data-pm-slice=\"0 0 []\">1. What is ComfyUI?<\/h1>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-6770\" title=\"get-106\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/get-106.jpg\" alt=\"get-106\" width=\"1080\" height=\"449\" \/><\/div>\n<p data-track=\"2\"><strong><a href=\"https:\/\/www.1ai.net\/en\/tag\/comfyui\" title=\"_Other Organiser\" target=\"_blank\" >ComfyUI<\/a> is a modular stable diffusion graphical interface.<\/strong><\/p>\n<p data-track=\"3\">Such a description may leave many people wondering: what exactly is modularity?<\/p>\n<p data-track=\"4\">Let&#039;s take a simple example. It&#039;s like assembling a computer, which has components such as the motherboard, CPU, memory, and hard disk. Each component can be selected according to the user&#039;s needs, and then assembled into a complete computer. This modular design allows users to customize the machine to their own needs, and when they need to upgrade in the future, they can easily replace specific components without replacing the entire computer.<\/p>\n<p data-track=\"5\">In contrast, mobile phones are usually non-modular because the various components of the phone are fixed from the factory and cannot be easily replaced or upgraded. 
This means that when users buy a mobile phone, they can only choose the overall configuration; they cannot customize it to their own needs like a computer, nor replace specific components for upgrades later.<\/p>\n<p data-track=\"6\">ComfyUI adheres to this modular design concept, breaking the complex AI drawing process down into independent steps. Each step is defined as a module. Each module has a specific function and can be adjusted individually or freely linked with other modules according to certain rules, making the AI drawing process more flexible and diverse. However, this modular design also means that if you want to be proficient with ComfyUI, you need a certain understanding of how stable diffusion works. Therefore, compared with A1111, ComfyUI is slightly harder to get started with.<\/p>\n<h1 class=\"pgc-h-arrow-right\" spellcheck=\"false\" data-track=\"7\">2. Why do we need ComfyUI?<\/h1>\n<p data-track=\"8\">The author originally wrote ComfyUI to gain a deeper understanding of how stable diffusion works, and to have a powerful, concise tool for unrestricted stable diffusion experiments.<\/p>\n<p data-track=\"9\">If you are also interested in learning more about how stable diffusion works and are looking for a tool to build your own image generation process, then ComfyUI is definitely your best choice. <strong>The main reason for using ComfyUI is that it optimizes SDXL better, takes up less video memory, and runs faster.<\/strong><\/p>\n<p data-track=\"10\">Many people may wonder: isn&#039;t A1111 already good enough? Why use ComfyUI? Let&#039;s talk about the differences between A1111 and ComfyUI.<\/p>\n<p data-track=\"11\">\u2460 <strong>User interface<\/strong>. The user interface of A1111 is closer to our usage habits. For each setting, we only need to make a selection or adjustment. 
For example, when we want to perform an image-to-image operation, we only need to click the img2img tab, upload the image, and then set the parameters. Finally, the model completes the entire generation process according to A1111&#039;s built-in pipeline. ComfyUI is different: if we want to perform an image-to-image operation, we have to build the process ourselves, deciding which modules (nodes) to add and how these nodes connect, and then set the parameters. Finally, ComfyUI generates the image according to the process we set up. Therefore, if you know nothing about stable diffusion, A1111 is the more suitable choice for you. Once you have a grasp of how stable diffusion operates, ComfyUI becomes an excellent alternative.<\/p>\n<p data-track=\"12\">\u2461 <strong>Extension support<\/strong>. A1111 has stronger extension support than ComfyUI. Entering &quot;ComfyUI&quot; in the GitHub search box yields only 22 pages of search results, while entering &quot;stable diffusion webui&quot; yields 100 pages. This means more plugins support A1111, which in turn means richer extension functionality and stronger overall capabilities.<\/p>\n<p data-track=\"13\">\u2462 <strong>Drawing speed<\/strong>. The recent sudden rise in popularity of ComfyUI is mainly due to its good support for SDXL. In A1111, using SDXL models often takes up a lot of system memory, and video memory usage is also high, which leads to slow image generation. ComfyUI, on the other hand, can complete the same work faster while using less video memory and system memory. For SD v1 models, there is no obvious difference in drawing speed between A1111 and ComfyUI, but when drawing with SDXL, ComfyUI is almost twice as fast as A1111. 
Therefore, if you want a better SDXL experience even with low video memory, ComfyUI is currently the choice I recommend.<\/p>\n<h1 class=\"pgc-h-arrow-right\" spellcheck=\"false\" data-track=\"14\">3. How to install ComfyUI?<\/h1>\n<p data-track=\"15\">The installation method is very simple; the steps are as follows:<\/p>\n<p data-track=\"16\">\u2460 Download this compressed file: https:\/\/github.com\/comfyanonymous\/ComfyUI\/releases\/download\/latest\/ComfyUI_windows_portable_nvidia_cu118_or_cpu.7z<\/p>\n<p data-track=\"17\">\u2461 After decompression, enter the directory and double-click run_nvidia_gpu.bat to run it. If you want to run on the CPU, open run_cpu.bat instead, though this is not recommended.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-6771\" title=\"get-107\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/get-107.jpg\" alt=\"get-107\" width=\"784\" height=\"279\" \/><\/div>\n<p data-track=\"18\">\u2462 Once it starts successfully, the program automatically opens the UI in your default browser. The default interface is shown below.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-6772\" title=\"get-108\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/get-108.jpg\" alt=\"get-108\" width=\"1080\" height=\"449\" \/><\/div>\n<p data-track=\"19\">\u2463 Many ComfyUI users have used A1111 before. Copying a second set of model files into ComfyUI would use a great deal of disk space. 
With a simple setting, ComfyUI can load all the model files in the stable-diffusion-webui directory, including SD models, Loras, embeddings, etc., without copying these files.<\/p>\n<p data-track=\"20\">\u2464 First, find the file named &quot;extra_model_paths.yaml.example&quot; under \\ComfyUI_windows_portable\\ComfyUI, open it with Notepad (for example, by dragging the file onto the Notepad window), and change the path after base_path to the path of your stable-diffusion-webui folder. For example, my storage path is C:\\AI-stable-diffusion-webui. Finally, press Ctrl+S to save.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-6773\" title=\"get-109\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/get-109.jpg\" alt=\"get-109\" width=\"735\" height=\"246\" \/><\/div>\n<p data-track=\"21\">\u2465 Rename the &quot;extra_model_paths.yaml.example&quot; file to &quot;extra_model_paths.yaml&quot;. After restarting ComfyUI, you can load all the models in the webui directory.<\/p>\n<h1 class=\"pgc-h-arrow-right\" spellcheck=\"false\" data-track=\"22\">4. Basic Usage<\/h1>\n<p data-track=\"23\">1. As soon as you open the interface, you will see several separate boxes, which we call \"nodes\"; they can be moved around freely. The leftmost node is \u201cLoad Checkpoint\u201d, as shown in the figure below. The role of this model loading node is to load the model into memory and apply its weights to the neural network. Simply put, its main function is to let you select a stable diffusion model. You will note that the model loading node has three outputs, namely \u201cMODEL\u201d, \u201cCLIP\u201d and \u201cVAE\u201d, which I will describe in turn.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-6774\" title=\"get-110\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/get-110.jpg\" alt=\"get-110\" width=\"870\" height=\"288\" \/><\/div>\n<p data-track=\"24\">2. 
First, let&#039;s look at CLIP. The CLIP model is connected to the &quot;CLIP Text Encode&quot; node, as shown in the figure below. The role of the CLIP model is to encode the text we input so that it can guide the model to generate the content we specify. You will find that there are two CLIP text encoders: one encodes the positive prompt, and the other encodes the negative prompt. These two CLIP text encoder nodes are no different from the two prompt input boxes in A1111; their functions are exactly the same.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-6775\" title=\"get-111\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/get-111.jpg\" alt=\"get-111\" width=\"1080\" height=\"483\" \/><\/div>\n<p data-track=\"25\">3. Next is MODEL, the main SD model. So how is the image actually produced? In stable diffusion, the image is generated by the \"Sampler\". The sampler takes the main model and the positive and negative prompts encoded by CLIP as input, and also requires an empty \"latent image\". A latent image can be understood as the data representation used inside the SD model, while a pixel image is the form of data we eventually see. A latent image is a highly compressed form of a pixel image that preserves its high-level features at a much smaller data size. It is worth noting that the data processed by the sampler are latent images rather than pixels, which is why stable diffusion can run on a consumer-grade graphics card and generate images so quickly.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-6776\" title=\"get-112\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/get-112.jpg\" alt=\"get-112\" width=\"1001\" height=\"720\" \/><\/div>\n<p data-track=\"26\">4. Finally, the VAE. The sampler receives four inputs and outputs a new latent image. 
As we just said, the latent image is a data form that the SD model can understand; it needs to be converted into a pixel image before we can view it. That conversion is exactly what the VAE does. The VAE Decode node receives the latent image generated by the sampler, uses the VAE model to decode it into a pixel image, and finally outputs it as a PNG image.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-6777\" title=\"get-113\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/get-113.jpg\" alt=\"get-113\" width=\"1020\" height=\"406\" \/><\/div>\n<p data-track=\"27\">5. In general, the text-to-image process of stable diffusion works like this: CLIP first encodes the text we input into data that the SD model can understand. The sampler then accepts the CLIP-encoded data and an empty latent image, and adds noise to the empty latent image according to the seed value. Next, the sampler restores the noisy latent image into a clear latent image according to the set parameters. Finally, the clear latent image is decoded by the VAE into a clear pixel image and saved as a PNG file.<\/p>\n<p data-track=\"28\">6. If you have a general grasp of the above, you can try to think through how the image-to-image process works. If you were asked to build an image-to-image process yourself, could you do it successfully?<\/p>\n<h1 class=\"pgc-h-arrow-right\" spellcheck=\"false\" data-track=\"29\">5. Advanced Usage<\/h1>\n<p data-track=\"30\"><strong>1. Create an image-to-image process<\/strong><\/p>\n<p data-track=\"31\">\u2460 The stable diffusion process for image-to-image generation is slightly different from the text-to-image process.<\/p>\n<p data-track=\"32\">\u2461 The first step is to load the image. 
The node for loading the image is &quot;Add Node&quot; &gt;&gt;&gt; &quot;image&quot; &gt;&gt;&gt; &quot;Load Image&quot;.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-6778\" title=\"get-114\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/get-114.jpg\" alt=\"get-114\" width=\"992\" height=\"472\" \/><\/div>\n<p data-track=\"33\">\u2462 As we mentioned before, stable diffusion operates on latent images, so next we need to use a VAE to convert the pixel image into a latent image. Select &quot;Add Node&quot; &gt;&gt;&gt; &quot;loaders&quot; &gt;&gt;&gt; &quot;Load VAE&quot;.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-6779\" title=\"get-115\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/get-115.jpg\" alt=\"get-115\" width=\"1120\" height=\"622\" \/><\/div>\n<p data-track=\"34\">\u2463 Loading the VAE alone is not enough; you also need to add a VAE encoder node to encode the pixel image into a latent image. Select &quot;Add Node&quot; &gt;&gt;&gt; &quot;latent&quot; &gt;&gt;&gt; &quot;VAE Encode&quot;.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-6780\" title=\"get-116\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/get-116.jpg\" alt=\"get-116\" width=\"1062\" height=\"578\" \/><\/div>\n<p data-track=\"35\">\u2464 At this point, we have added three nodes, connected as shown in the figure below. 
The blue line connects the pixel image to the VAE encoder, and the red line connects the VAE loading node to the VAE encoder.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-6781\" title=\"get-117\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/get-117.jpg\" alt=\"get-117\" width=\"1288\" height=\"952\" \/><\/div>\n<p data-track=\"36\">\u2465 Of course, if you have not downloaded a standalone VAE model, you can also use the VAE that comes with the SD model to encode pixel images.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-6783\" title=\"get-119\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/get-119.jpg\" alt=\"get-119\" width=\"1278\" height=\"1038\" \/><\/div>\n<p data-track=\"37\">\u2466 Next, use the main SD model to process the latent image. Select \u201cAdd Node\u201d &gt;&gt;&gt; \u201cconditioning\u201d &gt;&gt;&gt; \u201cCLIP Text Encode (Prompt)\u201d twice, once for the positive prompt and once for the negative prompt. Select \u201cAdd Node\u201d &gt;&gt;&gt; \u201csampling\u201d &gt;&gt;&gt; \u201cKSampler\u201d to add a sampler, and connect everything as shown below.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-6782\" title=\"get-118\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/get-118.jpg\" alt=\"get-118\" width=\"1080\" height=\"716\" \/><\/div>\n<p data-track=\"38\">\u2467 The latent image output by the sampler node is decoded by the VAE into a pixel image, which is finally saved as a PNG image file. Therefore, you need to add a VAE decoder node and an image saving node. Select &quot;Add Node&quot; &gt;&gt;&gt; &quot;latent&quot; &gt;&gt;&gt; &quot;VAE Decode&quot;, then select &quot;Add Node&quot; &gt;&gt;&gt; &quot;image&quot; &gt;&gt;&gt; &quot;Save Image&quot;. 
Finally, connect the nodes as shown in the figure below.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-6784\" title=\"get-120\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/get-120.jpg\" alt=\"get-120\" width=\"1080\" height=\"580\" \/><\/div>\n<p data-track=\"39\">\u2468 In the &quot;Load Image&quot; node, click &quot;choose file to upload&quot; to upload the image.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-6785\" title=\"get-121\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/get-121.jpg\" alt=\"get-121\" width=\"872\" height=\"940\" \/><\/div>\n<p data-track=\"40\">\u2469 Enter the prompts, set the denoising strength (redraw amplitude) via the denoise option of the &quot;KSampler&quot; node, and finally click &quot;Queue Prompt&quot; in the upper right corner to generate the image. The final result is shown in the figure below.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-6786\" title=\"get-122\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/get-122.jpg\" alt=\"get-122\" width=\"1080\" height=\"498\" \/><\/div>\n<p data-track=\"41\">\u246a Next, you should be able to set up various nodes according to your goals. Have fun exploring!<\/p>\n<p data-track=\"42\"><strong>2. Save and use the created process<\/strong><\/p>\n<p data-track=\"43\">Images generated by ComfyUI embed the workflow information used to create them, which means you can recover all the node settings by loading the image. 
In addition, you can also save the created process from the ComfyUI interface.<\/p>\n<p data-track=\"44\">Here are the steps:<\/p>\n<p data-track=\"45\">\u2460 Click the &quot;Save&quot; button in the upper right corner.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-6787\" title=\"get-123\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/get-123.jpg\" alt=\"get-123\" width=\"1080\" height=\"498\" \/><\/div>\n<p data-track=\"46\">\u2461 Enter a new file name in the pop-up dialog box and click &quot;OK&quot; to save the file in a place where you can easily find it.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-6788\" title=\"get-124\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/get-124.jpg\" alt=\"get-124\" width=\"1080\" height=\"498\" \/><\/div>\n<p data-track=\"47\">\u2462 The saved file can be loaded into the ComfyUI interface again. Press &quot;Clear&quot; to clear the interface, then click &quot;Load&quot;.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-6789\" title=\"get-125\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/get-125.jpg\" alt=\"get-125\" width=\"1080\" height=\"498\" \/><\/div>\n<p data-track=\"48\">\u2463 Find the folder where the template file is stored, select the process you want to load, and click Open.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-6790\" title=\"get-126\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/get-126.jpg\" alt=\"get-126\" width=\"731\" height=\"616\" \/><\/div>\n<p data-track=\"49\">\u2464 All the previously saved node information is reloaded.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-6792\" title=\"get-128\" 
src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/get-128.jpg\" alt=\"get-128\" width=\"1080\" height=\"498\" \/><\/div>\n<p data-track=\"50\">\u2465 In addition to loading .json files, we can also load an image generated by ComfyUI as a template and obtain all the node settings that produced it. The loading steps are exactly the same as above: click &quot;Load&quot;, select an image generated by ComfyUI, and then click Open.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-6791\" title=\"get-127\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/get-127.jpg\" alt=\"get-127\" width=\"741\" height=\"618\" \/><\/div>\n<p data-track=\"51\">\u2466 Finally, you obtain the image&#039;s generation information.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-6793\" title=\"get-129\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/get-129.jpg\" alt=\"get-129\" width=\"1080\" height=\"498\" \/><\/div>","protected":false},"excerpt":{"rendered":"<p>I. What is ComfyUI? ComfyUI is a modularized stable diffusion graphical interface. This description may confuse many people, what exactly is modularity? Let's take a simple example. It's like the assembly of a computer, in which there are components such as motherboard, CPU, memory and hard disk. Each component can be selected according to the user's needs of different performance models, and then assembled into a complete computer mainframe. This modular design allows the user to customize the device according to their needs, and when upgrades are needed in the future, specific components can be easily replaced without having to replace the entire computer. 
In contrast, cell phones are usually non-modular, as the components are fixed from the factory and cannot be easily<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[149,144],"tags":[1990,1989],"collection":[],"class_list":["post-6769","post","type-post","status-publish","format-standard","hentry","category-jiaocheng","category-baike","tag-ai","tag-comfyui"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/6769","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=6769"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/6769\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=6769"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=6769"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=6769"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=6769"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}