{"id":13308,"date":"2024-06-16T10:51:31","date_gmt":"2024-06-16T02:51:31","guid":{"rendered":"https:\/\/www.1ai.net\/?p=13308"},"modified":"2024-06-16T10:51:31","modified_gmt":"2024-06-16T02:51:31","slug":"%e7%94%a8ai%e7%94%9f%e6%88%90%e5%9b%be%e7%89%87%ef%bc%8c%e6%95%99%e4%bd%a0%e7%94%a8comfyui%e6%9c%ac%e5%9c%b0%e9%83%a8%e7%bd%b2stable-diffusion-3","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/13308.html","title":{"rendered":"Generate pictures with AI and teach you how to deploy Stable Diffusion 3 locally with ComfyUI"},"content":{"rendered":"<p data-track=\"1\" data-pm-slice=\"0 0 []\">about<strong>Stable diffusion 3<\/strong>The advantages will not be elaborated here, here we will mainly talk about how ordinary users can deploy it locally.<\/p>\n<p data-track=\"3\">Currently, the SD3 model has been open sourced in HuggingFace. The address is:<strong>https:\/\/huggingface.co\/stabilityai\/stable-diffusion-3-medium<\/strong><\/p>\n<p data-track=\"5\">However, to download the model, you need to log in to your Hugging Face account and sign a license agreement.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13309\" title=\"get-513\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-513.jpg\" alt=\"get-513\" width=\"1080\" height=\"491\" \/><\/div>\n<p data-track=\"7\">After signing, you can see the file list of the entire project.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13310\" title=\"get-514\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-514.jpg\" alt=\"get-514\" width=\"1080\" height=\"460\" \/><\/div>\n<p data-track=\"9\">The above file can be roughly divided into three parts:<\/p>\n<p data-track=\"10\">1. 
comfy_example_workflows<\/p>\n<p data-track=\"11\">This part contains three ComfyUI workflow files, which will be used below.<\/p>\n<p data-track=\"13\">ComfyUI is used because, as of this article&#039;s publication, Stable Diffusion WebUI does not support SD3, so it cannot run SD3&#039;s large model. I tried loading sd3_medium.safetensors with the current Stable Diffusion WebUI, and it reported this error.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13311\" title=\"get-515\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-515.jpg\" alt=\"get-515\" width=\"1080\" height=\"599\" \/><\/div>\n<p data-track=\"15\">ComfyUI does support it, but pay attention to the kernel version: lower versions are not supported. As the picture below shows, SD3 support only arrived in the June 11 build (clearly, ComfyUI got the news very early).<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13312\" title=\"get-516\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-516.jpg\" alt=\"get-516\" width=\"1080\" height=\"520\" \/><\/div>\n<p data-track=\"17\">I don&#039;t recommend upgrading only as far as version 8c4a9be, because the subsequent release notes contain many further SD3 updates, which suggests 8c4a9be is a fairly rough first cut. I upgraded directly to the latest version.<\/p>\n<p data-track=\"19\">2. 
text_encoders<\/p>\n<p data-track=\"20\">There are four model files in text_encoders, namely:<\/p>\n<pre><code>\u251c\u2500\u2500 text_encoders\/\n\u2502   \u251c\u2500\u2500 clip_g.safetensors\n\u2502   \u251c\u2500\u2500 clip_l.safetensors\n\u2502   \u251c\u2500\u2500 t5xxl_fp16.safetensors\n\u2502   \u2514\u2500\u2500 t5xxl_fp8_e4m3fn.safetensors<\/code><\/pre>\n<p data-track=\"23\">These are three different text encoders: two CLIP models and one T5 model. The T5 model comes in two quantized versions, 16-bit (fp16) and 8-bit (fp8).<\/p>\n<p data-track=\"25\">A quick aside: the <strong>CLIP model[1]<\/strong> was originally developed by OpenAI. It is a multimodal pre-trained model that understands the relationship between images and text. CLIP is trained on a large number of image-text pairs to learn a representation that aligns text descriptions with image content, which lets it take a text description and match it against images.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13313\" title=\"get-517\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-517.jpg\" alt=\"get-517\" width=\"1080\" height=\"381\" \/><\/div>\n<p data-track=\"27\">In simple terms, <strong>CLIP[2]<\/strong> converts our prompt words into a &quot;language&quot; (vectors) that the SD model can understand, so that the SD model knows what kind of picture to generate.<\/p>\n<p data-track=\"29\">3. 
Checkpoints<\/p>\n<p data-track=\"30\">This time, SD3 released a total of four large models, namely:<\/p>\n<pre><code>\u251c\u2500\u2500 sd3_medium.safetensors (4.34G)\n\u251c\u2500\u2500 sd3_medium_incl_clips.safetensors (5.97G)\n\u251c\u2500\u2500 sd3_medium_incl_clips_t5xxlfp8.safetensors (10.9G)\n\u2514\u2500\u2500 sd3_medium_incl_clips_t5xxlfp16.safetensors (15.8G)<\/code><\/pre>\n<p data-track=\"32\">sd3_medium.safetensors is a relatively pure base model that contains only the MMDiT and VAE weights, without any of the text encoders from point 2 above.<\/p>\n<p data-track=\"34\">sd3_medium_incl_clips.safetensors is sd3_medium + clip_g + clip_l. This model is smaller, but without the T5XXL encoder its performance will differ.<\/p>\n<p data-track=\"36\">sd3_medium_incl_clips_t5xxlfp8.safetensors is sd3_medium + clip_g + clip_l + t5xxl_fp8_e4m3fn, a model that strikes a balance between quality and resource requirements.<\/p>\n<p data-track=\"38\">sd3_medium_incl_clips_t5xxlfp16.safetensors is sd3_medium + clip_g + clip_l + t5xxl_fp16. Its quality should be the highest, but so is its video memory usage.<\/p>\n<p data-track=\"40\">Next comes the <strong>hands-on part<\/strong>. If you have already installed ComfyUI, you can skip the first section.<\/p>\n<h1 class=\"pgc-h-arrow-right\" spellcheck=\"false\" data-track=\"41\">1. Install ComfyUI<\/h1>\n<p data-track=\"42\">The first step is to download the installation package. 
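<\/p>
<p>The fp16 vs. fp8 trade-off above comes down to bytes per weight. As a sanity check, here is a minimal sketch; the ~4.7B parameter count for the encoder-only T5-XXL is an outside approximation, not a figure from this article:<\/p>

```python
# Rough weight-size estimate for the T5-XXL text encoder at two quantizations.
# Assumption: ~4.7 billion parameters (encoder-only T5-XXL) - approximate.
T5XXL_PARAMS = 4.7e9

def weight_size_gb(params, bytes_per_param):
    # Using 1 GB = 1e9 bytes, matching the loose 'G' sizes quoted above.
    return params * bytes_per_param / 1e9

fp16_gb = weight_size_gb(T5XXL_PARAMS, 2)  # fp16: 2 bytes per weight
fp8_gb = weight_size_gb(T5XXL_PARAMS, 1)   # fp8: 1 byte per weight
print(round(fp16_gb, 1), round(fp8_gb, 1))  # -> 9.4 4.7

# Cross-check with the checkpoint list above:
# 15.8G (incl. t5xxl fp16) - 5.97G (incl. clips only) = ~9.8G
# 10.9G (incl. t5xxl fp8)  - 5.97G (incl. clips only) = ~4.9G
```

<p>This also explains why sd3_medium_incl_clips_t5xxlfp8.safetensors is pitched as the quality\/resource balance: halving the T5-XXL precision saves roughly 4.7 GB of video memory.<\/p>
<p>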
Here I have prepared two options: Quark Cloud Disk and Baidu Cloud Disk.<\/p>\n<pre><code># Quark Cloud Disk: the installation package provided by Qiuye, so it contains more content\nhttps:\/\/pan.quark.cn\/s\/64b808baa960#\/list\/share\/377bd955c75a411c8d1d01f366255cdb-ComfyUI*101aki\n# Baidu Cloud Disk: downloaded and re-uploaded by myself; it contains only an integrated package, for friends who don\u2019t have Quark Cloud Disk\nLink: https:\/\/pan.baidu.com\/s\/103QhzN5R7m-19-JFW6Z9Yw\nExtraction code: 6666<\/code><\/pre>\n<p data-track=\"45\">The second step is to unzip and install. Double-click the launcher; its layout is basically the same as the previous SD launcher&#039;s.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13314\" title=\"get-518\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-518.jpg\" alt=\"get-518\" width=\"1080\" height=\"621\" \/><\/div>\n<p data-track=\"47\">After it starts successfully, click one-key start; if this screen appears, you are halfway there.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13315\" title=\"get-519\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-519.jpg\" alt=\"get-519\" width=\"1080\" height=\"535\" \/><\/div>\n<p data-track=\"49\">The third step is to upgrade the kernel version. 
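<\/p>
<p>Since kernels older than the June 11, 2024 build cannot load SD3, a quick date comparison tells you whether an upgrade is needed. The helper below is purely illustrative (it is not part of ComfyUI or the launcher); it just encodes the cutoff mentioned earlier:<\/p>

```python
from datetime import date

# SD3 support landed in the ComfyUI kernel on 2024-06-11 (see above);
# older kernels will fail to load sd3_medium.safetensors.
SD3_SUPPORT_CUTOFF = date(2024, 6, 11)

def kernel_supports_sd3(kernel_build_date):
    # Hypothetical helper: compare your kernel build date against the cutoff.
    return kernel_build_date >= SD3_SUPPORT_CUTOFF

print(kernel_supports_sd3(date(2024, 6, 10)))  # False - upgrade first
print(kernel_supports_sd3(date(2024, 6, 16)))  # True
```

<p>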
If you have already upgraded, you can skip it.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13316\" title=\"get-520\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-520.jpg\" alt=\"get-520\" width=\"1080\" height=\"374\" \/><\/div>\n<p data-track=\"51\">I have upgraded to the latest version, including the extensions.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13317\" title=\"get-521\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-521.jpg\" alt=\"get-521\" width=\"1080\" height=\"382\" \/><\/div>\n<p data-track=\"53\">Some errors appear on startup (probably due to the version upgrade), but in actual testing they do not affect SD3&#039;s output.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13318\" title=\"get-522\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-522.jpg\" alt=\"get-522\" width=\"1080\" height=\"335\" \/><\/div>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13319\" title=\"get-523\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-523.jpg\" alt=\"get-523\" width=\"1080\" height=\"245\" \/><\/div>\n<h1 class=\"pgc-h-arrow-right\" spellcheck=\"false\" data-track=\"54\">2. 
Download the model<\/h1>\n<p data-track=\"55\">We don&#039;t need all of the models above this time; we only need these four:<\/p>\n<ul>\n<li data-track=\"56\">sd3_medium.safetensors<\/li>\n<li data-track=\"57\">clip_g.safetensors<\/li>\n<li data-track=\"58\">clip_l.safetensors<\/li>\n<li data-track=\"59\">t5xxl_fp8_e4m3fn.safetensors<\/li>\n<\/ul>\n<p data-track=\"61\">After downloading, put sd3_medium.safetensors into the models\\checkpoints folder, and the other three into the models\\clip folder.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13320\" title=\"get-524\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-524.jpg\" alt=\"get-524\" width=\"1080\" height=\"449\" \/><\/div>\n<h1 class=\"pgc-h-arrow-right\" spellcheck=\"false\" data-track=\"62\">3. Import workflow<\/h1>\n<p data-track=\"63\">The workflow used here is sd3_medium_example_workflow_basic.json under the comfy_example_workflows folder.<\/p>\n<p data-track=\"65\">Click the folder icon in the upper left corner and a list of workflow files will pop up.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13321\" title=\"get-525\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-525.jpg\" alt=\"get-525\" width=\"1030\" height=\"582\" \/><\/div>\n<p data-track=\"67\">Click Import to import the sd3_medium_example_workflow_basic.json file mentioned above.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13322\" title=\"get-526\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-526.jpg\" alt=\"get-526\" width=\"980\" height=\"723\" \/><\/div>\n<p data-track=\"69\">If you import the workflow before placing the models, you may get this prompt; it means the model failed to load, or your kernel version has not been upgraded.<\/p>\n<div class=\"pgc-img\"><img 
loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13323\" title=\"get-527\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-527.jpg\" alt=\"get-527\" width=\"1070\" height=\"389\" \/><\/div>\n<p data-track=\"71\">Secondly, you may also need to adjust the video memory optimization options, otherwise the image may fail to be drawn (depending on your video memory situation)<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13324\" title=\"get-528\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-528.jpg\" alt=\"get-528\" width=\"1080\" height=\"259\" \/><\/div>\n<p data-track=\"73\">Finally, click the &quot;Add prompt word queue&quot; button to generate the picture.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13325\" title=\"get-529\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-529.jpg\" alt=\"get-529\" width=\"1080\" height=\"426\" \/><\/div>\n<h1 class=\"pgc-h-arrow-right\" spellcheck=\"false\" data-track=\"74\">4. Test results<\/h1>\n<p data-track=\"75\">I have made a simple comparison of facial painting, hand painting, font writing and typesetting, full-body photos, animals and plants. I used SDXL (with Refiner) and SD3 for the comparison, and controlled the image size to be 1024. However, there are slight differences in some detail parameters (CFG Scale). 
Please see the comparison images below.<\/p>\n<p data-track=\"77\">Note: the photos on the left are all generated by SDXL, and the photos on the right are all generated by SD3.<\/p>\n<p data-track=\"79\">4.1 Face painting<\/p>\n<p data-track=\"80\">Prompt words: happy indian girl,portrait photography,beautiful,morning sunlight,smooth light,shot on kodak portra 200,film grain,nostalgic mood,<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13326\" title=\"get-530\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-530.jpg\" alt=\"get-530\" width=\"1080\" height=\"538\" \/><\/div>\n<p data-track=\"81\">Personal opinion: SDXL is slightly inferior; the subject looks like a young girl, but the face looks middle-aged, which doesn&#039;t match.<\/p>\n<p data-track=\"83\">4.2 Hand painting<\/p>\n<p data-track=\"84\">Prompt words: a girl sitting in the cafe, playing guitar, comic, graphic illustration, comic art, graphic novel art, vibrant, highly detailed, colored, 2d minimalistic<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13327\" title=\"get-531\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-531.jpg\" alt=\"get-531\" width=\"1080\" height=\"539\" \/><\/div>\n<p data-track=\"85\">Personal opinion: SDXL is again a bit worse; the legs are clearly problematic. SD3&#039;s hands are slightly better, but not that amazing.<\/p>\n<p data-track=\"87\">4.3 Fonts and Typesetting<\/p>\n<p data-track=\"88\">Official prompt provided by SD3: A vibrant street wall covered in colorful graffiti, the centerpiece spells \u201cSD3 MEDIUM\u201d, in a story of colors<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13328\" title=\"get-532\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-532.jpg\" alt=\"get-532\" width=\"1080\" height=\"539\" \/><\/div>\n<p 
data-track=\"90\">Epic anime artwork of a Wizard at a moment at night making a cosmic sound into the dark sky that says \"<a href=\"https:\/\/www.1ai.net\/en\/tag\/stable-diffusion\" title=\"_Other Organiser\" target=\"_blank\" >Stable Diffusion<\/a> three, make out of colorful energy<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13329\" title=\"get-533\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-533.jpg\" alt=\"get-533\" width=\"1080\" height=\"541\" \/><\/div>\n<p data-track=\"91\">Personal opinion: From the above two sets of pictures, we can see that the font writing ability of SD3 is indeed much higher than that of SDXL, but it cannot be guaranteed to be perfect.<\/p>\n<p data-track=\"93\">4.4 Full body photo<\/p>\n<p data-track=\"94\">Prompt words: a full body portrait of an old hipster man with a ponytail, cigar in mouth, smoke, badass<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13330\" title=\"get-534\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-534.jpg\" alt=\"get-534\" width=\"1080\" height=\"539\" \/><\/div>\n<p data-track=\"95\">Personal opinion: I actually wanted them to draw a full-body photo here. 
SD3 understood the prompt a little better, but I prefer SDXL in terms of style.<\/p>\n<p data-track=\"97\">Prompt words: Editorial portrait, full body, 1male, dynamic pose, futuristic fashion, cinematic,<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13331\" title=\"get-535\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-535.jpg\" alt=\"get-535\" width=\"1080\" height=\"540\" \/><\/div>\n<p data-track=\"98\">Personal opinion: they feel similar, but SD3 draws the whole body.<\/p>\n<p data-track=\"100\">4.5 Plants<\/p>\n<p data-track=\"101\">Prompt words: a frozen cosmic rose, the petals glitter with a crystalline shimmer, swirling nebulas, 8k unreal engine photorealism, ethereal lighting, red, nighttime, darkness, surreal art<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13333\" title=\"get-537\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-537.jpg\" alt=\"get-537\" width=\"1080\" height=\"540\" \/><\/div>\n<p data-track=\"103\">4.6 Anthropomorphic Animals<\/p>\n<p data-track=\"104\">Prompt words: full body, cat dressed as a Viking, with weapon in his paws, battle coloring, glow hyper-detail, hyper-realism, cinematic<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13332\" title=\"get-536\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-536.jpg\" alt=\"get-536\" width=\"1080\" height=\"539\" \/><\/div>\n<p data-track=\"106\">4.7 Some failure cases of SD3<\/p>\n<p data-track=\"107\">You may have seen some failure cases of SD3 online. I also ran into this while testing: sometimes when drawing people, the results do look strange or even outright wrong. 
It is said that this is because their<strong> NSFW filter<\/strong> (which screens out non-compliant adult content) judged too many human images as NSFW, resulting in the accidental removal of some harmless human images.<\/p>\n<p data-track=\"109\">For example, this group of prompt words: A girl lying on the grass<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13334\" title=\"get-538\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-538.jpg\" alt=\"get-538\" width=\"1080\" height=\"537\" \/><\/div>\n<p data-track=\"110\">Personal opinion: SDXL is clearly better in this set; SD3&#039;s neck and hands feel weird.<\/p>\n<p data-track=\"112\">There is also this group: A couple lying on the beach in the sun<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13335\" title=\"get-539\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-539.jpg\" alt=\"get-539\" width=\"1080\" height=\"542\" \/><\/div>\n<p data-track=\"113\">Neither model drew this group well, and I also noticed that when drawing two people hugging or otherwise interacting, the hands of SD models are particularly prone to errors. 
I think this is a point they can improve in the future.<\/p>\n<p data-track=\"115\">This is also a classic case.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13336\" title=\"get-540\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-540.jpg\" alt=\"get-540\" width=\"1080\" height=\"1080\" \/><\/div>\n<p data-track=\"117\">One last point, also discovered during testing: the images generated by SD3 are noticeably more vivid, with just-right exposure, while the images generated by SDXL are darker and a bit underexposed.<\/p>\n<p data-track=\"119\">From the tests above, SD3&#039;s overall results are better than SDXL&#039;s, reflected in better <strong>details, colors, and lighting\/exposure<\/strong> that are <strong>closer to real photos<\/strong>; in addition, its grasp of fonts, typesetting, and prompts is stronger. There are still shortcomings in some areas, though, and I hope it keeps being optimized.<\/p>\n<p>&nbsp;<\/p>","protected":false},"excerpt":{"rendered":"<p>The advantages of Stable Diffusion 3 will not be repeated here; the focus is on how ordinary users can deploy it locally. The SD3 model is now open-sourced at https:\/\/huggingface.co\/stabilityai\/stable-diffusion-3-medium, but downloading the model requires logging in to a Hugging Face account and signing a license agreement. 
One, comfy_example_workflows<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[149,144],"tags":[197,1323,198],"collection":[262],"class_list":{"0":"post-13308","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"hentry","6":"category-jiaocheng","7":"category-baike","8":"tag-stable-diffusion","9":"tag-stable-diffusion3","11":"collection-stablediffusion"},"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/13308","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=13308"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/13308\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=13308"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=13308"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=13308"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=13308"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}