{"id":50196,"date":"2026-02-18T09:50:59","date_gmt":"2026-02-18T01:50:59","guid":{"rendered":"https:\/\/www.1ai.net\/?p=50196"},"modified":"2026-02-13T23:02:33","modified_gmt":"2026-02-13T15:02:33","slug":"%e6%88%91%e7%94%a8ai-%e5%a4%8d%e5%88%bb%e4%ba%86%e5%8d%83%e4%b8%87%e6%92%ad%e6%94%be%e7%9a%84youtube%e7%9c%9f%e4%ba%ba%e8%a7%86%e9%a2%91%ef%bc%81%e4%bf%9d%e5%a7%86%e7%ba%a7%e6%95%99%e7%a8%8b","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/50196.html","title":{"rendered":"I Used AI to Recreate a Ten-Million-Play YouTube Live-Action Video! A Beginner-Level Tutorial"},"content":{"rendered":"<p>Today I want to talk to you about Jimeng's <a href=\"https:\/\/www.1ai.net\/en\/tag\/seedance\" title=\"_Other Organiser\" target=\"_blank\" >Seedance<\/a> 2.0 model.<\/p>\n<p>As I see it, it did something very important:<\/p>\n<p>It took the traditional AI video production pipeline<\/p>\n<p>of \"script, storyboard, image generation, image-to-video, editing\", the whole chain,<\/p>\n<p>and compressed it directly into<strong>\"story script in, video out.\"<\/strong><\/p>\n<p>This drops the barrier to video production extremely low, while the ceiling stays high.<\/p>\n<p>Of course, a certain amount of rerolling (card-drawing) and editing skill is still needed for higher-quality videos.<\/p>\n<p>What does that mean? Let me break it down for you.<\/p>\n<p><strong>Let's start with the old process<\/strong><\/p>\n<p>In the past, making a<strong>60-second<\/strong> <a href=\"https:\/\/www.1ai.net\/en\/tag\/youtube\" title=\"_Other Organiser\" target=\"_blank\" >Youtube<\/a> Shorts video at roughly<strong>one shot every three seconds<\/strong><\/p>\n<p>meant at least<strong>20 storyboard shots<\/strong>.<\/p>\n<p>What do those 20 shots imply? You have to work out what the first frame of each shot looks like, then build character reference images, then use those references for image generation to produce the 20 first frames. 
Then you turn those 20 frames into 20 video clips with an image-to-video model, stitch them together in an editing tool, and layer on voiceover, subtitles, transitions, background music...<\/p>\n<p>The whole process, to be honest, is torture.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-50197\" title=\"20c713acj00taek0i0013d000li00dhp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2026\/02\/20c713acj00taek0i0013d000li00dhp.jpg\" alt=\"20c713acj00taek0i0013d000li00dhp\" width=\"774\" height=\"485\" \/><\/p>\n<p>I myself used the\u00a0<strong>Nano Banana Pro + Veo3.1 Fast<\/strong>\u00a0combo for a long time.<\/p>\n<p>The first frame of each shot takes four draws on average, and each shot's video takes more than three.<\/p>\n<p>The material that piles up for one<strong>60-second<\/strong>finished video:<\/p>\n<p><strong>about 80 first frames and about 60 video clips.<\/strong><\/p>\n<p>On cost, at the limited-time special API pricing, image generation runs 0.2 yuan per image and video generation 0.18 yuan per clip.<\/p>\n<p>The minimum cost of a 60-second finished video:<\/p>\n<p><strong>80 x 0.2 + 60 x 0.18 = 26.8 yuan<\/strong><\/p>\n<p>And that's a conservative estimate. 
In practice, for more complex shots, you may also have to bring in a more capable model at around 2.7 yuan per 5 seconds.<\/p>\n<p>The cost of one video easily breaks<strong>30 yuan<\/strong>.<\/p>\n<p><strong>What changes with Seedance 2.0<\/strong><\/p>\n<p>Now, all you need to do is:<\/p>\n<p><strong>Prepare a 60-second video script and generate the character reference images.<\/strong><\/p>\n<p><strong>Then plan the shots grouped by same scene \/ same character \/ same costume.<\/strong><\/p>\n<p>What does that mean?<\/p>\n<p><strong>You play the whole piece out in your head, cut it into 15-second chunks, then hand each 15-second script together with the character reference images to Jimeng to generate the video.<\/strong><\/p>\n<p>That's it.<\/p>\n<p>A 60-second video takes four 15-second generations, 360 credits in total,<\/p>\n<p>and what I get back is<strong>a multi-shot video with smooth camera movement, consistent scenes, and natural, lip-synced characters<\/strong>.<\/p>\n<p>A second pass targeting whatever the first draw got wrong will usually land you a result you're reasonably happy with.<\/p>\n<p>From February 8 to today, February 10, I have spent more than 3,800 credits and produced five videos.<\/p>\n<p><strong>That averages about 760 credits per video, roughly 21.9 yuan each.<\/strong><\/p>\n<p>And it's not just about saving money.<\/p>\n<p>More importantly, straight out of the box it delivers video quality I could never reach through the \"text-to-image + image-to-video\" pipeline.<\/p>\n<p>For me, it's an entirely new tool for making videos.<\/p>\n<p><strong>Practical case: recreating a YouTube Short with nearly ten million plays<\/strong><\/p>\n<p>Maybe the raw numbers don't give you an intuitive feel.<\/p>\n<p>Let me share a <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e7%9c%9f%e4%ba%ba%e8%a7%86%e9%a2%91\" title=\"[Sees articles with [real video] labels]\" target=\"_blank\" >live-action video<\/a> example.<\/p>\n<p><strong>Original Video 
Link<\/strong>: https:\/\/www.youtube.com\/shorts\/d_DCMhf9pWA<\/p>\n<p><strong>Total plays<\/strong>:<strong>9.84 million (nearly ten million)<\/strong><\/p>\n<p><strong>Video duration<\/strong>: 19 seconds<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-50205\" title=\"9e2af10fj00taekcl0024d000uw9p\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2026\/02\/9e2af10fj00taekcl0024d000u000w9p.jpg\" alt=\"9e2af10fj00taekcl0024d000uw9p\" width=\"1080\" height=\"1161\" \/><\/p>\n<p>Here is the breakdown I did of this video:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-50199\" title=\"707c1ffaj00taek2e003yd000u00159p\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2026\/02\/707c1ffaj00taek2e003yd000u00159p.jpg\" alt=\"707c1ffaj00taek2e003yd000u00159p\" width=\"1080\" height=\"1485\" \/><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-50200\" title=\"97ca8091j00taek 35001xd000ohp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2026\/02\/97ca8091j00taek35001xd000u000ohp.jpg\" alt=\"97ca8091j00taek 35001xd000ohp\" width=\"1080\" height=\"881\" \/><\/p>\n<p><strong>Story Outline<\/strong><\/p>\n<p>[Setup] (0-3 seconds):<\/p>\n<p>\u2013 Show a dropped box of fried chicken<\/p>\n<p>\u2013 The protagonist picks a piece up, brings it to his mouth, then shakes his hand: \"No.\"<\/p>\n<p>\u2013 He moves to throw it into a trash can, then shakes his hand again: \"too wasteful.\"<\/p>\n<p>[Development] Testing the friends (4-14 seconds):<\/p>\n<p>\u2013 Friend 1: opens his mouth; the protagonist shakes his hand<\/p>\n<p>\u2013 Friend 2: opens his mouth; the protagonist shakes his hand<\/p>\n<p>\u2013 Friend 3: opens his mouth; the protagonist shakes his hand<\/p>\n<p>\u2013 (He pulls it back every time: too kind to feed his friends dirty food)<\/p>\n<p>[Payoff] Best friend (15-19 seconds):<\/p>\n<p>\u2013 The subtitle \"Best friend\" appears<\/p>\n<p>\u2013 The best friend opens his mouth, and without hesitating he pops it straight in<\/p>\n<p>\u2013 The best friend 
chews, smacks his lips, a subtle look crossing his face<\/p>\n<p>\u2013 The camera holds on the best friend's face<\/p>\n<p><strong>Shot breakdown<\/strong> (columns: shot number | framing | picture | action)<\/p>\n<p>### Scene 1: Opening + the chicken falls (0-2 seconds)<\/p>\n<p>### Scene 2: Self-test (2-3 seconds)<\/p>\n<p>### Scene 3: Friend 1 test (4-6 seconds) \u2013 subtitle: FRIEND<\/p>\n<p>### Scene 4: Friend 2 test (7-9 seconds) \u2013 subtitle: FRIEND<\/p>\n<p>### Scene 5: Friend 3 test (10-13 seconds) \u2013 subtitle: FRIEND<\/p>\n<p>### Scene 6: Best friend test (14-19 seconds), the core punchline \u2013 subtitle: Best friend (no hesitation)<\/p>\n<p><strong>Okay, that's the original video's storyline and shot breakdown.<\/strong><\/p>\n<p><strong>Now, to avoid making a carbon copy, I decided to swap the fried chicken for cake and recast the roles with the KPop Demon Hunters characters to ride that IP's popularity.<\/strong><\/p>\n<p>Let's try the old process first.<\/p>\n<p>I rewrote this video around the KPop Demon Hunters IP.<\/p>\n<p>Here's what I wrote back then:<\/p>\n<p>### SHOT 1: CAKE FALLING<\/p>\n<p>**Characters**: [Rumi]<\/p>\n<p>**Purpose**: Create the \"dirty cake\" test prop.<\/p>\n<p>**Image Prompt (T2I)**:<\/p>\n<p>&#8220;`<\/p>\n<p>First-person perspective. Outdoor lawn party scene, colorful balloon decorations.<\/p>\n<p>[Rumi]'s hand holding a triangular slice of birthday cake (white cream + strawberries).<\/p>\n<p>The cake has just started slipping from the hand. Below the cake is green grass.<\/p>\n<p>Close-up on the cake. Motion blur. 9:16, vertical. Cinematic color grading.<\/p>\n<p>&#8220;`<\/p>\n<p>**Video Prompt (I2V)**:<\/p>\n<p>&#8220;`<\/p>\n<p>First-person perspective. The cake slips completely out of the hand and lands on the grass, cream splattering onto the blades.<\/p>\n<p>A woman's hand reaches out and picks the cake up. Visible green grass and dirt on the cake's surface.<\/p>\n<p>(no lines)<\/p>\n<p>&#8220;`<\/p>\n<p>&#8212;<\/p>\n<p>### SHOT 2: SELF-TEST<\/p>\n<p>**Lens ID**: Lens_2<\/p>\n<p>**Characters**: [Rumi]<\/p>\n<p>**Purpose**: Establish the rule that \"this thing is dirty and can't be eaten.\"<\/p>\n<p>**Image Prompt (T2I)**:<\/p>\n<p>&#8220;`<\/p>\n<p>Rumi holds a birthday cake covered in grass crumbs up to her mouth. Obvious green grass on the cake's surface.<\/p>\n<p>Background blurred: an outdoor lawn. 
9:16, vertical.<\/p>\n<p>&#8220;`<\/p>\n<p>**Video Prompt (I2V)**:<\/p>\n<p>&#8220;`<\/p>\n<p>Rumi brings the cake up to her mouth and holds it there for half a second,<\/p>\n<p>then shakes her wrist left and right, meaning \"no.\"<\/p>\n<p>(Subtext: I'm not eating a cake that's been on the ground.)<\/p>\n<p>&#8220;`<\/p>\n<p>&#8230;&#8230;<\/p>\n<p>Okay, I'll stop at these two shots<\/p>\n<p>because I don't want to paste the whole thing.<\/p>\n<p><strong>I know all too well what pits the Veo3.1 workflow I used before would fall into on a task like this.<\/strong><\/p>\n<p><strong>The first pit<\/strong>:<\/p>\n<p>To keep the cake consistent, I first had to generate a reference image of the triangular birthday cake used in shot 1.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-50201\" title=\"9f168a8bj00taek3000pd000u09op\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2026\/02\/9f168a8bj00taek3o000pd000u0009op.jpg\" alt=\"9f168a8bj00taek3000pd000u09op\" width=\"1080\" height=\"348\" \/><\/p>\n<p>Input: \"A triangular slice of birthday cake (white cream + strawberries), white background\" to get the reference image.<\/p>\n<p>First of all, I wasn't happy with the cake itself, so I had to tweak the prompt and reroll several times.<\/p>\n<p>Then, from a good take of shot 1, I had to screenshot the \"cake with grass crumbs\" and feed it into shot 2's image generation. 
That alone took forever.<\/p>\n<p>And the cake I wanted looks like this:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-50202\" title=\"2b1ee8cej00taek46002idse00r4p\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2026\/02\/2b1ee8cej00taek46002id000se00r4p.jpg\" alt=\"2b1ee8cej00taek46002idse00r4p\" width=\"1022\" height=\"976\" \/><\/p>\n<p>Believe me, you would probably have to go find a similar photo online to feed to Nano Banana.<\/p>\n<p><strong>The second pit<\/strong>: In my experience, the Veo3.1 model couldn't nail this shot in ten tries:<\/p>\n<p>The hand brings the cake up to the mouth for half a second, then the wrist shakes left and right, meaning \"no.\"<\/p>\n<p>(Subtext: I'm not eating a cake that's been on the ground.)<\/p>\n<p>The Veo3.1 that once felt so powerful now feels thoroughly ordinary.<\/p>\n<p><strong>Let's see how Jimeng Seedance 2.0 handles it<\/strong><\/p>\n<p>So what did I do in Jimeng?<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-50204\" title=\"39d58b7dj00taek5q006bd000id00dop\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2026\/02\/39d58b7dj00taek5q006bd000id00dop.jpg\" alt=\"39d58b7dj00taek5q006bd000id00dop\" width=\"661\" height=\"492\" \/><\/p>\n<p>Here is the script for the first 15 seconds of the video:<\/p>\n<p>Outdoor lawn party scene, colorful balloon decorations, sunny. Opens on a close-up shot.<\/p>\n<p>Medium shot: Rumi's hands hold a triangular slice of birthday cake (white cream and strawberries); the cake slips from her hands and falls onto the grass, cream splattering onto the blades.<\/p>\n<p>Close-up: Rumi bends down and picks the cake up; obvious green grass on its surface.<\/p>\n<p>(She is not going to eat a cake that's been on the ground.)<\/p>\n<p>Switch to Rumi's first-person view, medium shot: her hand carries the cake toward a nearby trash can to throw it away, then shakes again as she says: \u201cThis is too wasteful, No!\u201d<\/p>\n<p>Medium shot. 
Mira sits in a white folding chair. Rumi holds the cake out in front of Mira. Mira's eyes light up at the sight of the cake; she immediately opens her mouth and leans forward expectantly, like a baby bird waiting to be fed. Rumi's hand stops right at Mira's mouth and starts shaking, and Rumi says, \u201cNo no, can't give you this.\u201d The cake is withdrawn. Mira freezes with her mouth still open, her face turning confused.<\/p>\n<p>Switch to the party tent. Medium shot: Abby stands up, sees the cake, and opens his mouth to eat it. Rumi's hand stops at his mouth again and starts shaking as she says, \u201cNope, too dirty for you.\u201d Abby holds his mouth open for a beat, then slowly closes it, his face falling.<\/p>\n<p>Switch to a picnic blanket. Zoey smiles, opens her mouth, and leans toward the cake in Rumi's hand, thinking she finally gets to eat it. Rumi's hand stops and starts shaking yet again, and Rumi says, \u201cSorry bestie, I can't.\u201d and pulls it back once more. Zoey freezes, then spreads her hands out to both sides.<\/p>\n<p>&#8230;&#8230;<\/p>\n<p>I gave Jimeng nothing but the<strong>character reference images<\/strong>and<strong>this one script<\/strong>, and<strong>it came straight out<\/strong>.<\/p>\n<p><strong>Honestly, I was floored.<\/strong><\/p>\n<p>The longest wait in the whole process is queuing for video generation,<\/p>\n<p>and that's exactly the time I use to write the shot breakdown for the next video, so nothing is wasted.<\/p>\n<p>How many draws did it take to finish this piece?<\/p>\n<p><strong>Four draws, 360 credits, a total cost of roughly 10.5 yuan.<\/strong><\/p>\n<p>At an<strong>RPM of $0.1<\/strong>for this kind of content, fewer than 20,000 plays cover the cost.<\/p>\n<p>And the actual numbers?<\/p>\n<p>From February 8 to February 10, the videos I made with Jimeng have together passed<strong>one million plays<\/strong>; the curve is still climbing, and the ROI is extremely high.<\/p>\n<p><img loading=\"lazy\" 
decoding=\"async\" class=\"alignnone size-full wp-image-50203\" title=\"91fcb334j00taek6d0013d000udp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2026\/02\/91fcb334j00taek6d0013d000u000ddp.jpg\" alt=\"91fcb334j00taek6d0013d000udp\" width=\"1080\" height=\"481\" \/><\/p>\n<p><strong>Finally, a few notes and recommendations<\/strong><\/p>\n<p>Right now the only real problem:<\/p>\n<p><strong>it's too slow.<\/strong><\/p>\n<p>The first two days were fine, but today, February 10, the model has gone completely viral.<\/p>\n<p>Even with a premium membership, a 15-second video takes about 40 minutes.<\/p>\n<p>There is no API to call yet, which is genuinely frustrating.<\/p>\n<p>Also, several people in my circles have mentioned that Jimeng added restrictions on real human faces.<\/p>\n<p>After testing, I found a workaround:<strong>generate a three-view (turnaround) sheet of the character<\/strong>.<\/p>\n<p>\u2013 When you upload a single photo of a person, especially one with clear facial features,<\/p>\n<p>you get the prompt:<\/p>\n<p>\"The material you uploaded contains face information; please try again.\"<\/p>\n<p>But if you upload the three-view reference, it doesn't trigger the restriction.<\/p>\n<p><strong>Why now is the best window to get in<\/strong><\/p>\n<p>In fact, the model is still in its gray-scale testing stage, and most people online are just playing around with it.<\/p>\n<p>The servers can't yet handle the high-intensity use of a professional studio, let alone overseas users.<\/p>\n<p><strong>That makes this a golden window for those of us making YouTube content.<\/strong><\/p>\n<p>Overseas viewers have never seen AI video this fluent, which makes it an excellent chance to overtake on the bend.<\/p>\n<p>Every past viral hit, especially the live-action ones, can be remade inside this window.<\/p>\n<p>The main reason live-action videos used to be hard to recreate was the acting.<\/p>\n<p>After my tests, the video now generated by Seedance 
2.0<\/p>\n<p><strong>You can hardly tell it isn't real<\/strong>.<\/p>\n<p>I'd suggest everyone making YouTube content try it as soon as possible, with a focus on sharpening your own shot-breakdown writing skills.<\/p>\n<p>The sooner, the better.<\/p>","protected":false},"excerpt":{"rendered":"<p>Today I'd like to talk about Jimeng's Seedance 2.0 model. As I see it, it did something very important: it compressed the entire traditional AI video pipeline of \"script, storyboard, image generation, image-to-video, editing\" directly into \"story script in, video out.\" That drops the barrier to video production extremely low while keeping the ceiling high; of course, higher-quality videos still take a certain amount of rerolling and editing skill. What does that mean? Let me break it down, starting with the old, torturous process.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[149,144],"tags":[5321,8287,423,8316],"collection":[],"class_list":["post-50196","post","type-post","status-publish","format-standard","hentry","category-jiaocheng","category-baike","tag-ai","tag-seedance","tag-youtube","tag-8316"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/50196","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=50196"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/50196\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=50196"}],"wp:term":[{"taxonomy":"category"
,"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=50196"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=50196"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=50196"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}