{"id":6925,"date":"2024-04-02T09:14:27","date_gmt":"2024-04-02T01:14:27","guid":{"rendered":"https:\/\/www.1ai.net\/?p=6925"},"modified":"2024-04-02T09:14:27","modified_gmt":"2024-04-02T01:14:27","slug":"openai-%e4%b8%ba-dall-e-3-%e5%bc%95%e5%85%a5%e7%bc%96%e8%be%91%e5%8a%9f%e8%83%bd%ef%bc%9a%e8%bf%9b%e4%b8%80%e6%ad%a5%e7%b2%be%e7%bb%86%e5%8c%96%e8%b0%83%e6%95%b4%e5%b7%b2%e7%94%9f%e6%88%90%e5%9b%be","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/6925.html","title":{"rendered":"OpenAI introduces editing capabilities to DALL-E 3: further refinement of generated images"},"content":{"rendered":"<p data-vmark=\"c859\"><a href=\"https:\/\/www.1ai.net\/en\/tag\/openai\" title=\"View articles tagged with [OpenAI]\" target=\"_blank\" >OpenAI<\/a> recently announced a new editing interface for <a href=\"https:\/\/www.1ai.net\/en\/tag\/dall-e\" title=\"View articles tagged with [DALL-E]\" target=\"_blank\" >DALL-E<\/a> 3: after an image has been generated from the user's text prompt,<strong> it can be further refined based on the user's description.<\/strong><\/p>\n<p data-vmark=\"512d\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-6926\" title=\"d5a79ed6-b383-434b-80c1-cebf1d4fecc5\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/04\/d5a79ed6-b383-434b-80c1-cebf1d4fecc5.jpg\" alt=\"d5a79ed6-b383-434b-80c1-cebf1d4fecc5\" width=\"1264\" height=\"920\" \/><\/p>\n<h2 data-vmark=\"b432\">The DALL-E editor provides two main editing methods:<\/h2>\n<h3 data-vmark=\"1aab\">Selection-based editing:<\/h3>\n<p data-vmark=\"b7e4\">After DALL-E 3 generates an image, the user can select a specific area of the generated image and then enter a prompt in the chat interface asking DALL-E 3 to make fine adjustments to it.<\/p>\n<h3 data-vmark=\"c2cd\">Conversational editing:<\/h3>\n<p data-vmark=\"91b1\">After DALL-E 3 generates an image, users can directly describe their edits in the chat window without 
selecting a specific area. This method is suitable for editing and adjusting the entire image.<\/p>\n<p data-vmark=\"3e44\">OpenAI said that the editor further refines the image-generation process, allowing users to fine-tune the details of generated images, and that the ability to edit DALL-E output opens up a range of applications, such as:<\/p>\n<ul class=\"list-paddingleft-2\">\n<li>\n<p data-vmark=\"b06a\">Increasing the precision or realism of specific elements in an image.<\/p>\n<\/li>\n<li>\n<p data-vmark=\"d20a\">Introducing new visual elements into existing images.<\/p>\n<\/li>\n<li>\n<p data-vmark=\"6ea2\">Modifying the style of a generated image.<\/p>\n<\/li>\n<\/ul>","protected":false},"excerpt":{"rendered":"<p>OpenAI has announced a new editing interface for DALL-E 3 that lets users generate images from text and then continue to fine-tune them based on their descriptions. The DALL-E editor offers two main editing methods: Selection-based editing: After DALL-E 3 generates an image, the user can select specific areas of the generated image and then enter prompts in the chat interface to ask DALL-E 3 to fine-tune it. Conversational editing: After DALL-E 3 generates an image, the user can describe their edits directly in the chat window without selecting a specific area; this method is suitable for editing and adjusting the entire image. 
OpenAI says that by introducing<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[2012,190],"collection":[],"class_list":["post-6925","post","type-post","status-publish","format-standard","hentry","category-news","tag-dall-e","tag-openai"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/6925","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=6925"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/6925\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=6925"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=6925"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=6925"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=6925"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}