{"id":36897,"date":"2025-06-06T19:17:55","date_gmt":"2025-06-06T11:17:55","guid":{"rendered":"https:\/\/www.1ai.net\/?p=36897"},"modified":"2025-06-06T19:17:55","modified_gmt":"2025-06-06T11:17:55","slug":"%e5%ad%97%e8%8a%82%e8%b7%b3%e5%8a%a8%e5%8f%91%e5%b8%83%e5%9b%be%e5%83%8f%e7%bc%96%e8%be%91%e6%a8%a1%e5%9e%8b-seededit-3-0%ef%bc%8c%e5%a4%84%e7%90%86%e6%9b%b4%e5%8a%a0%e4%b8%9d%e6%bb%91%e9%ab%98","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/36897.html","title":{"rendered":"ByteDance Releases SeedEdit 3.0, an Image Editing Model for Silky-Smooth and Efficient Processing"},"content":{"rendered":"<p>June 6 News.<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%ad%97%e8%8a%82%e8%b7%b3%e5%8a%a8\" title=\"[View articles tagged with [bytejump]]\" target=\"_blank\" >ByteDance<\/a> The Seed team today announced the release of<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%9b%be%e5%83%8f%e7%bc%96%e8%be%91%e6%a8%a1%e5%9e%8b\" title=\"_Other Organiser\" target=\"_blank\" >Image editing model<\/a> <a href=\"https:\/\/www.1ai.net\/en\/tag\/seededit\" title=\"_Other Organiser\" target=\"_blank\" >SeedEdit<\/a> 3.0, is now available for beta testing on the web side of Dream, and the Doubao App will be launched soon.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-36898\" title=\"3138fbe0j00sxfm040014d000v100h0p\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/06\/3138fbe0j00sxfm040014d000v100h0p.jpg\" alt=\"3138fbe0j00sxfm040014d000v100h0p\" width=\"1117\" height=\"612\" \/><\/p>\n<p>The need to rely on AI to fulfill commanded image editing is widely present in visual content creation work. 
However, earlier image editing models were relatively limited in subject &amp; background preservation and in instruction following, so the proportion of usable edited images was low.<\/p>\n<p>According to ByteDance's official introduction, SeedEdit 3.0 is built on the text-to-image model Seedream 3.0, combined with diverse data fusion methods and task-specific reward models, which better addresses these difficulties. Its ability to preserve image subjects, backgrounds, and details is further improved, with particularly strong performance in scenarios such as portrait editing, background replacement, and perspective and lighting changes.<\/p>\n<p>The model can process and generate 4K images, editing the target region finely and naturally while preserving the rest of the image with high fidelity. In particular, the model shows a better grasp of the trade-off between what to change and what to keep in image editing, yielding a higher proportion of usable results. When asked to remove a group of pedestrians from a picture, the model not only accurately recognizes and removes the irrelevant figures in the scene but also removes their shadows.<\/p>\n<p>In the task of converting 2D paintings into realistic model shots, SeedEdit 3.0 preserves details such as the characters' clothes, hats, and handbags, and the resulting images have a fashionable street style.<\/p>\n<p>The model also handles light and shadow changes throughout a scene smoothly and naturally. 
From the houses in the foreground to the ripples on the sea in the distance, details are reasonably preserved, and rendering adjustments follow the lighting changes at the \"pixel level.\"<\/p>\n<p>To realize these capabilities, the team behind SeedEdit 3.0 proposed <strong>efficient data fusion strategies<\/strong> and constructed <strong>multiple specialized reward models<\/strong>.<\/p>\n<p>By co-training these reward models with the diffusion model, the team achieved targeted improvements in editing quality on critical tasks (e.g., face alignment and text rendering). For deployment, the team also optimized the model for inference acceleration.<\/p>\n<p>ByteDance said that in addition to further optimizing editing performance, the team will explore richer editing operations in the future, giving the model capabilities such as consecutive multi-image generation, multi-image composition, and story-driven content generation.<\/p>\n<p data-vmark=\"4658\">1AI attaches the relevant links below:<\/p>\n<ul class=\"custom_reference list-paddingleft-1\">\n<li class=\"list-undefined list-reference-paddingleft\">\n<p data-vmark=\"a3c5\">Project home page:<span class=\"link-text-start-with-http\">https:\/\/seed.bytedance.com\/seededit<\/span><\/p>\n<\/li>\n<li class=\"list-undefined list-reference-paddingleft\">\n<p data-vmark=\"b782\">Technical report:<span class=\"link-text-start-with-http\">https:\/\/arxiv.org\/pdf\/2506.05083<\/span><\/p>\n<\/li>\n<\/ul>","protected":false},"excerpt":{"rendered":"<p>On June 6, the ByteDance Seed team announced the release of the image editing model SeedEdit 3.0, which is now in beta testing on the Jimeng web client, with the Doubao app version coming soon. The need for AI-driven, instruction-based image editing is widespread in visual content creation work. 
Previously, however, image editing models were relatively limited in subject &amp; background preservation and in instruction following, resulting in few usable edited images. According to ByteDance's official introduction, SeedEdit 3.0, built on the text-to-image model Seedream 3.0 and combined with diverse data fusion methods and specific reward models, better addresses these difficulties. Its ability to preserve image subjects, backgrounds, and details is further improved, in particular<\/p>","protected":false,"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[4914,5098,548],"collection":[],"class_list":["post-36897","post","type-post","status-publish","format-standard","hentry","category-news","tag-seededit","tag-5098","tag-548"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/36897","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=36897"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/36897\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=36897"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=36897"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=36897"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=36897"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}