{"id":47465,"date":"2025-12-17T14:16:29","date_gmt":"2025-12-17T06:16:29","guid":{"rendered":"https:\/\/www.1ai.net\/?p=47465"},"modified":"2025-12-17T14:16:29","modified_gmt":"2025-12-17T06:16:29","slug":"openai-%e5%8f%91%e5%b8%83%e6%96%b0%e7%94%9f%e5%9b%be%e6%a8%a1%e5%9e%8b%ef%bc%8cpk-nano-banana","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/47465.html","title":{"rendered":"OpenAI Releases New Image-Generation Model to Rival Nano Banana"},"content":{"rendered":"<p class=\"translation-text-wrapper\" data-ries-data-process=\"114\" data-group-id=\"group-114\">December 17 news: today, <a href=\"https:\/\/www.1ai.net\/en\/tag\/openai\" title=\"[View articles tagged with [OpenAI]]\" target=\"_blank\" >OpenAI<\/a> officially launched its latest image-generation model, GPT-Image-1.5. This is another punch thrown in OpenAI's \u201cRed Alert\u201d program, following GPT-5.2.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-47466\" title=\"882debcej00t7ehdq00amd000v9000ibp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/12\/882debcej00t7ehdq00amd000v900ibp.jpg\" alt=\"882debcej00t7ehdq00amd000v9000ibp\" width=\"1125\" height=\"659\" \/><\/p>\n<p class=\"translation-text-wrapper\" data-ries-data-process=\"115\" data-group-id=\"group-115\"><strong>A quick look at the upgrades: more precise instruction following, more accurate editing, more complete detail retention, and four times faster generation.<\/strong><\/p>\n<p class=\"translation-text-wrapper\" data-ries-data-process=\"116\" data-group-id=\"group-116\">The biggest upgrade in GPT-Image-1.5 is \u201cprecision editing\u201d: lighting, imagery, and character features remain consistent across the input, the output, and subsequent edits.<\/p>\n<p class=\"translation-text-wrapper\" data-ries-data-process=\"117\" data-group-id=\"group-117\">Compared with the first version of the image model, GPT-Image-1.5 is better at following complex and nuanced instructions and maintains the pre-set 
relationship between the elements. Text rendering has also been further enhanced to better handle dense, small-font content.<\/p>\n<p class=\"translation-text-wrapper\" data-ries-data-process=\"118\" data-group-id=\"group-118\">Blogger @Yuchenj_UW says that while he considers GPT-Image-1.5 to be largely at Nano Banana Pro's level of \u201cprofessional\u201d quality, its \u201cIQ\u201d and literacy lag significantly behind Nano Banana Pro, especially on math problems (as well as physics and maze problems).<\/p>\n<p class=\"translation-text-wrapper\" data-ries-data-process=\"119\" data-group-id=\"group-119\">OpenAI's CEO of Applications, Fidji Simo, wrote in the blog post: \u201cHuman thinking is not just about words. In fact, our most innovative ideas often originate from the images, sounds, actions, or patterns in our minds.\u201d<\/p>\n<p class=\"translation-text-wrapper\" data-ries-data-process=\"120\" data-group-id=\"group-120\">She said that <strong>ChatGPT is evolving from a reactive, text-based product into a more intuitive tool that adapts to the task at hand.<\/strong> The shift from pure text to multimedia and dynamic interfaces is an important step in this evolution.<\/p>\n<p class=\"translation-text-wrapper\" data-ries-data-process=\"121\" data-group-id=\"group-121\">OpenAI's plans go further: it will also introduce more visual elements to improve the overall ChatGPT experience. For example, search results will in the future include more images and clearer source attribution.<\/p>","protected":false},"excerpt":{"rendered":"<p>On December 17, OpenAI officially launched its latest image-generation model, GPT-Image-1.5. This is another punch thrown in OpenAI's \u201cRed Alert\u201d program, following GPT-5.2. A quick look at the upgrades: more precise instruction following, more accurate editing, more complete detail retention, and four times faster generation. 
The biggest upgrade in GPT-Image-1.5 is \u201cprecision editing\u201d: lighting, imagery, and character features remain consistent across the input, the output, and subsequent edits. Compared with the first version of the image model, GPT-Image-1.5 is better at following complex and nuanced instructions and maintains the pre-set relationship between the elements. Text rendering has also been further enhanced<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[190],"collection":[],"class_list":["post-47465","post","type-post","status-publish","format-standard","hentry","category-news","tag-openai"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/47465","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=47465"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/47465\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=47465"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=47465"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=47465"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=47465"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}