{"id":32620,"date":"2025-04-08T12:47:18","date_gmt":"2025-04-08T04:47:18","guid":{"rendered":"https:\/\/www.1ai.net\/?p=32620"},"modified":"2025-04-08T12:47:18","modified_gmt":"2025-04-08T04:47:18","slug":"%e8%b0%b7%e6%ad%8c-ai-%e6%a8%a1%e5%bc%8f%e6%96%b0%e5%a2%9e%e5%a4%9a%e6%a8%a1%e6%80%81%e6%90%9c%e7%b4%a2%ef%bc%8c%e6%94%af%e6%8c%81%e5%9b%be%e5%83%8f%e6%8f%90%e9%97%ae%e5%8a%9f%e8%83%bd","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/32620.html","title":{"rendered":"Google AI Mode Adds Multimodal Search with Image Question Support"},"content":{"rendered":"<p>April 8 news: <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%b0%b7%e6%ad%8c\" title=\"[View articles tagged with [Google]]\" target=\"_blank\" >Google<\/a> is adding <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%a4%9a%e6%a8%a1%e6%80%81%e6%90%9c%e7%b4%a2\" title=\"[See articles with [multi-mode search] labels]\" target=\"_blank\" >multimodal search<\/a> to its Google Search experiment, \"AI Mode\". AI Mode lets users ask complex, multi-part questions and explore topics in depth through follow-up questions. Starting today, users with AI Mode access can tap the feature to ask questions about photos they have uploaded or taken with their camera.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-32621\" title=\"6d54a627j00sudum00094d000v900hkp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/04\/6d54a627j00sudum00094d000v900hkp.jpg\" alt=\"6d54a627j00sudum00094d000v900hkp\" width=\"1125\" height=\"632\" \/><\/p>\n<p>1AI notes that Google said in a blog post this past Monday that the new image analysis feature in AI Mode is powered by the multimodal capabilities of Google Lens. According to Google, <strong>AI Mode can understand the entire scene in an image, including the relationships between objects as well as their materials, colors, shapes, and arrangements<\/strong>. 
Using a \"query fan-out\" technique, AI Mode issues multiple queries about the image as a whole and about the objects within it, surfacing more detailed information than a traditional Google search.<\/p>\n<p>For example, a user could take a photo of their bookshelf and ask, \"If I love these books, what are some similar and highly rated books?\" AI Mode will recognize each book and return a list of recommendations with links to further information or purchase pages.<\/p>\n<p>Users can also ask follow-up questions to narrow the results, such as, \"I'm looking for a quick read; which of these recommendations is the shortest?\"<\/p>\n<p>Google says it will open AI Mode to the millions of users enrolled in Labs, the platform Google uses for experimental features and products. Previously, AI Mode was available only to Google One AI Premium subscribers.<\/p>\n<p>Google Search AI Mode launched last month to compete with popular services such as Perplexity and OpenAI's ChatGPT search. Google says it plans to keep refining the feature's user experience and expanding its capabilities.<\/p>","protected":false},"excerpt":{"rendered":"<p>On April 8, Google is introducing multimodal search to its Google Search experiment \"AI Mode\". AI Mode allows users to ask complex, multi-part questions and to explore related topics in depth through follow-ups. Starting today, users with AI Mode access can use the feature to ask questions about photos they upload or take with their camera. 1AI notes that Google said in a blog post this Monday that the new image analysis feature in AI Mode is powered by the multimodal capabilities of Google Lens. According to Google, AI Mode understands the entire scene in an image, including the relationships between objects as well as their materials, colors, shapes, and arrangements. 
By using \"query fan\"<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[6213,281],"collection":[],"class_list":["post-32620","post","type-post","status-publish","format-standard","hentry","category-news","tag-6213","tag-281"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/32620","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=32620"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/32620\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=32620"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=32620"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=32620"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=32620"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}