{"id":31154,"date":"2025-03-20T14:22:23","date_gmt":"2025-03-20T06:22:23","guid":{"rendered":"https:\/\/www.1ai.net\/?p=31154"},"modified":"2025-03-20T14:22:23","modified_gmt":"2025-03-20T06:22:23","slug":"hugging-face-%e6%8e%a8%e5%87%ba%e6%9c%ac%e5%9c%b0-ai-%e5%8a%a9%e6%89%8b-huggingsnap%ef%bc%8c%e5%ae%9e%e7%8e%b0%e6%89%8b%e6%9c%ba%e7%ab%af%e5%8d%b3%e6%97%b6%e8%a7%86%e8%a7%89%e8%a7%a3%e6%9e%90","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/31154.html","title":{"rendered":"Hugging Face Launches HuggingSnap, a Native AI Assistant for Instant Visual Parsing on Mobile"},"content":{"rendered":"<p>March 20 news: <a href=\"https:\/\/www.1ai.net\/en\/tag\/hugging-face\" title=\"See articles tagged Hugging Face\" target=\"_blank\">Hugging Face<\/a> has released its latest iOS app, <a href=\"https:\/\/www.1ai.net\/en\/tag\/huggingsnap\" title=\"See articles tagged HuggingSnap\" target=\"_blank\">HuggingSnap<\/a>. <strong>Users can ask the AI to generate visual descriptions directly on the device, without relying on cloud servers.<\/strong><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-31155\" title=\"f21e2797j00stescc00j5d000v900hkp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/03\/f21e2797j00stescc00j5d000v900hkp.jpg\" alt=\"f21e2797j00stescc00j5d000v900hkp\" width=\"1125\" height=\"632\" \/><\/p>\n<p>The app is built on the lightweight multimodal model smolVLM2 (with parameter scales of 256 million to 2.2 billion), which performs all computation locally, avoiding data uploads to the cloud and ensuring privacy and security.<\/p>\n<p>Optimized for mobile devices, smolVLM2 handles visual tasks (e.g., image\/video analysis) efficiently, but is slightly less accurate than large cloud models (e.g., GPT-4o, Gemini).<\/p>\n<p><a href=\"https:\/\/weibo.com\/tv\/show\/1034:5146227881476180?mid=5146227953369702\" target=\"_blank\" rel=\"noopener\">Watch the demo video<\/a><\/p>\n<p>The small model (256 million parameters) suits basic tasks, while the large model (2.2 billion parameters) delivers more accurate parsing but may increase device heat and power consumption.<\/p>\n<p>Users can get instant descriptions of complex scenes (e.g., street-view parsing), recognize multilingual text (e.g., translating road signs while traveling), or assist visually impaired people in navigating independently.<\/p>\n<p>Hugging Face emphasizes \"privacy by design\" and states explicitly that user data is stored only locally on the device and is never shared with third parties.<\/p>","protected":false},"excerpt":{"rendered":"<p>March 20 news: Hugging Face has launched its latest iOS app, HuggingSnap, which lets users ask the AI to generate visual descriptions directly on the device without relying on cloud servers. The app is based on the lightweight multimodal model smolVLM2 (with parameter sizes ranging from 256 million to 2.2 billion), which performs all computation locally and avoids uploading data to the cloud, ensuring privacy. Optimized for mobile devices, smolVLM2 can efficiently handle visual tasks (e.g., image\/video analysis), but is slightly less accurate than larger cloud models (e.g., GPT-4o, Gemini). 
The small model (256 million parameters) is suitable for basic tasks, while the large model (2.2 billion parameters) provides more accuracy.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[384,6021],"collection":[],"class_list":["post-31154","post","type-post","status-publish","format-standard","hentry","category-news","tag-hugging-face","tag-huggingsnap"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/31154","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=31154"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/31154\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=31154"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=31154"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=31154"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=31154"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}