{"id":27423,"date":"2025-01-20T15:30:28","date_gmt":"2025-01-20T07:30:28","guid":{"rendered":"https:\/\/www.1ai.net\/?p=27423"},"modified":"2025-01-20T15:30:28","modified_gmt":"2025-01-20T07:30:28","slug":"%e5%95%86%e6%b1%a4%e7%a7%91%e6%8a%80%e3%80%8c%e6%97%a5%e6%97%a5%e6%96%b0%e8%9e%8d%e5%90%88%e5%a4%a7%e6%a8%a1%e5%9e%8b%e4%ba%a4%e4%ba%92%e7%89%88%e3%80%8d%e5%bc%80%e6%94%be%e5%95%86%e7%94%a8%ef%bc%8c","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/27423.html","title":{"rendered":"SenseTime's \"SenseNova Fusion Big Model Interactive Edition\" is open for commercial use and free for a limited time"},"content":{"rendered":"<p>January 20, 2025 - Beijing <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%95%86%e6%b1%a4%e7%a7%91%e6%8a%80\" title=\"View articles tagged with SenseTime\" target=\"_blank\" >SenseTime<\/a> Technology Development Co., Ltd. today announced that its \"SenseNova Fusion Big Model Interactive Edition\" (SenseNova-5o) now officially provides real-time <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e9%9f%b3%e8%a7%86%e9%a2%91%e5%af%b9%e8%af%9d\" title=\"View articles tagged with audio-video dialog\" target=\"_blank\" >audio-video dialog<\/a> services to the public, <strong>free for a limited time<\/strong>.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-27424\" title=\"a8c5afe9j00sqdm5h002vd000lr00c8p\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/01\/a8c5afe9j00sqdm5h002vd000lr00c8p.jpg\" alt=\"a8c5afe9j00sqdm5h002vd000lr00c8p\" width=\"783\" height=\"440\" \/><\/p>\n<p>According to the announcement, the model is the interactive edition of SenseTime's \"SenseNova\" fusion model. It supports <strong>real-time interaction, visual recognition, memory and thinking, continuous dialog, and complex reasoning<\/strong>, helping the AI communicate with humans more naturally and fluently.<\/p>\n<p>SenseTime also provides a supporting Realtime API service for \"SenseNova-5o\", optimized to integrate with RTC networks. Officially, users in any environment can have <strong>stable, smooth, low-latency real-time audio and video conversations<\/strong>.<\/p>\n<p>1AI attaches the highlights of SenseTime's \"SenseNova Fusion Big Model Interactive Edition\" as follows:<\/p>\n<ul>\n<li>Supports ultra-long multimodal interaction memory of <strong>no less than 5 minutes<\/strong><\/li>\n<li>Continuously tracks and accumulates information from interactions with users, constantly refining and optimizing its understanding of user needs<\/li>\n<li>The current interaction latency is reduced to within <strong>2 seconds<\/strong>, which it claims is \"virtually indistinguishable from natural human communication\"<\/li>\n<li>Supports <strong>interrupting at any time, keeping the conversation going, and leading into new topics based on context<\/strong><\/li>\n<li>Supports personalizing the communication style and usage habits (persona, tone of voice, etc.) according to user preferences<\/li>\n<li>Supports helping parents tutor their children's homework<\/li>\n<\/ul>","protected":false},"excerpt":{"rendered":"<p>January 20 news - Beijing SenseTime Technology Development Co., Ltd. today announced that its \"SenseNova Fusion Big Model Interactive Edition\" (SenseNova-5o) now officially provides real-time audio and video dialog services to the public, free of charge for a limited time. According to the announcement, this model is the interactive edition of SenseTime's \"SenseNova\" fusion model, which supports real-time interaction, visual recognition, memory and thinking, continuous dialog, and complex reasoning, helping the AI communicate with humans in a more natural and fluent way. SenseTime also provides a supporting Realtime API service for \"SenseNova-5o\", optimized to integrate with RTC networks. 
Officially, users can have stable, smooth, low-latency real-time audio and video conversations in any environment. 1AI Attachment Business<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[2171,5596],"collection":[],"class_list":["post-27423","post","type-post","status-publish","format-standard","hentry","category-news","tag-2171","tag-5596"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/27423","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=27423"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/27423\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=27423"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=27423"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=27423"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=27423"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}