{"id":43662,"date":"2025-09-20T14:18:42","date_gmt":"2025-09-20T06:18:42","guid":{"rendered":"https:\/\/www.1ai.net\/?p=43662"},"modified":"2025-09-20T14:18:42","modified_gmt":"2025-09-20T06:18:42","slug":"%e9%ab%98%e5%be%b7%e5%ae%a3%e5%b8%83-trafficvlm-%e6%a8%a1%e5%9e%8b%e9%87%8d%e7%a3%85%e5%8d%87%e7%ba%a7%ef%bc%9a%e9%a2%84%e7%9f%a5%e8%b6%85%e8%a7%86%e8%b7%9d%e8%b7%af%e5%86%b5","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/43662.html","title":{"rendered":"Amap announces major TrafficVLM model upgrade: foreseeing road conditions beyond visual range, AI brings an \"eye of the sky\" perspective"},"content":{"rendered":"<p>News on September 20: <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e9%ab%98%e5%be%b7\" title=\"See articles tagged [Amap]\" target=\"_blank\" >Amap<\/a> announced via an official post that its <strong>TrafficVLM (Note: Traffic Visual Language Model)<\/strong> has received a major upgrade, helping users gain a global view of traffic conditions and improving the driving experience\u3002<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-43663\" title=\"9d56a8a3j00t2vito00fkd000u000bzp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/09\/9d56a8a3j00t2vito00fkd000u000bzp.jpg\" alt=\"9d56a8a3j00t2vito00fkd000u000bzp\" width=\"1080\" height=\"431\" \/><\/p>\n<p>According to the introduction, in modern traffic environments drivers often face the challenge of information blind spots: when passing through a complex intersection, they can only see the traffic immediately ahead and <strong>cannot predict which lane 100 meters away is about to become blocked<\/strong>; when travelling on a free-flowing highway, it is difficult to foresee a \u201cphantom jam\u201d ahead triggered by a slight tap on the brakes. These limitations of a local perspective make it difficult for drivers to make optimal decisions. 
The TrafficVLM model has therefore been upgraded to address these difficulties\u3002<\/p>\n<p>The newly upgraded TrafficVLM, built on a spatial intelligence architecture, brings users an \u201ceye of the sky\u201d perspective. It allows users to grasp the global traffic situation and thus make better decisions in complex environments. It reportedly gives every driver facing a congested intersection or highway the ability to \u201csee the whole picture\u201d, <strong>no longer limited to a local view, and thereby a more intuitive picture of the road ahead<\/strong> and a timelier response to potential risks\u3002<\/p>\n<p>For example, suppose a sudden rear-end collision blocks the left lane of the main road 3 km ahead of the user. TrafficVLM learns of this immediately through the real-time traffic digital twin, and <strong>through reasoning identifies the accident point and understands how it will evolve<\/strong>: the congestion will spread rapidly into a three-kilometre-long jam. With TrafficVLM, Amap can push routing advice before the user reaches the congestion point: \u201cAccident three kilometres ahead, with many vehicles merging right; you are advised to merge right in advance and keep the emergency lane clear.\u201d<\/p>\n<p>Through the rapid response of the cloud-edge control system, the system issues observation instructions as soon as congestion occurs, pulls visual data from the scene, performs intelligent analysis based on depth information in the imagery, and accurately reconstructs the spatial structure and traffic state of the congestion point\u3002<\/p>\n<p>This, it is said, means users not only learn directly that \u201ctraffic is jammed ahead\u201d, but also understand why a detour is needed, when to slow down, and the real cause and extent of the congestion. 
This shift from passive reception to active insight frees users from \u201cfeeling their way blindly\u201d through complex road conditions, delivering a <strong>visual, perceptible, and predictable<\/strong> intelligent navigation experience\u3002<\/p>\n<p>The visual language model is built on <strong>the general-purpose Qwen-VL<\/strong> as its base, with reinforcement learning and training completed on Amap's massive corpus of traffic visual data\u3002<\/p>","protected":false},"excerpt":{"rendered":"<p>News on September 20: Amap announced that TrafficVLM (Note: Traffic Visual Language Model) has been upgraded to help users gain a global view of traffic conditions and enhance the driving experience. Reportedly, in modern traffic environments drivers often face the challenge of information blind spots: when passing through complex intersections, they can only see the traffic immediately ahead but cannot predict which lanes 100 metres away are about to be blocked; and when travelling on the highway, it is difficult to anticipate the \u201cphantom jams\u201d triggered by a slight tap on the brakes ahead. These limitations of a local perspective make it difficult for drivers to make optimal decisions. The TrafficVLM model has therefore been upgraded to address these difficulties. 
The newly upgraded TrafficVLM, built on a spatial intelligence architecture, brings users<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[1865,7640],"collection":[],"class_list":["post-43662","post","type-post","status-publish","format-standard","hentry","category-news","tag-1865","tag-7640"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/43662","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=43662"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/43662\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=43662"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=43662"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=43662"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=43662"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}