{"id":23814,"date":"2024-11-26T09:25:51","date_gmt":"2024-11-26T01:25:51","guid":{"rendered":"https:\/\/www.1ai.net\/?p=23814"},"modified":"2024-11-26T09:25:51","modified_gmt":"2024-11-26T01:25:51","slug":"%e4%b8%ad%e5%9b%bd%e9%93%81%e5%a1%94%e5%8f%91%e5%b8%83%e7%bb%8f%e7%ba%ac%e5%a4%a7%e6%a8%a1%e5%9e%8b%ef%bc%9a%e5%8f%af%e6%9c%8d%e5%8a%a1%e4%ba%8e%e5%b1%b1%e6%b0%b4%e6%9e%97%e7%94%b0%e6%b9%96%e8%8d%89","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/23814.html","title":{"rendered":"China Tower Releases Longitude and Latitude Large Model: It Can Serve Spatial Governance Areas Such as Mountains, Waters, Forests, Fields, Lakes, Grasslands, and Sands"},"content":{"rendered":"<p>1AI learned from <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e4%b8%ad%e5%9b%bd%e9%93%81%e5%a1%94\" title=\"View articles tagged China Tower\" target=\"_blank\" >China Tower<\/a> officials that at the China Tower 2024 Science and Technology Innovation Conference, the \"<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e7%bb%8f%e7%ba%ac%e5%a4%a7%e6%a8%a1%e5%9e%8b\" title=\"View articles tagged Longitude and Latitude Large Model\" target=\"_blank\" >Longitude and Latitude Large Model<\/a>\" was officially released. It will serve spatial governance fields such as \"mountains, waters, forests, fields, lakes, grasslands, and sands\".<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-23815\" title=\"6f3e118cj00snjama00ppd000v900nhp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/11\/6f3e118cj00snjama00ppd000v900nhp.jpg\" alt=\"6f3e118cj00snjama00ppd000v900nhp\" width=\"1125\" height=\"845\" \/><\/p>\n<p>The visual large model was built through self-supervised training on 270 million unlabeled medium- and high-vantage-point images and precision tuning on 229,000 labeled medium- and high-vantage-point images, and has 18 billion parameters. Compared with small models, its average target detection precision is <strong>improved by more than 16%<\/strong>, and the average recall rate has
increased by more than 13%, substantially reducing missed detections.<\/p>\n<p>The multimodal large model completes downstream-task precision tuning based on medium- and high-vantage-point visible-light text pairs, infrared text pairs, satellite remote-sensing text pairs, radar text pairs, and open-source image-text pairs. It has 200 billion parameters and supports three functions: target detection, zero-shot open detection, and visual Q&amp;A with reasoning.<\/p>\n<p>Its target detection capability is consistent with that of the visual large model, while zero-shot open detection can <strong>achieve an average target detection accuracy of more than 91% without any labeled samples of the target class for training<\/strong>. Its visual Q&amp;A and reasoning results are relevant and can meet actual business needs. Officials said the large model is ready to be popularized and applied in industries such as emergency response, forestry and grassland, land, water conservancy, environmental protection, and agriculture.<\/p>","protected":false},"excerpt":{"rendered":"<p>1AI learned from China Tower's official Weibo account that at the China Tower 2024 Science and Technology Innovation Conference, the \"Longitude and Latitude Large Model\" was officially released; it will serve spatial governance fields such as \"mountains, waters, forests, fields, lakes, grasslands, and sands\". The visual large model was built through self-supervised training on 270 million unlabeled medium- and high-vantage-point images and precision tuning on 229,000 labeled medium- and high-vantage-point images, and has 18 billion parameters; compared with small models, it improves the average target detection precision by more than 16% and the average recall rate by more than 13%, substantially reducing missed detections.
The multimodal large model completes downstream-task precision tuning based on medium- and high-vantage-point visible-light text pairs, infrared text pairs, satellite remote-sensing text pairs, radar text pairs, and open-source image-text pairs. It has 200 billion parameters and supports target detection and zero-shot open detection.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[5050,5051],"collection":[],"class_list":["post-23814","post","type-post","status-publish","format-standard","hentry","category-news","tag-5050","tag-5051"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/23814","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=23814"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/23814\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=23814"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=23814"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=23814"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=23814"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}