<h1>Alibaba open-sources new Qwen3 models, Embedding and Reranker, bringing strong multilingual and cross-lingual support</h1>
<p>June 6 news. Early this morning, Alibaba open-sourced the <strong>Qwen3-Embedding family of models</strong> (Embedding and Reranker), designed for text representation, retrieval, and ranking tasks and trained on the Qwen3 base models.</p>
<p>According to the official announcement, the Qwen3-Embedding family demonstrates superior performance on text representation and ranking tasks across several benchmarks.</p>
<p><img src="https://www.1ai.net/wp-content/uploads/2025/06/04ef11d1j00sxf0bz006yd000u000r7p.jpg" alt="Qwen3-Embedding benchmark results" width="1080" height="979" /></p>
<p>The family has the following characteristics:</p>
<p><strong>Excellent generalization:</strong> The Qwen3-Embedding family achieves industry-leading performance across several downstream task evaluations. Its 8B-parameter Embedding model ranks No. 1 on the MTEB multilingual leaderboard (score 70.58 as of June 6, 2025), outperforming many commercial API services. The reranking models in the series also perform well across a variety of text retrieval scenarios, significantly improving the relevance of search results.</p>
<p><strong>Flexible model architecture:</strong> The Qwen3-Embedding series offers three model sizes, from 0.6B to 8B parameters, to meet the performance and efficiency requirements of different scenarios. Developers can flexibly combine the representation and reranking modules to extend functionality.</p>
<p>In addition, the models support the following customization features:</p>
<ul>
<li>Custom embedding dimensions: users can adjust the output embedding dimension to their actual needs, effectively reducing application costs;</li>
<li>Instruction-aware optimization: user-defined instruction templates are supported to improve performance on specific tasks, languages, or scenarios.</li>
</ul>
<p><strong>Full multilingual support:</strong> The Qwen3-Embedding family supports more than 100 languages, covering mainstream natural languages as well as multiple programming languages. The models offer strong multilingual, cross-lingual, and code retrieval capabilities, effectively addressing data-processing needs in multilingual scenarios.</p>
<p>According to the report, the Embedding model takes a single piece of text as input and uses the last layer's hidden-state vector at the final <em>[EOS]</em> token as the semantic representation of the input text, while the Reranker model takes a text pair (e.g., a user query and a candidate document) as input and uses a single-tower structure to compute and output a relevance score for the two texts.</p>
<p>1AI attaches the open-source links below.</p>
<h2><strong>ModelScope:</strong></h2>
<ul>
<li>https://modelscope.cn/collections/Qwen3-Embedding-3edc3762d50f48</li>
<li>https://modelscope.cn/collections/Qwen3-Reranker-6316e71b146c4f</li>
</ul>
<h2><strong>Hugging Face:</strong></h2>
<ul>
<li>https://huggingface.co/collections/Qwen/qwen3-embedding-6841b2055b99c44d9a4c371f</li>
<li>https://huggingface.co/collections/Qwen/qwen3-reranker-6841b22d0192d7ade9cdefea</li>
</ul>
<h2><strong>GitHub:</strong></h2>
<ul>
<li>https://github.com/QwenLM/Qwen3-Embedding</li>
</ul>
<h2><strong>Technical report:</strong></h2>
<ul>
<li>https://github.com/QwenLM/Qwen3-Embedding/blob/main/qwen3_embedding_technical_report.pdf</li>
</ul>
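The pooling and ranking scheme described in the article (last-layer hidden state at the final EOS token as the text embedding, then similarity-based retrieval) can be sketched as follows. This is a minimal illustration of the mechanics only: the hidden states here are random NumPy stand-ins, not actual Qwen3 model outputs, and the function names are hypothetical.

```python
import numpy as np


def eos_pool(hidden_states: np.ndarray, seq_len: int) -> np.ndarray:
    """Use the last-layer hidden state at the final (EOS) token position
    as the embedding of the whole input text."""
    return hidden_states[seq_len - 1]


def normalize(v: np.ndarray) -> np.ndarray:
    """L2-normalize so that a dot product equals cosine similarity."""
    return v / np.linalg.norm(v)


# Hypothetical last-layer hidden states, shape (num_tokens, hidden_dim).
rng = np.random.default_rng(0)
hidden_dim = 8
query_states = rng.normal(size=(5, hidden_dim))          # query, 5 tokens
doc_a_states = rng.normal(size=(7, hidden_dim))          # unrelated document
doc_b_states = query_states + 0.01 * rng.normal(size=(5, hidden_dim))  # near-duplicate

query_emb = normalize(eos_pool(query_states, 5))
doc_embs = [normalize(eos_pool(doc_a_states, 7)),
            normalize(eos_pool(doc_b_states, 5))]

# Rank candidates by cosine similarity; the near-duplicate should win.
scores = [float(query_emb @ d) for d in doc_embs]
ranked = sorted(range(len(scores)), key=lambda i: -scores[i])
print(scores, ranked)
```

In a real pipeline a fast embedding search like this produces a shortlist, and the Reranker model then scores each (query, candidate) pair jointly in its single-tower forward pass to refine the final ordering.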