{"id":3026,"date":"2024-01-19T11:05:24","date_gmt":"2024-01-19T03:05:24","guid":{"rendered":"https:\/\/www.1ai.net\/?p=3026"},"modified":"2024-01-19T11:05:24","modified_gmt":"2024-01-19T03:05:24","slug":"%e8%8b%b1%e4%bc%9f%e8%be%be%e5%8f%91%e5%b8%83chatqa%e6%a8%a1%e5%9e%8b-%e6%80%a7%e8%83%bd%e8%be%be%e5%88%b0gpt-4%e7%ba%a7%e5%88%ab","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/3026.html","title":{"rendered":"Nvidia releases ChatQA model with GPT-4 performance"},"content":{"rendered":"<p><a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%8b%b1%e4%bc%9f%e8%be%be\" title=\"Look at the article with the label\" target=\"_blank\" >Nvidia<\/a>Launched<a href=\"https:\/\/www.1ai.net\/en\/tag\/chatqa\" title=\"[See articles with [ChatQA] label]\" target=\"_blank\" >ChatQA<\/a>It is said that the performance of the model can be compared to the Biao<a href=\"https:\/\/www.1ai.net\/en\/tag\/gpt-4\" title=\"[SEE ARTICLES WITH [GPT-4] LABELS]\" target=\"_blank\" >GPT-4<\/a>, using efficient training methods such as two-stage instruction tuning and improved contextual retrieval.<\/p>\n<p class=\"article-content__img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-3027\" title=\"201811151633429961_46\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/01\/201811151633429961_46.jpg\" alt=\"201811151633429961_46\" width=\"600\" height=\"399\" \/><\/p>\n<p>ChatQA is a set of conversational Question and Answer (QA) models that can achieve GPT-4 level accuracy. Specifically, the development team proposes a two-stage instruction tuning approach that significantly improves zero-sample conversational QA results for large language models (LLMs).<\/p>\n<p>In order to handle retrieval in conversational QA, a dense searcher was fine-tuned on a multi-round QA dataset, which provides a different approach than using the<span class=\"spamTxt\">First<\/span>into the query rewriting model equivalent results, while significantly reducing deployment costs. Notably, ChatQA-70B outperforms GPT-4 in terms of average scores on 10 conversational QA datasets (54.14 vs. 53.90) without relying on any synthetic data from OpenAI GPT models.<\/p>","protected":false},"excerpt":{"rendered":"<p>NVIDIA has introduced the ChatQA model, which is said to perform against Biao GPT-4, using efficient training methods such as two-stage instruction tuning and improved context retrieval. ChatQA is a set of conversational question-and-answer (QA) models that can achieve GPT-4 level accuracy. Specifically, the development team proposes a two-stage instruction tuning approach that significantly improves zero-sample conversational QA results for large language models (LLMs). To handle retrieval in conversational QA, dense retrievers are fine-tuned on a multi-round QA dataset, which provides results comparable to using state-of-the-art query rewriting models while significantly reducing deployment costs. It is worth noting that ChatQA-70B provides a good result on 10 conversational QA datasets (54.14 vs. 
To handle retrieval in conversational QA, a dense retriever was fine-tuned on a multi-turn QA dataset. This yields results comparable to using a state-of-the-art query rewriting model while significantly reducing deployment costs. Notably, ChatQA-70B outperforms GPT-4 in average score across 10 conversational QA datasets (54.14 vs. 53.90) without relying on any synthetic data from OpenAI GPT models.
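As a rough illustration of the dense-retrieval alternative to query rewriting described above, the minimal sketch below encodes the whole multi-turn dialogue as the retrieval query and ranks candidate passages by cosine similarity. It assumes the open sentence-transformers library and the public all-MiniLM-L6-v2 model; ChatQA's own fine-tuned retriever and training data are not reproduced here, and the example dialogue and passages are made up.

```python
# Minimal sketch of conversational dense retrieval (not ChatQA's actual retriever).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Candidate passages the answer should be grounded in (illustrative).
passages = [
    "ChatQA-70B averages 54.14 across ten conversational QA benchmarks.",
    "Dense retrievers encode queries and passages into the same vector space.",
    "Query rewriting reformulates the latest turn into a standalone question.",
]

# Instead of rewriting the last turn into a standalone query, the multi-turn
# dialogue history is concatenated and encoded directly as the retrieval query.
dialogue = [
    "User: What is ChatQA?",
    "Assistant: A family of conversational QA models from Nvidia.",
    "User: How does it handle retrieval?",
]
query = " ".join(dialogue)

query_emb = model.encode(query, convert_to_tensor=True)
passage_embs = model.encode(passages, convert_to_tensor=True)

# Rank passages by cosine similarity and keep the best match.
scores = util.cos_sim(query_emb, passage_embs)[0]
best = int(scores.argmax())
print(f"top passage ({scores[best]:.3f}): {passages[best]}")
```

The appeal over query rewriting is operational: no separate rewriting model has to be called at inference time, which is where the reported reduction in deployment cost comes from.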