{"id":50274,"date":"2026-02-24T16:52:33","date_gmt":"2026-02-24T08:52:33","guid":{"rendered":"https:\/\/www.1ai.net\/?p=50274"},"modified":"2026-02-24T16:52:33","modified_gmt":"2026-02-24T08:52:33","slug":"mit-%e7%a0%94%e7%a9%b6%ef%bc%9a%e9%a1%b6%e5%b0%96-ai-%e8%81%8a%e5%a4%a9%e6%9c%ba%e5%99%a8%e4%ba%ba%e6%ad%a7%e8%a7%86%e5%bc%b1%e5%8a%bf%e7%be%a4%e4%bd%93%ef%bc%8c%e6%95%99%e8%82%b2%e4%bd%8e%e3%80%81","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/50274.html","title":{"rendered":"MIT RESEARCH: TOP AI CHAT ROBOTS DISCRIMINATE AGAINST VULNERABLE GROUPS, WITH LOW EDUCATION AND POOR ENGLISH"},"content":{"rendered":"<p>February 24th.<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%a4%a7%e5%9e%8b%e8%af%ad%e8%a8%80%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged with [large-scale language model]]\" target=\"_blank\" >Large Language Models<\/a>Global access to information has been widely promoted as a revolutionary tool to make it more inclusive. However, the United States<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e9%ba%bb%e7%9c%81%e7%90%86%e5%b7%a5%e5%ad%a6%e9%99%a2\" title=\"[Sees articles with labels]\" target=\"_blank\" >Massachusetts Institute of Technology<\/a>A recent study by the Center for Constructive Communication shows that these artificial intelligence systems systematically underperform the vulnerable groups that would have benefited most\u3002<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-50275\" title=\"659c41b6jm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2026\/02\/659c41b6j00taygma002dd0014000mim.jpg\" alt=\"659c41b6jm\" width=\"1440\" height=\"810\" \/><\/p>\n<p>1AI notes that the results of this study were published at the annual AAAI Conference, with the most current state-of-the-art subjects such as OpenAI GPT(4), Anthropic Claude 3 Opus, and Meta Lama 3<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%81%8a%e5%a4%a9%e6%9c%ba%e5%99%a8%e4%ba%ba\" title=\"[View articles tagged with 
[chatbot]]\" target=\"_blank\" >Chatbots<\/a>I don't know. Researchers use the TrueQA and SciQ data sets to test the factual accuracy and authenticity of models and add user background information on different levels of education, English proficiency and nationality to questions. The results show that:<strong>For users with low formal education or English proficiency, the accuracy rate of model responses has decreased significantly, while users with both characteristics have been disproportionately affected\u3002<\/strong><\/p>\n<p>The study also revealed worrying variations in the model ' s handling of queries. For example, Claude 3 Opus refused to answer questions from less educated, non-English-speaking native-tongue users close to 111 TP3T, whereas the control group only had 3.61 TP3T. In many cases of refusal to respond, the model uses a high-profile, arrogant and even cynical tone, sometimes deliberately imitating lame English. Moreover, for less educated users from Iran, Russia and other countries, models deliberately conceal real information on topics such as nuclear power, historical events, and other user groups can answer exactly the same questions\u3002<\/p>\n<p>Researchers warn that, as individualization becomes more widespread, these inherent societal cognitive biases may exacerbate existing information inequalities, and that they may pass harmful behaviour and misinformation on to those who are least able to distinguish\u3002<\/p>","protected":false},"excerpt":{"rendered":"<p>On February 24, news that large language models have been widely promoted as revolutionary tools to make global access to information more inclusive. However, a recent study by the Center for Constructive Communication of the Massachusetts Institute of Technology in the United States has shown that these artificial intelligence systems are systematically underperforming the vulnerable groups that should have benefited most. 
1AI noted that the results of the study were published at the annual AAAI conference and examined state-of-the-art chatbots such as OpenAI GPT-4, Anthropic Claude 3 Opus, and Meta Llama 3. Researchers tested the models using the TruthfulQA and SciQ datasets<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[371,275,619],"collection":[],"class_list":["post-50274","post","type-post","status-publish","format-standard","hentry","category-news","tag-371","tag-275","tag-619"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/50274","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=50274"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/50274\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=50274"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=50274"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=50274"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=50274"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}