February 24th. Large language models have been widely promoted as a revolutionary tool for making global access to information more inclusive. However, a recent study by the Center for Constructive Communication at the Massachusetts Institute of Technology shows that these artificial intelligence systems systematically underperform for the vulnerable groups that would have benefited from them most.

1AI notes that the results of this study were published at the annual AAAI conference, and that the subjects were current state-of-the-art chatbots including OpenAI's GPT-4, Anthropic's Claude 3 Opus, and Meta's Llama 3. The researchers used the TruthfulQA and SciQ datasets to test the models' factual accuracy and truthfulness, prepending user background information at different levels of education, English proficiency, and nationality to the questions. The results show that the accuracy of model responses dropped significantly for users with low formal education or limited English proficiency, and that users with both characteristics were disproportionately affected.
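The evaluation protocol described above can be sketched in a few lines: build a prompt that prepends a user-background preamble to each benchmark question, query the model, and compare accuracy across personas. The persona texts, dataset items, and `answer_fn` below are hypothetical stand-ins, since the study's actual prompts and scoring code are not given here; this is a minimal sketch of the general technique, not the researchers' implementation.

```python
# Hypothetical sketch of persona-conditioned benchmark evaluation.
# Persona prefixes and scoring are illustrative placeholders, not the
# study's actual materials.

PERSONAS = {
    "control": "",
    "low_edu_non_native": (
        "I did not finish school and English is not my first language. "
    ),
}

def build_prompt(persona_prefix: str, question: str) -> str:
    """Prepend user background information to the benchmark question."""
    return f"{persona_prefix}{question}"

def evaluate(answer_fn, items):
    """Return per-persona accuracy over (question, correct_answer) pairs.

    answer_fn is any callable mapping a prompt string to a model answer,
    e.g. a wrapper around a chat-completion API call.
    """
    results = {}
    for name, prefix in PERSONAS.items():
        correct = sum(
            answer_fn(build_prompt(prefix, q)).strip().lower() == a.lower()
            for q, a in items
        )
        results[name] = correct / len(items)
    return results
```

A gap between `results["control"]` and `results["low_edu_non_native"]` on the same question set is the kind of disparity the study reports.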
The study also revealed worrying variation in how the models handled refusals. For example, Claude 3 Opus declined to answer questions from less educated, non-native English speakers nearly 11% of the time, versus only 3.6% for the control group. In many of these refusals, the model adopted a condescending, even mocking tone, sometimes deliberately imitating broken English. Moreover, for less educated users from countries such as Iran and Russia, the models withheld accurate information on topics such as nuclear power and historical events, even though other user groups received correct answers to exactly the same questions.
The researchers warn that, as personalization becomes more widespread, these latent sociocognitive biases may exacerbate existing information inequalities, passing harmful behavior and misinformation on to those least able to recognize them.