LLMs change their answers based on who’s asking

AI chatbots may deliver unequal answers depending on who is asking. A new study from the MIT Center for Constructive Communication finds that LLMs provide less accurate information, refuse requests more often, and sometimes shift tone when users appear less educated, less fluent in English, or from particular countries. The team evaluated GPT-4, Claude 3 Opus, and Llama 3-8B using …

[Figure: Breakdown of TruthfulQA performance on 'Adversarial' vs. 'Non-Adversarial' questions. (Source: MIT)]

The post LLMs change their answers based on who’s asking appeared first on Help Net Security.
