AI Models Have Conflicting Opinions on Contentious Topics, Study Reveals

Robot Blinders
Not all generative AI models are created equal, particularly when it comes to how they treat polarizing subject matter.

The researchers behind the study found that the models tended to answer questions about contentious topics inconsistently, which they say reflects biases embedded in the data used to train them. "Our research shows significant variation in the values conveyed by model responses, depending on culture and language," the researchers said.

Text-analyzing models, like all generative AI models, are statistical probability machines. Instrumental to an AI model's training data are annotations, or labels that enable the model to associate specific concepts with specific data.

Other studies have examined the deeply ingrained political, racial, ethnic, gender and ableist biases in generative AI models, many of which cut across languages, countries and dialects.
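
To make the idea of annotations a bit more concrete, here is a minimal sketch, in Python, of how labeled text might be paired with concepts before training. The examples, labels, and structure below are purely illustrative assumptions, not data or methods from the study itself.

```python
# Hypothetical annotated training data: each text example is paired with a
# human-assigned label, so a model can learn to associate the concept
# (the label) with the data (the text). These examples are made up.
annotated_examples = [
    {"text": "Immigration strengthens the economy.", "label": "supportive"},
    {"text": "Immigration should be restricted.", "label": "opposed"},
    {"text": "The weather was mild yesterday.", "label": "neutral"},
]

# During training, a model sees many (text, label) pairs like these and
# adjusts its internal probabilities so that similar text becomes more
# likely to be associated with the same concept.
for example in annotated_examples:
    print(f"{example['label']:>10}: {example['text']}")
```

Because those labels are applied by people, whatever assumptions the annotators bring, shaped by their culture and language, can end up baked into the model's behavior.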