
AI Models Have Conflicting Opinions on Contentious Topics, Study Reveals

Robot Blinders
Not all generative AI models are created equal, particularly when it comes to how they treat polarizing subject matter. The researchers behind a recent study found that models tended to answer questions about contentious topics inconsistently, which they say reflects biases embedded in the data used to train the models. "Our research shows significant variation in the values conveyed by model responses, depending on culture and language," the researchers note. Text-analyzing models, like all generative AI models, are statistical probability machines. Instrumental to a model's training data are annotations: labels that enable the model to associate specific concepts with specific data. Other studies have examined the deeply ingrained political, racial, ethnic, gender and ableist biases in generative AI models, many of which cut across languages, countries and dialects.

Exploring AI: Combatting Racial Bias in Image Generating Technology

This week in AI, Google paused its AI chatbot Gemini's ability to generate images of people after a segment of users complained about historical inaccuracies. Google's cautious handling of race-based prompts in Gemini didn't avoid the issue so much as disingenuously attempt to conceal the worst of the model's biases. Yes, the data sets used to train image generators generally contain more white people than Black people, and yes, the images of Black people in those data sets reinforce negative stereotypes. That's why image generators sexualize certain women of color, depict white men in positions of authority and generally favor wealthy Western perspectives. Vendors are in a bind: whether they tackle, or choose not to tackle, their models' biases, they'll be criticized.