
Exploring AI: Combating Racial Bias in Image-Generating Technology

This week in AI, Google paused its AI chatbot Gemini’s ability to generate images of people after some users complained about historical inaccuracies. Google’s cautious handling of race-based prompts in Gemini didn’t avoid the issue so much as disingenuously conceal the worst of the model’s biases. Yes, the data sets used to train image generators generally contain more white people than Black people, and yes, the images of Black people in those data sets reinforce negative stereotypes. That’s why image generators sexualize certain women of color, depict white men in positions of authority and generally favor wealthy Western perspectives. Whether vendors tackle their models’ biases or choose not to, they’ll be criticized.

The Rise of Tech’s Diversity and Inclusion Backlash

The responses to Musk’s tweet are split between the two factions that have emerged within venture capital in recent years: those who support diversity, equity and inclusion (DEI) efforts, and those who do not. Wealthy power players like Peter Thiel and Elon Musk have been outspoken against the premise of DEI, and their views have been shared and spread widely throughout the ecosystem. “DEI must DIE,” Musk tweeted. DEI received a lot of support after the murder of George Floyd in 2020, but that support has waned over the past few years. In a sense, those critics were right, and the decreased support for DEI in business and tech has created ripple effects.