Tackling AI Bias: The Mission of Mutale Nkonde’s Nonprofit

To recognize the achievements of women in the AI field, TechCrunch is shining a spotlight on remarkable women who have contributed to the AI revolution. As the AI boom continues, we will publish a series of interviews throughout the year, highlighting the often unrecognized work of these women. Check out more profiles here.

Meet Mutale Nkonde, the founding CEO of the nonprofit organization AI For the People (AFP). Nkonde is on a mission to increase the representation of Black voices in the tech world. Prior to her role at AFP, she played a crucial role in the introduction of the Algorithmic Accountability Act and the Deep Fakes Accountability Act, as well as the No Biometric Barriers to Housing Act, to the US House of Representatives. Currently, she serves as a Visiting Policy Fellow at the Oxford Internet Institute.

“How did you get your start in AI and what drew you to the field?”

I became intrigued by the workings of social media after a friend’s post in 2015. Google Photos had mistakenly labeled a photo of two Black people as gorillas. Others in “Blacks in tech” circles and I were outraged, but it wasn’t until the release of Weapons of Math Destruction in 2016 that I began to understand the issue of algorithmic bias. This inspired me to apply for fellowships where I could study it further. My involvement in co-authoring the report “Advancing Racial Literacy in Tech,” published in 2019, caught the attention of the MacArthur Foundation, launching the current leg of my career.

I was drawn to exploring the intersection of racism and technology because it was under-researched and counterintuitive. As someone who enjoys doing things differently, the opportunity to delve deeper and share this knowledge within Silicon Valley was exciting. Since working on “Advancing Racial Literacy in Tech,” I have founded AI for the People, which focuses on advocating for policies and practices to mitigate algorithmic bias.

“What work are you most proud of in the AI field?”

I am extremely proud of spearheading the Algorithmic Accountability Act, which was first introduced to the House of Representatives in 2019. This established AI for the People as a thought leader in determining protocols for the ethical design, deployment, and governance of AI systems that comply with nondiscrimination laws. As a result, we have been included in the Schumer AI Insight Forums and are part of an advisory group for various federal agencies. There are also exciting projects in the works on Capitol Hill.

“How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?”

Surprisingly, the greatest obstacles I have faced have come from academic gatekeepers. Most of the men I collaborate with in tech companies are developing systems for use on Black and other non-white populations, which makes them more open to working together. As an external expert, I can either validate or challenge their existing practices.

“What advice would you give to women seeking to enter the AI field?”

Find a specialization and become an expert in that area. For me, advocating for policies to reduce algorithmic bias gave me an advantage as academia began to address the issue. This allowed AI for the People to establish itself as an authority on Capitol Hill five years before the executive order. It’s also important to identify and address any knowledge gaps. Now that AI for the People is four years old, I have been pursuing the academic credentials needed to ensure I remain a thought leader. I am currently finishing my master’s at Columbia and will continue researching in this field.

“As AI evolves, what are the most pressing issues?”

I am heavily focused on strategies to involve Black people and other people of color in the development, testing, and annotation of foundational AI models. After all, these technologies are only as good as the data they are trained on. However, at a time when diversity, equity, and inclusion are under attack, Black venture funds are facing discrimination lawsuits, and Black academics are facing public attacks, who will be tasked with this important work in the industry?

For AI users, it is essential to consider the geopolitical implications of AI development. As the United States and China are the primary producers, it is vital for the US to create products that work effectively on diverse populations. China, on the other hand, is developing products within a largely homogenous population, even though they have a significant presence in Africa. With aggressive investments in developing anti-bias technologies, the American tech sector can dominate this market.

“What is the most responsible way to build AI?”

A multi-faceted approach is necessary, but one aspect to consider is conducting research that focuses on marginalized populations. A simple way to do this is to take note of cultural trends and how they influence technological advancements. For instance, questions like “How do we design scalable biometric technologies in a society where more people are identifying as trans or non-binary?” must be asked.

“What can investors do to push for responsible AI?”

They should pay attention to demographic trends and consider whether the companies they are investing in will be able to reach an increasingly diverse population, as birth rates decline among European populations worldwide. This should prompt them to inquire about algorithmic bias during due diligence, as issues of discrimination will increasingly impact consumers.

There is much work to be done to reskill our workforce for a future where AI systems undertake low-stakes labor-saving tasks. How can we ensure that those on the margins of society are included in these programs? What insights can they offer on the workings (and failures) of AI from their experiences, and how can we use this knowledge to truly create AI for the people?

Max Chen

Max Chen is an AI expert and journalist with a focus on the ethical and societal implications of emerging technologies. He has a background in computer science and is known for his clear and concise writing on complex technical topics. He has also written extensively on the potential risks and benefits of AI, and is a frequent speaker on the subject at industry conferences and events.
