Miranda Bogen: Revolutionizing AI Governance through Innovative Solutions


To shine a spotlight on deserving women academics and others in the field, TechCrunch is launching a series of interviews focused on the remarkable women revolutionizing AI. As the AI boom continues, we will publish several pieces throughout the year, highlighting the often unrecognized but critical work done by these individuals. Read more profiles here.

Miranda Bogen is the founding director of the Center for Democracy & Technology’s AI Governance Lab, where she works to create effective solutions for regulating and governing AI systems. Previously, she guided responsible AI strategies at Meta and served as a senior policy analyst at Upturn, an organization focused on using technology to promote equity and justice.

Q: Briefly, how did you get your start in AI? What attracted you to the field?

A: “I was drawn to work on machine learning and AI by seeing the way these technologies were colliding with fundamental conversations about society — values, rights, and which communities get left behind. My early work exploring the intersection of AI and civil rights reinforced for me that AI systems are far more than technical artifacts; they are systems that both shape and are shaped by their interaction with people, bureaucracies, and policies. I’ve always been adept at translating between technical and non-technical contexts, and I was energized by the opportunity to help break through the appearance of technical complexity to help communities with different kinds of expertise shape the way AI is built from the ground up.”

Q: What work are you most proud of (in the AI field)?

A: “When I first started working in this space, many people needed to be convinced that AI systems could result in discriminatory impact for marginalized populations, let alone that anything needed to be done about those harms. While there is still much progress to be made, I’m proud of the research my collaborators and I conducted on discrimination in personalized online advertising, as well as my work within the industry on algorithmic fairness. Our efforts helped lead to meaningful changes in Meta’s ad delivery system and progress toward reducing disparities in access to important economic opportunities.”

Q: How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

A: “I’ve been fortunate to work with phenomenal colleagues and teams who have been generous with both opportunities and sincere support. In my most recent career transition, I was delighted that nearly all of my options involved working on teams or within organizations led by phenomenal women. I hope the field continues to lift up the voices of those who haven’t traditionally been centered in technology-oriented conversations.”

Q: What advice would you give to women seeking to enter the AI field?

A: “The same advice I give to anyone who asks: find supportive managers, advisors, and teams who energize and inspire you, who value your opinion and perspective, and who put themselves on the line to stand up for you and your work.”

Q: What are some of the most pressing issues facing AI as it evolves?

A: “The impacts and harms AI systems are already having on people are well-known at this point, and one of the biggest pressing challenges is moving beyond describing the problem to developing robust approaches for systematically addressing those harms and incentivizing their adoption. We launched the AI Governance Lab at CDT to drive progress in both directions.”

Q: What are some issues AI users should be aware of?

A: “For the most part, AI systems are still missing seat belts, airbags, and traffic signs, so proceed with caution before using them for consequential tasks.”

Q: What is the best way to responsibly build AI?

A: “The best way to responsibly build AI is with humility. Consider how the success of the AI system you are working on has been defined, who that definition serves, and what context may be missing. Think about for whom the system might fail and what will happen if it does. And build systems not just with the people who will use them, but with the communities who will be subject to them.”

Q: How can investors better push for responsible AI?

A: “Investors need to create room for technology builders to move more deliberately before rushing half-baked technologies to market. The intense competitive pressure to release the newest, biggest, and shiniest AI models is leading to concerning underinvestment in responsible practices. While uninhibited innovation sings a tempting siren song, it is a mirage that will leave everyone worse off.”

“AI is not magic; it’s just a mirror that is being held up to society. If we want it to reflect something different, we’ve got work to do.”

Zara Khan

Zara Khan is a seasoned investigative journalist with a focus on social justice issues. She has won numerous awards for her groundbreaking reporting and has a reputation for fearlessly exposing wrongdoing.
