The Role of Women in Artificial Intelligence: An Interview with Allison Cohen, Developer of Ethical AI Initiatives

Allison Cohen is the senior applied AI projects manager at Mila, a Quebec-based community of more than 1,200 researchers specializing in AI and machine learning. In this interview, she discusses managing a project to build a dataset of subtle and overt expressions of bias against women, why that kind of interdisciplinary work is fundamental to responsible AI (and why it isn't done enough), her advice for women entering the field, and how investors can push for responsible AI.

To highlight and celebrate the achievements of AI-focused women academics, TechCrunch is launching a new series of interviews. The series aims to shine a light on the remarkable women who have contributed to the AI revolution and give them the recognition they deserve. As the AI industry continues to boom, we hope to bring attention to key work that often goes unnoticed and unrecognized. You can read more profiles in this series here.

Allison Cohen: Bridging the Gap Between AI and Society

Allison Cohen serves as the senior applied AI projects manager at Mila, a Quebec-based community of over 1,200 researchers specializing in AI and machine learning. In her role, she works closely with researchers, social scientists, and external partners to deploy AI projects for social good. Cohen’s impressive portfolio includes projects such as a tool that detects misogyny, an app to identify online activity from suspected human trafficking victims, and an agricultural app to promote sustainable farming practices in Rwanda.

Previously, Cohen was a co-lead on AI drug discovery at the Global Partnership on Artificial Intelligence, an organization dedicated to guiding the responsible development and use of AI. She has also worked as an AI strategy consultant at Deloitte and a project consultant at the Center for International Digital Policy, an independent Canadian think tank.


Here, Allison Cohen shares insights into her journey in the AI field and her thoughts on important issues facing AI today.

Briefly, how did you get your start in AI? What attracted you to the field?

“The realization that we could mathematically model everything from recognizing faces to negotiating trade deals changed the way I saw the world, which is what made AI so compelling to me. Ironically, now that I work in AI, I see that we can’t — and in many cases shouldn’t — be capturing these kinds of phenomena with algorithms.

I was introduced to the field while completing a master’s in global affairs at the University of Toronto. The program aimed to educate students on navigating the systems that shape the world order, including macroeconomics, international law, and human psychology. As I learned more about AI, I realized its significance in world politics and felt the need to educate myself on the topic.

What allowed me to break into the field was an essay-writing competition. For the competition, I wrote a creative writing piece describing how psychedelic drugs could help humans stay competitive in a labor market dominated by AI. The piece qualified me to attend the St. Gallen Symposium in 2018, and participating in that event gave me the confidence to continue pursuing my interest in the field.”

What work are you most proud of in the AI field?

“One of the projects I managed involved building a dataset containing instances of subtle and overt expressions of bias against women.

Managing this project required extensive staffing and coordination with a multidisciplinary team of natural language processing experts, linguists, and gender studies specialists. I am quite proud of this work, as it showed me firsthand the critical role of incorporating diverse perspectives and disciplines in building responsible AI applications. It was also well-received by the community, with one of our papers receiving a spotlight recognition at the socially responsible language modeling workshop at one of the leading AI conferences, NeurIPS. Our work also served as inspiration for a similar interdisciplinary process managed by AI Sweden, which was adapted to fit Swedish notions and expressions of misogyny.”

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

“It’s unfortunate that, in such a cutting-edge industry, we are still seeing problematic gender dynamics. This not only affects women negatively but also hinders progress for all of us. One concept that has inspired me in dealing with this issue is “feminist standpoint theory,” which I learned about in Sasha Costanza-Chock’s book, “Design Justice.”

This theory suggests that marginalized communities, with their unique knowledge and experiences, possess a powerful understanding of the world that can bring about inclusive and fair change. While not all marginalized communities are the same, it is essential to include a diverse range of perspectives from these groups in navigating and addressing structural challenges and inequalities. Failure to do so can result in AI remaining exclusionary for an even wider population, reinforcing power dynamics outside of the field as well.

When it comes to navigating a male-dominated industry, I have found allies to be crucial, and those allies grow out of strong, trusting relationships. I have been fortunate to have friends like Peter Kurzwelly, who supported me in creating a female-led and -centered podcast called “The World We’re Building.” The podcast allows us to elevate and showcase the work of even more women and non-binary individuals in the field of AI.”

What advice would you give to women seeking to enter the AI field?

“Find an open door. It doesn’t have to be a paid opportunity, a career path, or even related to your background or experience. If you find an opportunity, give it your all, and it may lead to bigger and better things. Of course, I also acknowledge that there is privilege in being able to volunteer.”

“When I lost my job during the pandemic and unemployment rates were at an all-time high in Canada, very few companies were hiring for AI talent. It was tough for someone like me, with a background in global affairs and only eight months of consulting experience, to land a job. In the process of job hunting, I began volunteering with an AI ethics organization. This led me to connect with individuals who eventually led me to my current role at Mila, which has transformed my life.”

What are some of the most pressing issues facing AI as it evolves?

“I see three main challenges that are interconnected. We need to figure out how to:

  • Scale AI while also adapting to local knowledge and needs
  • Incorporate anthropologists and sociologists into the AI design process
  • Alter the incentives towards designing tools for those in need rather than for profitability

“In order to build AI tools that are adapted to the local context, it is crucial to incorporate diverse perspectives from anthropologists and sociologists into the design process. However, this can be challenging due to a lack of collaboration between disciplines and incentive structures that prioritize profitability over responsible practices. We must address these issues in order to create meaningful solutions that benefit society as a whole.”

What are some issues AI users should be aware of?

“One issue that doesn’t receive enough attention is labor exploitation. Many AI models rely on labeled data, and the people responsible for labeling (annotators) are often subjected to exploitative practices. Even for models that do not require labeled data, datasets can still be built in an exploitative manner, with data creators not receiving proper compensation or credit.

I would recommend looking into the work of Krystal Kauffman, who has been advocating for annotators’ labor rights. She also raises important points about how AI can perpetuate invasive surveillance and the need for respecting fundamental human rights in the development and deployment of AI.”

What is the best way to responsibly build AI?

“Ethical reflection is crucial in building responsible AI, but it’s not enough. We must be mindful of the decisions we make from the earliest stages, including how we define the problem and whose interests it serves, how it supports or challenges existing power dynamics, and its impact on different communities. Responsible AI goes beyond just incorporating ethical principles; it requires thoughtful navigation of complex systems of power.”

How can investors better push for responsible AI?

“One way is to ask about the values of the team behind the AI technology. If their values are influenced by the local community and they are held accountable to that community, it is more likely that they will prioritize responsible practices in their work. Investors can also play a role in creating incentives that prioritize responsible AI over profitability.”

Thank you for reading. I hope this interview has given you valuable insight into the world of AI and the need for responsible practices in its development and use.

Ava Patel

Ava Patel is a cultural critic and commentator with a focus on literature and the arts. She is known for her thought-provoking essays and reviews, and has a talent for bringing new and diverse voices to the forefront of the cultural conversation.

