Women in AI: A Spotlight on Lee Tiedrich, Expert at the Global Partnership on AI


To bring attention to the impressive contributions of women in the AI field, TechCrunch presents a series of interviews featuring these pioneering individuals. Throughout the year, as the AI industry continues to boom, we will highlight the groundbreaking work of these often unsung heroes. Check out more profiles here.

Lee Tiedrich, Global Partnership on AI

Briefly, how did you get your start in AI? What attracted you to the field?

I’ve been at the intersection of technology, law and policy for many years, from the realm of cellular and internet technology to e-commerce and now AI. I am drawn to helping organizations navigate the complex legal challenges that arise with emerging technology, while also maximizing its benefits. My involvement with AI began while working at Covington & Burling LLP, and as it gained widespread attention, I became co-chair of the firm’s global and multidisciplinary Artificial Intelligence Initiative. I now focus on AI governance, compliance, transactions, and government affairs.

What work are you most proud of (in the AI field)?

I am proud of the extensive and diverse projects I have worked on to address the challenges of overseeing AI. This has involved collaboration across multiple disciplines, geographic locations, and cultures in order to develop global solutions. During my time at Covington, I worked closely with clients’ legal, engineering, and business teams on AI governance matters. I am currently a member of both the Organisation for Economic Co-operation and Development (OECD) AI and Global Partnership on AI (GPAI) global expert groups, where I have contributed to various high-stakes multidisciplinary AI issues, such as responsible data and model sharing, climate impact, intellectual property, and privacy. I also co-lead the GPAI Intellectual Property Committee and the Responsible AI Strategy for the Environment (RAISE) Committee. At Duke University, I designed and teach a course that brings together students from different programs to tackle real-world responsible tech challenges alongside the OECD, corporations, and others. It is incredibly rewarding to help cultivate the next generation of AI leaders, equipped to confront the multidisciplinary complexities of AI.

How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?

I have faced male-dominated environments throughout my career, beginning as an electrical engineering student at Duke University, where women were a small minority. I was the 22nd woman elected to the Covington partnership, with a practice focused on technology. My approach to navigating these challenges has been to prioritize producing exceptional and innovative work, and making it widely known. This not only increases demand for my services, but also leads to more opportunities. Building strong relationships within the AI ecosystem is also crucial, as it cultivates important mentors, sponsors, and clients. I encourage women to proactively seek out opportunities to expand their knowledge, profile, and expertise by participating in industry associations and other activities. Lastly, I remind women to invest in themselves, utilizing the many resources and networks available to navigate and advance in the AI field. Setting goals and identifying resources that can help achieve them is essential to success.

What advice would you give to women seeking to enter the AI field?

There are endless opportunities in the AI field across various disciplines such as engineering, data science, law, economics, business, and government affairs. I encourage women to pursue their passion within the AI field, and to become an expert in their chosen focus area. People often excel when they are passionate about their work. Developing and promoting expertise can involve joining professional associations, attending networking events, writing articles, public speaking, or pursuing continuing legal education. With the constantly evolving landscape of AI and its complex issues, there are many avenues for young professionals to become experts in the field, which can lead to numerous opportunities for growth. Women should also actively seek out these opportunities, and utilize their network to do so.

What are some of the most pressing issues facing AI as it evolves?

AI has vast potential to advance global prosperity, security, and social good, including its ability to address urgent challenges such as climate change and achieving the UN Sustainable Development Goals. However, if not developed and used responsibly, AI can also pose safety and other risks to individuals and the environment. It is crucial that we address these pressing issues by developing global frameworks that maximize AI’s benefits while mitigating its risks. This requires collaboration and harmonization across disciplines, as laws and policies need to consider both technological and societal realities. As technology transcends borders, international harmonization is also essential. Standards and other tools are key to advancing this harmonization, especially given the varying legal frameworks across different jurisdictions.

What are some issues AI users should be aware of?

In a recent publication with the OECD, I highlighted the need for a global AI learning campaign for users. Awareness of the benefits and risks of AI is crucial for making informed decisions about whether and how to utilize AI applications. This knowledge empowers users to mitigate risks effectively. Users should also be aware that the AI landscape is increasingly regulated and litigious. Government AI enforcement is expanding, and users may be held liable for harm caused by third-party AI systems they use. To reduce these risks, it is important for AI users to establish proactive AI governance and compliance programs, and to conduct due diligence on third-party AI systems before using them.

What is the best way to responsibly build AI?

Building and deploying AI responsibly requires careful consideration of multiple factors. It starts with publicly embracing and upholding responsible AI values, such as those embodied in the OECD AI Principles. Given the complexities of AI, it is crucial to establish an AI governance framework that fosters collaboration across disciplines, including technical, legal, business, and sustainability teams. The framework should be implemented throughout the entire lifecycle of AI systems and consider important guidance such as the NIST AI Risk Management Framework. Additionally, compliance with relevant laws is necessary. Given the rapid changes in both the legal and technological landscapes of AI, the governance framework should also allow for flexibility and adaptation to new developments.

How can investors better push for responsible AI?

Investors have various ways to promote responsible AI within their portfolio companies. First, they should make responsible AI a priority in their investments. Not only is this the right thing to do, but it is also good for business: demand for responsible AI is increasing, making it profitable for companies. Furthermore, in the highly regulated and litigious world of AI, responsible practices can reduce potential litigation risks and reputational harm caused by poorly designed AI. Investors can also push for responsible AI by exercising oversight as corporate board members. As AI oversight becomes more common for corporate boards, investors should consider implementing oversight mechanisms in their investments. Additionally, even if not included in investment agreements, investors can introduce portfolio companies to potential responsible AI hires or consultants, and support their engagement within the responsible AI community.

Max Chen

Max Chen is an AI expert and journalist with a focus on the ethical and societal implications of emerging technologies. He has a background in computer science and is known for his clear and concise writing on complex technical topics. He has also written extensively on the potential risks and benefits of AI, and is a frequent speaker on the subject at industry conferences and events.
