“AI and Privacy: A Senior Counsel’s Perspective on the Role of Women in the Field”

Rashida Richardson, senior counsel, AI at Mastercard

To Put AI-Focused Women in the Limelight, TechCrunch Introduces a New Interview Series

To give well-deserved recognition to women in academia and other fields who have played a significant role in the AI revolution, TechCrunch is launching a series of exclusive interviews. Throughout the year, as the AI industry continues its rapid growth, we will highlight the exceptional work of these remarkable women in a series of articles. Be sure to read all of the profiles here.

Rashida Richardson: A Senior Counsel at Mastercard Who Specializes in Legal Issues Surrounding AI

Rashida Richardson, currently serving as a senior counsel at Mastercard, is responsible for addressing legal issues related to privacy, data protection, and AI. With an impressive background in the field, Richardson was formerly the director of policy research at the AI Now Institute, a research institute focused on studying the social implications of AI. She has also worked as a senior policy advisor for data and democracy at the White House Office of Science and Technology Policy. Since 2021, she has been serving as an assistant professor of law and political science at Northeastern University, specializing in the intersection of race and emerging technologies.

Briefly, how did you get your start in AI? What attracted you to the field?

I began my career as a civil rights attorney, working on a range of issues including privacy, surveillance, school desegregation, fair housing, and criminal justice reform. While working on these issues, I witnessed the early stages of government adoption of and experimentation with AI-based technologies, and I saw the need for greater oversight and evaluation, which led me to lead policy efforts aimed at addressing these concerns. I also had a healthy skepticism toward claims of AI's efficacy, especially where it was marketed as a solution to complex issues like school desegregation. As I became more aware of the gaps in policy and regulation surrounding AI, I realized that my background and experience could make a meaningful contribution to the field.

What work are you most proud of (in the AI field)?

I am proud to see that the issue of AI is finally receiving more attention from all stakeholders, especially policymakers. In the United States, there is a history of the law struggling to adequately address technology policy issues, and a few years ago it seemed like AI was destined to face the same fate. However, in recent years there has been a significant shift in public discourse and policymakers are now more aware of the urgent need for informed action. I am also happy to see that stakeholders across all sectors, including industry, are beginning to recognize the unique benefits and risks associated with AI and are more open to policy interventions.

How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?

As a Black woman, I am used to being a minority in many spaces, and while the AI and tech industries are extremely homogeneous, they are no different from other fields of immense power and wealth, such as finance and the legal profession. My prior work and lived experiences have prepared me for these challenges, as I am hyper-aware of preconceptions and potential obstacles. I rely on my unique background and perspective to navigate these industries, having worked on AI in various sectors including academia, industry, government, and civil society.

What are some issues AI users should be aware of?

There are two key issues that AI users should be aware of. First, there needs to be a greater understanding of the capabilities and limitations of different AI applications and models. Second, there is a lot of uncertainty surrounding the ability of current laws to effectively address conflicts and concerns related to AI use.

On the first point, there is a significant imbalance in public discourse and understanding regarding the actual capabilities and limitations of AI applications. This is further complicated by the fact that many users do not fully understand the difference between AI applications and models. While the release of ChatGPT and other generative AI systems has raised public awareness, they are different from other types of AI models that have been in use for years, such as recommendation systems. It is important for the public to have a clear understanding of the capabilities and limitations of different AI technologies, as well as the potential risks associated with their use.

On the second point, existing laws and policies surrounding AI are still evolving. While there are laws that already apply to AI use, there is still much uncertainty around how they will be enforced and interpreted. There is also a lack of specific policies and regulations tailored for AI. As a result, there are many areas where legal issues may remain unresolved until there is more litigation and legal precedent is established.

What is the best way to responsibly build AI?

The challenge in responsibly building AI is that many of the underlying principles, such as fairness and safety, are based on normative values that are not universally agreed upon. This can make it difficult to measure or define what constitutes responsible AI. Therefore, it is important to have clear principles, policies, and standards for responsible AI development and use, which can be enforced through internal oversight and governance practices.

How can investors better push for responsible AI?

Investors can play a crucial role in promoting responsible AI by defining and clarifying what constitutes responsible AI development and use. Currently, terms like “responsible” and “trustworthy” are often used as marketing tools, as there are no clear standards for evaluating AI practices. Investors can also incentivize AI actors to develop better practices that prioritize human values and societal good. However, this requires investors to take action when there is misalignment or evidence of bad actors, rather than turning a blind eye.

Zara Khan

Zara Khan is a seasoned investigative journalist with a focus on social justice issues. She has won numerous awards for her groundbreaking reporting and has a reputation for fearlessly exposing wrongdoing.
