AI and Privacy Expert: Rashida Richardson, Mastercard’s senior legal advisor on artificial intelligence

TechCrunch is starting a series of interviews with outstanding women in AI, who deserve more recognition for their academic and other achievements. As the AI field keeps growing, we will publish several articles throughout the year to showcase their important work that often goes unnoticed. You can find more profiles here.

Rashida Richardson is a senior legal advisor at Mastercard, where she handles privacy, data protection, and AI-related matters. She was previously the policy research director at the AI Now Institute, which examines the social impact of AI, and a senior policy consultant for data and democracy at the White House Office of Science and Technology Policy. Since 2021, she has taught law and political science at Northeastern University, with a focus on race and emerging technologies.

How did you enter the AI field? What drew you to it?

I was a civil rights lawyer, working on issues like privacy, surveillance, school integration, fair housing and criminal justice reform. I saw how the government was using and testing AI-based technologies in these areas. Sometimes I could see the dangers and problems, and I worked on several technology policy initiatives in New York State and City to improve oversight, evaluation and protection. Other times, I doubted the claims of AI-related solutions marketed as addressing or mitigating structural problems in areas like school integration or fair housing.

My previous work also made me aware of the gaps in existing policy and regulation. I realized that there were not many people in the AI field with my background and expertise, or with the same perspective and approach that I had in my policy advocacy and academic work. I saw this as an opportunity to make a positive impact and also to use my previous experience in new ways.

I chose to concentrate on AI in both my legal and academic work, focusing on policy and legal issues related to its creation and use.

What work are you most proud of (in the AI field)?

I’m glad that the issue is getting more attention from everyone, especially policymakers. The law in the United States has often lagged behind or failed to address technology policy issues, and five or six years ago I feared AI might suffer the same fate, because I saw how policymakers, in formal settings like U.S. Senate hearings or educational forums, treated the issue as obscure or not urgent, even though AI was already being used widely across sectors. But in the last year or so there has been a big shift: AI is now a regular topic of public discussion, and policymakers understand the importance of, and need for, informed action. I also think stakeholders from all sectors, including industry, realize that AI presents unique advantages and challenges that may not be solved through usual methods, so there is more recognition of, or at least more respect for, policy interventions.

How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?

As a Black woman, I have often been in the minority in many settings, and the AI and tech industries are not new to me or very different from other powerful and wealthy fields, like finance and law, where diversity is also lacking. So I believe my previous work and life experience prepared me for this industry, because I am keenly aware of the biases I may face and the difficulties I will probably encounter. I lean on that experience to navigate, because I have a distinctive background and viewpoint, having worked on AI across every sector: academia, industry, government and civil society.

What are some issues AI users should be aware of?

AI users should be aware of two main issues: (1) the need to understand the strengths and weaknesses of different AI applications and models, and (2) the lack of clarity about how existing and future laws will address conflicts or concerns arising from AI use.

On the first issue, there is a gap between public perception of what AI applications and models can do and their actual abilities and constraints. This problem is worsened by the fact that AI users may not distinguish between different AI applications and models. Awareness of AI grew with the launch of ChatGPT and other commercially available generative AI systems, but those models are different from other kinds of AI models that consumers have used for a long time, like recommendation systems. When the discussion treats AI as a single, monolithic technology, it distorts public understanding of what each kind of application or model can actually do and the risks that come with their flaws or shortcomings.

On the second issue, law and policy on AI creation and use are still evolving. Many existing laws (e.g. civil rights, consumer protection, competition, fair lending) already apply to AI use, but we are still seeing how they will be interpreted and enforced. We are also seeing the development of AI-specific policy, yet what I have observed in my legal work and research is that some questions are not resolved by this patchwork and will only be settled through more litigation involving AI creation and use. Generally, I don’t think there is a good understanding of the current state of the law and AI, or of how legal uncertainty on key issues like liability means that some risks, harms and disputes may remain unsettled until years of legal action between businesses, or between regulators and companies, produce precedent that provides some clarity.

What is the best way to responsibly build AI?

Building AI responsibly is difficult because the core values of responsible AI, such as fairness and safety, rest on norms that people neither agree on nor share a common understanding of. So someone could act responsibly and still cause harm, or someone could act maliciously and exploit the lack of common norms to justify their actions. Until there are global standards or a shared framework for what it means to build AI responsibly, the best way to pursue this goal is to have clear principles, policies, guidance and standards for responsible AI creation and use, enforced through internal oversight, benchmarking and other governance practices.

How can investors better push for responsible AI?

Investors could do a better job of defining or clarifying what responsible AI creation or use means, and of taking action when AI actors fall short of it. Right now, “responsible” and “trustworthy” AI are effectively buzzwords because there are no clear criteria for evaluating AI actors’ practices. Some new regulations, like the EU AI Act, will establish governance and oversight requirements for AI, but there are still areas where investors can push AI actors to adopt better practices that center human values or social good. However, if investors fail to act when practices diverge from those standards or when there is evidence of bad actors, there will be little incentive to change behavior or practices.
