Jessica Lee, Njeri Mutura, Safiya Noble

Companies are increasingly turning to deep learning algorithms and other forms of artificial intelligence to analyze the large volumes of data they have access to, identify patterns, and make decisions based on that data. From healthcare to adtech, AI is being harnessed to make critical decisions that can have a significant impact on consumers. The risk of bias in these decisions has been well documented: from initial data collection to model design, bias can creep into every stage of the decision-making process. At the same time, the GDPR, the CPRA, and other privacy laws are placing limits on automated decision-making and setting standards for transparency and explainability. In this panel, we will look at how and when bias can creep into AI, the potential harms, and how privacy law may serve as an avenue for interrupting that bias.

Jessica Lee, Partner, Loeb & Loeb

Njeri Mutura, Sr. Corporate Counsel, Legal & Compliance Lead, Microsoft

Safiya Noble, Associate Professor & Co-Director, UCLA Center for Critical Internet Inquiry
