Upcoming Talk: Characterizing Algorithmic Unfairness: From Compounding Injustices to Social Norm Bias

When: March 16 at 3:30 p.m.

Where/How: Visit

Abstract: In this talk, I will characterize different dimensions of algorithmic unfairness. I will first show how societal biases encoded in data may be compounded by machine learning models. I will relate this to the political philosophy notion of compounding injustices and illustrate it in the context of automated recruiting. I will then discuss residual harms of fairness-aware algorithms that mitigate bias at a group level. In doing so, I will introduce the notion of Social Norm Bias (SNoB), a subtle but consequential type of algorithmic discrimination in which predictions are associated with conformity to inferred social norms. I will relate this to the social psychology notion of descriptive stereotypes and their role in workplace discrimination. I will conclude by discussing implications for algorithm design, deployment, and evaluation.

Bio: Maria De-Arteaga is an Assistant Professor in the Information, Risk and Operations Management (IROM) Department at the University of Texas at Austin, where she is also a core faculty member in the Machine Learning Laboratory and an affiliated faculty member of Good Systems. She holds a joint PhD in Machine Learning and Public Policy and an M.Sc. in Machine Learning, both from Carnegie Mellon University, and a B.Sc. in Mathematics from Universidad Nacional de Colombia. Her research focuses on the risks and opportunities of using machine learning to support experts' decisions in high-stakes settings, with a particular interest in algorithmic fairness and human-AI collaboration. Her work has been featured by UN Women and Global Pulse, and has received best paper awards at CIST'22, WITS'21, NAACL'19, and Data for Policy'16, as well as research awards from Google and Microsoft Research.

Maria De-Arteaga (Photographer: Brian Birzer)


If you have any questions, please email sathish@ece.ubc.ca.