Talk: How GPT-3 responds to different publics on climate change and Black Lives Matter: A critical appraisal of equity in conversational AI

Speaker: Kaiping Chen, University of Wisconsin-Madison

Time: Wednesday, May 3, 2023, 3:30-5:00pm

Location: MCLD 3038 (2356 Main Mall)

Zoom link (for virtual participation): https://ubc.zoom.us/j/66143606993?pwd=RmJjaDJvbzBWMjg4OVh4c1ZtZHQ3Zz09

Abstract:

Autoregressive language models, which use deep learning to produce human-like text, have become increasingly widespread. Such models power popular virtual assistants in areas like smart health, finance, and autonomous driving, and facilitate creative writing in domains from the entertainment industry to science communities. Despite growing discussion of AI fairness across disciplines, systematic metrics for assessing what equity means in dialogue systems, and for engaging different populations in the assessment loop, are still lacking. In this talk, Dr. Kaiping Chen will draw on theories of deliberative democracy and science and technology studies to propose an analytical framework for evaluating equity in human-AI dialogues. Using this framework, Dr. Chen will introduce a recent algorithm auditing study her team conducted to examine how GPT-3 responded to different subpopulations on two crucial science and social issues: climate change and the Black Lives Matter (BLM) movement. In the study, Dr. Chen and her collaborators built a user interface that let diverse participants hold conversations with GPT-3. The study found a substantially worse user experience with GPT-3 among the opinion-minority and education-minority subpopulations; however, these two groups achieved the largest knowledge gains and shifted their attitudes toward supporting BLM and climate change efforts after the chat. Dr. Chen's team also traced these user experience divides to conversational differences, finding that GPT-3 used more negative expressions when responding to the education- and opinion-minority groups than in its responses to the majority groups. The extent to which GPT-3 used justification when responding to the minority groups was contingent on the issue. Dr. Chen will discuss the implications of these findings for a deliberative conversational AI system that centers diversity, equity, and inclusion.