When algorithms fall short of expectations, people get hurt.
To hold the builders of AI systems accountable, we can use an algorithm audit — a tool that has been part of the conversation around online platforms for years. Audits are now beginning to emerge as a mode of external evaluation for deployed automated decision systems (ADS) and are being included in critical policy proposals as a primary mechanism for algorithmic accountability. However, not all audits are created equal.
In this talk, Deborah Raji will discuss the ongoing challenges in executing algorithm audits, and will highlight the technical, policy, and institutional design interventions that are necessary for audits to be effective mechanisms for accountability.
Location: Faculty Club (41 Willcocks St.)
Deborah Raji is a PhD student in computer science at the University of California, Berkeley, and a Mozilla fellow. She is interested in algorithmic auditing and evaluation. She has worked with Google’s Ethical AI team and served as a research fellow at the Partnership on AI and the AI Now Institute at New York University, working on projects to operationalize ethical considerations in ML engineering practice. She has also worked closely with the Algorithmic Justice League initiative to highlight bias in deployed AI products.
Raji was named one of TIME’s 2023 “100 Most Influential People in AI”, a member of the Forbes 2021 “30 Under 30” list, and one of MIT Technology Review’s 2020 “35 Innovators Under 35”.