Research

I am currently a postdoc at Stanford Health Policy, working with Sherri Rose. I'm broadly interested in how to effectively evaluate decision-making systems built with machine learning, and in studying such evaluation methods from an interdisciplinary perspective.

I received my PhD in Computer Science from Brown University, where I was fortunate to be advised by Suresh Venkatasubramanian, and where I was an affiliate of the Center for Technological Responsibility, Reimagination and Redesign (CNTR). My PhD research was partially supported by Arthur AI and the Utah chapter of the ARCS Foundation (while I was at the University of Utah). I also have an M.S. in Computer Science from UMass Amherst and a B.A. in Mathematics from Scripps College.

contact: iekumar at stanford dot edu

Publications

see also: Google Scholar

To Pool or Not To Pool: Analyzing the Regularizing Effects of Group-Fair Training on Shared Models.
Cyrus Cousins, I. Elizabeth Kumar, Suresh Venkatasubramanian.
In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), 2024.

Deconstructing Design Decisions: Why Courts Must Interrogate Machine Learning and Other Technologies.
Andrew D. Selbst, Suresh Venkatasubramanian, I. Elizabeth Kumar.
85 Ohio State Law Journal 415 (2024).
Drafts presented at PLSC 2021 and WeRobot 2021.

Equalizing credit opportunity in algorithms: Aligning algorithmic fairness research with U.S. fair lending regulation.
I. Elizabeth Kumar, Keegan Hines, John P. Dickerson.
In Proceedings of the 5th AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES), 2022.

The fallacy of AI functionality.
Inioluwa Deborah Raji*, I. Elizabeth Kumar*, Aaron Horowitz, Andrew D. Selbst.
In Proceedings of the 5th ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2022.

Shapley Residuals: Quantifying the limits of the Shapley value for explanations.
I. Elizabeth Kumar, Carlos Scheidegger, Suresh Venkatasubramanian, Sorelle Friedler.
In Advances in Neural Information Processing Systems 34 (NeurIPS), 2021.

Epistemic values in feature importance methods: Lessons from feminist epistemology.
Leif Hancox-Li*, I. Elizabeth Kumar*.
In Proceedings of the 4th ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2021.
Best Paper Award.

Problems with Shapley-value-based explanations as feature importance measures.
I. Elizabeth Kumar, Suresh Venkatasubramanian, Carlos Scheidegger, Sorelle Friedler.
In Proceedings of the 37th International Conference on Machine Learning (ICML), 2020.