I am a second-year Computing Ph.D. student at the University of Utah, where I am advised by Suresh Venkatasubramanian. I am interested in analyzing the social impact of machine learning systems and in developing responsible AI law and policy. My research has been supported by the ARCS Foundation.

Previously, I developed actuarial risk models on the Data Science team at MassMutual while completing my M.S. in Computer Science at the University of Massachusetts Amherst. I received my B.A. in Mathematics from Scripps College in 2016.

cv: here (updated Jan 2021)
email: kumari at cs dot utah dot edu


Papers

Epistemic values in feature importance methods: Lessons from feminist epistemology.
Leif Hancox-Li*, I. Elizabeth Kumar*.
In Proceedings of the 4th ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2021.

Shapley Residuals: Quantifying the limits of the Shapley value for explanations.
I. Elizabeth Kumar, Carlos Scheidegger, Suresh Venkatasubramanian, Sorelle Friedler.
Presented at the 5th ICML Workshop on Human Interpretability in Machine Learning (WHI), 2020.

Problems with Shapley-value-based explanations as feature importance measures.
I. Elizabeth Kumar, Suresh Venkatasubramanian, Carlos Scheidegger, Sorelle Friedler.
In Proceedings of the 37th International Conference on Machine Learning (ICML), 2020.