I am a Computer Science PhD student at Brown University, where I am fortunate to be advised by Suresh Venkatasubramanian. I am interested in the process of analyzing machine learning systems to determine whether they are fair, ethical, or legal; my research examines this process from an interdisciplinary perspective. My ultimate goal is to inform the development of law and policy that prevents the intentional or unintentional deployment of harmful data-driven technology. My research has been supported by Arthur AI and the Utah chapter of the ARCS Foundation.
Previously, I developed actuarial risk models on the Data Science team at MassMutual while completing my M.S. in Computer Science at the University of Massachusetts Amherst. I received my B.A. in Mathematics from Scripps College in 2016.
cv: here (updated June 2021)
email: iekumar at brown dot edu
Shapley Residuals: Quantifying the limits of the Shapley value for explanations.
I. Elizabeth Kumar, Carlos Scheidegger, Suresh Venkatasubramanian, Sorelle Friedler.
Presented at the 5th ICML Workshop on Human Interpretability in Machine Learning (WHI), 2020.
**New version to appear at NeurIPS 2021**
Epistemic values in feature importance methods: Lessons from feminist epistemology.
Leif Hancox-Li*, I. Elizabeth Kumar*.
In Proceedings of the 4th ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2021.
Best Paper Award
Problems with Shapley-value-based explanations as feature importance measures.
I. Elizabeth Kumar, Suresh Venkatasubramanian, Carlos Scheidegger, Sorelle Friedler.
In Proceedings of the 37th International Conference on Machine Learning (ICML), 2020.