Equal credit opportunity in algorithms: Aligning algorithmic fairness research with U.S. fair lending regulation

I. Elizabeth Kumar, Keegan Hines, John P. Dickerson. In Proceedings of the 5th AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES), 2022.

Also presented as a poster at EAAMO 2022.

Credit is an essential component of financial wellbeing in America, and unequal access to it is a large factor in the economic disparities that exist between demographic groups today. Machine learning algorithms, sometimes trained on alternative data, are increasingly being used to determine access to credit, yet research has shown that machine learning can encode many different versions of "unfairness," raising the concern that banks and other financial institutions could, potentially unwittingly, engage in illegal discrimination through the use of this technology. In the U.S., laws are in place to prevent discrimination in lending, and agencies are charged with enforcing them. However, conversations around fair credit models in computer science and in policy are often misaligned: fair machine learning research often lacks legal and practical considerations specific to existing fair lending policy, and regulators have yet to issue new guidance on how, if at all, credit risk models should incorporate practices and techniques from the research community. This paper aims to better align the two sides of the conversation. We describe the current state of credit discrimination regulation in the United States, contextualize results from fair ML research to identify the specific fairness concerns raised by the use of machine learning in lending, and discuss regulatory opportunities to address these concerns.
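As a minimal, hypothetical illustration of what "different versions of unfairness" can mean in practice, the sketch below computes two common group-level metrics on invented approval decisions. The data, function names, and the four-fifths threshold (a rule of thumb from U.S. disparate-impact analysis, not a definitive legal test) are assumptions for illustration only, not the paper's method.

```python
# Hypothetical sketch: two group-level fairness metrics for a
# credit-approval model. All data below is invented.

def approval_rate(decisions):
    """Fraction of applicants approved (decisions are 0/1)."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in approval rates between two groups."""
    return abs(approval_rate(group_a) - approval_rate(group_b))

def adverse_impact_ratio(group_a, group_b):
    """Ratio of the lower approval rate to the higher one.
    Values below 0.8 are often flagged under the 'four-fifths rule'
    referenced in U.S. disparate-impact analysis."""
    ra, rb = approval_rate(group_a), approval_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# 1 = approved, 0 = denied, for applicants in two demographic groups
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 6/8 = 0.75 approval rate
group_b = [1, 0, 0, 1, 0, 1, 0, 0]  # 3/8 = 0.375 approval rate

print(demographic_parity_difference(group_a, group_b))  # 0.375
print(adverse_impact_ratio(group_a, group_b))           # 0.5
```

A model can satisfy one such criterion while violating another, which is part of why the paper argues that regulatory guidance matters for choosing among them.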

arXiv / publication


August 2, 2022: AIES presentation!

October 7, 2022: EAAMO poster session