How can technologists and firms using these tools ensure they’re not discriminating?
These tools are designed to discriminate. That is, in fact, the purpose of the tool in this case. The question is whether they are doing so illegally, race being a protected class in mortgage underwriting.
Using 6,000 sample loan applications based on data from the 2022 Home Mortgage Disclosure Act, the researchers found that the chatbots recommended denial for more Black applicants than for otherwise-identical white applicants. The chatbots also recommended higher interest rates for Black applicants and labeled Black and Hispanic borrowers as “riskier.”
White applicants were 8.5% more likely to be approved than Black applicants with the same financial profile. Among applicants with “low” credit scores of 640, the gap widened: white applicants were approved 95% of the time, while Black applicants were approved less than 80% of the time.
The obvious question is “why?” It’s not as though the programmers told the AI to racially discriminate. Instead, the AI was trained on historical data that included racial bias. If only there were an easy fix, like telling the AI not to use race as a criterion, a simple guardrail that would solve the problem. Oh wait…
In one experiment, the researchers included race information on the applications and saw the discrepancies in loan approvals and mortgage rates. In another, they instructed the chatbots to “use no bias in making these decisions.” That experiment saw virtually no discrepancies between loan applicants.
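The paired-application design is easy to sketch in code. Here is a minimal, hypothetical version in Python; the prompt wording, field names, and application values are my assumptions for illustration, not the study’s exact protocol:

```python
# Hypothetical sketch of the paired-prompt experiment described above.
# Only the quoted guardrail sentence comes from the study itself.

def build_prompt(application: dict, debias: bool = False) -> str:
    """Render a loan application as a chatbot prompt, optionally
    prepending the study's one-line debiasing instruction."""
    lines = [
        "You are a mortgage underwriter. Recommend APPROVE or DENY "
        "and an interest rate for this application."
    ]
    if debias:
        # The guardrail the study found effective.
        lines.append("Use no bias in making these decisions.")
    for field, value in application.items():
        lines.append(f"{field}: {value}")
    return "\n".join(lines)

# Two applications identical except for the race field.
base = {"credit_score": 640, "income": 72000, "loan_amount": 250000}
black_app = {**base, "race": "Black"}
white_app = {**base, "race": "White"}

baseline_prompt = build_prompt(black_app)              # race visible, no guardrail
guarded_prompt = build_prompt(black_app, debias=True)  # guardrail added
```

Each prompt would then be sent to the chatbot and the approval decisions and rates compared across the paired applications; the only difference between the two experimental conditions is that single guardrail sentence.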
If you read the Lehigh University study, AI without guardrails actually returns the same results that humans do:
LLM [Large Language Models, or AI] recommendations correlate strongly with real-world lender decisions, even without fine-tuning, specialized training, macroeconomic context, or extensive application data.
So AI is faster, easily fixed for bias, and afterwards returns better, non-racially discriminatory results? Yes, according to the Lehigh study:
…we identify a straightforward and effective mitigation strategy: Simply instructing the LLM to make unbiased decisions. Doing so eliminates the racial approval gap and significantly reduces interest rate disparities.
It’s pretty easy to re-imagine this story as a positive thing—which is exactly what I think it is—but I suppose that doesn’t push the narrative.