
Mitigating Bias in AI/ML Models with Disparate Impact Analysis


By Karthik Guruswamy | August 02, 2019


Everyone understands that the biggest plus of using AI/ML models is better automation of day-to-day business decisions, personalized customer service, enhanced user experience, waste elimination, better ROI, and so on. The common question that comes up often, though, is: how can we be sure that the AI/ML decisions are free from bias and discrimination and fair to all consumers?

In the US, Disparate Impact Analysis and model documentation may be required under the HMDA and ECOA/FHA regulations. These regulations and others are designed to address discrimination in the financial industry. The next sections summarize what a couple of these regulations are designed to do. In the rest of the post, we will walk through a simple use case and a tool that can help address the discrimination and adverse-impact issues these regulations target.

1. HMDA

HMDA, or the Home Mortgage Disclosure Act, requires certain financial institutions to provide mortgage data to the public.

HMDA grew out of public concern over credit shortages in certain urban neighborhoods. Congress believed that some financial institutions had contributed to the decline of some geographic areas by their failure to provide adequate home financing to qualified applicants on reasonable terms and conditions. Thus, one purpose of HMDA and Regulation C is to provide the public with information that will help show whether financial institutions are serving the housing credit needs of the neighborhoods and communities in which they are located. A second purpose is to aid public officials in targeting public investments from the private sector to areas where they are needed. Finally, the FIRREA amendments of 1989 require the collection and disclosure of data about applicant and borrower characteristics to assist in identifying possible discriminatory lending patterns and enforcing antidiscrimination statutes.

(Source: Wikipedia)

2. Federal Protections in the Mortgage Marketplace (ECOA/FHA)

From the FTC website: 

Two federal laws, the Equal Credit Opportunity Act (ECOA) and the Fair Housing Act (FHA), offer protections against discrimination.

The ECOA forbids credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or whether you receive income from a public assistance program. Creditors may ask you for most of this information in certain situations, but they may not use it as a reason to deny you credit or to set the terms of your credit. They are never allowed to ask your religion. Everyone who participates in the decision to grant credit or in setting the terms of that credit, including real estate brokers who arrange financing, must comply with the ECOA.

The FHA forbids discrimination in all aspects of residential real-estate related transactions, including:

· making loans to buy, build, repair, or improve a place to live;

· selling, brokering, or appraising residential real estate; and

· selling or renting a place to live.

The FHA also forbids discrimination based on race, color, religion, sex, national origin, handicap, or familial status. Familial status is defined as children under 18 living with a parent or legal guardian, pregnant women, and people securing custody of children under 18.

More is available on the FTC website.

The Big Conundrum:

Under the ECOA and FHA, lending institutions cannot use, and sometimes cannot even ask for, your demographic data. How, then, can the AI/ML models used for issuing loans produce unbiased decisions automatically and comply with HMDA? And how do we measure what's wrong with the models in the first place and rectify the errors without using protected demographic data?

As you might have guessed, we are going to discuss a tool to measure bias in decision-making AI/ML models. It lets us explain to regulators what the AI/ML model learned from the data and how it performs with respect to bias. Financial institutions can take advantage of these tools and methods to get control of their processes and take a fair, anti-discriminatory approach toward their consumers.

Disparate Impact Analysis (DIA)


We can control things better if we can measure them first.

Disparate Impact Analysis (DIA), sometimes called Adverse Impact Analysis, is a way to quantitatively measure the adverse treatment of protected classes that leads to discrimination in hiring, housing, and, more generally, any public policy decisions.

Disparate Impact Analysis is one of the tools that is broadly applicable to a wide variety of use cases under the regulatory compliance umbrella, especially for detecting unintentional discrimination.
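To make the measurement concrete, here is a minimal Python sketch of a disparate impact ratio and the four-fifths check for a binary decision. This is not the Driverless AI implementation; the column names, the toy data, and the choice of reference group are illustrative assumptions.

```python
import pandas as pd

def disparate_impact_ratio(df, group_col, favorable_col, reference_group):
    """Ratio of favorable-outcome rates for each group vs. a reference group."""
    rates = df.groupby(group_col)[favorable_col].mean()
    return rates / rates[reference_group]

# Toy example: "favorable" is 1 when the decision went the applicant's way.
# In the credit card use case below, favorable would mean "not flagged as
# likely to default".
toy = pd.DataFrame({
    "SEX": ["F", "F", "F", "M", "M", "M", "M", "M"],
    "favorable": [1, 1, 0, 1, 1, 1, 0, 1],
})
ratios = disparate_impact_ratio(toy, "SEX", "favorable", reference_group="F")
print(ratios)                 # ratio of each group's favorable rate vs. Female
print((ratios >= 0.8).all())  # four-fifths rule of thumb: below 0.8 suggests adverse impact
```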

A Credit Card Default Prediction Use Case for DAI

I’m using the UCI ML dataset from a Taiwanese bank to predict credit card default for customers. We are using Driverless AI 1.7.1 to build an AI/ML model with Automated Machine Learning and Automated Feature Engineering.

Citation: Yeh, I. C., & Lien, C. H. (2009). The comparisons of data mining techniques for the predictive accuracy of the probability of default of credit card clients. Expert Systems with Applications, 36(2), 2473–2480. 

Link here: https://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients 

The dataset has the following variables:

· Gender (Sex), Education, Age, Marital Status

· Amount of credit given, rolling history of past payments, bill statements, balance, etc.

· default_payment_next_month (the target variable)

We now want to build a model that predicts which customers have a higher probability of defaulting on their payments.
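As a rough open-source companion to the Driverless AI workflow, here is a pandas sketch of loading the UCI file and separating the protected columns from the behavioral features and the target. The file name, the header offset, and the renamed target column are assumptions about the downloaded spreadsheet, not part of the original experiment.

```python
import pandas as pd

# Illustrative load of the UCI "default of credit card clients" spreadsheet;
# header=1 skips the extra header row present in the UCI .xls download.
df = pd.read_excel("default of credit card clients.xls", header=1)
df = df.rename(columns={"default payment next month": "default_payment_next_month"})

protected_cols = ["SEX", "EDUCATION", "MARRIAGE", "AGE"]   # the 4 columns dropped below
target_col = "default_payment_next_month"

X = df.drop(columns=protected_cols + [target_col, "ID"])   # behavioral features only
y = df[target_col]
protected = df[protected_cols]   # kept aside, used only for the fairness checks later
```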

Why is predicting a loan default a regulatory use case? One can argue that lenders may decide to increase the interest rate or reduce credit for customers based on the prediction results, which can adversely impact one group versus another.

If we build an AI/ML classification model with Gender, Education, etc., as inputs, it may not be considered compliant in regulatory environments, because these are “protected data”. The premise is that by adding them you may be introducing “intentional bias” into the model. So we drop these 4 columns in Driverless AI and build the model based only on payment history data, picking default_payment_next_month as the target column. You can set the interpretability knob to a level of feature engineering complexity that helps with disparate impact mitigation. The recommended setting is >= 7, which runs constrained ML models; because the analysis depends on group averages, a setting below 7 allows feature complexity that will actually create more issues in the model.
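Continuing the data-prep sketch above, here is a hedged scikit-learn stand-in for the constrained experiment: a shallow gradient boosting model trained only on the behavioral features. It does not reproduce Driverless AI's AutoML or automated feature engineering; the hyperparameters and the 0.5 cutoff are illustrative.

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Split the data, keeping the protected columns alongside the labels so they
# can be brought back for the disparate impact checks later.
X_train, X_test, y_train, y_test, prot_train, prot_test = train_test_split(
    X, y, protected, test_size=0.3, random_state=42, stratify=y
)

# Shallow trees are a crude analogue of the "interpretability >= 7 /
# constrained models" idea; none of Driverless AI's engineered features are used.
model = GradientBoostingClassifier(max_depth=3, n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Predicted default probability and a decision at an illustrative 0.5 cutoff.
scores = model.predict_proba(X_test)[:, 1]
decisions = (scores >= 0.5).astype(int)   # 1 = predicted to default
```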


One thing to notice on the finished experiment screen is how behavioral features such as PAY_0, PAY_2, etc., show up as important features in the final model; that's because we dropped the protected demographic variables earlier.


Is this model fair now? We don't know yet. We definitely kept the protected data out of the model and avoided any “intentional bias”; we are about to find out whether “unintentional bias” got into the model. Click ‘INTERPRET THIS MODEL’ to get to the MLI (Machine Learning Interpretability) page and choose ‘DISPARATE IMPACT ANALYSIS’. Even though we dropped the pesky columns to build the model, MLI brings the protected columns back next to the final prediction results to check for unintentional bias. I chose to look at Gender (Male/Female) discrimination in the model first.


It looks like the model did really well with respect to eliminating bias around gender! FAIRNESS1 (Female) is True, FAIRNESS2 (Male) is True, and FAIRNESS ALL is True, which tells us the model's performance is roughly the same for both genders; it definitely didn't break the four-fifths rule.

To break this down: Disparate Impact Analysis took the model's prediction results, which came from 15K males and 9K females, looked at various measures such as accuracy, true positive rate, precision, recall, etc., across the two groups, and then checked whether the ratios are comparable at the desired cutoff value. The results for the Male group were more than four-fifths of those for the reference class, Female, which means no adverse impact by the basic definition.
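For readers who want to reproduce this kind of check outside MLI, here is a sketch, continuing the earlier code, that computes per-group metrics and compares them against a reference group. The SEX coding (1 = Male, 2 = Female in the UCI data) and the choice of Female as the reference class follow the blog; everything else is an assumption, and the exact metrics MLI reports may differ.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import accuracy_score, precision_score, recall_score

def groupwise_metrics(y_true, y_pred, groups):
    """Per-group counts plus accuracy / precision / recall."""
    rows = {}
    for g in np.unique(groups):
        m = groups == g
        rows[g] = {
            "n": int(m.sum()),
            "accuracy": accuracy_score(y_true[m], y_pred[m]),
            "precision": precision_score(y_true[m], y_pred[m], zero_division=0),
            "recall": recall_score(y_true[m], y_pred[m], zero_division=0),
        }
    return pd.DataFrame(rows).T

# Compare metric ratios against the Female group (SEX == 2) as the reference.
by_sex = groupwise_metrics(y_test.to_numpy(), decisions, prot_test["SEX"].to_numpy())
metrics = ["accuracy", "precision", "recall"]
ratios = by_sex[metrics].div(by_sex.loc[2, metrics])
print(ratios)   # ratios >= 0.8 pass the four-fifths rule of thumb for that metric
```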

We can try the same with another variable, such as MARRIAGE (single, married, divorced, etc.). In the model we built quickly, the test failed: the model favors one MARRIAGE value over the others and won't fare well under certain regulations.


Let's also look at the confusion matrices for each group. The only measures that seem fair are specificity, accuracy, and negative predicted value, as seen in the Group Disparity box. Everything else fails across the different MARRIAGE values 0, 1, 2, 3. In practice, if this model goes to production it will favor a certain marital status over others (even though marital status was not one of the inputs) and will carry an unintentional bias.
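Here is a sketch of the same per-group confusion-matrix view, continuing the earlier code. It only prints the raw counts and two of the rates mentioned above; the exact quantities and layout in MLI's Group Disparity box are not reproduced.

```python
from sklearn.metrics import confusion_matrix

# One confusion matrix per MARRIAGE value on the held-out set.
for value in sorted(prot_test["MARRIAGE"].unique()):
    mask = (prot_test["MARRIAGE"] == value).to_numpy()
    tn, fp, fn, tp = confusion_matrix(
        y_test.to_numpy()[mask], decisions[mask], labels=[0, 1]
    ).ravel()
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    npv = tn / (tn + fn) if (tn + fn) else float("nan")
    print(f"MARRIAGE={value}: TN={tn} FP={fp} FN={fn} TP={tp} "
          f"specificity={specificity:.3f} NPV={npv:.3f}")
```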


What are the next steps, now that we’ve done DIA?

If you are not in a highly regulated industry, the easiest thing to do is to put the protected variables back in to reduce the adverse impact on one group versus the other, if the end result is more important than the means. You can also pick a scorer like BIAS AUC to reduce bias right from the get-go, or use tools like IBM AIF360 to reweight data, train fair models (LFR, adversarial debiasing), or post-process predictions (Reject Option Classification).
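For the AIF360 route, here is a hedged sketch of its Reweighing preprocessor applied to this use case, continuing the earlier code. Treating "no default" as the favorable label and Female as the unprivileged group are illustrative assumptions, and the API details may vary across AIF360 versions.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.preprocessing import Reweighing

# Assemble a training frame that includes the protected attribute and the label.
train_df = pd.concat([X_train, prot_train[["SEX"]], y_train], axis=1)

# "Favorable" here means not defaulting (label 0).
bld = BinaryLabelDataset(
    df=train_df,
    label_names=["default_payment_next_month"],
    protected_attribute_names=["SEX"],
    favorable_label=0,
    unfavorable_label=1,
)

# Reweighing assigns instance weights so that favorable-outcome rates look
# similar across the privileged and unprivileged groups.
rw = Reweighing(
    unprivileged_groups=[{"SEX": 2}],   # Female, illustrative choice
    privileged_groups=[{"SEX": 1}],     # Male
)
reweighted = rw.fit_transform(bld)

# These weights can be passed as sample_weight when retraining the model.
print(reweighted.instance_weights[:10])
```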

For regulated applications inside financial institutions, where a protected demographic variable cannot even be used in the model (the main theme of this blog), the most conservative approach may be to try different models with different system settings and then pick the model with the best score across multiple demographic variables.

More on these topics in upcoming blogs.

My special thanks to Patrick Hall and Navdeep Gill of H2O.ai for spending time educating me on this wonderful topic.


Karthik Guruswamy

Karthik is a Principal Pre-sales Solutions Architect with H2O.ai. In his role, Karthik works with customers to define, architect, and deploy H2O.ai's AI solutions in production to bring AI/ML initiatives to fruition. Karthik is a "business first" data scientist. His expertise and passion have always been around building game-changing solutions by using an eclectic combination of algorithms drawn from different domains. He has published 50+ blogs on "all things data science" on LinkedIn, Forbes, and Medium over the years for a business audience, and he speaks at vendor data science conferences. He also holds multiple patents around desktop virtualization and ad networks and was a co-founding member of two startups in Silicon Valley.