The application in this practice session is inspired by the empirical example in “Measuring the model risk-adjusted performance of machine learning algorithms in credit default prediction” by Alonso Robisco and Carbó Martínez (2022). However, since we are not interested in model risk-adjusted performance, the application will purely focus on the implementation of machine learning algorithms for loan default prediction.
6.1 Problem Setup
The dataset that we will be using was used in the Kaggle competition “Give Me Some Credit”. The description of the competition reads as follows:
Banks play a crucial role in market economies. They decide who can get finance and on what terms and can make or break investment decisions. For markets and society to function, individuals and companies need access to credit.
Credit scoring algorithms, which make a guess at the probability of default, are the method banks use to determine whether or not a loan should be granted. This competition requires participants to improve on the state of the art in credit scoring, by predicting the probability that somebody will experience financial distress in the next two years.
The goal of this competition is to build a model that borrowers can use to help make the best financial decisions.
Historical data are provided on 250,000 borrowers and the prize pool is $5,000 ($3,000 for first, $1,500 for second and $500 for third).
Unfortunately, there won’t be any prize money today. However, the experience that you can gain from working through an application like this can be invaluable. So, in a way, you are still winning!
6.2 Dataset
Let’s download the dataset automatically, unzip it, and place it in a folder called data, if you haven’t done so already:
from io import BytesIO
from urllib.request import urlopen
from zipfile import ZipFile
import os.path

# Check if the files exist
if not os.path.isfile('data/Data Dictionary.xls') or not os.path.isfile('data/cs-training.csv'):
    print('Downloading dataset...')
    # Define the dataset to be downloaded
    zipurl = 'https://www.kaggle.com/api/v1/datasets/download/brycecf/give-me-some-credit-dataset'
    # Download and unzip the dataset in the data folder
    with urlopen(zipurl) as zipresp:
        with ZipFile(BytesIO(zipresp.read())) as zfile:
            zfile.extractall('data')
    print('DONE!')
else:
    print('Dataset already downloaded!')
Dataset already downloaded!
Then, we can have a look at the data dictionary that is provided with the dataset. This will give us an idea of the variables that are available in the dataset and what they represent:
import pandas as pd

data_dict = pd.read_excel('data/Data Dictionary.xls', header=1)
data_dict.style.hide()
| Variable Name | Description | Type |
|---|---|---|
| SeriousDlqin2yrs | Person experienced 90 days past due delinquency or worse | Y/N |
| RevolvingUtilizationOfUnsecuredLines | Total balance on credit cards and personal lines of credit except real estate and no installment debt like car loans, divided by the sum of credit limits | percentage |
| age | Age of borrower in years | integer |
| NumberOfTime30-59DaysPastDueNotWorse | Number of times borrower has been 30-59 days past due but no worse in the last 2 years | integer |
| DebtRatio | Monthly debt payments, alimony, living costs divided by monthly gross income | percentage |
| MonthlyIncome | Monthly income | real |
| NumberOfOpenCreditLinesAndLoans | Number of open loans (installment, like car loan or mortgage) and lines of credit (e.g. credit cards) | integer |
| NumberOfTimes90DaysLate | Number of times borrower has been 90 days or more past due | integer |
| NumberRealEstateLoansOrLines | Number of mortgage and real estate loans, including home equity lines of credit | integer |
| NumberOfTime60-89DaysPastDueNotWorse | Number of times borrower has been 60-89 days past due but no worse in the last 2 years | integer |
| NumberOfDependents | Number of dependents in family excluding themselves (spouse, children etc.) | integer |
The variable \(y\) that we want to predict is SeriousDlqin2yrs, which indicates whether a person has been 90 days past due on a loan payment (serious delinquency) in the past two years. This target variable is \(1\) if the loan defaults (i.e., serious delinquency occurred) and \(0\) if it does not (i.e., no serious delinquency occurred). The other variables are features that we can use to predict this target variable, such as the borrower’s age and monthly income.
6.3 Putting the Problem into the Context of the Course
Given the description of the competition and the dataset, we can see that this is a supervised learning problem. We have a target variable that we want to predict, and we have features that we can use to predict this target variable. The target variable is binary, i.e., it can take two values: 0 or 1. The value 0 indicates that the loan will not default, while the value 1 indicates that the loan will default. Thus, this is a binary classification problem.
6.4 Setting up the Environment
We will start by setting up the environment by importing the necessary libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
and loading the dataset
df = pd.read_csv('data/cs-training.csv')
6.5 Exercises
Note that the exercises build on each other. You can sometimes skip exercises but the results for later exercises will depend on the previous ones. If you get stuck, you can skip to the next exercise and try to come back to the previous one later.
6.5.1 Exercise 1: Familiarization with the Dataset
Tasks:
Display the first 5 rows of the dataset. What do you notice about the column names?
There appears to be an unnecessary index column. Identify it and remove it from the DataFrame
Use .info() to check the data types and identify which columns have missing values
Hints:
The .head() method shows the first rows
Look for columns that seem to duplicate the index
The axis parameter in .drop() specifies whether you’re dropping rows or columns
# Your code here. Add additional code cells as needed.
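If you get stuck, the pattern looks roughly like the sketch below, shown on a toy DataFrame rather than the real dataset. The column name `Unnamed: 0` is an assumption about what pandas calls an unlabeled index column when reading a CSV; check the output of `.head()` to confirm what it is called in your data.

```python
import pandas as pd

# Toy stand-in for the loaded DataFrame; in the real CSV an unlabeled
# index column typically shows up as 'Unnamed: 0' (verify with .head()).
df = pd.DataFrame({'Unnamed: 0': [1, 2, 3],
                   'SeriousDlqin2yrs': [0, 1, 0],
                   'age': [45, 40, 38]})

print(df.head())                    # inspect the first rows
df = df.drop('Unnamed: 0', axis=1)  # axis=1 means drop a column, not a row
df.info()                           # dtypes and non-null counts per column
```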
6.5.2 Exercise 2: Understanding the Target Variable
Tasks:
What is the proportion of defaulted vs non-defaulted loans in the dataset? Use value_counts(normalize=True)
Based on this distribution, would you say the dataset is balanced or imbalanced?
Why might class imbalance be problematic for machine learning? What evaluation metrics should we be careful about?
Hints:
The target variable is SeriousDlqin2yrs
Think about what accuracy would be if a model just predicted the majority class
# Your code here. Add additional code cells as needed.
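As a minimal sketch of the idea (on a made-up target column, not the real class shares): `value_counts(normalize=True)` turns counts into proportions, and the largest proportion is exactly the accuracy a trivial majority-class predictor would achieve.

```python
import pandas as pd

# Toy target; the real dataset's default rate is also far below 50%
y = pd.Series([0, 0, 0, 0, 0, 0, 0, 0, 0, 1], name='SeriousDlqin2yrs')

proportions = y.value_counts(normalize=True)  # class shares summing to 1
print(proportions)

# Accuracy of a model that always predicts the majority class:
print(proportions.max())
```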
6.5.3 Exercise 3: Handling Missing Values and Data Quality Issues
Tasks:
Use .dropna() combined with value_counts() to check if dropping missing values significantly changes the target variable distribution
Drop the rows with missing values for the rest of the exercises.
How many rows were dropped due to missing values?
Verify that there are no missing values remaining in the dataset.
Check for duplicate rows. How many are there? Should you remove them?
Hints:
Use df.loc[df.isna().any(axis=1)] to select rows with any missing values
Pay attention to the mean and standard deviation differences
Note
Note that in a real application, you would want to carefully consider how to handle missing data rather than just dropping rows. Imputation methods or models that can handle missing data directly might be more appropriate depending on the context. Furthermore, in this specific dataset, dropping some of the missing values also removes some of the other data quality issues by chance. In practice, you would want to investigate and address these issues separately.
# Your code here. Add additional code cells as needed.
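A minimal sketch of the cleaning steps on a toy DataFrame (the real dataset is larger and the counts will differ):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'SeriousDlqin2yrs': [0, 1, 0, 0],
                   'MonthlyIncome': [5000.0, np.nan, 3000.0, 3000.0],
                   'age': [45, 40, 38, 38]})

n_before = len(df)
clean = df.dropna()                         # drop rows with any missing value
print(n_before - len(clean), 'rows dropped')

assert clean.isna().sum().sum() == 0        # verify nothing is missing anymore
print(clean.duplicated().sum(), 'duplicate rows')  # rows identical to an earlier one
```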
6.5.4 Exercise 4: Exploratory Data Analysis
Tasks:
Create a pie chart (or histogram) showing the distribution of the target variable in your cleaned dataset
Generate a pair plot for age, MonthlyIncome, DebtRatio, and SeriousDlqin2yrs using seaborn’s pairplot() with hue='SeriousDlqin2yrs'
Calculate and visualize correlation matrices using a heatmap
Which features appear most correlated with loan default?
Are there any features that are highly correlated with each other? What issues could this cause?
Hints:
Use sns.pairplot() with the hue parameter for coloring by class
Use df.corr() for Pearson correlation
sns.heatmap() can visualize correlation matrices
Use np.triu() to create a mask for the upper triangle
# Your code here. Add additional code cells as needed.
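The correlation-plus-mask pattern from the hints can be sketched as follows on random toy data (the seaborn call is left as a comment so the sketch runs without a display):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(100, 3)),
                  columns=['age', 'MonthlyIncome', 'DebtRatio'])

corr = df.corr()                                  # Pearson correlation matrix
mask = np.triu(np.ones_like(corr, dtype=bool))    # True on the upper triangle

# sns.heatmap(corr, mask=mask, annot=True)  # would hide the redundant half
print(corr.round(2))
```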
6.5.5 Exercise 5: Preparing Data for Machine Learning Algorithms
Tasks:
Separate features (X) from the target variable (y)
Split the data into training (80%) and test (20%) sets using train_test_split. Use stratify=y to maintain class proportions and random_state=42 for reproducibility
Apply MinMaxScaler to normalize the features. Important: Fit the scaler only on training data, then transform both training and test data
Hints:
Use df.drop('column_name', axis=1) for features
The stratify parameter ensures balanced splits
Fitting on test data causes “data leakage” - avoid this!
Create a helper function for scaling if you want cleaner code
# Your code here. Add additional code cells as needed.
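The split-then-scale pattern can be sketched like this on synthetic data; the key point is that the scaler learns its min/max from the training set only:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
y = np.array([0] * 180 + [1] * 20)    # imbalanced toy target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

scaler = MinMaxScaler()
X_train_s = scaler.fit_transform(X_train)  # fit ONLY on training data
X_test_s = scaler.transform(X_test)        # reuse the training min/max

print(X_train_s.min(), X_train_s.max())    # training features lie in [0, 1]
```

Note that `X_test_s` may fall slightly outside \([0, 1]\) if the test set contains values beyond the training range; that is expected and not a bug.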
6.5.6 Exercise 6: Defining Evaluation Metrics
Tasks:
Write a function evaluate_model(clf, X_train, y_train, X_test, y_test, label='') that:
Computes predictions and predicted probabilities
Prints Accuracy, Precision, Recall, and ROC AUC for both training and test sets
Plots the ROC curve for both training and test sets
Why is it important to evaluate on both training and test data?
Given our imbalanced dataset, which metric(s) should we focus on and why?
Hints:
Use clf.predict() for class predictions and clf.predict_proba() for probabilities
Import metrics from sklearn.metrics: accuracy_score, precision_score, recall_score, roc_auc_score, roc_curve
Plot both curves on the same figure for comparison
Add a diagonal reference line for the ROC plot
Use label parameter to differentiate models in outputs, e.g., label='Logistic Regression'
# Your code here. Add additional code cells as needed.
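One possible shape for the metrics part of `evaluate_model` is sketched below, demonstrated on synthetic data. The ROC-curve plotting is omitted here; you would add it with `roc_curve` and `plt.plot` inside the same function.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split

def evaluate_model(clf, X_train, y_train, X_test, y_test, label=''):
    # Report metrics on both sets so overfitting (train >> test) is visible
    for name, X, y in [('train', X_train, y_train), ('test', X_test, y_test)]:
        pred = clf.predict(X)                 # hard class predictions
        proba = clf.predict_proba(X)[:, 1]    # P(class 1), used for ROC AUC
        print(f'{label} [{name}] '
              f'acc={accuracy_score(y, pred):.3f} '
              f'prec={precision_score(y, pred, zero_division=0):.3f} '
              f'rec={recall_score(y, pred, zero_division=0):.3f} '
              f'auc={roc_auc_score(y, proba):.3f}')

# Toy demonstration on synthetic, imbalanced data
X, y = make_classification(n_samples=400, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
evaluate_model(clf, X_tr, y_tr, X_te, y_te, label='Logistic Regression')
```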
6.5.7 Exercise 7: Training Classification Models
Tasks:
Train the following models and evaluate each using your evaluation function:
Logistic Regression: Use penalty=None, solver='lbfgs', max_iter=5000
Decision Tree: Use max_depth=7
Random Forest: Use max_depth=20, n_estimators=100
XGBoost: Use max_depth=5, n_estimators=40, random_state=0
Neural Network: Use MLPClassifier with activation='relu', solver='adam', hidden_layer_sizes=(300, 200, 100), max_iter=300, random_state=42
Hints:
Watch for signs of overfitting (training >> test performance)
Training the neural network may take several minutes
# Your code here. Add additional code cells as needed.
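The train/evaluate cycle is the same for every model; here it is sketched for one of them (Random Forest) on synthetic data, comparing train and test ROC AUC directly:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic imbalanced stand-in for the credit data
X, y = make_classification(n_samples=500, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

clf = RandomForestClassifier(max_depth=20, n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)

auc_tr = roc_auc_score(y_tr, clf.predict_proba(X_tr)[:, 1])
auc_te = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f'train AUC={auc_tr:.3f}  test AUC={auc_te:.3f}')  # large gap => overfitting
```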
6.5.8 Exercise 8: Results Comparison
Tasks:
Create a DataFrame comparing all models with columns: Model, ROC AUC (Train), ROC AUC (Test)
Which model performed best on the test set?
Which model showed the largest gap between training and test performance? What does this suggest?
Hints:
Use pd.DataFrame() with a dictionary
The gap between train/test performance indicates overfitting
# Your code here. Add additional code cells as needed.
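The comparison table can be assembled as sketched below; the AUC numbers here are placeholders for illustration, not real results — yours come from Exercise 7.

```python
import pandas as pd

# Placeholder scores for illustration only; substitute your own results
results = pd.DataFrame({
    'Model': ['Logistic Regression', 'Random Forest'],
    'ROC AUC (Train)': [0.86, 0.99],
    'ROC AUC (Test)': [0.85, 0.86],
})

# A large train-test gap is a sign of overfitting
results['Gap'] = results['ROC AUC (Train)'] - results['ROC AUC (Test)']
print(results.sort_values('ROC AUC (Test)', ascending=False))
```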
6.5.9 Exercise 9: Feature Engineering
Tasks:
Create squared versions of all features and add them to the dataset (use .pow(2) and .add_suffix('_sq'))
Re-split and re-scale the data with the new features
Retrain all models with the expanded feature set
Compare the new results with the original. Did feature engineering help?
Optional: Add a Logistic Regression with L1 (LASSO) penalty using penalty='l1' and solver='liblinear'. How does it perform?
Hints:
Use X.assign(**X.pow(2).add_suffix('_sq')) for compact feature creation
Remember to fit a new scaler on the new training data
LASSO can help with feature selection when you have many features
# Your code here. Add additional code cells as needed.
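The feature-creation one-liner from the hints works as sketched below on a toy feature matrix; `.pow(2)` squares every column and `.add_suffix('_sq')` renames the copies before `assign` merges them back in:

```python
import pandas as pd

X = pd.DataFrame({'age': [30, 40], 'DebtRatio': [0.5, 0.2]})

# Append squared versions of all features as new columns
X_ext = X.assign(**X.pow(2).add_suffix('_sq'))
print(X_ext.columns.tolist())  # original columns plus age_sq, DebtRatio_sq
```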
6.5.10 Exercise 10: Reflection and Discussion
Tasks:
What additional steps could improve model performance (e.g., hyperparameter tuning, handling class imbalance, more feature engineering)?
In a real banking context, would you prefer a model with higher precision or higher recall? Why?
What are the ethical considerations when deploying such a model for loan decisions?
Hints:
No code required; reflect on practical and ethical aspects
Alonso Robisco, Andrés, and José Manuel Carbó Martínez. 2022. “Measuring the model risk-adjusted performance of machine learning algorithms in credit default prediction.” Financial Innovation 8 (1). https://doi.org/10.1186/s40854-022-00366-1.