Complex and explainable machine learning models in credit scoring

Publisher

Strathmore University

Abstract

The rapid evolution of state-of-the-art modelling methodologies offers a compelling opportunity to enhance the precision of credit evaluation tools. However, this progress often comes at a cost: the trade-off between model transparency and predictive accuracy. For credit managers tasked with maintaining effective oversight of credit risk and central bank regulators seeking assurance in model integrity, this trade-off can be excessively burdensome. As a result, the adoption of sophisticated models is often hindered by their inherent lack of transparency.

This dissertation addresses this challenge by exploring advancements in credit assessment methodologies. It provides a comprehensive evaluation of predictive techniques, ranging from traditional logistic regression to modern artificial intelligence (AI) approaches. The findings demonstrate that complex tree-based algorithms, such as random forests, gradient-boosted trees, and extreme gradient-boosted trees, exhibit superior predictive accuracy in forecasting customer defaults. However, the dissertation goes beyond mere performance metrics by introducing innovative approaches to enhance the interpretability and practicality of these advanced models for credit risk practitioners. By doing so, it addresses a significant barrier to the widespread adoption of complex, opaque models in the financial industry.

The study leverages a substantial dataset obtained from a financial institution, ensuring the reliability of inputs and the robustness of outputs. A key contribution of this dissertation lies in its integration of Explainable Artificial Intelligence (XAI) methodologies, which bridge the gap between predictive power and model transparency. By making AI-driven credit risk models more interpretable, this research provides actionable insights for credit managers and regulators, fostering greater confidence in the use of advanced modelling techniques.
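The dissertation's dataset and model configurations are not reproduced here, but the kind of comparison it describes, benchmarking logistic regression against tree-based ensembles on a default-prediction task, can be sketched with scikit-learn on synthetic data (the imbalanced `make_classification` sample below is a placeholder, not the institution's data):

```python
# Sketch: comparing a traditional scorecard-style model (logistic regression)
# against tree-based ensembles on a synthetic default-prediction task.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic "default / no default" data with class imbalance,
# loosely mimicking a credit portfolio where defaults are rare.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}
aucs = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    # AUC is a standard discrimination metric for default prediction.
    aucs[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {aucs[name]:.3f}")
```

On real credit data the relative ranking of these models depends on the feature set and sample size; the dissertation's finding is that the tree-based ensembles outperformed logistic regression on its institutional dataset.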
The key contributions are:

- Comprehensive evaluation of predictive models: a thorough comparison of traditional and modern credit scoring models, highlighting the strengths and limitations of each approach.
- Enhanced interpretability of complex models: introduction of techniques to improve the transparency of tree-based algorithms, making them more accessible to credit risk practitioners.
- Integration of Explainable AI (XAI): application of XAI methodologies to credit risk management, enabling stakeholders to understand and trust AI-driven decisions.
- Practical insights for industry adoption: recommendations for implementing advanced models in real-world credit risk management, balancing accuracy with regulatory and operational requirements.

In conclusion, this dissertation underscores the importance of balancing predictive accuracy with transparency in credit scoring models. By advancing the interpretability of sophisticated AI techniques, it paves the way for their broader adoption in the financial industry. The integration of XAI methodologies not only makes model behaviour transparent but also ensures that credit managers and regulators can maintain effective oversight, ultimately contributing to more robust and reliable credit risk management practices.
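The abstract does not specify which XAI methodologies the dissertation applies (common choices include SHAP values and permutation importance). As one model-agnostic illustration of the idea, the sketch below uses scikit-learn's permutation importance to show which features a fitted tree-based model relies on; feature names and data are synthetic placeholders:

```python
# Sketch of a model-agnostic explanation: permutation importance ranks
# features by how much shuffling each one degrades the model's AUC.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
feature_names = [f"f{i}" for i in range(X.shape[1])]  # placeholder names
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
result = permutation_importance(model, X_te, y_te, scoring="roc_auc",
                                n_repeats=10, random_state=0)

# A large mean importance means the model leans heavily on that feature,
# which a credit manager or regulator can check against domain knowledge.
ranked = sorted(zip(feature_names, result.importances_mean),
                key=lambda t: t[1], reverse=True)
for name, imp in ranked:
    print(f"{name}: {imp:+.3f}")
```

This kind of ranking is what lets practitioners audit an otherwise opaque ensemble: if an economically implausible feature dominates, the model warrants scrutiny before deployment.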

Description

Full-text thesis

Citation

Aswani, A. A. (2025). Complex and explainable machine learning models in credit scoring [Strathmore University]. https://hdl.handle.net/11071/16425
