# Explainable AI using SHAP & CXPlain

## About

Machine Learning (ML) based models are often black boxes. To use an ML model in an industrial setting, it should be both reliable and fair. Metrics like the accuracy score or the R² score speak to reliability, but they say nothing about the fairness of the model. ML engineers should be able to explain their models and understand the value and accuracy of their findings. In this project, we interpret a trained ML model using the CXPlain and SHAP libraries.

This project was developed for a #hackingforfuture hackathon organized by the International Center for Networked, Adaptive Production (ICNAP) in cooperation with the Fraunhofer Project Center at the University of Twente.

## Project Flow

An ML model is trained on a tabular dataset for a binary classification task. Using this trained model, feature importances for the input features are calculated with the CXPlain and SHAP model-interpretation libraries, and the results of the two libraries are compared quantitatively. Hedged sketches of each step are given at the end of this README.

## Requirements

- cxplain 1.0.3
- shap 0.37.0
- pycaret 2.3.0
- tensorflow 2.4.1
- plotly 4.14.3

## Installation procedure

```
pip install cxplain
pip install shap
pip install pycaret
pip install tensorflow
pip install plotly
```

## Results

- Light Gradient Boosting Machine classification report
- Light Gradient Boosting Machine confusion matrix
- Relative importance of features using SHAP
- How a particular feature affects a prediction
- Relative importance of features using CXPlain

## Team members

- Aditya Pradhan
- Kartik Sachdev
- Kishore Kunisetty
- Lennart Mesters
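## Example sketches

The snippets below are minimal sketches of the project flow described above, not the project's exact code. First, the training step with pycaret. The dataset is not specified in this README, so the sketch uses pycaret's bundled "credit" dataset and its "default" label column as stand-ins.

```python
# Sketch of the training step. The "credit" dataset and its "default" label
# column are stand-ins for the project's (unspecified) tabular dataset.
from pycaret.classification import setup, create_model, plot_model
from pycaret.datasets import get_data

df = get_data("credit")

# Initialise the experiment: train/test split plus default preprocessing.
clf = setup(data=df, target="default", silent=True, session_id=42)

# Train the Light Gradient Boosting Machine reported in the Results section.
lgbm = create_model("lightgbm")

# Reproduce the confusion matrix shown in the Results section.
plot_model(lgbm, plot="confusion_matrix")
```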
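Next, SHAP feature importances for the trained model. `lgbm` comes from the sketch above, `X` stands for the preprocessed feature matrix taken from pycaret, and the feature name passed to `dependence_plot` is a placeholder.

```python
# Sketch of the SHAP step for the trained LightGBM classifier.
import shap
from pycaret.classification import get_config

X = get_config("X_train")  # preprocessed training features from pycaret

# TreeExplainer computes exact SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(lgbm)
shap_values = explainer.shap_values(X)  # one array per class for a classifier

# Relative importance of features (summary of SHAP values).
shap.summary_plot(shap_values, X)

# How a particular feature affects the prediction (name is a placeholder).
shap.dependence_plot("feature_name", shap_values[1], X)
```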
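Finally, the CXPlain step, following the usage pattern from the CXPlain documentation. The explanation model's hyperparameters are illustrative, and `x_train`, `y_train`, `x_test` are assumed to be NumPy arrays from the same train/test split, with one-hot labels as required by the categorical cross-entropy loss.

```python
# Sketch of the CXPlain step: train a small MLP to estimate how much the
# explained model's error grows when each input feature is masked out.
from tensorflow.python.keras.losses import categorical_crossentropy
from cxplain import MLPModelBuilder, ZeroMasking, CXPlain

model_builder = MLPModelBuilder(num_layers=2, num_units=32,
                                batch_size=32, learning_rate=0.001)
masking_operation = ZeroMasking()  # mask features by setting them to zero
loss = categorical_crossentropy

explainer = CXPlain(lgbm, model_builder, masking_operation, loss)
explainer.fit(x_train, y_train)

# Per-feature importance scores for the test samples, which can then be
# compared quantitatively with the SHAP importances above.
attributions = explainer.explain(x_test)
```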