TrustyAI Explainability Toolkit
Supporting models and data for DOI 10.1021/acs.jcim.1c01163
OrganismCore transforms reasoning into executable artifacts built on the Universal Reasoning Substrate (URS). Its purpose is to accelerate discovery by making reasoning itself a programmable, transmissible, and model-agnostic object.
This repository contains the code for Unfold and Conquer Attribution Guidance, presented at the 2023 conference of the Association for the Advancement of Artificial Intelligence (AAAI).
This repo collects interesting literature in the domain of explainable AI (XAI).
Hacking a neural network to understand which concepts it learns in order to solve a logic task.
Methods to interpret black-box machine learning models, helping us understand how they make their decisions.
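One widely used family of such methods is model-agnostic: it probes the trained model from the outside rather than inspecting its internals. Below is a minimal sketch of permutation importance with scikit-learn; the dataset and classifier are illustrative stand-ins, not tied to any particular repository above.

```python
# Permutation importance: shuffle one feature at a time and measure how
# much the model's held-out score drops. The dataset and model here are
# stand-ins chosen only to make the example self-contained.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature 10 times and record the mean accuracy drop.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"{X.columns[i]:<25} {result.importances_mean[i]:.3f}")
```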
Skin cancer classification using Transfer Learning and explainable AI
Attention-guided convolutional autoencoder for one-class anomaly detection and localization on CIFAR-10, using CBAM and reconstruction-based scoring.
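Reconstruction-based scoring of this kind flags an input as anomalous when an autoencoder, trained only on normal data, reconstructs it poorly. The sketch below illustrates just that scoring step with a tiny PyTorch autoencoder; it omits the CBAM attention blocks and the CIFAR-10 training loop of the actual project.

```python
# A minimal, untrained stand-in autoencoder used only to show the scoring
# idea: per-image reconstruction error serves as the anomaly score.
import torch
import torch.nn as nn

class TinyConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),    # 32x32 -> 16x16
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),   # 16x16 -> 8x8
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),     # 8x8 -> 16x16
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),   # 16x16 -> 32x32
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

@torch.no_grad()
def anomaly_scores(model, batch):
    """Per-image mean squared reconstruction error; higher = more anomalous."""
    recon = model(batch)
    return ((batch - recon) ** 2).mean(dim=(1, 2, 3))

model = TinyConvAE().eval()
images = torch.rand(8, 3, 32, 32)  # stand-in for CIFAR-10 images
print(anomaly_scores(model, images))
```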
GUI-based ransomware attack detection using processor and disk I/O telemetry with CNN2D classification, SHAP explainability, adversarial robustness testing, and 10-fold stratified cross-validation.
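SHAP explainability in a setup like this attributes a classifier's predictions to individual input features. The sketch below shows the general pattern with shap's model-agnostic KernelExplainer; the random-forest model and synthetic features are stand-ins for the project's CNN2D and its processor/disk I/O telemetry.

```python
# Explain a black-box classifier with SHAP's KernelExplainer, which only
# needs a prediction function and a background sample. Model and data are
# illustrative stand-ins.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))            # stand-in telemetry features
y = (X[:, 0] + X[:, 2] > 0).astype(int)  # synthetic "ransomware" label

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# KernelExplainer treats the model as a black box via predict_proba.
explainer = shap.KernelExplainer(model.predict_proba, shap.sample(X, 50))
shap_values = explainer.shap_values(X[:5])
print(np.asarray(shap_values).shape)  # per-class attributions for 5 samples
```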
See the world through the eyes of AI
An AI-powered clinical assistant using Retrieval-Augmented Generation (RAG) on the MIMIC-IV DiReCT dataset. It retrieves relevant patient cases and generates diagnostic reasoning using LLMs. Built with Streamlit, Transformers, FAISS, and SentenceTransformers.
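The retrieval half of such a RAG pipeline typically embeds the case corpus, indexes the vectors, and looks up nearest neighbors for each query before any LLM generation happens. Below is a minimal sketch with SentenceTransformers and FAISS; the model name and toy case notes are illustrative, not the project's MIMIC-IV DiReCT data.

```python
# Embed case notes, index them in FAISS, and retrieve the closest cases
# for a clinical query. Corpus and model choice are stand-ins.
import faiss
from sentence_transformers import SentenceTransformer

cases = [
    "Patient with productive cough, fever, and right lower lobe consolidation.",
    "Patient with crushing chest pain radiating to the left arm.",
    "Patient with polyuria, polydipsia, and elevated fasting glucose.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(cases, normalize_embeddings=True)

index = faiss.IndexFlatIP(embeddings.shape[1])  # inner product = cosine on unit vectors
index.add(embeddings)

query = encoder.encode(["fever and lung infiltrate on chest X-ray"],
                       normalize_embeddings=True)
scores, ids = index.search(query, 2)
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.2f}  {cases[i]}")
```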
Detects pneumonia from chest X-rays using deep learning models.
Experimental setup to hook LLMs to a logic engine for proof traces and financial math.