
ML Explainability

26 vendors curated. Independent ranking, no paid placement.

Arize AI

Monitor LLM and ML model performance, detect drift, and debug issues in production.

freemium

LangSmith

Trace, debug, and monitor LLM applications for transparency and risk control.

freemium

Neuronpedia

Interpretability platform for understanding neural network behavior and safety.

freemium

A Living and Curated Collection of Explainable AI Methods

Curated reference collection of XAI methods for model transparency and interpretability.

free

Adversarial Model Analysis

Open-source toolkit for adversarial testing and model interpretability.

free

AI FactSheets 360 (IBM)

IBM framework for documenting AI models and services via factsheets to support transparency and accountability.

free

AI Snake Oil

Book and newsletter debunking AI hype and offering practical guidance for responsible AI deployment.

free

ALEPlot

R package for Accumulated Local Effects plots to interpret ML model predictions.

free

Debugging Machine Learning Models

Debug ML models to understand failures and improve transparency.

free

Distill

Online journal publishing interactive, visual explanations of machine learning research.

free

FairLearn

Open-source toolkit for detecting and mitigating AI model bias and fairness issues.

free

Fairness and Machine Learning: Limitations and Opportunities

Free textbook and resource guide on fairness limitations in machine learning systems.

free

Getting a Window into your Black Box Model

Tutorial on generating reason codes to interpret black-box model predictions, illustrated with NFL data.

free

IML

Open-source ML interpretability library for understanding model decisions.

free

Interpretable Machine Learning using Counterfactuals

Explainable AI through counterfactual examples for model transparency.

free

Interpreting Machine Learning Models with the iml Package

R package for interpreting and explaining machine learning model predictions.

free

Introduction to Responsible Machine Learning

Educational framework for building interpretable, fair, and accountable ML systems.

free

MadryLab

MIT research lab advancing adversarial robustness and trustworthy machine learning.

free

Partial Dependence Plots in R

R package for interpreting model predictions through partial dependence visualization.

free

ResponsibleAI

Open-source toolkit for responsible AI development and model explainability.

free

TensorBoard Projector

Interactive visualization tool for understanding neural network embeddings and model behavior.

free

Tracing the thoughts of a large language model

Anthropic interpretability research on tracing the internal computations behind LLM outputs.

free

What-If Tool (Google)

Interactive tool for testing and understanding ML model behavior and fairness.

free

Fiddler AI

Monitor model drift, detect bias, and explain ML/LLM decisions in production.

unknown

Model Transparency Ratings

Ratings system for AI model transparency and accountability across deployments.

unknown

SynthID-Text

Watermark AI-generated text for transparency and provenance verification.

unknown