ALEPlot
R package for Accumulated Local Effects plots to interpret ML model predictions.
ALEPlot generates Accumulated Local Effects (ALE) plots, a model-agnostic interpretation technique that explains how individual features influence the predictions of a supervised ML model. ALE plots accumulate local prediction changes over the data distribution, which makes them faster and more robust to correlated features than partial dependence plots. Data scientists and ML practitioners use them as an alternative to SHAP and LIME for understanding global model behavior and identifying potential biases.
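A minimal sketch of producing an ALE main-effect plot with the package. The random forest model, the Boston housing data, and the choice of `J = 1` are illustrative assumptions, not part of the original text; `pred.fun` must accept the model and new data in that order, as the package expects.

```r
library(ALEPlot)
library(randomForest)

# Illustrative model and data (assumptions, not from the source)
data(Boston, package = "MASS")
X <- Boston[, names(Boston) != "medv"]
rf <- randomForest(x = X, y = Boston$medv)

# Prediction wrapper in the (X.model, newdata) form ALEPlot requires
pred.fun <- function(X.model, newdata) {
  as.numeric(predict(X.model, newdata))
}

# ALE main effect of the first feature (J = 1);
# K sets the number of intervals the feature range is split into
ALEPlot(X, rf, pred.fun, J = 1, K = 50)
```

Second-order (interaction) effects can be plotted the same way by passing a pair of feature indices, e.g. `J = c(1, 6)`.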
Adjacent tooling.
Arize AI
Monitor LLM and ML model performance, detect drift, and debug issues in production.
LangSmith
Trace, debug, and monitor LLM applications for transparency and risk control.
Neuronpedia
Interpretability platform for understanding neural network behavior and safety.
A Living and Curated Collection of Explainable AI Methods
Curated reference collection of XAI methods for model transparency and interpretability.
Adversarial Model Analysis
Open-source toolkit for adversarial testing and model interpretability.
AI FactSheets 360 (IBM)
IBM methodology and resources for creating AI FactSheets, standardized documentation of a model's purpose, performance, and safety, to support transparency and governance.