IML
Open-source ML interpretability library for understanding model decisions.
IML provides tools for interpreting machine learning models through feature importance, partial dependence plots, and other explainability methods. Used by data scientists and ML engineers to understand model behavior and build trustworthy AI systems. Supports compliance documentation by making model decisions transparent and auditable.
Adjacent tooling:
AI Trust Services (KPMG)
KPMG's trusted AI framework for governance, risk, and compliance.
Aporia
Monitor, test, and safeguard LLMs in production with observability and guardrails.
Robust Intelligence
AI security platform detecting adversarial vulnerabilities and model failures.
Azure AI Content Safety
Content moderation API detecting harmful AI outputs in real time.
Arize AI
Monitor LLM and ML model performance, detect drift, and debug issues in production.
LangSmith
Trace, debug, and monitor LLM applications for transparency and risk control.