A Living and Curated Collection of Explainable AI Methods
A living repository of explainable AI (XAI) methods and research papers for model transparency and interpretability. It serves as a knowledge base for data scientists, AI auditors, and compliance teams who need to understand and implement interpretability techniques, and by cataloging proven XAI approaches it helps organizations meet the EU AI Act's transparency requirements for high-risk AI systems.
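To make the kind of technique cataloged here concrete, the following is a minimal sketch of permutation feature importance, a classic model-agnostic interpretability method: shuffle one feature at a time and measure how much held-out accuracy drops. The dataset and model are illustrative placeholders, and scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Illustrative data and model; any fitted estimator with .score() works.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

baseline = model.score(X_te, y_te)  # accuracy with intact features
rng = np.random.default_rng(0)

importances = {}
for j in range(X_te.shape[1]):
    X_perm = X_te.copy()
    rng.shuffle(X_perm[:, j])  # break feature j's link to the label
    # Importance = how much accuracy drops when feature j is scrambled.
    importances[j] = baseline - model.score(X_perm, y_te)

print(importances)
```

scikit-learn also ships a built-in `sklearn.inspection.permutation_importance` that averages over repeated shuffles; the loop above just makes the mechanism explicit.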
Adjacent Tooling
AI Governance & Compliance (EY Global)
Enterprise AI governance and compliance framework aligned with EU AI Act requirements.
AI Trust Services (KPMG)
KPMG's trusted AI framework for governance, risk, and compliance.
Aporia
Monitor, test, and safeguard LLMs in production with observability and guardrails.
Centraleyes
AI-powered risk register and policy management for EU AI Act compliance.
Certa
AI-driven third-party risk assessments and compliance management.
Credo AI
Map AI initiatives to regulatory frameworks with compliance scoring.