Neuronpedia
Interpretability platform for understanding neural network behavior and safety.
Neuronpedia provides tools for mechanistic interpretability of large language models. It helps AI teams, researchers, and auditors understand model internals through sparse autoencoders and feature analysis. By making black-box models more interpretable, it supports EU AI Act transparency requirements and responsible AI governance.
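To make the sparse-autoencoder (SAE) approach concrete, here is a minimal NumPy sketch of the core idea: an SAE decomposes a model's activation vector into an overcomplete set of sparsely firing features, then reconstructs the activation from them. All names, dimensions, and initializations below are illustrative assumptions, not Neuronpedia's actual implementation.

```python
import numpy as np

# Illustrative sparse autoencoder sketch (assumed setup, not Neuronpedia's code).
# Encode:  f = ReLU(W_enc @ x + b_enc)   -> which features fire, and how strongly
# Decode:  x_hat = W_dec @ f + b_dec     -> reconstruct the activation from features
rng = np.random.default_rng(0)

d_model, d_features = 8, 32              # overcomplete dictionary: d_features > d_model
W_enc = rng.normal(0, 0.1, (d_features, d_model))
b_enc = np.full(d_features, -0.05)       # negative bias nudges features toward inactivity
W_dec = rng.normal(0, 0.1, (d_model, d_features))
b_dec = np.zeros(d_model)

def encode(x):
    # ReLU keeps only positively activated features, producing a sparse vector
    return np.maximum(0.0, W_enc @ x + b_enc)

def decode(f):
    return W_dec @ f + b_dec

x = rng.normal(size=d_model)             # stand-in for a residual-stream activation
f = encode(x)
x_hat = decode(f)

print(f"active features: {int((f > 0).sum())}/{d_features}")
```

In a trained SAE, the loss combines reconstruction error with an L1 penalty on `f`, which is what drives sparsity; interpretability work then inspects which inputs activate each feature.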
Adjacent tooling.
AI Governance & Compliance (EY Global)
Enterprise AI governance and compliance framework aligned with EU AI Act requirements.
AI Trust Services (KPMG)
KPMG's trusted AI framework for governance, risk, and compliance.
Aporia
Monitor, test, and safeguard LLMs in production with observability and guardrails.
Centraleyes
AI-powered risk register and policy management for EU AI Act compliance.
Certa
AI-driven third-party risk assessments and compliance management.
Credo AI
Map AI initiatives to regulatory frameworks with compliance scoring.