Getting a Window into your Black Box Model
Reason codes for NFL models: interpretability for black-box AI systems.
An open-source project that provides reason codes and feature-importance explanations for neural-network models, demonstrated in a sports-analytics (NFL) context. It helps organizations understand model decisions for compliance and risk management, and illustrates explainability techniques for high-stakes AI applications that require transparency and auditability.
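The project's own attribution method isn't shown here; as a minimal, model-agnostic sketch, permutation importance can turn any black-box scorer into ranked reason codes by shuffling one feature at a time and measuring how much the error worsens. All names below (the linear stand-in model, the feature names) are hypothetical illustrations, not the project's API:

```python
import numpy as np

# Hypothetical black-box model: any callable score(X) -> predictions.
# A fixed linear scorer stands in for a trained neural net; by
# construction, feature 1 ("jersey_number") is irrelevant.
WEIGHTS = np.array([2.0, 0.0, -1.0])

def black_box_score(X):
    return X @ WEIGHTS

def permutation_importance(score, X, y, n_repeats=20, seed=0):
    """Shuffle each feature column and record how much the
    mean-squared error increases relative to the unshuffled baseline."""
    rng = np.random.default_rng(seed)
    base_err = np.mean((score(X) - y) ** 2)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        errs = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            errs.append(np.mean((score(Xp) - y) ** 2))
        importances[j] = np.mean(errs) - base_err
    return importances

def reason_codes(importances, feature_names, top_k=2):
    """Rank features by importance and return the top-k as reason codes."""
    order = np.argsort(importances)[::-1]
    return [feature_names[i] for i in order[:top_k]]

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
y = black_box_score(X)  # noise-free targets keep the sketch deterministic
imp = permutation_importance(black_box_score, X, y)
print(reason_codes(imp, ["yards_gained", "jersey_number", "turnovers"]))
```

Permuting a feature with weight 2.0 inflates the error far more than one with weight -1.0, so the top reason codes come out in weight-magnitude order; a real deployment would substitute the trained network for `black_box_score`.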
Adjacent tooling.
AI Trust Services (KPMG)
KPMG's trusted AI framework for governance, risk, and compliance.
Aporia
Monitor, test, and safeguard LLMs in production with observability and guardrails.
Robust Intelligence
AI security platform detecting adversarial vulnerabilities and model failures.
Azure AI Content Safety
Content moderation API detecting harmful AI outputs in real-time.
Arize AI
Monitor LLM and ML model performance, detect drift, and debug issues in production.
LangSmith
Trace, debug, and monitor LLM applications for transparency and risk control.