MadryLab
Adversarial robustness research lab advancing AI security and trustworthiness.
MadryLab conducts research on adversarial examples, robustness, and security in machine learning systems. Its work informs AI safety practices and helps organizations understand model vulnerabilities; researchers and ML teams use the lab's findings to build more reliable AI systems. Its focus on adversarial training and robustness evaluation offers practical compliance insights for high-risk applications.
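To make the idea of an adversarial example concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one standard way to craft such inputs and a common baseline in robustness evaluation. The toy logistic-regression model, weights, and data point below are illustrative assumptions, not MadryLab code: the input is nudged by a small step eps in the direction that most increases the model's loss, which can flip an otherwise correct prediction.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def logistic_loss(x, y, w, b):
    """Binary cross-entropy loss of a linear classifier on point (x, y)."""
    p = sigmoid(dot(w, x) + b)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def fgsm_perturb(x, y, w, b, eps):
    """Shift each coordinate of x by eps in the sign of d(loss)/dx."""
    p = sigmoid(dot(w, x) + b)
    grad_x = [(p - y) * wi for wi in w]  # loss gradient w.r.t. the input
    return [xi + eps * math.copysign(1.0, gi) for xi, gi in zip(x, grad_x)]

# Illustrative classifier and a correctly classified point (assumed values).
w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.5)
print(round(logistic_loss(x, y, w, b), 3),
      round(logistic_loss(x_adv, y, w, b), 3))  # → 0.201 0.693
```

The perturbation is tiny per coordinate, yet the loss roughly triples; adversarial training hardens models by folding such worst-case perturbed inputs back into the training loop.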
Adjacent tooling.
AI Trust Services (KPMG)
KPMG's trusted AI framework for governance, risk, and compliance.
Aporia
Monitor, test, and safeguard LLMs in production with observability and guardrails.
Lumenova AI
Enterprise platform automating AI governance, risk assessment, and fairness monitoring.
ModelOp
AI ethics platform for model monitoring, bias detection, and governance.
Robust Intelligence
AI security platform detecting adversarial vulnerabilities and model failures.
Sardine
AI risk management for fraud detection with governance oversight.