Vendor Profile

Redwood Research

AI safety research lab building tools for measuring and improving model alignment.

Redwood Research conducts AI safety research and develops tools for evaluating and improving language model behavior, with a focus on mechanistic interpretability, model auditing, and alignment testing. ML teams use its tools to gain deeper insight into model risks and to validate model behavior. The lab emphasizes rigorous evaluation methodologies over compliance checklists.