Decode Research's mission is to improve understanding of AI models and to accelerate interpretability research.
We work on:
- Neuronpedia - Interpretability platform for understanding, visualizing, testing, and searching AI internals.
- SAELens - Open-source library for training and analyzing Sparse Autoencoders (SAEs), maintained by David Chanin (see the usage sketch after this list).
- circuit-tracer - Library for finding circuits using transcoder features, maintained by Michael Hanna.
- SAEDashboard - Tool for generating dashboards that visualize SAE features.
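
To give a flavor of what these tools look like in practice, here is a minimal sketch of loading a pretrained SAE with SAELens and running activations through it. The release and hook-point names below are illustrative choices, and the exact return signature of `SAE.from_pretrained` has changed between SAELens versions, so treat this as a sketch rather than canonical usage:

```python
import torch
from sae_lens import SAE

# Load a pretrained SAE from the SAELens registry. In some versions
# from_pretrained returns a tuple (sae, cfg_dict, sparsity) rather than
# the SAE alone; the release/hook names here are illustrative.
sae, cfg_dict, sparsity = SAE.from_pretrained(
    release="gpt2-small-res-jb",        # illustrative release name
    sae_id="blocks.8.hook_resid_pre",   # illustrative hook point
    device="cpu",
)

# Encode a batch of model activations into the sparse feature space,
# then reconstruct them with the decoder.
acts = torch.randn(4, sae.cfg.d_in)     # stand-in for real LLM activations
features = sae.encode(acts)             # sparse feature activations
recon = sae.decode(features)            # reconstruction of the input

print(features.shape, recon.shape)
```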
Our approach is roughly:
- Interpretability Tooling: Create and maintain libraries like SAELens, making state-of-the-art interpretability techniques more accessible.
- Infrastructure and Platform: Neuronpedia serves as a central hub for hosting, testing, visualizing, and understanding SAEs for LLM interpretability, used by independent researchers as well as larger companies and labs.
- Democratizing Access: By releasing open-source SAEs for popular models, we lower the barriers to entry for AI safety research and enable a broader community to contribute.
We collaborate with organizations and companies of all sizes, from labs like Anthropic, Google DeepMind, and OpenAI, to independent researchers, to academic and other non-profit research organizations. We're always looking for new partners and are happy to help however we can.
Decode is led by Johnny Lin, with significant core contributions from David Chanin, Michael Hanna, advisors, and open-source contributors.