Research

We host several research projects investigating open problems in AI safety.

Focus Areas

Our broad purpose is to address emergent risks from advanced AI systems, and we welcome a range of research interests in this area. A few prominent focus areas:

Current Projects

Eliciting Language Model Behaviors using Reverse Language Models

We evaluate a reverse language model, pre-trained on text with inverted token order, as a tool for automatically identifying an LM's natural-language failure modes.
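
As a rough illustration of the approach, here is a minimal sketch assuming a Hugging Face causal-LM interface; the "reverse-gpt2" checkpoint name is a placeholder for a model actually pre-trained on token-reversed text, and the helper below is hypothetical. Conditioning the reverse LM on a reversed target output and sampling yields candidate prompts (generated back-to-front) that a forward model might complete with that output.

```python
# Minimal sketch: using a reverse LM to search for prompts that elicit a given
# (undesirable) completion. "reverse-gpt2" is a hypothetical checkpoint
# pre-trained on token-reversed text; substitute a real reverse-trained model.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
reverse_lm = AutoModelForCausalLM.from_pretrained("reverse-gpt2")  # hypothetical name

def candidate_prompts(target_output: str, num_samples: int = 5, max_new_tokens: int = 20):
    """Sample prompts whose forward continuation might be `target_output`.

    The reverse LM models earlier tokens given later ones, so conditioning it on
    the reversed target and sampling produces a prompt written back-to-front.
    """
    target_ids = tokenizer(target_output, return_tensors="pt").input_ids
    reversed_target = target_ids.flip(dims=[1])  # present the target back-to-front
    prompts = []
    for _ in range(num_samples):
        out = reverse_lm.generate(
            reversed_target,
            do_sample=True,
            max_new_tokens=max_new_tokens,
            pad_token_id=tokenizer.eos_token_id,
        )
        # The newly generated tokens are the candidate prompt, back-to-front.
        prompt_ids = out[0, reversed_target.shape[1]:].flip(dims=[0])
        prompts.append(tokenizer.decode(prompt_ids, skip_special_tokens=True))
    return prompts

print(candidate_prompts("The moon landing was faked."))
```

In an actual pipeline, the candidate prompts would then be fed to the forward model under audit to check whether they really elicit the failure.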

Scaling laws for activation addition

Activation engineering is a promising direction for controlling LLM behavior at inference time with negligible additional compute cost. Recent research suggests that manipulating model internals can enable more precise control over model outputs. We seek to understand how techniques that operate on model activations scale with model size, and to improve their performance on larger models.
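
As a rough sketch of the kind of intervention this line of work studies, the snippet below adds a steering vector (the activation difference between a contrastive prompt pair) into one transformer block's output during generation. It assumes a GPT-2-family Hugging Face model; the layer index, prompt pair, and coefficient are illustrative choices, not settings from this project.

```python
# Minimal sketch of activation addition: steer generation by adding a vector to
# one block's residual-stream output. Assumes a GPT-2-family model; the layer
# index, prompt pair, and coefficient are illustrative, not project settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

LAYER = 6    # transformer block whose output we modify
COEFF = 4.0  # steering strength

def last_token_hidden(text: str) -> torch.Tensor:
    """Residual-stream activation after block LAYER for the last token of `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        hs = model(ids, output_hidden_states=True).hidden_states
    return hs[LAYER + 1][0, -1]  # hidden_states[i + 1] is the output of block i

# Steering vector: activation difference between a contrastive prompt pair.
steer = COEFF * (last_token_hidden("Love") - last_token_hidden("Hate"))

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states;
    # broadcast the steering vector across batch and sequence positions.
    return (output[0] + steer.to(output[0].dtype),) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(add_steering)
ids = tokenizer("I think dogs are", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=20, do_sample=False,
                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0]))
handle.remove()
```

The open question the project targets is how choices like the layer, coefficient, and prompt pair should change with model size, and whether the technique stays cheap and precise at larger scales.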

Supervised Program for Alignment Research

Organized by groups at UC Berkeley, Georgia Tech, and Stanford, the Supervised Program for Alignment Research (SPAR) is an intercollegiate, project-based research program for students interested in AI safety, running this fall. SPAR matches students around the world with advisors for guided projects in AI safety.

Learn more »