A community at Georgia Tech ensuring AI is developed for the benefit of our future.

Our Mission

Managing risks from advanced artificial intelligence is one of the most important problems of our time. We are a community of technical and policy researchers at Georgia Tech working to reduce these risks, train the next generation of researchers, engage the public, and steer the trajectory of AI development for the better.

Read more »

Research

We pursue and publish novel research that seeks to understand, evaluate, and make safe powerful artificial intelligence systems, through projects in interpretability, goal specification, and governance.

View projects »

Programs

We host a variety of educational fellowships, upskilling programs, and large events exploring the fields of AI alignment and governance, drawing in talent from around Atlanta and the Georgia Tech community.

View all programs »

Current: AI Safety Fellowship

A 6-week seminar that introduces the basic arguments for caution in AI development from a technical perspective. Fellows go on to begin research projects with the Initiative or pursue further upskilling. Applications closed Feb. 2nd.

Current: Rapid Upskilling Cohorts

We facilitate the AI Alignment Research Engineer Accelerator, a rigorous bootcamp covering mechanistic interpretability and evaluation frameworks. Cohort members go on to begin full-scale research projects, often in collaboration with academic and industry partners.

We seek to engage a wide variety of students, faculty, and researchers from across fields and departments. Our AI Safety Pipeline is designed to make technical and governance research approachable and well-supported at Georgia Tech, and we are constantly looking for innovative and creative ways to further our mission. 

Get Involved

Discord

Discuss AI safety and frontier development and keep up with upcoming events.

Join »

Contact

Talk to a member of our team.

Email »

Newsletter

Receive group updates in your inbox.

Subscribe »

Our members have collaborated with researchers at:

Faculty

Proudly supported by: