Programs

We host various programs and events exploring the fields of AI alignment and governance.

AI Safety Fellowship

Overview

As AI systems become more powerful, our ability to understand and control these models remains limited. This program covers the technical problems that must be solved to ensure AI is developed for the benefit of our future. Participants will join weekly discussion groups on topics such as reward specification, goal generalization, scalable oversight, and interpretability. By the end of the semester, participants will have the opportunity to join an ML upskilling group or become an AISI researcher, working on a project ranging from a blog post examining a topic in AI safety to a workshop paper.

View curriculum »

Who is this for?

Technically minded people with little to no prior exposure to AI safety who are interested in learning about the challenges and potential risks of AI systems and in kickstarting a career in AI safety.

Application

AI Safety Fellowship is open to anyone interested in learning more about AI safety.

Applications for Spring 2025 are now closed. Join us for our other events, reading groups, and the Spring AI Safety Forum.

Expected Start Date: February 7th

Rapid Upskilling Cohorts

Overview

A facilitation and accountability group for alignment research upskilling, based on the AI Alignment Research Engineer Accelerator, a rigorous bootcamp covering mechanistic interpretability and evaluation frameworks. Cohort graduates begin full-scale research projects, often in collaboration with academic and industry partners.

Who is this for?

People with some experience in the fundamentals of AI safety (past AI Safety Fellowship participants or equivalent) and a solid background in programming.

Application

Admission is by invitation after completing the AI Safety Fellowship or a BlueDot equivalent. Invitations for this round have been sent out; contact board@aisi.dev if you are interested.

Start Date: March 1st

Research Opportunities

Work alongside AISI researchers on impactful projects aimed at a workshop or arXiv publication. This opportunity is offered to select AI Safety Fellowship graduates and by application. If you'd like to inquire, feel free to reach out to board@aisi.dev with your resume/CV and a short description of your interest in AI safety research.

We are coordinating with faculty in the College of Computing and ML@GT to mentor promising AISI research groups, alongside the Research Option (RO) for undergraduates at GT. Details forthcoming!

Who is this for?

People with past research experience, a strong background in programming, or a compelling demonstrated interest in AI safety.

Keep updated »

Paper Reading Groups

Overview

Special-topics reading groups that run by request, depending on interest from the larger community. Contact board@aisi.dev if you'd like to start a group on a topic you're interested in.

Who is this for?

People from any background, preferably with some research experience, who are interested in diving deep into the technical details of a niche field within AI safety research.

Speaker Events

Overview

Join us for engaging presentations by leading researchers and industry professionals who share their insights on AI safety and showcase their latest research. These events, open to both members and the public, offer a unique opportunity to explore emerging topics in the field, engage directly with speakers, and connect with others who share your interest in ensuring AI development benefits humanity. To get involved and hear about upcoming speaker events, please join our Discord server.

Who is this for?

People from any background who are keen to engage with experts in the field of AI safety and learn more about the latest developments.

General Body Meetings

Overview

Build connections within our AI safety community at these welcoming gatherings. Whether we're sharing pizza over member-led presentations, engaging in lively trivia games, diving into journal discussions, or walking through key concepts, these meetings create space for meaningful interaction. Often scheduled alongside speaker events, these casual meetups let you explore AI safety topics while getting to know others who share your passion for responsible AI development. These meetings are open to everyone. To get involved and hear about upcoming general body meetings, please join our Discord server.

Who is this for?

Anyone curious about AI safety who wants to connect with other like-minded individuals in a welcoming, low-pressure environment, regardless of their background.