Spring AI Safety Forum
Friday, March 7th, 1 - 4 PM
Georgia Institute of Technology
Scheller College of Business, Floor 2
The start of a conversation.
Artificial intelligence will be a transformative technology.
How do we make sure it benefits everyone?
Schedule
The Georgia Tech Scheller College of Business is located at 800 West Peachtree Street NW, Atlanta, GA.
12:30 - 1:00 PM: Check-in & Networking (Floor 2)
1:00 - 1:45 PM: Opening & Keynote Speech
1:45 - 4:00 PM: Technical & Governance Workshops
Keynote: Jason Green-Lowe
Executive Director of the Center for AI Policy
The Disconnect Between Heavy AI Risks and Lightweight AI Governance
Jason Green-Lowe is the Executive Director of the Center for AI Policy (CAIP), which works with government partners on legislation to mitigate the catastrophic risks associated with advanced AI. Before joining CAIP, Jason worked as a product safety litigator and as a data compliance counselor, advising local governments and nonprofits on how to safely store and manage sensitive data. He graduated from Harvard Law School in 2010 and holds two certificates in data science.
What does unsafe AI look like?
Technical Workshop
Opening Pandora's Box: Creating Malicious RL Agents
Workshop Leader: Changlin Li is the founder of the AI Safety Awareness Foundation. He began his career with five years at the Systemized Intelligence Lab at Bridgewater Associates, then spent four years as a founding engineer at Vowel.com (since acquired by Zapier). Along the way, he completed a stint at the Recurse Center in New York City studying formal verification of software, followed by a second stint there focused on modern AI and AI safety.
How can we prepare for transformative AI?
Governance Workshop
Down the Rabbit Hole: Forecasting and AI Legislation
Workshop Leader: Parv Mahajan is a counter-WMD and counterproliferation researcher at the Georgia Tech Research Institute's Advanced Concepts Laboratory and a curriculum developer with the GT School of Mathematics. His research focuses on cyberbiosecurity and RL interpretability. Outside of GT, he works on LLM red-teaming with Apart Lab Studio and on de novo protein design research with Big Data Big Impact.