News
Stay updated with the latest news and upcoming events!
March 15th
National AI Action Plan Submission
In collaboration with faculty and PhD students from the School of Public Policy, our RFI Working Group submitted a response to the federal government’s Office of Science and Technology Policy on the development of a National AI Action Plan. Read the full response here.
As we build out our governance program, we look forward to further collaborations with researchers across campus and to providing input to key organizations in the US government.
Executive Summary:
Frontier AI models represent a critical national security asset requiring immediate action to ensure America's economic competitiveness and geopolitical dominance. The recommended dual approach classifies frontier models as vital security assets while fostering the commercial applications that drive innovation and economic growth. Success requires substantial investment in infrastructure, intelligence networks, and public-private partnerships, alongside comprehensive education and workforce development initiatives. In our response, we split AI models into two main classes, (1) frontier AI models and (2) their commercial applications, and provide four classes of recommendations:
National Security, Defense, and Research Dominance: Secure frontier AI development by establishing classified partnerships between AI labs and DoD and Intelligence Community agencies, implement multi-layered defense strategies, secure human capital through competitive retention programs, prioritize explainable AI over AGI, and strengthen IP protection and procurement controls.
Technical Infrastructure and Model Development: Develop resilient model evaluation tools, expedite domestic hardware manufacturing through the CHIPS Act, secure critical mineral supply chains beyond China, and invest in energy-efficient computation technologies.
Innovation and Education: Support industry research on AI adoption best practices, develop comprehensive workforce training programs, accelerate government AI adoption, and build AI literacy across K-12 education.
Balancing Autonomy and Safety: Enforce strict data handling policies, establish NIST as the primary regulatory body for commercial AI, incorporate Probabilistic Risk Assessment methodologies, and support market-driven solutions like incident reporting systems.

March 7th
Spring AI Safety Forum
On March 7, 2025, the Georgia Institute of Technology hosted over 60 researchers, students, industry professionals, and members of the public at the Spring AI Safety Forum at the Scheller College of Business.
The event kicked off with a powerful keynote by Jason Green-Lowe, Executive Director of the Center for AI Policy. His address, "The Disconnect Between Heavy AI Risks and Lightweight AI Governance," tackled the complex challenges of aligning rapid AI development with robust governance structures—setting the tone for the day’s discussions.
Following the keynote, attendees dove into hands-on workshops designed to address the multifaceted nature of AI safety. One session, led by Changlin Li, founder of the AI Safety Awareness Foundation, explored the creation of malicious reinforcement learning agents in a workshop aptly named "Opening Pandora’s Box." Participants gained firsthand experience in understanding and managing the potential risks these AI systems might pose. Meanwhile, another workshop, "AI Control: Strategies and Failures," guided by Tyler Tracy from Redwood Research, examined the technical challenges behind controlling and predicting potentially malicious agents.
Organized by the AI Safety Initiative (AISI) at Georgia Tech, the forum aimed to foster hands-on engagement with AI safety tools and stimulate policy discussions. The event also served as a networking platform for participants to connect with others dedicated to ensuring that AI technologies are developed and governed responsibly.
February 24th
CAIP Congressional Exhibit
AISI presented at a congressional exhibit on AI safety risk hosted by the Center for AI Policy in Washington, DC. The team gave a demonstration on red-teaming language models. Read more about the event in the official press release.