CBAI AI Safety Fellowship 2026: Fully Funded Summer Research

The CBAI Summer Research Fellowship in AI Safety 2026 offers researchers a fully funded opportunity to tackle catastrophic risks from advanced AI. This nine-week program runs from June 8 to August 10 in Cambridge, Massachusetts, and brings together talented people to work on AI safety under expert guidance.

Program Overview

Hosted by the Cambridge Boston Alignment Initiative, the CBAI Summer Research Fellowship in AI Safety 2026 focuses on key areas such as interpretability, multi-agent safety, formal verification, and risk management. Fellows take part in workshops, speaker events, and hands-on projects. Past participants have gone on to roles at organizations such as Goodfire and Redwood, presented at top conferences including NeurIPS and ICLR, and advised policymakers in Washington, DC. The application deadline is April 12, 2026.

Research Tracks

The fellowship covers four main tracks to match different interests.

Technical AI Safety
This track focuses on methods to reduce catastrophic risks from powerful AI systems. Work includes alignment strategies, interpretability tools, robustness testing, formal verification, multi-agent safety, and scalable oversight.

AI Governance
Fellows study how to guide the development and use of AI. Topics cover policy frameworks, regulation, international coordination, and systems for managing risks from powerful AI.

Technical Governance
This track bridges technical research and policy. Projects examine compute governance, model evaluations, standards, and mechanisms for ensuring advanced AI is developed and deployed safely.

Economics of Transformative AI
Fellows explore how advanced AI affects economies, including growth models, productivity shifts, labor markets, and policy questions around industrial strategy and market power.

Benefits

The program provides strong support for fellows.

  • Mentorship: One to two hours per week from experts at MIT, Harvard, Northeastern, Boston University, Institute for Progress, AVERI, RAND, and more. In-house managers with experience in interpretability, alignment, and governance help guide projects.
  • Stipend: $10,000 for the nine weeks.
  • Meals: Free lunches and dinners on weekdays, plus office snacks and drinks.
  • Housing: Provided for those traveling from outside Cambridge.
  • Office Space: 24/7 access near Harvard Square, close to the metro and Harvard Yard.
  • Networking: Links to labs at Northeastern, MIT, and Harvard. Social events, workshops, and talks with top researchers help build connections.
  • Research Support: Up to $10,000 in compute resources, such as API credits and GPU access, plus support for attending conferences.
  • Extension Option: Up to six months of extra funding for continued work.

Eligibility Criteria

Open to those committed to safe AI. Ideal applicants include:

  • Undergraduate, Master’s, and PhD students, as well as postdocs, who are new to AI safety or deepening existing work.
  • Recent graduates transitioning into technical or governance AI safety roles.
  • Anyone passionate about reducing AI risks, aged 18 or older.

Note: US-based international students can participate using OPT or CPT; no visa sponsorship is available.

Application Process

Apply in four steps:

  1. Fill out the general form.
  2. Do a 15-minute interview with CBAI.
  3. Complete mentor-specific questions, test tasks, or a code screening (for technical tracks).
  4. Interview with your mentor.

Start your application through the official form. For full details, check the CBAI site.

Frequently Asked Questions

What is the CBAI Summer Research Fellowship in AI Safety 2026?

It’s a nine-week fully funded program from June 8 to August 10 in Cambridge, Massachusetts, where researchers work on AI safety topics like interpretability and governance under expert guidance.

What are the main research tracks offered?

The tracks include Technical AI Safety, AI Governance, Technical Governance, and Economics of Transformative AI, covering risks, policies, and economic impacts of advanced AI.

What benefits do fellows receive?

Fellows get a $10,000 stipend, free meals, housing, mentorship, office space, networking events, research support up to $10,000, and an option for six months of extra funding.

Who is eligible and what is the application deadline?

It’s open to students, postdocs, recent graduates, and others aged 18 or older who are passionate about AI safety; the application deadline is April 12, 2026.
