OpenAI Announces Pilot Safety Fellowship Amid New Yorker Investigation

Key Points
- OpenAI announced a six‑month pilot Safety Fellowship on April 6, 2026.
- Program runs from September 14, 2026, to February 5, 2027.
- Fellows receive a monthly stipend, API credits, and mentorship from OpenAI researchers.
- Applications close on May 3; selections announced by July 25.
- Research focus includes robustness, privacy‑preserving methods, and high‑severity misuse.
- Fellowship announced hours after a New Yorker investigation detailed the dissolution of OpenAI’s internal safety teams.
- Participants must deliver a paper, benchmark, or dataset by the program’s end.

OpenAI unveiled a six‑month pilot Safety Fellowship on April 6, 2026, offering external researchers a stipend, API credits, and mentorship to tackle AI safety and alignment. The program runs from September 14, 2026, to February 5, 2027, and accepts applications until May 3. Its launch follows a New Yorker exposé that detailed the company’s recent dissolution of internal safety teams and the removal of “safely” from its mission filing. OpenAI says the fellowship is an open‑door invitation for experts across computer science, social sciences, and cybersecurity to produce concrete research outcomes by the program’s end.
OpenAI is positioning the pilot as a direct response to growing scrutiny of its internal safety efforts. The fellowship will support a cohort of external researchers for the program’s duration, with each fellow receiving a monthly stipend, OpenAI API credits, and dedicated mentorship from the firm’s research staff. In return, participants must deliver a substantive output, whether a paper, benchmark, or dataset, by the program’s conclusion.
The timing of the announcement is noteworthy. Hours earlier, The New Yorker published an investigation by Ronan Farrow and Andrew Marantz alleging that OpenAI had dismantled its Superalignment team in May 2024, its AGI‑Readiness team in October 2024, and its Mission Alignment group in February 2026. The report also claimed that the word “safely” had been removed from the company’s mission statement on IRS filings. When the journalists asked about existential safety, a spokesperson reportedly replied, “What do you mean by existential safety? That’s not, like, a thing.”
OpenAI’s fellowship seeks to shift the focus of safety research outward, inviting scholars from computer science, social sciences, cybersecurity, privacy, and human‑computer interaction. The firm emphasized that technical judgment and research ability outweigh formal academic credentials in the selection process. Priority areas include safety evaluation, robustness, scalable mitigation strategies, privacy‑preserving methods, agentic oversight, and high‑severity misuse domains.
Applicants have until May 3 to submit proposals, with successful candidates notified by July 25. While fellows will receive API credits, they will not gain access to OpenAI’s internal systems, a restriction that underscores the program’s arm’s‑length design. Critics within the AI safety community have already begun debating whether an external fellowship can substitute for a robust in‑house alignment program.
The fellowship’s structure reflects a broader industry trend of leveraging external talent to address complex ethical challenges. By providing API credits and mentorship, OpenAI hopes to generate actionable research that can be integrated into its product roadmap. Whether the initiative will restore confidence in the company’s commitment to safety remains to be seen, but the pilot marks the first formal, public step since the internal teams were dissolved.