OpenAI launches Safety Fellowship: external fellows receive compute resources and stipends but no access to internal systems


According to 1M AI News monitoring, OpenAI has opened applications for its Safety Fellowship, a pilot program for external researchers, engineers, and practitioners running roughly five months, from September 14, 2026 to February 5, 2027. The program focuses on AI safety and alignment research; priority areas include safety evaluations, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agent oversight, and high-risk misuse scenarios.

Selected fellows will receive a monthly stipend, compute support, and mentorship from OpenAI. Fellows may work either from the Constellation shared workspace in Berkeley or remotely. By the end of the program, participants are expected to produce a substantial research output, such as a paper, benchmark suite, or dataset. Fellows will receive API credits and related resources but will not have access to OpenAI's internal systems.

Applications are open to candidates from multidisciplinary backgrounds, including computer science, social science, cybersecurity, privacy, and human-computer interaction. Applicants must submit letters of recommendation. The application deadline is May 3, and results will be announced by July 25.
