
Job Overview
Location: London, UK; Ontario, CAN; Remote-Friendly, United States; San Francisco, CA
Job Type: Full-time
Category: Data Science
Date Posted: May 6, 2026
Full Job Description
📋 Description
• The Anthropic Fellows Program — AI Security is a four-month, full-time research fellowship focused on advancing AI safety and security through empirical projects aligned with Anthropic's research priorities, with the goal of producing public outputs such as paper submissions.
• Fellows work on independent research projects under direct mentorship from leading Anthropic researchers (including Nicholas Carlini, Keri Warr, Evyatar Ben Asher, Keane Lucas, and Newton Cheng), using external infrastructure such as open-source models and public APIs, with access to shared workspaces in Berkeley or London, or remote options in the UK, US, or Canada.
• The program fosters collaboration within the broader AI safety and security research community, offering a weekly stipend of $3,850 USD / £2,310 GBP / CAD 4,300, plus ~$15k/month in compute funding and research expense support. Over 80% of past fellows have produced papers, and 25–50% have received full-time offers at Anthropic.
• Fellows develop expertise in AI security, offensive security techniques (e.g., pentesting, vulnerability research, CVE reporting), empirical ML research, deep learning frameworks, and experiment management, while contributing to high-impact work that reduces catastrophic risks from advanced AI systems and strengthens red-teaming methodologies.
🎯 Requirements
• Fluent in Python programming
• Available to work full-time on the Fellows program for 4 months
• Work authorization in the US, UK, or Canada, and located in that country for the duration of the program
• Strong technical background in computer science, mathematics, or physics
• Motivated to ensure AI is safe and beneficial for society as a whole
• Ability to implement ideas quickly and communicate clearly
🏖️ Benefits
• Weekly stipend of $3,850 USD / £2,310 GBP / CAD 4,300
• Access to shared workspaces in Berkeley, California or London, UK (or remote options in the UK, US, or Canada)
• Direct mentorship from leading Anthropic researchers (e.g., Nicholas Carlini, Keri Warr)
• Funding for compute (~$15k/month) and other research expenses
• Connection to the broader AI safety and security research community
• Opportunity to produce public outputs (e.g., paper submissions); over 80% of past fellows have done so
About Anthropic, PBC
Anthropic is a public benefit corporation founded in 2021 by former OpenAI researchers to develop large-scale AI systems that are safe, interpretable, and aligned with human values. The company produces Claude, a family of conversational and reasoning models built on Constitutional AI and reinforcement learning from human feedback. Headquartered in San Francisco, Anthropic combines frontier research with applied engineering, publishing scholarly papers on alignment, interpretability, and robustness while offering API access and commercial products built on its models.