This job has expired
This position was posted on October 9, 2025 and is likely no longer accepting applications. We've kept it here for historical reference.

Job Overview
Location: Remote
Job Type: Full-time
Category: Software Engineering
Date Posted: October 9, 2025
Full Job Description
- Own the global Governance, Risk, and Compliance (GRC) program for one of the world’s most advanced AI safety companies. You will architect the policies, controls, and evidence-collection processes that let Anthropic ship frontier AI systems responsibly and at speed.
- Translate cutting-edge AI research into practical risk frameworks. You will work side-by-side with research scientists to map novel capabilities (constitutional AI, RLHF, interpretability tooling) to concrete threats, then design controls that mitigate those threats without throttling innovation.
- Build and continuously improve Anthropic’s control library. Starting from SOC 2 Type II, ISO 27001, and NIST 800-53 baselines, you will tailor each control to the realities of large-scale model training, inference, and data pipelines. Expect to author new controls for model-weight protection, red-team data handling, and responsible disclosure.
- Drive risk assessments across the entire model lifecycle—from pre-training data sourcing through post-deployment monitoring. You will quantify residual risk, present findings to senior leadership, and shepherd mitigation plans to completion.
- Serve as the primary liaison with external auditors, regulators, and enterprise customers. You will lead evidence-gathering sprints, respond to security questionnaires, and translate technical nuance into language that Fortune-500 CISOs and government officials trust.
- Automate evidence collection and continuous control monitoring. You will partner with Security Engineering to instrument AWS, GCP, Kubernetes, and Snowflake environments, ensuring that every configuration change, access grant, and model release is logged, attested, and compliant (a minimal sketch of such a check appears after this list).
- Design and deliver security-awareness content tailored to researchers, engineers, and policy staff. Expect to run tabletop exercises that simulate insider threats to model weights, prompt-injection attacks against Claude, and supply-chain compromises of open-source dependencies.
- Champion a culture of “security as code.” You will embed lightweight policy checks into CI/CD pipelines, create Terraform modules that bake compliance into infrastructure, and publish internal libraries that make secure defaults the path of least resistance (a policy-gate sketch also follows this list).
- Track the rapidly evolving regulatory landscape—EU AI Act, NIST AI RMF, ISO 42001—and translate new obligations into actionable roadmaps. You will brief executives on strategic trade-offs and own the implementation timeline.
- Measure what matters. You will define KPIs for control effectiveness, audit-finding closure rates, and customer-trust metrics, then present quarterly updates to the Board’s Risk Committee.
- Contribute to Anthropic’s public safety and security research. Whether co-authoring white papers on AI supply-chain risk or presenting at conferences like RSA and NeurIPS, you will help set industry standards for responsible AI deployment.
- Thrive in a fully remote, asynchronous culture that values written rigor. You will document decisions in Notion, review RFCs in Slack, and occasionally travel for off-sites or customer briefings (roughly 10%).
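
The responsibilities above don't prescribe specific tooling, so the following is only a minimal sketch of the kind of continuous-control check the evidence-collection bullet describes, assuming AWS is one of the instrumented environments and boto3 is available; the control ID (CRYPTO-01) and evidence format are hypothetical. It verifies that every S3 bucket has default encryption configured and emits a timestamped evidence record.

```python
"""Illustrative continuous-control check: flag S3 buckets that lack
default encryption and emit timestamped evidence as JSON.
The control ID and record layout are hypothetical examples."""
import json
from datetime import datetime, timezone

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")


def check_bucket_encryption() -> list[dict]:
    """Return one evidence record per bucket: compliant or not."""
    records = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            # Succeeds only if a default-encryption config exists.
            s3.get_bucket_encryption(Bucket=name)
            compliant = True
        except ClientError as err:
            code = err.response["Error"]["Code"]
            if code == "ServerSideEncryptionConfigurationNotFoundError":
                compliant = False
            else:
                raise
        records.append({
            "control_id": "CRYPTO-01",  # hypothetical control ID
            "resource": f"s3://{name}",
            "compliant": compliant,
            "checked_at": datetime.now(timezone.utc).isoformat(),
        })
    return records


if __name__ == "__main__":
    # In production a scheduler would ship these records to an
    # attested audit store; here we just print them.
    print(json.dumps(check_bucket_encryption(), indent=2))
```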
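For the “security as code” bullet, one hedged illustration: a CI job could export the Terraform plan with `terraform show -json plan.out > plan.json` and run a small gate script against it before apply. The policy enforced here (no security group exposing SSH to 0.0.0.0/0) is a made-up example, not an actual Anthropic control.

```python
"""Illustrative CI policy gate over a Terraform plan JSON export.
Fails the pipeline if any security group opens SSH to the world."""
import json
import sys


def open_ssh_violations(plan: dict) -> list[str]:
    """Return addresses of security groups allowing 0.0.0.0/0 on port 22."""
    violations = []
    for rc in plan.get("resource_changes", []):
        if rc.get("type") != "aws_security_group":
            continue
        after = (rc.get("change") or {}).get("after") or {}
        for rule in after.get("ingress") or []:
            world_open = "0.0.0.0/0" in (rule.get("cidr_blocks") or [])
            from_port = rule.get("from_port") or 0
            to_port = rule.get("to_port") or 0
            if world_open and from_port <= 22 <= to_port:
                violations.append(rc["address"])
    return violations


if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        plan = json.load(f)
    bad = open_ssh_violations(plan)
    if bad:
        print("Policy violation: SSH open to 0.0.0.0/0 in:", ", ".join(bad))
        sys.exit(1)  # non-zero exit is what lets CI block the merge
    print("Policy check passed.")
```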
About Anthropic, PBC
Anthropic is a public benefit corporation founded in 2021 by former OpenAI researchers to develop large-scale AI systems that are safe, interpretable and aligned with human values. The company produces Claude, a family of conversational and reasoning models based on constitutional AI and reinforcement learning from human feedback. Headquartered in San Francisco, Anthropic combines frontier research with applied engineering, publishing scholarly papers on alignment, interpretability and robustness while offering API access and commercial products built on its models.