
Job Overview
Location
San Francisco
Job Type
Full-time
Category
Data & Analytics
Date Posted
February 28, 2026
Full Job Description
📋 Description
- As a Strategic Risk Analyst at OpenAI, you will play a pivotal role in safeguarding the future of artificial intelligence by proactively identifying, analyzing, and mitigating potential abuse and strategic risks across our cutting-edge products and platforms. This is a unique opportunity to contribute to OpenAI's core mission of ensuring AI benefits all of humanity by building and maintaining a comprehensive, horizontal "radar" for AI abuse and strategic risk. You will be instrumental in correlating internal signals, external intelligence, and real-world events into clear, actionable priorities that directly inform OpenAI's safety and product decision-making processes.
- Your primary responsibility will be to synthesize a wide array of data, including internal abuse patterns, upstream and external intelligence feeds, and product and conversational signals, into decision-ready risk insights. This involves transforming complex, often ambiguous information into structured judgments, complete with explicit assumptions and confidence levels. You will produce recurring analytical briefs and provide crucial prioritization inputs that guide the development and deployment of AI technologies.
- Collaboration is at the heart of this role. You will work closely with a diverse group of stakeholders, including investigators, engineers, policy experts, trust and safety counterparts, and measurement and forecasting teammates. Your ability to translate messy, disparate signals into clear, actionable findings and recommendations will be critical. This position offers a high-leverage analytical environment where your crisp thinking and effective communication will directly shape safety decisions, mitigation strategies, and the overall readiness of our products.
- Key responsibilities include continuously monitoring and analyzing internal risk signals, such as abuse telemetry, investigation outputs, and model and product signals, to detect emerging trends, shifts in adversary tactics, and novel abuse patterns. You will also conduct proactive upstream and external scanning, leveraging open-source intelligence (OSINT), monitoring ecosystem developments, and analyzing real-world events to understand and articulate their implications for OpenAI's products and the broader threat landscape.
- You will be tasked with identifying and conducting deep-dive analyses into specific harms and misuse cases across various products and channels, effectively turning raw, unstructured data into clear, evidence-based analytic findings. A significant part of your role will involve connecting individual incidents to form system-level narratives. This means understanding the actors involved, their incentives, the weaknesses in product design that are being exploited, and how misuse might spill over across different products. You will be expected to rigorously pressure-test hypotheses early in the analysis process.
- The output of your analysis will be concise, decision-ready risk briefs and intelligence estimates. These documents must clearly articulate your findings, explicitly state your assumptions, define your confidence levels, and outline the conditions under which your assessment might change. Ultimately, you will convert these analyses into clear, ranked priorities and actionable recommendations that can be directly implemented by product, safety, and policy teams.
- Furthermore, you will define and track key risk indicators (KRIs) and outcome metrics to rigorously evaluate the effectiveness of implemented mitigations. This data-driven approach will be essential for driving necessary course corrections and ensuring continuous improvement in our safety posture.
- You will collaborate with data and engineering partners to build robust early-warning and monitoring capabilities. This includes developing dashboards that highlight leading indicators of risk and unusual changes in system behavior, providing real-time situational awareness.
- Contributing to product readiness and launch reviews is another vital aspect of this role. You will develop reusable playbooks, FAQs, and briefing materials that empower teams to respond consistently and effectively to emerging risks.
- Finally, you will drive cross-functional alignment by tailoring your communication and readouts to the specific needs of investigations, engineering, policy, trust and safety, and product stakeholders. Ensuring clarity on decisions and crisp follow-through on action items will be paramount to your success.
🎯 Requirements
- Typically 5+ years of experience in trust and safety, integrity, security, policy analysis, or intelligence work, with a demonstrated ability to analyze complex online harms and AI-enabled misuse.
- Strong analytical craft, including the ability to identify weak signals, form and test hypotheses, explicitly state assumptions, and communicate confidence and uncertainty clearly.
- Proven ability to work cross-functionally with product, engineering, data science, operations, legal, and policy teams, driving clarity on tradeoffs and ensuring follow-through on mitigation work.
- Excellent written and verbal communication skills, with a track record of producing concise, executive-ready briefs and explaining complex issues in grounded, concrete terms.
🏖️ Benefits
- Competitive salary and equity compensation.
- Comprehensive health, dental, and vision insurance.
- Generous paid time off and holidays.
- Opportunities for professional development and learning.
- A collaborative and innovative work environment focused on impactful AI safety research and deployment.
Work Arrangement
Onsite
About OpenAI, Inc.
OpenAI is a San Francisco-based artificial intelligence research and deployment company founded in 2015. It develops large-scale AI models such as GPT, DALL-E, and Codex, providing cloud APIs and consumer applications like ChatGPT. Originally established as a non-profit, it later created a capped-profit subsidiary to attract capital while maintaining its mission to ensure artificial general intelligence benefits all of humanity.