
Job Overview
Location
San Francisco
Job Type
Full-time
Category
Software Engineering
Date Posted
February 28, 2026
Full Job Description
📋 Description
- As a Technical Abuse Investigator at OpenAI, you will be at the forefront of ensuring that our advanced AI technologies are used for the benefit of humanity, not for harm. You will join the dedicated Intelligence and Investigations team, a critical function focused on detecting, investigating, and disrupting the misuse of OpenAI's platform. This role is pivotal in understanding and mitigating novel and critical harms, providing data-backed insights that inform model policies and build scalable safety mitigations. Your work will directly contribute to making OpenAI's products safer and enabling their use for positive, impactful applications.
- This position is uniquely designed to amplify the team's impact. While you will conduct direct investigations, a significant part of your role will involve scaling and automating highly manual, yet crucial, investigative processes. You will leverage your technical acumen to design and implement lightweight solutions, such as notebook templates, efficient data pipelines, and intuitive internal utilities. These tools will empower specialized investigators to identify, track, and address abuse more effectively and at greater scale than is currently possible.
- Your success will be measured not only by the number of investigations you personally complete but, more importantly, by the increased efficiency and consistency your technical contributions bring to the entire team. You will be a force multiplier, enabling broader and deeper coverage of potential misuse.
- Collaboration is key in this role. You will work hand-in-hand with cross-functional partners across Engineering, Legal, Investigations, Security, and Policy teams. This collaboration is essential for responding to time-sensitive escalations, investigating activity that falls outside existing safeguards, and translating complex investigative findings into actionable, scalable detection and enforcement strategies.
- The role involves participation in an on-call rotation, requiring you to handle urgent escalations outside of standard working hours. Investigations may expose you to sensitive content, including material that is sexual, violent, or otherwise disturbing. This role operates within the Pacific Standard Time (PST) zone and is open to remote work within the United States, with a strong preference for candidates located in San Francisco or New York.
- Key responsibilities include:
  - Detecting, investigating, and disrupting abuse and harm by analyzing complex datasets in collaboration with policy, legal, global affairs, security, and engineering teams.
  - Developing and refining abuse signals and investigative methodologies, transforming one-off insights into scalable solutions that minimize manual effort and expand operational coverage.
  - Building and maintaining lightweight technical solutions, such as SQL/Python data pipelines, investigation templates, dashboards, and internal utilities, tailored to investigators focused on specific harm domains.
  - Cultivating a deep understanding of OpenAI's products, data systems, and enforcement mechanisms, and partnering with engineering and data teams to improve investigative tooling, data quality, and operational workflows.
  - Communicating investigation findings clearly to internal stakeholders through detailed written briefs, data-driven recommendations, and concise escalation summaries.
  - Participating in an infrequent incident-response rotation, which demands rapid threat triage, thorough investigation, effective mitigation, sound judgment, and clear communication to senior leadership.
  - Being a positive, collaborative team member whom others enjoy working with.
  - Demonstrating a proven ability to rapidly learn new processes, systems, and team dynamics, and to thrive in ambiguous, rapidly changing, high-pressure environments.
- You will thrive in this role if you possess deep expertise in at least two of the following specialized domains: agentic AI misuse; automation; encryption; terrorism; fraud; violence; child exploitation; data science; dashboarding; API abuse; product exploits; prompt injection; distillation. You should also have 5+ years of experience investigating and mitigating abuse in a relevant field, along with at least 2 years of experience on relevant technical projects. A strong ability to present safety work effectively in public or policy settings is highly valued, as is prior experience scaling or automating processes, particularly using LLMs or ML techniques.
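To make the "lightweight technical solutions" responsibility concrete, here is a minimal, hypothetical sketch of the kind of reusable Python utility the posting describes: a small aggregation that turns per-event abuse flags into a per-actor review queue. All names (`Event`, `summarize_abuse_signals`, the `flagged` field) are illustrative assumptions, not OpenAI tooling.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Event:
    """One API event, as a hypothetical upstream system might record it."""
    actor_id: str    # account or API key responsible for the event
    endpoint: str    # product surface the event hit
    flagged: bool    # verdict from a hypothetical upstream abuse classifier

def summarize_abuse_signals(events, flag_threshold=3):
    """Count flagged events per actor and return only actors at or above
    the review threshold — a toy stand-in for an investigation template
    that turns one-off manual queries into a repeatable pipeline step."""
    flags = Counter(e.actor_id for e in events if e.flagged)
    return {actor: n for actor, n in flags.items() if n >= flag_threshold}

# Illustrative usage: actor "a1" trips the threshold, "b2" does not.
events = [Event("a1", "/v1/chat", True) for _ in range(3)]
events.append(Event("b2", "/v1/chat", True))
review_queue = summarize_abuse_signals(events)  # {"a1": 3}
```

In practice a pipeline like this would read from a warehouse via SQL and feed a dashboard rather than in-memory lists, but the shape — signal extraction, aggregation, thresholded escalation — is the pattern the role's tooling work would generalize.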
Skills & Technologies
Python
Remote
About OpenAI, Inc.
OpenAI is a San Francisco-based artificial intelligence research and deployment company founded in 2015. It develops large-scale AI models such as GPT, DALL-E, and Codex, providing cloud APIs and consumer applications like ChatGPT. Originally established as a non-profit, it later created a capped-profit subsidiary to attract capital while maintaining its mission to ensure artificial general intelligence benefits all of humanity.


