Data Scientist, Integrity Measurement

Job Overview

Location

San Francisco

Job Type

Full-time

Category

Data Scientist

Date Posted

March 7, 2026

Full Job Description

Description

  • Join OpenAI's Applied Foundations team, a critical unit focused on safeguarding our advanced AI technologies against a spectrum of adversarial threats and ensuring platform integrity as we scale.
  • As a Data Scientist, Integrity Measurement, you will be at the forefront of defending our platforms against sophisticated misuse, including financial abuse, large-scale attacks, and other malicious activities that could compromise user experience or operational stability.
  • The Integrity pillar within Applied Foundations is specifically tasked with developing and deploying scaled systems to detect and respond to malicious actors and harmful content across OpenAI's platforms.
  • This role is pivotal in maturing our systems that address severe usage harms, requiring a data scientist to meticulously measure the prevalence of these issues and evaluate the effectiveness of our countermeasures.
  • You will own the measurement strategy and quantitative analysis for a portfolio of severe, actor- and network-based usage harm verticals, delving deep into the complexities of identifying and quantifying these threats.
  • A core responsibility will be to develop and implement innovative, AI-first methodologies for prevalence measurement and other critical production safety metrics, potentially incorporating off-platform indicators and unconventional datasets.
  • You will be instrumental in building robust metrics suitable for performance goal setting and A/B testing, especially when direct prevalence metrics are not feasible or sufficient.
  • Take ownership of comprehensive dashboards and metrics reporting for your assigned harm verticals, providing clear, actionable insights to stakeholders.
  • Conduct in-depth analyses to generate insights that directly inform and drive improvements in review processes, detection algorithms, and enforcement strategies, significantly influencing product and safety roadmaps.
  • Play a key role in optimizing Large Language Model (LLM) prompts specifically for the purpose of accurate and efficient measurement of harmful activities.
  • Collaborate closely with other safety teams across OpenAI to gain a deep understanding of emerging and existing safety concerns, and to contribute to the creation of relevant policies that effectively support our safety objectives.
  • Prepare and present key metrics for leadership reviews and external reporting, ensuring transparency and accountability.
  • Develop and implement automation solutions, leveraging OpenAI's cutting-edge agentic products, to enhance efficiency and scale your own impact.
  • This role demands a proactive approach to identifying and mitigating risks, requiring a keen analytical mind and a commitment to upholding the highest standards of AI safety and integrity.
  • You will work with sensitive and potentially disturbing content, including material related to sexual violence, child safety, and other forms of severe harm, requiring emotional resilience and a strong ethical compass.
  • Contribute to the continuous evolution of our safety measurement frameworks, ensuring they remain state-of-the-art and effective against evolving threats.
  • The position offers a unique opportunity to shape the safety and integrity of globally impactful AI technologies, making a tangible difference in protecting users and the broader ecosystem.
  • You will be part of a dynamic and collaborative environment, working alongside world-class researchers and engineers dedicated to responsible AI development.
  • This role is ideal for a senior data scientist with a proven track record in trust and safety, eager to drive measurement direction and tackle some of the most challenging problems in AI safety.
  • Your work will directly contribute to OpenAI's mission of ensuring that artificial general intelligence benefits all of humanity by building trust and safety into the core of our AI systems.
  • The San Francisco or New York office location offers a vibrant work environment, with potential for urgent escalations outside of normal work hours, reflecting the critical nature of the role.
  • Embrace the challenge of measuring and mitigating complex harms in a rapidly evolving technological landscape, contributing to the responsible deployment of AI.
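To give a concrete sense of the prevalence-measurement work described above: a common baseline approach (not necessarily the one this team uses; the function and numbers below are illustrative assumptions) is to label a random sample of traffic and report the estimated harm rate with a binomial confidence interval, here the Wilson score interval:

```python
import math

def wilson_interval(positives: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval for a binomial proportion.

    positives: number of sampled items labeled as violating
    n: total number of items in the random sample
    z: normal quantile (1.96 -> ~95% confidence)
    """
    if n == 0:
        return (0.0, 0.0)
    p = positives / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (max(0.0, center - margin), min(1.0, center + margin))

# Hypothetical example: 12 violating items found in a random sample of 2,000.
low, high = wilson_interval(12, 2000)
print(f"Estimated prevalence: {12 / 2000:.3%} (95% CI {low:.3%} to {high:.3%})")
```

The interval matters as much as the point estimate: severe harms are rare, so small samples yield wide intervals, which is one reason the posting emphasizes AI-first methodologies and proxy metrics when direct prevalence measurement is insufficient.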

Skills & Technologies

Python
Data Science
Onsite



About OpenAI, Inc.

OpenAI is a San Francisco-based artificial intelligence research and deployment company founded in 2015. It develops large-scale AI models such as GPT, DALL-E, and Codex, providing cloud APIs and consumer applications like ChatGPT. Originally established as a non-profit, it later created a capped-profit subsidiary to attract capital while maintaining its mission to ensure artificial general intelligence benefits all of humanity.

