
Technical Influence Operations Threat Investigator

Job Overview

Location

USA

Job Type

Full-time

Category

Data Analyst

Date Posted

March 5, 2026

Full Job Description

📋 Description

  • As a Technical Influence Operations Threat Investigator at Anthropic, you will be at the forefront of safeguarding AI systems against sophisticated misuse. Your primary mission will be to detect, investigate, and disrupt the exploitation of Anthropic's advanced AI technologies for malicious influence operations, disinformation campaigns, coordinated inauthentic behavior, and other forms of information manipulation that threaten public discourse and democratic processes.
  • This role sits at the critical intersection of AI safety and information integrity. You will leverage a unique blend of deep subject matter expertise in influence operations and cutting-edge technical investigation skills to identify and neutralize threat actors who are increasingly using AI to generate synthetic content, amplify divisive narratives, manipulate public opinion, and undermine societal trust.
  • Your day-to-day responsibilities will involve proactively detecting and meticulously investigating attempts to misuse Anthropic's AI systems. This includes identifying AI-generated disinformation, uncovering coordinated inauthentic behavior, exposing astroturfing schemes, and dismantling narrative manipulation campaigns designed to sway public perception.
  • You will be instrumental in developing and refining influence operation-specific detection capabilities. This involves creating sophisticated abuse signals, employing advanced behavioral clustering techniques, and devising novel detection methodologies specifically tailored to the nuances of AI-enabled information manipulation.
  • A key aspect of your role will be conducting in-depth technical investigations. You will utilize powerful tools such as SQL and Python to analyze vast datasets, trace complex user behavior patterns, and uncover the intricate networks of threat actors engaged in sophisticated influence operations.
  • You will be responsible for producing high-quality, actionable intelligence reports. These reports will detail the tactics, techniques, and procedures (TTPs) of influence operations, highlight emerging narrative threats, and map out the campaigns orchestrated by threat actors leveraging AI systems.
  • Your analysis will extend beyond Anthropic's platforms to encompass cross-platform threat analysis. This involves linking on-platform activity to broader influence campaigns that may span social media, messaging platforms, and other digital ecosystems, providing a holistic view of threat actor activities.
  • You will actively monitor and analyze state-sponsored and non-state influence operations that may be leveraging AI capabilities. A particular focus will be placed on operations originating from or targeting geopolitically significant regions, requiring an understanding of global threat landscapes.
  • Collaboration will be central to your success. You will work closely with policy and enforcement teams to provide critical insights for informed decision-making regarding user violations and to ensure the implementation of appropriate mitigation actions.
  • You will also engage with a diverse range of external stakeholders. This includes building relationships and sharing intelligence with government agencies, other platform integrity teams, academic researchers, and threat intelligence sharing communities, fostering a collaborative approach to combating online threats.
  • Looking ahead, you will play a vital role in forecasting how advancements in AI technology—such as improved content generation, voice synthesis, and multimodal capabilities—will reshape the influence operations landscape. This foresight will be crucial in informing Anthropic's safety-by-design strategies and proactive defense mechanisms.
  • This role may involve exposure to explicit content across various sensitive topics, including sexual, violent, or psychologically disturbing material. Additionally, it may require availability for escalations during weekends and holidays, reflecting the dynamic and critical nature of threat intelligence work.
  • Your work will directly contribute to Anthropic's mission of building safe and beneficial AI systems by mitigating one of the most significant and rapidly evolving categories of AI misuse.

Skills & Technologies

Python
Remote
Degree Required

Ready to Apply?

You will be redirected to an external site to apply.


About Anthropic, PBC

Anthropic is a public benefit corporation founded in 2021 by former OpenAI researchers to develop large-scale AI systems that are safe, interpretable, and aligned with human values. The company produces Claude, a family of conversational and reasoning models based on Constitutional AI and reinforcement learning from human feedback. Headquartered in San Francisco, Anthropic combines frontier research with applied engineering, publishing scholarly papers on alignment, interpretability, and robustness while offering API access and commercial products built on its models.
