
Job Overview
Location
Anywhere (Open Globally)
Job Type
Full-time
Category
Data Scientist
Date Posted
February 17, 2026
Full Job Description
📋 Description
- Future of Life Institute (FLI) is seeking a highly skilled and motivated Senior Technical Specialist to join its Comprehensive Risk Assessment (CARMA) team. This pivotal role is designed for someone with a deep understanding of AI safety, alignment, and technical governance who can also provide leadership in quality assurance and methodological rigor. You will develop and refine the analytical frameworks used to assess the complex, potentially catastrophic risks posed by increasingly capable AI systems. The position offers a unique opportunity to contribute to groundbreaking research that shapes industry standards and informs global policy, directly affecting society's ability to navigate the challenges of advanced AI.
- As a Senior Technical Specialist, your primary responsibility will be original research on AI risk pathways: developing innovative techniques for identifying threat models and producing comprehensive risk pathway analyses. These analyses address not only the technical aspects of AI but also its societal and sociotechnical dimensions, recognizing how deeply AI systems are interconnected with human societies and institutions.
- A key aspect of the role is modeling complex risk dynamics. You will develop and apply models that capture multi-node risk transformation, amplification, and threshold effects: understanding how risks evolve, spread, and intensify through social systems, and identifying the critical points where they could escalate dramatically.
- You will play a crucial role in designing and enhancing robust technical governance frameworks and assessment methodologies, addressing catastrophic risks with a particular emphasis on loss-of-control scenarios, so that our approaches remain comprehensive and forward-looking.
- Leadership in quality assurance is a significant component of the position. You will provide strategic and tactical quality control for the team's research outputs, ensuring the conceptual soundness, technical accuracy, and overall rigor of all analyses and methodologies.
- You will drive, or take significant ownership of, original research projects aligned with the CARMA team's strategic objectives. This autonomy allows deep dives into critical areas of AI risk management and the development of novel solutions.
- Collaboration is essential. You will work closely with other CARMA teams to integrate risk assessment paradigms with other workstreams, such as policy development and technical safety approaches, fostering a holistic approach to AI risk management.
- You will contribute to technical standards and best practices for the evaluation, risk measurement, and risk thresholding of AI systems, helping establish benchmarks for safety and reliability in a rapidly evolving AI landscape.
- A critical output of the role is persuasive communication: translating complex technical findings into clear, compelling narratives for key stakeholders, including policymakers, industry leaders, and the broader public, in support of effective AI risk management strategies.
- The CARMA team is dedicated to lowering the risks to humanity and the biosphere from transformative AI. We achieve this by grounding AI risk management in rigorous analysis, developing policy frameworks that address artificial general intelligence (AGI), advancing technical safety approaches, and fostering global perspectives on durable safety. Your work will directly support these goals.
- This position is fully remote, offering global flexibility, though occasional travel may be required for team meetings or conferences. You will join a fiscally sponsored project of Social & Environmental Entrepreneurs, Inc., a 501(c)(3) nonprofit public benefit corporation.
- The ideal candidate possesses a strong foundation in risk modeling approaches such as causal modeling, Bayesian networks, and systems dynamics. Experience with systemic and sociotechnical modeling of risk propagation is highly valued, as is a proven ability to identify subtle flaws in complex arguments through excellent analytical thinking. Strong written and verbal communication skills are essential for effectively conveying technical information to diverse audiences. A publication record or equivalent demonstrated expertise in AI safety, alignment, or governance is expected, alongside a systems thinking approach and independent intellectual rigor. The ability to collaborate constructively in fast-paced, intellectually demanding environments and comfort with uncertainty in a rapidly evolving knowledge landscape are also key attributes.
- Preferred qualifications include a background in complex systems theory, control theory, cybernetics, multi-scale modeling, or dynamical systems. Prior work experience at AI safety research organizations, technical AI labs, policy institutions, or adjacent risk domains would be advantageous. Experience with quality assurance processes for technical research, the ability to model threshold effects and nonlinear dynamics in sociotechnical systems, and an understanding of international dynamics in AI development are also desirable. The capacity to balance acute and aggregate AI risks and experience with specific risk analysis tools, such as causal, Bayesian, or semi-quantitative hypergraphs, would further strengthen an application.
🎯 Requirements
- 5+ years of experience in AI safety, alignment, and/or governance, with demonstrated depth of expertise.
- Strong understanding of multiple risk modeling approaches (e.g., causal modeling, Bayesian networks, systems dynamics).
- Experience with systemic and sociotechnical modeling of risk propagation.
- Excellent analytical thinking with the ability to identify subtle flaws in complex arguments.
- Strong written and verbal communication skills for technical and non-technical audiences.
- Publication record or equivalent demonstrated expertise in relevant areas.
🏖️ Benefits
- Fully remote work opportunity with global flexibility.
- Opportunity to work on cutting-edge AI safety research with significant societal impact.
- Collaborative and intellectually stimulating work environment.
- Contribution to shaping industry standards and global AI policy.
About Future of Life Institute
The Future of Life Institute (FLI) is a global research and advocacy organization working to mitigate existential risks facing humanity, focusing on the risks posed by advanced artificial intelligence, biotechnology, and nuclear weapons. FLI supports research to ensure that AI is developed safely and beneficially, and advocates for policies that reduce the likelihood of catastrophic outcomes from these powerful technologies. Its work includes funding research, organizing conferences, and engaging with policymakers and the public to raise awareness and promote responsible innovation. FLI's ultimate goal is to steer humanity toward a future where advanced technologies enhance, rather than threaten, our existence.