
Job Overview
Location
Anywhere (Open Globally)
Job Type
Full-time
Category
Data Science
Date Posted
February 17, 2026
Full Job Description
📋 Description
- The Center for AI Risk Management and Alignment (CARMA) works at the forefront of addressing the challenges posed by increasingly sophisticated artificial intelligence systems. Our Public Security Policy (PSP) Program is seeking a dedicated and insightful Policy Researcher to contribute to our mission of advancing AI governance, mitigating existential risks, and developing robust emergency response frameworks. This role is designed for an individual passionate about safeguarding society from the potential downsides of transformative AI, with a specific focus on preparing for and responding to AI-related crises.
- In this capacity, you will focus on disaster preparedness planning, crafting strategies for AI-related crises across multiple jurisdictional levels: national, state/provincial, multinational, and alliance-based. You will conduct in-depth vulnerability assessments of critical societal systems, identifying weak points that advanced AI could exploit or exacerbate. This requires a sophisticated understanding of how AI might interact with and disrupt essential services, infrastructure, and social structures.
- A core component of the role is developing robust governance approaches. You will research and propose frameworks for managing the risks of advanced AI, with societal resilience as the guiding priority, and translate complex technical concepts and potential risk scenarios into clear, actionable policy recommendations that decision-makers in government and international bodies can understand and implement.
- The ideal candidate possesses broad literacy across the sciences, encompassing both physical and information technologies. This interdisciplinary understanding is crucial for grasping the multifaceted nature of AI risks and for developing effective mitigation strategies. You will be instrumental in identifying systemic weaknesses and contributing to the design of resilience strategies that can withstand the pressures of advanced AI deployment.
- This position offers direct engagement with critical AI safety challenges and the chance to shape policy responses to emerging risks. The role is full-time and envisioned as a long-term commitment; note, however, that funding beyond nine months is contingent on securing further grants, reflecting the dynamic nature of research and policy work in this field.
- Your responsibilities include conducting rigorous policy research to map the current landscape of AI governance approaches and pinpoint gaps and fragilities in existing frameworks. You will assess societal vulnerabilities across critical infrastructure, physical safety, information ecosystems, and social institutions, and how AI might impact each. Developing comprehensive AI disaster scenario plans, adaptable to diverse jurisdictional contexts and governance structures, will be a key output.
- You will also translate complex technical concepts and risk scenarios into clear, actionable policy recommendations, drafting policy briefs, reports, and blog posts that communicate AI governance challenges to audiences ranging from technical experts to the general public. In addition, you will develop realistic simulation materials and tabletop exercises designed to stress-test emergency response protocols for AI system failures.
- Further duties include identifying early warning indicators of potential cascade failures in AI governance structures, mapping institutional dependencies and coordination challenges in cross-jurisdictional disaster response, analyzing existing civil defense and disaster response frameworks for their adaptability to novel AI threats, and supporting PSP advocacy initiatives by fostering strategic relationships with policymakers and stakeholders.
- The role demands a proactive, self-directed individual who can manage complex projects in a remote team environment with meticulous attention to detail. It is an opportunity to make a tangible impact on global AI safety and security, directly informing policy and practice that help build a more resilient and secure society in the age of artificial intelligence.
🎯 Requirements
- Master's degree in Public Policy, Public Administration, Political Science, International Relations, Science and Technology Studies, or a related field, OR a Bachelor's degree with substantial relevant professional experience.
- Demonstrated ability to conduct thorough policy analysis and produce high-quality written outputs for both academic and policy audiences.
- Exceptional writing abilities with experience translating complex technical concepts into accessible language for non-specialist audiences.
- Strong grasp of policy processes, political systems, and strategic approaches to effectuating policy change.
- Breadth of knowledge, curiosity, and literacy across the sciences and physical technologies.
- Demonstrated understanding of AI safety concerns, systemic risks, and the sociotechnical implications of increasingly capable AI systems.
🏖️ Benefits
- Full-time, long-term position with the potential for significant impact.
- 100% remote work flexibility, allowing you to work from anywhere globally.
- Opportunity to engage directly with critical AI safety challenges and shape global policy.
- Collaborative and mission-driven work environment focused on mitigating existential risks.
About Future of Life Institute
The Future of Life Institute (FLI) is a global research and advocacy organization working to mitigate existential risks facing humanity, focusing on the risks posed by advanced artificial intelligence, biotechnology, and nuclear weapons. FLI supports research to ensure that AI is developed safely and beneficially, and it advocates for policies that reduce the likelihood of catastrophic outcomes from these powerful technologies. Its work includes funding research, organizing conferences, and engaging with policymakers and the public to raise awareness and promote responsible innovation. FLI's ultimate goal is to steer humanity toward a future where advanced technologies enhance, rather than threaten, our existence.