
Job Overview
Location: Washington DC
Job Type: Full-time
Category: Other
Date Posted: February 17, 2026
Full Job Description
📋 Description
- The Future of Life Institute, through its Center for AI Risk Management and Alignment (CARMA), is seeking a highly motivated and intellectually curious Director of Legal Research to focus on Emergency Preparedness and Civil Defense in the context of advanced AI.
- This pivotal role involves deep dives into the U.S. federal and state-level legal and policy landscape to identify and develop interventions that bolster public safety, security, and well-being against potential catastrophic scenarios arising from highly multipolar Artificial General Intelligence (AGI) run amok.
- You will be instrumental in uncovering underutilized or entirely novel legal authorities that can be leveraged for proactive defense mechanisms.
- Your research will explore opportunities for enhanced communication and response strategies, as well as identify legal loopholes that could be addressed through executive agency actions, all while navigating the dynamic political realities of Washington, D.C.
- This position requires a creative and holistic approach to problem-solving, working closely with the Public Security Policy program lead to craft innovative solutions for projected catastrophic issues.
- The scope of your work will encompass a broad range of critical areas, including public health and safety, civil rights, human rights, national security, global security, emergency preparedness, emergency management, economic stability, infrastructure security, and the unique challenges posed by novel threat profiles.
- While deep technical AI knowledge is not a prerequisite, a strong understanding of legal frameworks and policy is essential: the focus is on the societal defense mechanisms that would be deployed should AI governance structures falter.
- You will collaborate with CARMA's other specialized teams in AI risk assessment, offense-defense, and geostrategic dynamics, who will provide invaluable support in scenario analysis and risk mitigation.
- Key responsibilities include identifying and researching underutilized or unused legal authorities for proactively dealing with the novel risks anticipated from AGI.
- You will collaboratively identify differentially exploitable gaps in the existing legal framework and its practical application, under specific assumptions, to help prioritize defense strategies.
- Your work will involve researching potential federal executive, legislative, and state-level policy interventions designed to address these identified gaps in relation to projected risks.
- A significant part of the role will be analyzing existing deficits in emergency preparedness and ideating structural solutions to address them.
- You will contribute to crafting comprehensive solution recommendations for the prevention, mitigation, and/or response to a variety of projected novel crises.
- The position demands thorough research into the realpolitik and judicial dynamics that might influence the implementation of prospective policy interventions.
- This role is an opportunity to contribute to a critical mission: lowering the risks to humanity and the biosphere from transformative AI.
- CARMA's broader objective is to provide critical support to society in managing the outsized risks from advanced AI through robust policy research, technical safety advancements, and fostering global perspectives on durable safety.
- You will be at the forefront of exploring how existing legal structures can be adapted, or new ones created, to safeguard society against unprecedented technological challenges.
- The ideal candidate will possess a keen analytical mind, exceptional research skills, and the ability to translate complex legal concepts into actionable policy recommendations.
- This position offers a unique chance to shape the future of AI safety and societal resilience by leveraging legal and policy expertise.
- You will be expected to work independently, manage multiple concurrent tasks, and meet deadlines with minimal supervision, demonstrating a proactive and results-oriented approach.
- The role requires a strategic thinker capable of anticipating future challenges and developing forward-looking solutions.
- Collaboration will be key, as you'll work with internal teams and potentially external stakeholders to achieve CARMA's ambitious goals.
- Your research will directly inform policy recommendations and advocacy efforts aimed at ensuring a safe and beneficial future with advanced AI.
- This is a challenging yet rewarding opportunity for a legal professional passionate about addressing existential risks and contributing to global security.
🎯 Requirements
- Juris Doctor (J.D.) degree plus a minimum of 3 years of relevant experience in law, policy research, bill drafting, or policy engagement; OR a Master's degree in a relevant field plus 5 years of legal research experience.
- Proficiency in utilizing recent advancements in legal research information technologies.
- Demonstrated experience in a combination of the following areas: federal policymaking apparatuses, judicial dynamics, policy research, state-level policymaking apparatuses, policy analysis, policy strategy, lobbying familiarity, drafting Executive Orders or bill language.
- Experience in a combination of the following fields: security mindset, catastrophic risk policy, emergency preparedness, crisis management, CBRN (Chemical, Biological, Radiological, Nuclear) expertise, national security complex familiarity, Intelligence Community (IC) mindset on unknown unknowns, strategic thinking, multiphase multiprong strategies, public safety, public order, AI governance, AI safety strategy, game theory, criminology, counter-terrorism, futures studies, or foresight methods.
- Proven ability to manage, track, and successfully complete multiple concurrent tasks to meet deadlines with little supervision.
🏖️ Benefits
- Opportunity to work on cutting-edge issues at the intersection of law, policy, and advanced AI, contributing to global safety and security.
- Collaborative and intellectually stimulating work environment with leading experts in AI risk management.
- Competitive salary commensurate with experience and qualifications.
- Health, dental, and vision insurance (for employee positions).
- Paid time off and holidays (for employee positions).
- Professional development opportunities and support for continuous learning.
About Future of Life Institute
The Future of Life Institute (FLI) is a global research and advocacy organization working to mitigate existential risks facing humanity. It focuses on the risks posed by advanced artificial intelligence, biotechnology, and nuclear weapons. FLI supports research to ensure that AI is developed safely and beneficially, and it advocates for policies that reduce the likelihood of catastrophic outcomes from these powerful technologies. Its work includes funding research, organizing conferences, and engaging with policymakers and the public to raise awareness and promote responsible innovation. FLI's ultimate goal is to steer humanity toward a future where advanced technologies enhance, rather than threaten, our existence.