
Job Overview
Location
Anywhere (Open Globally)
Job Type
Full-time
Category
Machine Learning Engineer
Date Posted
February 17, 2026
Full Job Description
📋 Description
• Join a pioneering AI safety initiative as a Research Engineer for Novel AI Platforms for Multiscale Alignment within the Alignment of Dynamical Cognitive Systems program.
• This role bridges cutting-edge platform development and alignment research, focusing on the challenges posed by increasingly capable AI systems, particularly Large Language Model (LLM) agents.
• You will co-design and develop AI platforms that serve as the bedrock for safety research, enabling internal tools and community resources.
• Your work will involve creating environments engineered to shape and understand multi-agent dynamics, paving the way for cooperative AI architectures with inherently robust alignment properties.
• A significant aspect of the position is building the technical infrastructure needed to rigorously investigate experimental AI alignment and control approaches.
• You will collaborate closely with leading researchers, translating theoretical advances in AI alignment into practical, implementable solutions and technologies.
• This is a chance to help develop foundational technologies essential for safe and responsible development as AI capabilities continue to advance rapidly.
• The Center for AI Risk Management and Alignment (CARMA) is dedicated to navigating the profound, potentially catastrophic risks of advanced AI. Its mission is to mitigate existential risks to humanity and the biosphere stemming from transformative AI.
• CARMA's approach is multifaceted: grounding AI risk management in rigorous analytical frameworks, developing policy frameworks that proactively address Artificial General Intelligence (AGI), advancing state-of-the-art technical safety methodologies, and cultivating the global perspectives needed for durable AI safety.
• Through these integrated strategies, CARMA aims to help society manage the outsized risks posed by advanced AI before they escalate into unmanageable crises.
• As a Research Engineer, you will research and develop AI systems and platforms serving diverse needs: safety research, internal tooling, and support for the broader AI safety community.
• You will design and implement architectures for agent execution environments and interaction platforms, enabling complex simulations and analyses.
• A key technical contribution will be developing optimization algorithms tailored to multi-objective and cooperative AI systems.
• You will create mechanisms for conflict resolution and preference aggregation in multi-agent settings, crucial for safe AI interactions.
• You will implement testing frameworks and develop example environments designed to rigorously validate theoretical approaches to AI alignment.
• You will build middleware components for secure, efficient, and reliable agent communication, forming the backbone of distributed AI systems.
• Architecting and developing reusable software components for AI alignment research is a core function, promoting efficiency and collaboration within the research community.
• You will document system architectures, APIs, and implementation details to ensure clarity and maintainability.
• You will collaborate on technical publications and research presentations.
• The position blends hands-on implementation with theoretical research in AI alignment.
• You will support the evaluation and refinement of research prototypes as part of the iterative development of AI safety technologies.
• The work draws on concepts from dynamic plan recognition, activity recognition, dynamic multicriteria decision making, multi-agent systems, and AI builder platforms.
• You will apply strong programming skills in Python and Java to build robust, scalable AI systems.
• The ability to independently drive technical projects from concept through implementation is essential.
• Familiarity with techniques for AI alignment, control, safety, or related research areas is a prerequisite for success in this role.
• Experience developing middleware, frameworks, or platforms is highly valued.
• A broad understanding of machine learning principles and AI systems architecture is expected.
• Literacy in semi-formal semantics will aid the precise definition and verification of AI behaviors.
• Excellent written and verbal communication skills, along with strong technical documentation abilities, are crucial for effective collaboration and knowledge sharing.
• This role suits a proactive, intellectually curious individual eager to make a significant impact on the future of AI safety.
🎯 Requirements
• MS or PhD in Computer Science, AI, or a related field, or equivalent practical experience.
• Demonstrated experience with one or more of the following: dynamic plan recognition, activity recognition, dynamic multicriteria decision making, multi-agent systems, or AI builder platforms.
• Strong programming proficiency in both Python and Java.
• Proven ability to independently drive technical projects from concept to implementation.
• Familiarity with techniques for AI alignment, control, safety, or related research areas.
🏖️ Benefits
• Opportunity to work on cutting-edge AI safety research with global impact.
• Flexible work arrangements with a globally distributed team (fully remote).
• Contribute to foundational technologies shaping the future of AI.
• Engage with a collaborative and intellectually stimulating research environment.
About Future of Life Institute
The Future of Life Institute (FLI) is a global research and advocacy organization working to mitigate existential risks facing humanity, with a focus on the risks posed by advanced artificial intelligence, biotechnology, and nuclear weapons. FLI supports research to ensure that AI is developed safely and beneficially, and advocates for policies that reduce the likelihood of catastrophic outcomes from these powerful technologies. Its work includes funding research, organizing conferences, and engaging with policymakers and the public to raise awareness and promote responsible innovation. FLI's ultimate goal is to steer humanity toward a future where advanced technologies enhance, rather than threaten, our existence.