
Job Overview
Location: Remote
Job Type: Full-time
Category: Data Engineer
Date Posted: February 12, 2026

Full Job Description
📋 Description
- Join a dynamic, AI-driven technology company as a Data Engineer and play a pivotal role in building and maintaining the data infrastructure behind its forecasting and attribution intelligence products. This is a hands-on, execution-focused contract position in which your expertise directly shapes the quality and reliability of the analytics-ready data that powers the client's AI use cases.
- Develop and maintain scalable, fault-tolerant ELT pipelines in Python, and orchestrate and monitor those workflows with Dagster to ensure reliable execution and timely data delivery (a minimal sketch of this pattern appears after this list).
- Proactively troubleshoot pipeline failures, performance bottlenecks, and data inconsistencies, applying a systematic, detail-oriented approach so that data assets stay accurate and available.
- Monitor overall pipeline health using observability tools and key metrics, catching potential problems before they affect downstream processes.
- Develop, optimize, and document dbt models following analytics-engineering best practices, keeping models clean, efficient, and readily consumable for business intelligence, forecasting, and machine-learning feature generation.
- Contribute to the continuous improvement of existing data workflows, refactoring and enhancing them as product needs evolve.
- Implement and maintain data quality checks and testing strategies, embedding data integrity throughout the pipeline (see the second sketch after this list).
- Follow established team standards for Service Level Agreements (SLAs), code quality, and deployment processes to keep data operations consistent and reliable.
- Work closely with data scientists to provide the foundational data for forecasting and AI-driven use cases, ensuring the data infrastructure directly supports the development and deployment of advanced analytical models.
- Collaborate cross-functionally with analytics and product teams so that the data meets technical requirements, aligns with business objectives and product roadmaps, and enables data-informed decision-making at all levels.
- This role is ideal for a self-starter who thrives in a fast-paced, execution-focused environment and can deliver results autonomously in a contract setting.
- You will work within a modern analytics engineering stack (Python, dbt, Dagster) and gain hands-on experience with real-world AI and analytics use cases.
- Your work directly affects customer onboarding, reporting workflows, and the development of sophisticated AI applications, making this a highly visible and impactful role.
- This position is open to candidates located in LATAM, Africa, and Eastern Europe, and requires availability during U.S. business hours (9 AM - 5 PM EST) to support U.S.-based clients.
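
To make the Python + Dagster part of the stack concrete, here is a minimal, illustrative sketch of a Dagster-orchestrated ELT step. The asset names, tables, connection string, and schedule are assumptions for the example, not details of the client's actual pipelines.

```python
# Minimal Dagster sketch (hypothetical asset and table names): extract raw
# events, load them into a warehouse staging table, and run the job daily.
import pandas as pd
from dagster import Definitions, ScheduleDefinition, asset, define_asset_job
from sqlalchemy import create_engine

# Placeholder connection string; a real deployment would read this from config/secrets.
ENGINE = create_engine("postgresql+psycopg2://user:pass@warehouse:5432/analytics")


@asset
def raw_events() -> pd.DataFrame:
    """Extract yesterday's events from a hypothetical source table."""
    return pd.read_sql(
        "SELECT * FROM source.events WHERE event_date = CURRENT_DATE - 1", ENGINE
    )


@asset
def stg_events(raw_events: pd.DataFrame) -> None:
    """Load the extracted events into a staging table for dbt to transform."""
    raw_events.to_sql(
        "stg_events", ENGINE, schema="staging", if_exists="append", index=False
    )


# Orchestration: materialize all assets once a day at 06:00 UTC.
daily_elt = define_asset_job("daily_elt", selection="*")
daily_schedule = ScheduleDefinition(job=daily_elt, cron_schedule="0 6 * * *")

defs = Definitions(assets=[raw_events, stg_events], schedules=[daily_schedule])
```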
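
As a companion to the data quality bullet above, the sketch below shows one way such a check could be expressed as a Dagster asset check; in practice the checks might equally live in dbt tests, so treat the asset name, columns, and thresholds as hypothetical.

```python
# Hypothetical data-quality check: flag null or duplicate primary keys in the
# staged events before downstream dbt models consume them.
import pandas as pd
from dagster import AssetCheckResult, asset_check
from sqlalchemy import create_engine

# Placeholder connection string, as in the sketch above.
ENGINE = create_engine("postgresql+psycopg2://user:pass@warehouse:5432/analytics")


@asset_check(asset="stg_events")
def stg_events_has_valid_keys() -> AssetCheckResult:
    """Verify that event_id is never null and never duplicated in staging."""
    df = pd.read_sql("SELECT event_id FROM staging.stg_events", ENGINE)
    null_ids = int(df["event_id"].isna().sum())
    duplicate_ids = int(df["event_id"].duplicated().sum())
    return AssetCheckResult(
        passed=(null_ids == 0 and duplicate_ids == 0),
        metadata={"null_ids": null_ids, "duplicate_ids": duplicate_ids},
    )
```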
🎯 Requirements
- 3+ years of professional experience in data engineering or analytics engineering.
- Hands-on experience with dbt (Core or Cloud).
- Experience with an orchestration tool such as Dagster or a similar tool (e.g., Airflow, Prefect).
- Proficiency in Python, including data libraries such as pandas and database connectors such as SQLAlchemy or psycopg2.
- Advanced SQL skills, including CTEs, window functions, and query optimization (a short example follows this list).
- Experience with cloud data warehouses such as Snowflake, BigQuery, or Redshift.
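
As a purely illustrative example of the SQL and Python skills listed above, the snippet below runs a query that combines a CTE with a window function (a trailing 7-day revenue average per account) from Python via SQLAlchemy and pandas. The table, columns, and connection string are invented for the example.

```python
# Illustrative query: a CTE aggregates orders to daily revenue per account,
# then a window function computes a trailing 7-day average per account.
import pandas as pd
from sqlalchemy import create_engine, text

# Placeholder warehouse URL; a real job would pull credentials from config/secrets.
engine = create_engine("postgresql+psycopg2://user:pass@warehouse:5432/analytics")

ROLLING_REVENUE_SQL = text(
    """
    WITH daily_revenue AS (
        SELECT
            account_id,
            CAST(order_ts AS DATE) AS order_day,
            SUM(amount)            AS revenue
        FROM orders
        GROUP BY account_id, CAST(order_ts AS DATE)
    )
    SELECT
        account_id,
        order_day,
        revenue,
        AVG(revenue) OVER (
            PARTITION BY account_id
            ORDER BY order_day
            ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
        ) AS revenue_7d_avg
    FROM daily_revenue
    ORDER BY account_id, order_day
    """
)

with engine.connect() as conn:
    rolling_revenue = pd.read_sql(ROLLING_REVENUE_SQL, conn)
print(rolling_revenue.head())
```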
🏖️ Benefits
- Opportunity to work on cutting-edge AI-driven products and contribute to impactful data infrastructure.
- Gain hands-on experience with a modern analytics engineering stack (Python, dbt, Dagster).
- Collaborate closely with data science, analytics, and product teams on advanced AI and analytics use cases.
- Fully remote contract role with flexibility in location (LATAM, Africa, Eastern Europe).
- Direct contribution to customer onboarding, reporting workflows, and AI feature development.
About Scale Army Careers
Scale Army Careers is the job board of Scale Army, a recruiting and staffing company that places experienced remote professionals from regions such as Latin America, Africa, and Eastern Europe with U.S.-based businesses. The platform lists fully remote roles like this one, typically aligned with U.S. business hours, and connects candidates with the client companies behind each posting.



