
Job Overview
Location
Remote - California, SF Bay Area
Job Type
Full-time
Category
Data Engineer
Date Posted
May 15, 2026
Full Job Description
📋 Description
- Assist in identifying and resolving QA bugs related to financial data pipelines and processes within the Azure stack.
- Validate data accuracy, completeness, and consistency across finance-related data sources and transformations.
- Participate in designing, building, and maintaining scalable data pipelines for financial analytics and reporting.
- Implement data transformations, cleansing, and ETL processes using Python, SQL, and PySpark to support Finance Transformation Team initiatives.
- Collaborate with senior data engineers to optimize pipeline performance and improve data flow efficiency.
- Take ownership of small, defined features within the data pipeline, including code development for data ingestion, processing, and storage.
- Lead an independent project focused on a data engineering topic, such as optimizing pipeline performance or automating data validation procedures.
- Present findings, recommendations, and outcomes from your independent project to the finance and data engineering team.
- Use Azure services including Data Factory, Synapse, Databricks, and Azure Functions to support end-to-end data engineering workflows.
- Integrate data from external systems using APIs to enrich financial datasets and improve analytical coverage.
- Apply knowledge of data visualization tools (Power BI, Tableau) to support reporting needs and communicate insights effectively.
- Explore and apply foundational Gen AI and machine learning concepts to enhance data processes or automate decision-making tasks.
- Independently develop basic machine learning models to address small-scale financial data challenges.
- Document business requirements and technical processes clearly to ensure reproducibility and team alignment.
- Work remotely up to 20 hours per week during business hours, contributing to a high-impact finance data engineering team.
- Leverage strong problem-solving skills to troubleshoot data anomalies and improve data quality across financial systems.
- Contribute to an inclusive, values-driven culture that prioritizes employee growth, innovation, and respect.
🎯 Requirements
- Current student pursuing a Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, Business Analytics, Business Intelligence, or a related field
- Proficiency in SQL, Python, and PySpark
- Familiarity with Azure services including Data Factory, Synapse, Databricks, and Azure Functions
- Knowledge of data integration using APIs
- Knowledge of, or prior experience with, Gen AI and machine learning concepts
- Ability to develop machine learning models independently
🏖️ Benefits
- Competitive compensation
- Generous 401(k) program in the US and similar programs internationally
- Health benefits and programs supporting physical and mental well-being
- Flexible work environment
- Meaningful opportunities to keep learning and growing
- Half-day Fridays, depending on location
About The Clorox Company
The Clorox Company manufactures and markets consumer and professional products worldwide, including cleaning, disinfecting, laundry, food, water filtration, charcoal, and cat litter brands such as Clorox, Pine-Sol, Glad, Brita, Kingsford, and Fresh Step. Founded in 1913 and headquartered in Oakland, California, the company operates through four segments: Health and Wellness, Household, Lifestyle, and International. It distributes products through mass retailers, e-commerce, warehouse clubs, and professional supply channels.


