
Job Overview
Location: Germany
Job Type: Full-time
Category: Data Engineer
Date Posted: September 14, 2025
Full Job Description
📋 Description
• Join the Cargo Models team at Kpler and become the architect behind the world’s most accurate cargo-tracking intelligence. You will own the end-to-end design, development, and optimisation of Python-based data pipelines that ingest, transform, and enrich live maritime AIS feeds, port call events, and commercial fixtures into high-confidence cargo flows (a minimal pipeline sketch appears after this list). Every commit you push directly improves the real-time dashboards used by traders, charterers, and analysts to move billions of dollars of commodities.
• Collaborate daily with data scientists, research engineers, and product managers to turn complex Operations Research models into production-grade micro-services. You will translate mathematical prototypes into scalable, fault-tolerant code, ensuring sub-second latency for queries that scan terabytes of streaming data.
• Extend and harden our core algorithms for vessel behaviour classification, voyage segmentation, and cargo volume reconciliation. You will refactor legacy modules, add new feature flags, and implement comprehensive unit and integration tests so that model upgrades can be released continuously without downtime.
• Guarantee data quality at planetary scale. You will build automated anomaly-detection jobs that surface outliers in berth-level events (see the outlier-detection sketch after this list), draft data-quality SLAs, and partner with QA to run weekly regression tests. When a client questions a cargo estimate, you will dive into the lineage graph, trace the issue to its root, and ship a fix within hours.
• Optimise system performance across the stack: tune Spark jobs to cut runtime by 30%, rewrite hot-path SQL queries to leverage new indexes, and right-size Kubernetes pods to shave cloud costs (a Spark tuning sketch also follows this list). You will set up Prometheus alerts and Grafana dashboards so the team spots bottlenecks before clients do.
• Champion best-practice engineering. You will enforce code-review standards, mentor junior engineers on clean architecture, and introduce new patterns such as feature stores or data-contract testing. Expect to present brown-bag sessions on topics ranging from Kafka partitioning strategies (illustrated after this list) to Terraform module design.
• Contribute to the roadmap. You will translate user pain points into technical epics, estimate effort in fortnightly sprints, and demo shipped features to stakeholders across Europe, Asia, and the Americas. Your voice will influence which markets we enter next and which data sets we acquire.
• Maintain 24×7 reliability. You will join a rotating on-call roster, respond to PagerDuty alerts, and run blameless post-mortems. Over time you will automate away toil—self-healing jobs, canary deployments, chaos-engineering drills—so the pager stays quiet and the team sleeps well.
• Stay curious. We iterate fast: last quarter we adopted Delta Lake, next quarter we may explore Flink or Rust. You will allocate 10% of your time to spikes, prototypes, and conference learning, then bring fresh ideas back to the squad.
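To make the pipeline bullet concrete, here is a minimal sketch of an ingest-transform-enrich stage in Python. The record shape, field names, and the metadata lookup are assumptions for illustration, not Kpler's actual schema.

```python
# Minimal sketch of an ingest -> transform -> enrich pipeline stage; the
# message shape and the enrichment lookup are invented for illustration.
from dataclasses import dataclass
from typing import Iterable, Iterator

@dataclass
class AisPosition:
    mmsi: str        # vessel identifier (hypothetical field name)
    lat: float
    lon: float
    timestamp: int   # epoch seconds

def parse(raw_lines: Iterable[str]) -> Iterator[AisPosition]:
    """Ingest: decode raw comma-separated feed lines into typed records."""
    for line in raw_lines:
        mmsi, lat, lon, ts = line.strip().split(",")
        yield AisPosition(mmsi, float(lat), float(lon), int(ts))

def enrich(positions: Iterator[AisPosition],
           vessel_names: dict[str, str]) -> Iterator[dict]:
    """Enrich: join each position with vessel metadata."""
    for p in positions:
        yield {
            "mmsi": p.mmsi,
            "name": vessel_names.get(p.mmsi, "unknown"),
            "lat": p.lat,
            "lon": p.lon,
            "timestamp": p.timestamp,
        }

# Usage: stream records through the stages without materialising the feed.
feed = ["227006760,48.85,2.35,1726300800"]
for row in enrich(parse(feed), {"227006760": "EXAMPLE VESSEL"}):
    print(row)
```

Generators keep each stage lazy, so the same structure works whether the feed is a file replay or a live stream consumer.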
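The anomaly-detection work might look something like the following rolling z-score job. The column names (port_id, volume_tonnes, event_time), the window size, and the threshold are all hypothetical.

```python
# Illustrative only: flags outlier berth-event volumes with a rolling z-score.
# The DataFrame schema (port_id, volume_tonnes, event_time) is hypothetical,
# not Kpler's actual data model.
import pandas as pd

def flag_volume_outliers(events: pd.DataFrame, window: int = 50,
                         threshold: float = 3.0) -> pd.DataFrame:
    """Mark rows whose volume deviates more than `threshold` sigmas from
    the rolling mean of the last `window` events at the same port."""
    events = events.sort_values("event_time").copy()
    grouped = events.groupby("port_id")["volume_tonnes"]
    rolling_mean = grouped.transform(lambda s: s.rolling(window, min_periods=5).mean())
    rolling_std = grouped.transform(lambda s: s.rolling(window, min_periods=5).std())
    events["z_score"] = (events["volume_tonnes"] - rolling_mean) / rolling_std
    events["is_outlier"] = events["z_score"].abs() > threshold
    return events
```

Grouping by port before computing the statistics matters: a volume that is routine at Rotterdam may be a clear outlier at a small river berth.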
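For the performance bullet, a hedged sketch of two common Spark levers: right-sizing shuffle parallelism and broadcasting small dimension tables. The paths, table names, partition counts, and config values are invented for illustration.

```python
# Illustrative PySpark tuning sketch; table names, partition counts, and
# config values are assumptions, not the team's actual settings.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("cargo-flow-enrichment")
    # Match shuffle parallelism to the cluster instead of the 200 default.
    .config("spark.sql.shuffle.partitions", "256")
    # Let Spark broadcast small dimension tables to avoid shuffle joins.
    .config("spark.sql.autoBroadcastJoinThreshold", str(64 * 1024 * 1024))
    .getOrCreate()
)

ais = spark.read.parquet("s3://bucket/ais_positions/")    # hypothetical path
ports = spark.read.parquet("s3://bucket/port_metadata/")  # hypothetical path

# Pre-partition the large fact table on the join key so it shuffles once
# rather than on every downstream stage; broadcast the small side.
enriched = (
    ais.repartition(256, "port_id")
       .join(ports.hint("broadcast"), "port_id")
)
enriched.write.mode("overwrite").parquet("s3://bucket/enriched_ais/")
```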
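Finally, the kind of trade-off a brown-bag on Kafka partitioning might cover: hashing the message key so that all events for one vessel land on the same partition, which preserves per-vessel ordering. Kafka's default partitioner uses murmur2 hashing; md5 below merely stands in for any stable hash, and the IMO numbers and partition count are made up.

```python
# Illustrative key-based partitioning, analogous to what Kafka's default
# partitioner does for keyed messages: one vessel -> one partition, so
# that vessel's events stay ordered. Values below are made up.
import hashlib

NUM_PARTITIONS = 12  # hypothetical topic size

def partition_for(vessel_imo: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Stable hash of the message key mapped to a partition index."""
    digest = hashlib.md5(vessel_imo.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

events = ["9321483", "9176187", "9321483"]  # same vessel appears twice
for imo in events:
    print(imo, "-> partition", partition_for(imo))
# The repeated IMO maps to the same partition, so its events stay in order;
# the cost is potential skew if a few vessels dominate the traffic.
```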
About Kpler S.A.S.
Kpler S.A.S. provides real-time and historical data on commodity flows, tracking global shipments of crude oil, refined products, liquefied natural gas, metals, and agricultural goods. The company aggregates satellite, customs, and port data into a web-based analytics platform used by traders, producers, shippers, and financial institutions to monitor supply chains, assess inventories, and forecast market balances. Founded in 2014 and headquartered in Paris, Kpler employs proprietary algorithms and a global network of sources to deliver granular cargo-level information, enabling clients to make informed trading, logistics, and risk-management decisions across energy and bulk commodity markets.