
Job Overview
Location: Bay Area Office
Job Type: Full-time
Category: Machine Learning Engineer
Date Posted: March 15, 2026
Full Job Description
📋 Description
- Granica Inc. is at the forefront of building the next generation of efficient AI infrastructure, addressing the critical limitations imposed by inefficient data in today's AI systems. At enterprise scale, redundant data, suboptimal representations, and poorly optimized learning pipelines lead to significant costs and latency. Our mission is to eradicate this inefficiency by merging advances in information theory, machine learning, and distributed systems to create data infrastructure that continuously enhances how information is represented and utilized by AI.
- This role is for an Applied AI Research Engineer focused on machine learning systems for structured and tabular data, rather than general LLM application development. You will be instrumental in translating fundamental research ideas into practical, scalable algorithms, optimized pipelines, and production-ready ML systems capable of processing petabytes of structured enterprise data.
- The Applied AI Research Team operates at the crucial intersection of theoretical research and practical production. Your responsibilities will involve taking nascent ideas from fundamental research and transforming them into tangible engineering solutions. This is a high-ownership position designed for engineers who possess both the analytical rigor of a researcher and the robust building capabilities of a systems engineer. You will be directly responsible for converting theoretical concepts into measurable performance improvements and for defining the core engineering principles of structured AI.
- Key responsibilities include transforming foundational research ideas from Granica Research and Prof. Andrea Montanari's group into scalable algorithms and prototypes. You will develop robust evaluation harnesses, curated datasets, and precise benchmarks to rigorously measure the real-world signal derived from research concepts. You will also define and refine key metrics that quantify progress and success in the domain of structured AI systems.
- You will invent and optimize novel algorithms, developing efficient learning methods tailored for relational, tabular, graph, and diverse enterprise datasets. This includes prototyping advanced representation learning architectures and exploring compression-aware models. A significant part of your role will involve exploring new approaches for learning from heterogeneous structured data, pushing the boundaries of what's possible.
- Building high-performance ML pipelines is central to this role. You will implement fast training and inference pipelines using frameworks like PyTorch or JAX, potentially developing custom kernels for maximum efficiency. Optimizing memory usage, compute utilization, and data movement will be critical to ensuring scalability and cost-effectiveness. Your efforts will directly improve the cost, latency, and throughput of large-scale ML workloads.
- The role also involves building hybrid AI systems, designing architectures that seamlessly integrate symbolic, relational, and neural components. This will enable AI models to reason effectively over structured datasets without the need for text intermediaries, unlocking new capabilities for enterprise AI.
- Collaboration is key. You will work closely with Research Scientists to validate hypotheses at scale, partner with Systems Engineers to integrate developed algorithms into Granica's core data platform, and collaborate with Product Engineering teams to ship features that power real-world enterprise workloads.
- A strong emphasis is placed on iterating rapidly and measuring everything. You will conduct controlled experiments, meticulously analyze performance improvements, and deliver results backed by clear benchmarks and reproducible evaluations. Your work will drive the entire cycle from prototype development through production deployment and continuous optimization.
- This role is pivotal in shaping the future of AI infrastructure for structured data, some of the most valuable data in the world. Most current AI systems are ill-equipped to learn from this data efficiently, and Granica is building the essential systems to bridge that gap. Your contributions will define the engineering foundations of structured AI, encompassing the algorithms, pipelines, and infrastructure necessary for efficient learning from enterprise data at a global scale. This position offers high ownership, significant research impact, immediate production relevance, and the chance to fundamentally shape a new generation of AI systems.
About Granica Inc.
Granica builds an AI efficiency platform that compresses and secures petabyte-scale training data for cloud object stores. Its byte-granular deduplication and privacy filtering shrink S3 and GCS footprints, cutting storage and transfer costs while boosting downstream model accuracy. Designed for data scientists and MLOps teams, the service deploys as a transparent sidecar proxy, enforcing differential privacy and access policies without code changes. Founded in 2022 and headquartered in Palo Alto, the company targets enterprises running computer-vision and NLP workloads that need cheaper, safer data pipelines.