
Job Overview
Location
Austin, Texas, USA
Job Type
Full-time
Category
Product Manager
Date Posted
March 4, 2026
Full Job Description
📋 Description
• Fluidstack builds the essential infrastructure for abundant intelligence, partnering with leading AI labs, governments, and enterprises such as Mistral, Poolside, Black Forest Labs, and Meta to deliver compute at unprecedented speed. Our mission is to accelerate the realization of Artificial General Intelligence (AGI), driven by a highly motivated team dedicated to delivering world-class infrastructure. We operate with a deep sense of ownership, treating our customers' success as our own and taking pride in the systems we build and the trust we cultivate. If you are driven by purpose, obsessed with excellence, and ready to work hard to advance the future of intelligence, join us in shaping what comes next.
• As Product Manager, Compute NPI (New Product Introduction), you will lead the introduction of new GPU infrastructure and compute offerings, defining and executing Fluidstack's strategy for evaluating, qualifying, and bringing new GPU generations to market. This spans hardware from NVIDIA (e.g., Blackwell, Rubin) and AMD (e.g., MI300X), as well as other emerging accelerators. The role is deeply cross-functional, demanding strong technical acumen, adept vendor relationship management, and a nuanced understanding of how hardware capabilities translate to the requirements of AI workloads.
• Your primary objective is to keep Fluidstack's compute portfolio competitive, offering an optimized mix of options for AI training, inference, and specialized workloads. You will own the NPI roadmap for GPU SKUs, establishing clear evaluation criteria, defining rigorous qualification timelines, and crafting go-to-market strategies for each new hardware generation.
• Collaborate closely with our datacenter teams to define the infrastructure requirements for next-generation GPUs, including power delivery (HVDC/LVDC), cooling (liquid vs. air), rack architectures, and the physical infrastructure needed to support these high-performance components.
• Work with our infrastructure engineers to validate hardware performance across critical dimensions: training throughput (MFU), inference latency (TTFT, TBT), memory bandwidth, and interconnect topologies such as NVLink and InfiniBand.
• Drive proactive, strategic engagement with leading GPU vendors, including NVIDIA, AMD, and emerging XPU providers: conducting technical deep dives, negotiating supply agreements, and managing early access programs to secure critical hardware.
• Define product specifications for system configurations ranging from single-GPU instances and multi-GPU nodes to full rack deployments and megacluster topologies.
• Analyze customer workload profiles to determine the optimal GPU mix, matching hardware to tasks such as H100 for large-scale model training, L40S for efficient inference, B200 for frontier research, and MI300X for cost-optimized workloads.
• Develop business cases for new SKUs, covering CapEx requirements, depreciation models, utilization forecasts, and competitive pricing analysis.
• Create technical documentation and benchmarking reports that help customers select the right GPU for their use cases.
• Monitor GPU availability, anticipate supply chain constraints, and develop allocation plans so Fluidstack can meet growing customer demand while maintaining healthy margins.
• Collaborate with our networking teams to ensure the interconnect fabric (RoCE, InfiniBand) scales with GPU performance and supports distributed training patterns at large scale.
About FluidStack Inc.
FluidStack Inc. operates a distributed cloud platform that aggregates under-utilized GPUs in data centers and individual machines worldwide, renting them on-demand to AI researchers, startups, and enterprises for training and inference workloads. The company automates deployment, security, and billing, offering prices up to 80% below traditional hyperscalers while providing instant access to high-end NVIDIA A100, H100, and consumer GPUs through a single API and web console. Headquartered in London, FluidStack targets machine-learning engineers who need scalable, low-cost compute without long-term commitments, claiming thousands of active nodes and customers including Fortune 500 enterprises and leading research labs.