
Job Overview
Location
Philippines
Job Type
Full-time
Category
QA Engineer
Date Posted
March 16, 2026
Full Job Description
📋 Description
- As an AI Performance Test Engineer at DevRev Inc., you will be at the forefront of ensuring the scalability, resilience, and efficiency of our groundbreaking AI-powered platform, Computer. This role is a unique opportunity to blend rigorous engineering principles with cutting-edge AI innovation, shaping the future of work by guaranteeing exceptional performance for millions of users worldwide.
- You will be instrumental in designing, developing, and executing a sophisticated performance testing ecosystem. This involves not only mastering traditional performance testing methodologies but also pioneering the integration of AI-driven automation and intelligent agent orchestration to simulate complex, real-world scenarios.
- Your primary responsibility will be to architect and implement comprehensive performance test strategies, encompassing Load, Stress, Soak, Spike, Scalability, and Endurance testing. This requires a deep understanding of how to push systems to their limits and identify potential failure points before they impact users.
- You will develop and maintain robust performance test scripts using industry-leading tools such as JMeter, Gatling, LoadRunner, Locust, or k6. Proficiency in these tools is essential for creating realistic workload models that accurately reflect user behavior and system demands in a distributed environment.
- A critical aspect of this role involves simulating realistic user traffic and complex workload models tailored for our distributed systems. This ensures that our platform can handle peak loads and unexpected surges in demand without compromising user experience.
- You will conduct in-depth root cause analysis across all layers of the technology stack, including application, API, database, and infrastructure. This requires a meticulous approach to identifying performance bottlenecks and understanding their underlying causes.
- Defining and maintaining clear performance baselines, Service Level Agreements (SLAs), and Service Level Objectives (SLOs) will be a key deliverable. These metrics will serve as benchmarks for system performance and guide optimization efforts.
- You will play a vital role in integrating performance tests seamlessly into our CI/CD pipelines, enabling continuous validation and ensuring that performance is a constant consideration throughout the software development lifecycle.
- Beyond traditional testing, you will pioneer the development of AI-driven performance analysis frameworks. This involves leveraging pattern recognition and anomaly detection techniques to proactively identify performance issues.
- You will be tasked with developing custom test agents and orchestrators, potentially using advanced concepts like Multi-Cloud Platforms (MCPs), to simulate large-scale, multi-node workloads that mimic our production environment.
- Implementing self-healing test systems that can dynamically adapt to environmental changes will be a significant contribution, enhancing the robustness and efficiency of our testing processes.
- You will utilize Machine Learning (ML) models to predict performance degradation, allowing us to proactively optimize systems before issues arise and ensuring sustained high performance.
- Automating root cause detection through AI-assisted observability insights will streamline troubleshooting and reduce the time to resolution for performance-related incidents.
- You will leverage advanced observability tools such as Grafana, Prometheus, Datadog, New Relic, and AppDynamics to meticulously monitor and analyze performance metrics. This data-driven approach is crucial for identifying trends and opportunities for improvement.
- Creating clear and insightful visual dashboards will be essential for communicating performance trends, potential risks, and optimization recommendations to stakeholders across engineering, product, and leadership teams.
- Close collaboration with Site Reliability Engineering (SRE) and development teams is paramount for achieving end-to-end performance tuning and ensuring that performance considerations are embedded in the design and implementation phases.
- You will partner with engineering, QA, and platform teams early in the Software Development Life Cycle (SDLC) to establish performance goals and requirements, ensuring that performance is a primary design consideration.
- Conducting thorough post-release reviews and actively contributing to the evolution of testing standards and best practices will help maintain and elevate our quality bar.
- This role offers a unique opportunity to work with a talented team dedicated to building the future of work with AI, contributing to a product that unifies data sources, tools, and workflows to provide real-time insights and powerful agentic actions for businesses globally.
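To make the "realistic workload model" responsibility concrete, here is a minimal, tool-agnostic sketch (not DevRev code) of an open workload model: request arrivals generated with exponentially distributed inter-arrival times, the same model that underlies arrival-rate executors in tools like k6 or Locust. The function name and parameters are illustrative.

```python
import random

def poisson_arrival_schedule(rate_per_s, duration_s, seed=42):
    """Generate request arrival timestamps (in seconds) for an open
    workload model: inter-arrival gaps are drawn from an exponential
    distribution, so arrivals form a Poisson process at `rate_per_s`."""
    rng = random.Random(seed)  # seeded for reproducible test runs
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate_per_s)  # next exponential gap
        if t >= duration_s:
            return arrivals
        arrivals.append(t)

# Example: ~1,000 arrivals over 10 s at a target rate of 100 req/s.
schedule = poisson_arrival_schedule(rate_per_s=100, duration_s=10)
```

An open model like this issues requests on a fixed schedule regardless of how fast the system responds, which is what exposes queueing collapse under load; a closed model (fixed number of looping virtual users) would hide it.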
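The baseline/SLO and CI/CD bullets can be sketched as a simple gate: compute latency percentiles from a test run and fail the pipeline when a hypothetical SLO is breached. The thresholds here (p95 ≤ 300 ms, 1% error budget) are made-up examples, not DevRev's actual targets.

```python
import statistics

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    return ordered[rank]

def check_slo(latencies_ms, errors=0, slo_p95_ms=300, error_budget=0.01):
    """Compare one test run against illustrative SLOs.

    Returns (passed, report); a CI step would exit non-zero when
    `passed` is False, blocking the deploy."""
    p95 = percentile(latencies_ms, 95)
    error_rate = errors / len(latencies_ms)
    passed = p95 <= slo_p95_ms and error_rate <= error_budget
    report = {
        "p50_ms": statistics.median(latencies_ms),
        "p95_ms": p95,
        "error_rate": error_rate,
    }
    return passed, report
```

In practice the same idea is usually expressed declaratively (e.g. k6 `thresholds` or JMeter assertions); the point is that the SLO lives in the pipeline, not in a spreadsheet.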
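As a rough illustration of the anomaly-detection responsibility, the sketch below flags latency samples that deviate sharply from a rolling baseline using a z-score. This is one simple statistical technique, standing in for the ML-based detection the role describes; window and threshold values are arbitrary.

```python
import statistics

def detect_anomalies(series, window=10, threshold=3.0):
    """Return indices of samples lying more than `threshold` standard
    deviations from the mean of the preceding `window` samples."""
    anomalies = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mean = statistics.fmean(baseline)
        stdev = statistics.pstdev(baseline)
        if stdev == 0:
            continue  # flat baseline: no spread to measure against
        if abs(series[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# A latency series that is steady around 100 ms, then spikes to 500 ms.
latency_ms = [100, 102, 98, 101, 99] * 4 + [500]
```

Production systems would typically feed a metric stream from Prometheus or Datadog into a detector like this (or a learned model) and page on the flagged indices.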
About DevRev Inc.
DevRev provides an AI-native platform that unifies product development and customer support workflows. The cloud software links engineering teams, product managers, and support agents on one data layer, replacing separate CRM, ticketing, and project tools. Live telemetry, knowledge graphs, and generative AI surface insights, automate responses, and prioritize backlogs. Founded in 2020 by former Nutanix CEO Dheeraj Pandey and Manoj Agarwal, the company targets enterprises seeking faster release cycles and improved customer experience. Headquartered in Palo Alto, it operates globally with a remote workforce and has raised over $100 million in seed and Series A funding.



