Nikhil Bhandari is a seasoned Quality Engineering leader with nearly two decades of experience driving excellence in software testing and automation at scale. Throughout his career, he has led high-performing QA teams within major global technology organizations, focusing on building robust quality cultures and modernizing testing lifecycles. An expert in test automation strategy, Nikhil has a proven track record of architecting scalable frameworks for desktop, web, and mobile applications, and he is a strong advocate for open-source testing solutions. A recognized thought leader in the testing community, Nikhil has spoken at prestigious international conferences and is passionate about sharing his insights on automation architecture, code quality, and the evolution of Quality Engineering. Beyond his technical leadership, he is a dedicated mentor committed to coaching the next generation of testers and helping organizations bridge the gap between development and quality. He holds a degree in Engineering and remains committed to advancing the craft of software testing through innovation and community engagement.
Speech: From Scripts to Agents: Building Autonomous Testing Ecosystems That You Can Actually Trust
We have all heard the pitch: AI is going to revolutionize testing. Yet, many of us are still stuck maintaining flaky test scripts and dealing with automation frameworks that break with minor UI changes. Now, as AI testing agents enter the scene, the real question for quality engineers isn’t just about auto-generating more code – it’s about whether these agents can actually make our testing more targeted, reliable, and less of a maintenance nightmare.
In this session, we will step away from the hype and look at the practical reality of implementing AI agents in our daily quality engineering processes. We will examine how shifting from static scripts to agentic design patterns fundamentally changes how we build and maintain test automation frameworks.
Instead of treating AI as a “silver bullet,” we will explore real-world use cases. You will see how agents can handle auto-healing when application elements change, how they can leverage production insights to target the test scenarios that actually matter, and how they can free up test engineers to focus on quality strategy rather than endless script maintenance. We will also cover how to set up the right guardrails (enforced rules and quality scoring) so these tools work for us, improving productivity rather than creating a new mess of unpredictable, AI-generated tests to untangle.
Key Takeaways:
- Beyond Simple Code Generation: Understand the practical differences between AI copilots and testing agents, and how agents integrate into existing frameworks to significantly improve test engineer efficiency.
- Real-World Agentic Patterns: How to implement auto-healing mechanisms and dynamic test generation to reduce the daily overhead of script maintenance.
- The Evolving Role of the QA Engineer: How to transition from scripting the “how” to engineering the “what” (test specifications and guardrails), keeping human expertise at the center of quality.
- Targeted Testing via Production Insights: Techniques for using real-time production data and feedback loops to automatically prioritize high-impact testing and eliminate redundant test execution.
- Building Guardrails for Trust: Practical frameworks for setting up constraints, quality scoring, and reliability metrics to ensure your AI agents act as dependable partners in your CI/CD pipeline.
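To make the auto-healing pattern above concrete, here is a minimal, framework-agnostic sketch. It assumes a hypothetical `find(dom, selector)` lookup and models the DOM as a plain dictionary; all names are illustrative, not a real WebDriver API. The idea is the same one an agent applies at scale: when the primary locator breaks, fall through a ranked list of fallback candidates and log which one healed, so the healing event is visible and auditable rather than silent.

```python
# Illustrative auto-healing locator sketch; find() and the dict-based DOM
# are stand-ins for a real element lookup (e.g. a WebDriver call).

def find(dom, selector):
    """Toy lookup: treat the DOM as a mapping of selector -> element."""
    return dom.get(selector)

def healing_find(dom, primary, fallbacks, log=None):
    """Try the primary selector, then ranked fallbacks; record any healing."""
    for selector in [primary, *fallbacks]:
        element = find(dom, selector)
        if element is not None:
            if log is not None and selector != primary:
                # Surface the healing event for review instead of hiding it.
                log.append(f"healed: {primary} -> {selector}")
            return element
    raise LookupError(f"no candidate matched for {primary}")

# Usage: the element id changed after a UI update, but a more stable
# data-testid fallback still matches, so the test heals instead of failing.
dom = {"[data-testid=login]": "<button>"}
events = []
element = healing_find(
    dom,
    "#login-btn",
    ["[data-testid=login]", "//button[text()='Log in']"],
    log=events,
)
```

The ordering of fallbacks is where the guardrails come in: stable, semantic attributes rank above brittle positional XPaths, and the healing log feeds the quality scoring discussed above.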
