The Awkward Truths of AI–Tester Collaboration

AI has already stepped into the world of software testing—but the story isn’t the seamless partnership that vendors promise. This talk explores the awkward truths of what really happens when testing experts and machines are forced to collaborate.

On the bright side, AI can crunch through mountains of checks, surface subtle issues, and accelerate coverage beyond what human teams can achieve. But the reality is often messier: machines miss the obvious, overwhelm teams with noise, and create tension as testers wrestle with the fear of replacement. Many experts resist, adapt reluctantly, or discover new roles in guiding and curating machine output.

Drawing on real-world examples, Jason Arbon reveals what actually works, what consistently fails, and how testers are reacting in practice—not theory. Attendees will learn the practical lessons and cautionary tales that can help them apply AI effectively, avoid wasted effort, and navigate the uncomfortable but inevitable collaboration between humans and machines.

Jason Arbon

Jason Arbon has been blending humans and machines in testing for more than a decade. At Google and Microsoft, he scaled testing for Search and Chrome. At Applause/uTest, he redefined the balance between human testers and automation. Today, as CEO of testers.ai, Jason builds AI testing agents that both complement and compete with experts to test the world’s software. His candid stories and hands-on experience offer rare insight into the promises—and pitfalls—of AI–tester collaboration.