W9 Getting Dirty with Data, Bots, Agents and Code - A Hands-On Approach
AI has infused itself into nearly every modern app, feature, and service. But while AI-powered development is accelerating at breakneck speed, the skills and strategies for testing AI haven’t kept pace. Many teams are struggling to validate AI-generated outputs, assess risk in ML-driven features, or even know where to start when it comes to quality in this new world. This workshop is your opportunity to close that gap.
In this hands-on, high-impact session, you’ll learn how to generate realistic test data with AI, build lightweight bots and agents, and apply AI tools to write and enhance your test code. Just as importantly, you’ll explore how to test AI itself: designing ways to evaluate systems whose outputs are probabilistic, biased, and inconsistent. We’ll cover how to validate generative features and how to adapt your existing testing mindset to this rapidly evolving space.
By the end of the session, you’ll have built AI-assisted test assets, tested real AI-driven functionality, and gained new techniques for navigating quality in AI-infused software. If you’re ready to move past the hype and get your hands dirty with data, bots, agents, and code—this is the workshop for you.
Kevin Pyles
Kevin Pyles works for FamilySearch as an AI/ML SDET, where he tests AI with AI. He has been in the QA industry for over 15 years, holding project, product, and management roles along the way. Kevin previously worked at test.ai, where he was Head of Product. He has also served on the board of QA at the Point (a local testing meetup) and is an award-winning international speaker. Kevin believes AI and Python are the future of testing.
Kevin enjoys golf, attending soccer games, and watching college football. He loves spending time with his family, including his wife and six kids. He loves Brazilian food and pepperoni pizza, and can’t help himself when there is ice cream.