Tariq King – test.ai
Although there are several controversies and misunderstandings surrounding AI and machine learning, one thing is apparent — people have quality concerns about the safety, reliability, and trustworthiness of these types of systems. Not only are ML-based systems shrouded in mystery due to their largely black-box nature, but they also tend to be unpredictable, since they can adapt and learn new things at runtime. Validating ML systems is challenging and requires a cross-section of knowledge, skills, and experience from areas such as mathematics, data science, software engineering, cyber-security, and operations.
Join Tariq King as he gives you a quality engineering introduction to testing AI and machine learning. You’ll learn AI and ML fundamentals, including how intelligent agents are modeled, trained, and developed. Tariq then dives into approaches for validating ML models both offline (prior to release) and online (continuously, post-deployment). Engage with other participants to develop and execute a test plan for a live ML-based recommendation system, and experience the practical issues around testing AI first-hand. Tariq wraps up the session with a set of expert-recommended AI engineering practices to help your organization develop trusted machine learning systems.
- Fundamentals of AI Engineering
- Offline ML Validation: Performance Measures
- Online/Adaptive ML Validation: Testing Strategies
- Foundational Practices for Trusted AI/ML
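To give a flavor of the "performance measures" used in offline ML validation, here is a minimal sketch that computes accuracy, precision, recall, and F1 for a binary classifier on a held-out test set. The function name and the sample labels are illustrative, not material from the session.

```python
from collections import Counter

def classification_metrics(y_true, y_pred, positive=1):
    """Compute common offline validation metrics for a binary classifier."""
    counts = Counter(zip(y_true, y_pred))
    # True positives: predicted positive and actually positive
    tp = counts[(positive, positive)]
    # False positives: predicted positive but actually negative
    fp = sum(v for (t, p), v in counts.items() if p == positive and t != positive)
    # False negatives: predicted negative but actually positive
    fn = sum(v for (t, p), v in counts.items() if t == positive and p != positive)
    correct = sum(v for (t, p), v in counts.items() if t == p)
    total = sum(counts.values())

    accuracy = correct / total
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical held-out test set with known labels
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(classification_metrics(y_true, y_pred))
```

In practice these metrics are computed on data the model never saw during training; online validation then monitors whether they hold up as the deployed system continues to adapt.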