Don't Anthropomorphize the Lawnmower - Metaphors for Testing AI 

In "Metaphors We Live By," George Lakoff and Mark Johnson argue that metaphor is the central way humans think about complex or abstract ideas. I'd suggest that the dominant metaphor our culture uses for AI is "AI as a human." As a tester who strives to identify risks others miss, when I see a pattern of thought emerging I try challenging it to see what happens. In this case I'd ask: does thinking of AI systems as "like a human" blind us to any specific risks?

In this presentation I suggest alternative metaphors for thinking about AI systems, demonstrate how these metaphors illuminate risks that were missed in real-world examples, and propose questions the metaphors provoke that we can use when testing systems that leverage AI. The British statistician George Box spoke to this reality when he famously wrote, "All models are wrong, but some are useful." Each of the metaphors suggested in this talk is wrong; each fails to capture important parts of how these AI systems work. But if they open new questions, they prove themselves useful.

Paper | Presentation

Nate Custer 

Having worked as a Systems Architect, QA Automation Lead, Application Developer, Developer of Tools and Testing Harnesses for QA teams, and as Principal Consultant with TTC for many Fortune 500 organizations, Nate is passionate about helping teams deliver quality software. When he is not at a computer, you'll most likely find him reading a book, sipping scotch, or talking with his friends about Manchester United.