Strange Loop

Just-So Stories for AI: Explaining Black-Box Predictions

As machine learning techniques become more powerful, humans and companies are offloading more and more ethical decisions to ML models. Which person should get a loan? Where should I direct my time and attention? Algorithms often outperform humans, so we happily cede control and enjoy the extra time and leverage this gives us.

There's a lurking danger here. Many of the most successful machine learning algorithms are black boxes: they give us predictions without the "why" that accompanies human decision-making. Trust without understanding is a scary thing. Why did the self-driving car swerve into traffic, killing its driver? How did the robotic doctor choose that dose? The ability to demand a plausible explanation for each decision is humanity's key to maintaining control over our ethical development.

In this talk we'll explore (with code!) several state-of-the-art strategies for explaining the decisions of our black box models. We'll talk about why some algorithms are so difficult to interpret and discuss the insights that our explanation-generating models can give us into free will and how humans invent explanations for our own decisions.
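One widely used strategy in this family is the local surrogate model (the idea behind techniques like LIME): query the black box at many points near the instance you want explained, then fit a simple, interpretable model to those responses, weighted by proximity. The surrogate's coefficients become a per-feature "explanation" of that single prediction. The sketch below is illustrative, not code from the talk; the `black_box` function and all names are hypothetical stand-ins.

```python
import numpy as np

# Hypothetical "black box": we can only query its predictions,
# not inspect its internals (stand-in for any opaque scoring model).
def black_box(X):
    return 1.0 / (1.0 + np.exp(-(3 * X[:, 0] - 2 * X[:, 1] ** 2)))

def explain_locally(predict, x, n_samples=5000, width=0.3, seed=0):
    """LIME-style local surrogate: sample points around x, weight them
    by proximity, and fit a weighted linear model whose coefficients
    explain this one prediction."""
    rng = np.random.default_rng(seed)
    X = x + rng.normal(scale=width, size=(n_samples, x.size))
    y = predict(X)
    # Proximity kernel: nearby samples count for more.
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * width ** 2))
    # Weighted least squares with an intercept column.
    A = np.hstack([np.ones((n_samples, 1)), X])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[1:]  # per-feature weights; intercept dropped

x = np.array([0.5, 1.0])
weights = explain_locally(black_box, x)
# Locally, feature 0 pushes the score up and feature 1 pushes it down.
```

The appeal of this approach is that it treats the model purely as a function to query, so it works identically on a neural network, a gradient-boosted ensemble, or a proprietary API.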

Sam Ritchie

Sam Ritchie works on machine learning and fraud detection at Stripe. He is the author of a number of successful open source projects in Scala and Clojure, including Summingbird, Algebird, and Cascalog 2.0. Before Stripe he founded @paddleguru and @racehubhq and worked as a senior software engineer at Twitter. Sam currently lives, runs, and climbs in Boulder, CO.