Strange Loop

2009–2023, St. Louis, MO

Explainable AI: the apex of human and machine learning

Black Box AI technologies like Deep Learning have seen great success in domains like ad delivery, speech recognition, and image classification, and have even defeated the world's best human players in Go, Starcraft, and DOTA. As a result, adoption of these technologies has skyrocketed. But as deployment of Black Box AI increases in safety-critical and scientific domains, we are learning hard lessons about their limitations: they go wrong unexpectedly and are difficult to diagnose.

From these failures, a new class of "Explainable AI" has emerged: AI technologies designed to be intuitive and understandable to their human users while maintaining the power and expressiveness of Black Box AI.

In this talk we will discuss explainable AI: what it is, when and why it's needed, and how to build it. We will explore the fundamental differences between human and machine learning, and discuss research at the apex of computation and cognition that has led to machines that are not only intuitive and understandable to data scientists, but can efficiently communicate their knowledge to anyone by exploiting humans' innate social learning capabilities.

Baxter Eaves

bax@redpoll.ai

Baxter Eaves, PhD, is co-founder of Redpoll, a company building humanistic AI to help drive science. He received his PhD in experimental psychology from the University of Louisville, where he built machines that learn to trust and distrust. Since then he has worked on probabilistic programming languages at MIT, machine teaching at Rutgers, and genomic selection at Monsanto, and has been involved with the DARPA XDATA and PPAML projects. It has been his lifelong goal to build, and be destroyed by, the first sentient machine.