Strange Loop

How to Fix AI: Solutions to ML Bias (And Why They Don't Matter)

Bias in machine learning is a Problem. This is common knowledge for many of us now, and yet our algorithms continue to operate unfairly in the real world, perpetuating structural inequality along lines of class and color. After all, "better training data" is not so easy to get our hands on, right?

In this talk, I argue that it is time for us to begin building algorithms that are designed to be resilient to biased data. Building on a basic introduction to ML concepts, I present an in-depth, intuitive explanation of several deep learning techniques that combat underlying bias in data, and use these models to explore what "algorithmic fairness" really means in measurable terms. Finally, diving into a few case studies of real-world systems, I suggest that even perfect "fairness" is not necessarily the fairy-tale ending we like to think it is. Blindly optimizing for it may still miss the real problem behind AI bias, and to reach a real solution we may have to reframe the problem itself.
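The abstract mentions pinning down "algorithmic fairness" in measurable terms. One common candidate metric (an illustrative sketch, not necessarily the definition used in the talk) is demographic parity: the model's positive-prediction rate should be equal across groups. All names and data below are hypothetical.

```python
from typing import Sequence


def positive_rate(preds: Sequence[int]) -> float:
    """Fraction of predictions that are positive (1)."""
    return sum(preds) / len(preds)


def demographic_parity_gap(preds: Sequence[int], groups: Sequence[str]) -> float:
    """Absolute difference in positive-prediction rates across groups.

    0.0 means perfect demographic parity; larger values mean more disparity.
    """
    by_group: dict[str, list[int]] = {}
    for p, g in zip(preds, groups):
        by_group.setdefault(g, []).append(p)
    rates = [positive_rate(ps) for ps in by_group.values()]
    return max(rates) - min(rates)


# Hypothetical classifier outputs for two groups, "a" and "b":
# group "a" receives positives at rate 0.75, group "b" at rate 0.25.
preds = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)  # 0.75 - 0.25 = 0.5
```

A model can score a perfect 0.0 on this gap while still being harmful in other ways, which is one concrete version of the talk's point that optimizing a single fairness metric may miss the real problem.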

Joyce Xu

Sidewalk Labs

Joyce is an AI/ML engineer (tinkerer?) who might be a little too excited about history, urban studies, and bingeing HBO to be a productive tech worker. She is currently at Sidewalk Labs, where she's thinking about how to engineer privacy-preserving ML solutions in urban mobility and sustainability. Previously, she conducted research at DeepMind and the Stanford NLP Group, where her pursuits centered on multi-agent reinforcement learning and natural language generation, respectively. Having begun her journey in AI self-taught, she is a strong advocate of accessibility in research and tooling: she's helped build an open-source ML framework for functional programming in Clojure, and blogs regularly on AI research and fundamentals. When she's not busy daydreaming about becoming a DJ, she sometimes looks forward to finishing her university studies at Stanford.