Strange Loop

2009 - 2023 / St. Louis, MO

Privacy Governance & Explainability in ML/AI

Since the General Data Protection Regulation (GDPR) went into effect in May 2018, matters of data privacy have grown from minor organizational adjustments to enterprise-level initiatives with impact on innovation and day-to-day operations alike. While privacy compliance may be straightforward in some areas, the growth and expansion of machine learning (ML) and artificial intelligence (AI) has created an impasse between consumer data and processes that are, to say the least, difficult to fully explain. Integrating ML and AI techniques into business processes often significantly improves the accuracy and efficiency of decision making, but fully understanding precisely how an output was generated or a decision was made for an individual is much easier said than done. Yet regulators across the globe are challenging businesses to explain both how their governance practices protect consumer data privacy and how decision making within ML/AI is impacting consumers. How can one identify bias? What processes can be introduced to protect consumer privacy while rooting out potential bias in the underlying models? In this talk, we will explore methods for enhancing privacy and governing data that is used for ML/AI, and consider procedural approaches for rooting out bias and building a foundation for consumer confidence in an otherwise complex and opaque space.
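As a rough illustration of the kind of bias check the abstract alludes to (not a method from the talk itself), the sketch below computes a demographic parity gap for a hypothetical binary classifier: the difference in positive-decision rates across groups. The column names and data are invented for the example.

```python
# Illustrative sketch only: a minimal demographic-parity check of the kind the
# abstract hints at when it asks "How can one identify bias?".
# The columns (group, approved) and the data are hypothetical.
import pandas as pd

# Hypothetical model outputs: a protected-group label and a binary decision.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

# Selection rate (share of positive decisions) per group.
selection_rates = decisions.groupby("group")["approved"].mean()

# Demographic parity difference: the gap between the best- and worst-treated
# groups. A value near 0 suggests similar treatment; large gaps warrant review.
parity_gap = selection_rates.max() - selection_rates.min()

print(selection_rates)
print(f"Demographic parity difference: {parity_gap:.2f}")
```

A check like this is only a starting point; in practice it would be run per decision pipeline and paired with governance processes that document how flagged gaps are investigated.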

Jared Maslin

Slalom Consulting

Jared is a Solution Architect with Slalom Consulting in St. Louis, as well as an Educator with the University of California, Berkeley, where he supports a course on Human Values and Ethics in Data Science. Jared has more than a decade of diverse experience in Data Privacy, Data Analytics, Auditing, Compliance, and Finance, which has given him a unique perspective on the space.