Interpreting complicated models is a hot topic. How can we trust and manage AI models that we can’t explain? In this episode, Janis Klaise, a data scientist with Seldon, joins us to talk about model interpretation and Seldon’s new open source project called Alibi. Janis also gives some of his thoughts on production ML/AI and how Seldon addresses related problems.
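Alibi provides black-box explanation methods like the anchor explanations discussed in the episode. As a rough sketch of what that looks like in practice, here's Alibi's `AnchorTabular` applied to a scikit-learn classifier, following the library's documented usage at the time (class and attribute names may differ between Alibi versions):

```python
# A minimal sketch of anchor explanations with Alibi on a tabular model.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

from alibi.explainers import AnchorTabular

# Train a simple model to explain.
data = load_iris()
X, y = data.data, data.target
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)

# Alibi explainers wrap a black-box prediction function.
predict_fn = lambda x: clf.predict_proba(x)

explainer = AnchorTabular(predict_fn, feature_names=data.feature_names)
explainer.fit(X)  # discretizes numerical features using the training data

# Explain a single prediction: an "anchor" is a set of feature conditions
# under which the model (almost) always returns the same prediction.
explanation = explainer.explain(X[0], threshold=0.95)
print('Anchor:   ', explanation.anchor)
print('Precision:', explanation.precision)
print('Coverage: ', explanation.coverage)
```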
Join the discussion
Changelog++ members support our work, get closer to the metal, and make the ads disappear. Join today!
Sponsors:
DigitalOcean – Check out DigitalOcean’s dedicated vCPU Droplets with dedicated vCPU threads. Get started for free with a $50 credit. Learn more at do.co/changelog.
DataEngPodcast – A podcast about data engineering and modern data infrastructure.
Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com.
Featuring:
Janis Klaise – GitHub, LinkedIn, X
Chris Benson – Website, GitHub, LinkedIn, X
Daniel Whitenack – Website, GitHub, X
Show Notes:
Seldon
Seldon Core
Alibi
Books
“The Foundation Series” by Isaac Asimov
“Interpretable Machine Learning” by Christoph Molnar
Something missing or broken? PRs welcome!
★ Support this podcast ★