Scott Zoldi, FICO Chief Analytics Officer and PhD in Theoretical Physics, shares how to use AI responsibly. FICO uses consumer data and machine learning models to make decisions ranging from fraud detection to credit risk. Hundreds of thousands of signals can feed a single decision by comparing new data against historical data. Scott's team focuses not just on making accurate decisions but also on ensuring the signals used and the decision-making process are bias-free.
Listen and learn...
1. How Scott's team uses AI to make automated decisions using consumer data
2. Why Scott's priorities are "explainability first and performance second"
3. Why the principles of "humble AI" are as important as the principles of ethical AI
4. What's required to increase public trust in AI-based decisions
5. What's the role of data scientists in the future when AutoML is prevalent
6. What Scott means when he says "models aren't biased when they're built, they're only biased in production"
References in this episode:
Scott's blog posts at FICO
@ScottZoldi on Twitter
Facebook's AI experiment gone awry
Amazon's facial recognition failure

Thanks to Benjamin Baer for the intro to Scott!