How Explainable AI enables trust with Fiddler.AI’s Krishna Gade
DESCRIPTION
There are a number of pillars that make up responsible AI: the idea that AI should be free from bias and should impact people and society for the better.
In this episode of the Georgian Impact podcast, we’ll be talking about one pillar: explainable AI. Explainability provides insight into what’s in your training data and how your model is performing, so if something goes wrong you know exactly where to start looking for a solution. That transparency means you’re not only preventing performance issues, you’re also avoiding the potential fines or negative publicity that can come from biased AI.
Today’s guest is from Palo Alto-based Fiddler.AI, which promises a unified environment with a common language, centralized controls, and actionable insights for operationalizing ML/AI with trust.
Fiddler.AI founder and CEO Krishna Gade breaks down how explainability provides insight into training data and model performance, and why visibility into how a model works enables trust.
You’ll Hear About:
● The balance between data and models.
● What happens to the model when the data changes.
● Model performance management.
● Krishna’s definition of MLOps.
● Bias as a model performance issue.
● Explainability, and how Fiddler helps people understand what comes out of a model.
● How Fiddler detects issues and monitors models.
● Krishna’s view on trust.