As generative AI moves into production, traditional guardrails and input/output filters can prove too slow, too expensive, or too limited. In this episode, Alizishaan Khatri of Wrynx joins Daniel and Chris to explore a fundamentally different approach to AI safety and interpretability. They unpack the limits of today's black-box defenses, the role of interpretability, and how model-native, runtime signals can enable safer AI systems.
Featuring:
• Alizishaan Khatri – LinkedIn
• Chris Benson – Website, LinkedIn, Bluesky, GitHub, X
• Daniel Whitenack – Website, GitHub, X

Upcoming Events:
• Register for upcoming webinars here!
