In the first episode of an "AI in the shadows" theme, Chris and Daniel explore the increasingly concerning world of agentic misalignment. Starting out with a reminder about hallucinations and reasoning models, they break down how today's models only mimic reasoning, which can lead to serious ethical concerns. They unpack a fascinating (and slightly terrifying) new study from Anthropic, where agentic AI models were caught simulating blackmail, deception, and even sabotage, all in the name of goal completion and self-preservation.
Featuring:
• Chris Benson – Website, LinkedIn, Bluesky, GitHub, X
• Daniel Whitenack – Website, GitHub, X

Links:
• Agentic Misalignment: How LLMs could be insider threats
• Hugging Face Agents Course

Register for upcoming webinars here!