AI Safety...Ok Doomer: with Anca Dragan

Episode 22 of 37
Length: 38 min
Language: English
Category: Personal development

Building safe and capable models is one of the greatest challenges of our time. Can we make AI work for everyone? How do we prevent existential threats? Why is alignment so important? Join Professor Hannah Fry as she delves into these critical questions with Anca Dragan, lead for AI safety and alignment at Google DeepMind.

For further reading, search "Introducing the Frontier Safety Framework" and "Evaluating Frontier Models for Dangerous Capabilities".

Thanks to everyone who made this possible, including but not limited to:

• Presenter: Professor Hannah Fry
• Series Producer: Dan Hardoon
• Editor: Rami Tzabar, TellTale Studios
• Commissioner & Producer: Emma Yousif
• Production support: Mo Dawoud
• Music composition: Eleni Shaw
• Camera Director and Video Editor: Tommy Bruce
• Audio Engineer: Perry Rogantin
• Video Studio Production: Nicholas Duke
• Video Editor: Bilal Merhi
• Video Production Design: James Barton
• Visual Identity and Design: Eleanor Tomlinson
• Commissioned by Google DeepMind

Please leave us a review on Spotify or Apple Podcasts if you enjoyed this episode. We always want to hear from our audience, whether that's feedback, a new idea, or a guest recommendation!

