Episode 1087 of 1128
Length: 53 min
Language: English
You hear a lot about AI safety, and the idea that sufficiently advanced AI could pose some kind of threat to humans. So people are always talking about and researching "alignment," to ensure that new AI models comport with human needs and values. But what about humans' collective treatment of AI? A small but growing number of researchers talk about AI models potentially being sentient. Perhaps they are "moral patients." Perhaps they feel some equivalent of pleasure and pain, all of which, if so, raises questions about how we use AI. These researchers argue that one day we'll be talking about AI welfare the way we talk about animal rights, or humane versions of animal husbandry. On this episode we speak with Larissa Schiavo of Eleos AI, an organization that says it's "preparing for AI sentience and welfare." In this conversation we discuss the work being done in the field, why some people think it's an important area for research, whether it's in tension with AI safety, and how our use and development of AI might change in a world where models' welfare were seen as an important consideration. Only Bloomberg.com subscribers can get the Odd Lots newsletter in their inbox — now delivered every weekday — plus unlimited access to the site and app. Subscribe at bloomberg.com/subscriptions/oddlots
See omnystudio.com/listener for privacy information.
