Large Language Negligence


Episode 1641 of 1997
Length: 53 min
Language: English
Category: History

As large language models like ChatGPT play an increasingly important role in our society, there will no doubt be examples of them causing harm. Lawsuits have already been filed in cases where LLMs have made false statements about individuals, but what about run-of-the-mill negligence cases? What happens when an LLM provides faulty medical advice or causes extreme emotional distress?

A forthcoming symposium in the Journal of Free Speech Law tackles these questions, and Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Senior Editor at Lawfare, spoke with three of the symposium's contributors at the University of Arizona and the University of Florida: law professors Jane Bambauer and Derek Bambauer, and computer scientist Mihai Surdeanu. Jane's paper focuses on what it means for an LLM to breach its duty of care, while Derek and Mihai explore the conditions under which the output of LLMs may be shielded from liability by that all-important Internet statute, Section 230.

Support this show http://supporter.acast.com/lawfare.

Hosted on Acast. See acast.com/privacy for more information.


