Gary Marcus is a professor emeritus at NYU and the founder of Robust.AI and Geometric Intelligence, the latter a machine learning company acquired by Uber in 2016. He is the author of several books on natural and artificial intelligence, including his new book Rebooting AI: Building Machines We Can Trust. Gary has been a critical voice highlighting the limits of deep learning and discussing the challenges the AI community must solve in order to achieve artificial general intelligence. This conversation is part of the Artificial Intelligence podcast. If you would like more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on iTunes or support it on Patreon. Here's the outline with timestamps for this episode (on some players you can click on a timestamp to jump to that point in the episode):
00:00 - Introduction
01:37 - Singularity
05:48 - Physical and psychological knowledge
10:52 - Chess
14:32 - Language vs physical world
17:37 - What does AI look like 100 years from now
21:28 - Flaws of the human mind
25:27 - General intelligence
28:25 - Limits of deep learning
44:41 - Expert systems and symbol manipulation
48:37 - Knowledge representation
52:52 - Increasing compute power
56:27 - How human children learn
57:23 - Innate knowledge and learned knowledge
1:06:43 - Good test of intelligence
1:12:32 - Deep learning and symbol manipulation
1:23:35 - Guitar