Part 3 in our 100th episode celebration. Previous guests answered the question:
Given the continued, surprising progress in AI powered by scaling up parameters and compute, while using fairly generic architectures (e.g., GPT-3):
Do you think the current trend of scaling compute can lead to human-level AGI? If not, what's missing?
It likely won't surprise you that the vast majority answer "No." It also likely won't surprise you that there are differing opinions on what's missing.
Timestamps:
0:00 – Intro
3:56 – Wolfgang Maass
5:34 – Paul Humphreys
9:16 – Chris Eliasmith
12:52 – Andrew Saxe
16:25 – Mazviita Chirimuuta
18:11 – Steve Potter
19:21 – Blake Richards
22:33 – Paul Cisek
26:24 – Brad Love
29:12 – Jay McClelland
34:20 – Megan Peters
37:00 – Dean Buonomano
39:48 – Talia Konkle
40:36 – Steve Grossberg
42:40 – Nathaniel Daw
44:02 – Marcel van Gerven
45:28 – Kanaka Rajan
48:25 – John Krakauer
51:05 – Rodrigo Quian Quiroga
53:03 – Grace Lindsay
55:13 – Konrad Kording
57:30 – Jeff Hawkins
1:02:12 – Uri Hasson
1:04:08 – Jess Hamrick
1:06:20 – Thomas Naselaris
Support the show to get full episodes, full archive, and join the Discord community.