Part 3 in our 100th episode celebration. Previous guests answered the question:
Given the continued, surprising progress in AI powered by scaling up parameters and compute, while using fairly generic architectures (e.g., GPT-3):
Do you think the current trend of scaling compute can lead to human level AGI? If not, what’s missing?
It likely won’t surprise you that the vast majority answer “No.” It also likely won’t surprise you that opinions differ on what’s missing.
Timestamps:
0:00 – Intro
3:56 – Wolfgang Maass
5:34 – Paul Humphreys
9:16 – Chris Eliasmith
12:52 – Andrew Saxe
16:25 – Mazviita Chirimuuta
18:11 – Steve Potter
19:21 – Blake Richards
22:33 – Paul Cisek
26:24 – Brad Love
29:12 – Jay McClelland
34:20 – Megan Peters
37:00 – Dean Buonomano
39:48 – Talia Konkle
40:36 – Steve Grossberg
42:40 – Nathaniel Daw
44:02 – Marcel van Gerven
45:28 – Kanaka Rajan
48:25 – John Krakauer
51:05 – Rodrigo Quian Quiroga
53:03 – Grace Lindsay
55:13 – Konrad Kording
57:30 – Jeff Hawkins
1:02:12 – Uri Hasson
1:04:08 – Jess Hamrick
1:06:20 – Thomas Naselaris
Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and...