Part 3 in our 100th episode celebration. Previous guests answered the question:
Given the continual, surprising progress in AI powered by scaling up parameters and using more compute, while using fairly generic architectures (e.g., GPT-3):
Do you think the current trend of scaling compute can lead to human level AGI? If not, what’s missing?
It likely won’t surprise you that the vast majority answer “No.” It also likely won’t surprise you that opinions differ on what’s missing.
Timestamps:
0:00 – Intro
3:56 – Wolfgang Maass
5:34 – Paul Humphreys
9:16 – Chris Eliasmith
12:52 – Andrew Saxe
16:25 – Mazviita Chirimuuta
18:11 – Steve Potter
19:21 – Blake Richards
22:33 – Paul Cisek
26:24 – Brad Love
29:12 – Jay McClelland
34:20 – Megan Peters
37:00 – Dean Buonomano
39:48 – Talia Konkle
40:36 – Steve Grossberg
42:40 – Nathaniel Daw
44:02 – Marcel van Gerven
45:28 – Kanaka Rajan
48:25 – John Krakauer
51:05 – Rodrigo Quian Quiroga
53:03 – Grace Lindsay
55:13 – Konrad Kording
57:30 – Jeff Hawkins
1:02:12 – Uri Hasson
1:04:08 – Jess Hamrick
1:06:20 – Thomas Naselaris
Support the show to get full episodes and join the Discord community. https://youtu.be/lbKEOdbeqHo