Part 3 in our 100th episode celebration. Previous guests answered the question:
Given the continually surprising progress in AI powered by scaling up parameters and using more compute, while using fairly generic architectures (e.g., GPT-3):
Do you think the current trend of scaling compute can lead to human level AGI? If not, what’s missing?
It likely won’t surprise you that the vast majority answer “No.” It also likely won’t surprise you that opinions differ on what’s missing.
Timestamps:
0:00 – Intro
3:56 – Wolfgang Maass
5:34 – Paul Humphreys
9:16 – Chris Eliasmith
12:52 – Andrew Saxe
16:25 – Mazviita Chirimuuta
18:11 – Steve Potter
19:21 – Blake Richards
22:33 – Paul Cisek
26:24 – Brad Love
29:12 – Jay McClelland
34:20 – Megan Peters
37:00 – Dean Buonomano
39:48 – Talia Konkle
40:36 – Steve Grossberg
42:40 – Nathaniel Daw
44:02 – Marcel van Gerven
45:28 – Kanaka Rajan
48:25 – John Krakauer
51:05 – Rodrigo Quian Quiroga
53:03 – Grace Lindsay
55:13 – Konrad Kording
57:30 – Jeff Hawkins
1:02:12 – Uri Hasson
1:04:08 – Jess Hamrick
1:06:20 – Thomas Naselaris
We made it to the last bit of our 100th episode celebration. These have been super fun for me, and I hope you’ve enjoyed...