

Randy and I discuss his Leabra cognitive architecture, which aims to simulate the human brain, plus his current theory about how a loop between cortical regions and the thalamus could implement predictive learning and thus explain how we learn from so few examples. We also discuss what Randy thinks is the next big thing neuroscience can contribute to AI (thanks to a guest question from Anna Schapiro), and much more.
Timestamps:
0:00 – Intro
3:54 – Skip Intro
6:20 – Being in awe
18:57 – How current AI can inform neuro
21:56 – Anna Schapiro question – how current neuro can inform AI
29:20 – Learned vs. innate cognition
33:43 – Leabra
38:33 – Developing Leabra
40:30 – Macroscale
42:33 – Thalamus as microscale
43:22 – Thalamocortical circuitry
47:25 – Deep predictive learning
56:18 – Deep predictive learning vs. backprop
1:01:56 – 10 Hz learning cycle
1:04:58 – Better theory vs. more data
1:08:59 – Leabra vs. Spaun
1:13:59 – Biological realism
1:21:54 – Bottom-up inspiration
1:27:26 – Biggest mistake in Leabra
1:32:14 – AI consciousness
1:34:45 – How would Randy begin again?