Randy and I discuss his Leabra cognitive architecture, which aims to simulate the human brain, plus his current theory of how a loop between cortical regions and the thalamus could implement predictive learning and thus explain how we learn from so few examples. We also discuss what Randy thinks is the next big thing neuroscience can contribute to AI (thanks to a guest question from Anna Schapiro), and much more.
Timestamps:
0:00 – Intro
3:54 – Skip Intro
6:20 – Being in awe
18:57 – How current AI can inform neuro
21:56 – Anna Schapiro question: how current neuro can inform AI
29:20 – Learned vs. innate cognition
33:43 – Leabra
38:33 – Developing Leabra
40:30 – Macroscale
42:33 – Thalamus as microscale
43:22 – Thalamocortical circuitry
47:25 – Deep predictive learning
56:18 – Deep predictive learning vs. backprop
1:01:56 – 10 Hz learning cycle
1:04:58 – Better theory vs. more data
1:08:59 – Leabra vs. Spaun
1:13:59 – Biological realism
1:21:54 – Bottom-up inspiration
1:27:26 – Biggest mistake in Leabra
1:32:14 – AI consciousness
1:34:45 – How would Randy begin again?