Omri, David, and I discuss using recurrent neural network models (RNNs) to understand brains and brain function. Omri and David both use dynamical systems theory (DST) to describe how RNNs solve tasks, and to compare the dynamical structure/landscape/skeleton of RNNs with real neural population recordings. We talk about how their thoughts have evolved since their 2013 Opening the Black Box paper, which began these lines of research and thinking. Some of the other topics we discuss are listed in the timestamps below.
Timestamps:
0:00 – Intro
5:41 – Best scientific moment
9:37 – Why do you do what you do?
13:21 – Computation via dynamics
19:12 – Evolution of thinking about RNNs and brains
26:22 – RNNs vs. minds
31:43 – Classical computational modeling vs. machine learning modeling approach
35:46 – What are models good for?
43:08 – Ecological task validity with respect to using RNNs as models
46:27 – Optimization vs. learning
49:11 – Universality
1:00:47 – Solutions dictated by tasks
1:04:51 – Multiple solutions to the same task
1:11:43 – Direct fit (Uri Hasson)
1:19:09 – Thinking about the bigger picture
Support the show to get full episodes and join the Discord community.