Ida and I discuss the current landscape of reinforcement learning in both natural and artificial intelligence. The old story of two RL systems in the brain - model-free and model-based - is giving way to a more nuanced picture in which the two systems constantly interact, and in which additional strategies between model-free and model-based drive the vast repertoire of our habits and goal-directed behaviors. We discuss Ida's work on one of those "in-between" strategies, the successor representation, which maps onto brain activity and accounts for behavior. We also discuss her interesting background and how it shapes her outlook and research, and the role philosophy has played, and continues to play, in her thinking.
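For listeners unfamiliar with the successor representation, here is a minimal sketch (my own illustration, not code from the episode or from Ida's papers) of how an SR can be learned with temporal-difference updates and then combined with a reward vector to compute values. The chain environment, learning rate, and reward placement below are hypothetical.

```python
import numpy as np

# Illustrative sketch: a successor representation (SR) learned with a TD-style
# update. M[s, s'] estimates the expected discounted future occupancy of state
# s' when starting from state s. Values then factor as V(s) = sum_s' M[s, s'] * R(s'),
# which sits between model-free caching and full model-based planning.

n_states = 5
gamma, alpha = 0.95, 0.1
M = np.eye(n_states)          # SR matrix, initialized to the identity
R = np.zeros(n_states)
R[-1] = 1.0                   # hypothetical reward in the last state

def sr_td_update(s, s_next):
    """One TD update of the SR after observing a transition s -> s_next."""
    onehot = np.eye(n_states)[s]
    target = onehot + gamma * M[s_next]
    M[s] += alpha * (target - M[s])

# Learn from repeated experience along a simple chain 0 -> 1 -> ... -> 4.
for _ in range(500):
    for s in range(n_states - 1):
        sr_td_update(s, s + 1)

V = M @ R                     # values recovered by combining the SR with rewards
print(np.round(V, 3))
```

Because the reward vector R is kept separate from the learned occupancy matrix M, changing the rewards lets values be recomputed immediately without relearning M, which is part of why the SR is described as intermediate between model-free and model-based RL.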
Related links:
Time stamps:
0:00 - Intro
4:50 - Skip intro
9:58 - Core way of thinking
19:58 - Disillusionment
27:22 - Role of philosophy
34:51 - Optimal individual learning strategy
39:28 - Microsoft job
44:48 - Field of reinforcement learning
51:18 - Learning vs. innate priors
59:47 - Incorporating other cognition into RL
1:08:24 - Evolution
1:12:46 - Model-free and model-based RL
1:19:02 - Successor representation
1:26:48 - Are we running all algorithms all the time?
1:28:38 - Heuristics and intuition
1:33:48 - Levels of analysis
1:37:28 - Consciousness
Support the show to get full episodes and join the Discord community. Check out my short video series about what's missing in AI and...