
Andrew and I discuss his work exploring how various facets of deep networks contribute to their function, i.e., deep network theory. We talk about what he's learned by studying deep linear networks: how depth and initial weights shape learning dynamics, when replay is useful (and when it isn't), how semantic representations develop, and what it all might tell us about deep learning in brains.
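For a feel of the kind of model discussed in the episode, here is a minimal NumPy sketch (my own illustration, not code from the episode or Andrew's papers) of a two-layer deep linear network trained by gradient descent on a toy linear regression task. The dimensions, learning rate, and `init_scale` values are assumptions chosen for illustration; the qualitative point is that smaller initial weights produce a longer plateau before the loss drops.

```python
# Minimal sketch of learning dynamics in a deep *linear* network.
# All hyperparameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn a fixed linear map y = A x from samples.
d_in, d_hidden, d_out, n = 8, 8, 8, 200
A = rng.normal(size=(d_out, d_in))
X = rng.normal(size=(d_in, n))
Y = A @ X

def train(init_scale, lr=0.01, steps=3000):
    """Full-batch gradient descent on y_hat = W2 @ W1 @ x; returns final loss."""
    W1 = init_scale * rng.normal(size=(d_hidden, d_in))
    W2 = init_scale * rng.normal(size=(d_out, d_hidden))
    for _ in range(steps):
        H = W1 @ X                  # hidden activity (no nonlinearity)
        E = W2 @ H - Y              # prediction error
        dW2 = (E @ H.T) / n         # backprop through the output layer
        dW1 = (W2.T @ E @ X.T) / n  # backprop through the input layer
        W2 -= lr * dW2
        W1 -= lr * dW1
    return 0.5 * np.mean((W2 @ W1 @ X - Y) ** 2)

# Same task, same number of steps; only the initialization differs.
for scale in (0.01, 0.3):
    print(f"init scale {scale}: final loss = {train(scale):.4f}")
```

Because every layer is linear, the end-to-end map is still linear, yet gradient descent on the factored weights yields the staged, plateau-then-drop learning curves that make these networks an analytically tractable model of depth.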
Show notes:
A few recommended texts to dive deeper: