Andrew and I discuss his work exploring how various facets of deep networks contribute to their function, i.e., deep network theory. We talk about what he has learned by studying deep linear networks: how depth and initial weights affect learning dynamics, when replay is appropriate (and when it isn't), how semantics develop, and what it all might tell us about deep learning in brains.
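For listeners who want a concrete feel for the "learning dynamics in deep linear networks" mentioned above, here is a minimal NumPy sketch of full-batch gradient descent on a deep linear network. It is an illustration in the spirit of the work discussed, not code from the episode; the toy task, dimensions, and hyperparameters are all arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: targets are a fixed linear map A of the inputs.
d_in, d_out, n = 8, 8, 200
A = rng.normal(size=(d_out, d_in))
X = rng.normal(size=(d_in, n))
Y = A @ X

def train_deep_linear_net(depth, width=8, lr=0.01, init_scale=0.1, steps=5000):
    """Full-batch gradient descent on a deep linear network y = W_depth ... W_1 x."""
    dims = [d_in] + [width] * (depth - 1) + [d_out]
    Ws = [init_scale * rng.normal(size=(dims[i + 1], dims[i])) / np.sqrt(dims[i])
          for i in range(depth)]
    losses = []
    for _ in range(steps):
        # Forward pass, caching each layer's activations for backprop.
        acts = [X]
        for W in Ws:
            acts.append(W @ acts[-1])
        err = acts[-1] - Y
        losses.append(0.5 * np.sum(err ** 2) / n)
        # Backward pass through the purely linear layers.
        grad = err / n  # dLoss/dOutput
        for i in reversed(range(depth)):
            gW = grad @ acts[i].T   # gradient for layer i's weights
            grad = Ws[i].T @ grad   # propagate the error to the layer below
            Ws[i] -= lr * gW
    return losses

# With small random initial weights, deeper networks show plateaus followed by
# rapid, stage-like drops in the loss; a depth-1 network converges smoothly.
shallow = train_deep_linear_net(depth=1)
deep = train_deep_linear_net(depth=3)
print(f"final loss  depth 1: {shallow[-1]:.4f}   depth 3: {deep[-1]:.4f}")
```

Plotting the two loss curves makes the contrast vivid: the shallow network descends exponentially, while the deeper one sits near its starting loss before dropping in a sigmoidal, stage-like fashion, the kind of behavior that makes deep linear networks a useful theoretical model.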
Show notes:
A few recommended texts to dive deeper: