Andrew and I discuss his work exploring how various facets of deep networks contribute to their function, i.e., deep network theory. We talk about what he’s learned by studying linear deep networks and asking how depth and initial weights affect learning dynamics, when replay is appropriate (and when it isn’t), how semantics develop, and what it all might tell us about deep learning in brains.
Show notes:
A few recommended texts to dive deeper: