Andrew and I discuss his work in deep network theory: exploring how the various facets of deep networks contribute to their function. We talk about what he's learned by studying deep linear networks and asking how depth and initial weights affect learning dynamics, when replay is appropriate (and when it's not), how semantics develop, and what it all might tell us about deep learning in brains.
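As a concrete illustration (not from the episode itself), here is a minimal sketch of the kind of setting this work studies: a two-layer deep linear network trained by gradient descent from small random initial weights, in which each input-output mode is picked up in a rapid, stage-like transition. All data, sizes, and values below are illustrative assumptions.

```python
import numpy as np

# A minimal sketch of learning dynamics in a deep *linear* network:
# y = W2 @ W1 @ x, trained by gradient descent on squared error.
# With small random initial weights, stronger input-output modes are
# learned earlier and faster, producing stage-like drops in the loss.

rng = np.random.default_rng(0)

# Hypothetical toy data: whitened Gaussian inputs and a random linear
# teacher mapping, so dynamics are driven by input-output correlations.
n_in, n_hidden, n_out, n = 8, 8, 8, 500
X = rng.standard_normal((n_in, n))
W_true = rng.standard_normal((n_out, n_in))
Y = W_true @ X

scale = 1e-3  # small initialization -> clear stage-like learning
W1 = scale * rng.standard_normal((n_hidden, n_in))
W2 = scale * rng.standard_normal((n_out, n_hidden))

lr = 0.05
for step in range(2000):
    pred = W2 @ W1 @ X
    err = pred - Y                     # gradient of squared error w.r.t. pred
    grad_W2 = err @ (W1 @ X).T / n
    grad_W1 = W2.T @ err @ X.T / n
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
    if step % 400 == 0:
        print(f"step {step:5d}  loss {0.5 * np.mean(err ** 2):.4f}")
```

The printed loss stays near its initial plateau and then falls in waves as successive modes are learned; a deeper stack of weight matrices makes those plateaus and transitions sharper.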
Show notes:
A few recommended texts to dive deeper: