Andrew and I discuss his work on deep network theory: exploring how the various facets of deep networks contribute to their function. We talk about what he's learned by studying deep linear networks, including how depth and initial weights affect learning dynamics, when replay is appropriate (and when it's not), how semantic representations develop, and what it all might tell us about deep learning in brains.
Show notes:
A few recommended texts to dive deeper: