
Andrew and I discuss his work exploring how various facets of deep networks contribute to their function, i.e., deep network theory. We talk about what he's learned by studying deep linear networks: how depth and initial weights affect learning dynamics, when replay is appropriate (and when it's not), how semantics develop, and what it all might tell us about deep learning in brains.
Show notes:
A few recommended texts to dive deeper: