Andrew and I discuss his work exploring how various facets of deep networks contribute to their function, i.e., deep network theory. We talk about what he's learned by studying deep linear networks and asking how depth and initial weights affect learning dynamics, when replay is appropriate (and when it's not), how semantics develop, and what it all might tell us about deep learning in brains.
Show notes:
A few recommended texts to dive deeper: