
Andrew and I discuss his work exploring how various facets of deep networks contribute to their function, i.e., deep network theory. We talk about what he's learned by studying linear deep networks and asking how depth and initial weights affect learning dynamics, when replay is appropriate (and when it's not), how semantics develop, and what it all might tell us about deep learning in brains.
Show notes:
A few recommended texts to dive deeper: