BI 052 Andrew Saxe: Deep Learning Theory
Brain Inspired
November 06, 2019 | 01:25:48

Andrew and I discuss his work exploring how various facets of deep networks contribute to their function, that is, deep network theory. We talk about what he's learned by studying deep linear networks: how depth and initial weights affect learning dynamics, when replay is appropriate (and when it's not), how semantics develop, and what it all might tell us about deep learning in brains.
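To make the deep linear network setting mentioned above concrete, here is a minimal sketch (not code from the episode; the toy task, sizes, and hyperparameters are all illustrative assumptions): a few stacked linear layers trained by plain gradient descent on a toy regression problem, so you can see how depth and the scale of the initial weights change the shape of the loss curve.

```python
# Minimal sketch of learning dynamics in a deep linear network (illustrative
# assumptions throughout; not code from the episode). Deeper networks started
# from small weights tend to show plateaus followed by rapid drops in the loss.
import numpy as np

rng = np.random.default_rng(0)

# Toy teacher: targets are a fixed linear map of the inputs.
n_in, n_hidden, n_out, n_samples = 8, 8, 8, 200
X = rng.standard_normal((n_samples, n_in))
W_teacher = rng.standard_normal((n_in, n_out))
Y = X @ W_teacher

def train_deep_linear_net(depth, init_scale, lr=5e-3, steps=3000):
    """Train Y_hat = X @ W1 @ ... @ W_depth and return the loss curve."""
    dims = [n_in] + [n_hidden] * (depth - 1) + [n_out]
    Ws = [init_scale * rng.standard_normal((dims[i], dims[i + 1]))
          for i in range(depth)]
    losses = []
    for _ in range(steps):
        # Forward pass, caching each layer's (linear) activations.
        acts = [X]
        for W in Ws:
            acts.append(acts[-1] @ W)
        err = acts[-1] - Y
        losses.append(0.5 * np.sum(err ** 2) / n_samples)
        # Backward pass: gradient of the per-sample squared error.
        grad = err / n_samples
        for i in reversed(range(depth)):
            grad_W = acts[i].T @ grad   # gradient for this layer's weights
            grad = grad @ Ws[i].T       # propagate to the layer below
            Ws[i] -= lr * grad_W
    return losses

# Compare a shallow net with deep nets started from small vs. larger weights.
for depth, scale in [(1, 0.1), (3, 0.1), (3, 0.5)]:
    curve = train_deep_linear_net(depth, scale)
    print(f"depth={depth}, init_scale={scale}: "
          f"loss at step 0 = {curve[0]:.3f}, step 1000 = {curve[1000]:.3f}, "
          f"final = {curve[-1]:.3f}")
```

The network is kept strictly linear so that any structure in the loss curves comes from the layered parameterization and the initialization rather than from nonlinearities, which is the point of studying the deep linear setting.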

Show notes:

A few recommended texts to dive deeper:

Other Episodes

BI 058 Wolfgang Maass: Computing Brains and Spiking Nets
January 15, 2020 | 00:55:10
In this first part of our conversation (here's the second part), Wolfgang and I discuss the state of theoretical and computational neuroscience, and how...

BI 008 Joshua Glaser: Supervised ML for Neuroscience
September 07, 2018 | 00:53:34
Mentioned in the show, the two papers we discuss: "The Roles of Supervised Machine Learning in Systems Neuroscience" and "Machine learning for neural decoding." Kording...

BI 190 Luis Favela: The Ecological Brain
July 31, 2024 | 01:41:03
Support the show to get full episodes and join the Discord community. Luis Favela is an Associate Professor at Indiana University Bloomington. He is...