BI 052 Andrew Saxe: Deep Learning Theory
Brain Inspired
November 06, 2019 | 01:25:48



Andrew and I discuss his work exploring how various facets of deep networks contribute to their function — that is, deep network theory. We talk about what he’s learned by studying deep linear networks and asking how depth and initial weights affect learning dynamics, when replay is appropriate (and when it’s not), how semantics develop, and what it all might tell us about deep learning in brains.

Show notes:

A few recommended texts to dive deeper:

Other Episodes

BI 007 Daniel Yamins: Infant AI and CNNs (September 02, 2018 | 01:01:48)
Mentioned in the show: Dan’s Stanford Neuroscience and Artificial Intelligence Laboratory; the 2 papers we discuss: Performance-optimized hierarchical models predict neural responses in higher...

BI 129 Patryk Laurent: Learning from the Real World (March 02, 2022 | 01:21:01)
Patryk and I discuss his wide-ranging background working in both the neuroscience...

BI 096 Keisuke Fukuda and Josh Cosman: Forking Paths (January 29, 2021 | 01:34:10)
K, Josh, and I were postdocs together in Jeff Schall’s and Geoff Woodman’s labs. K and Josh had backgrounds in psychology and were getting...