Uri and I discuss his recent perspective that conceives of brains as super-over-parameterized models that try to fit everything as exactly as possible, rather than abstracting the world into usable models. He was inspired by the way artificial neural networks overfit data when they can, and by how evolution works the same way on a much slower timescale.
Show notes:
Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and...