Uri and I discuss his recent perspective that conceives of brains as massively over-parameterized models that fit the world as exactly as possible, rather than abstracting it into compact, usable models. He was inspired by the way artificial neural networks overfit data when they can, and by how evolution works the same way on a much slower timescale.
Show notes:
Support the show to get full episodes and join the Discord community.