
Uri and I discuss his recent perspective, which conceives of brains as massively over-parameterized models that try to fit the world as exactly as possible, rather than abstracting it into compact, usable models. He was inspired by the way artificial neural networks overfit data when they can, and by how evolution works the same way on a much slower timescale.
Show notes:
Mentioned in the show: The paper we discuss: How biological attention mechanisms improve task performance in a large-scale visual system model. Follow Grace on...
Romain and I discuss his theoretical and philosophical work examining how neuroscientists rampantly misuse the word "code" when making claims about information processing in brains. We...