
Uri and I discuss his recent perspective, which conceives of brains as massively over-parameterized models that fit everything as exactly as possible rather than abstracting the world into compact, usable models. He was inspired by the way artificial neural networks overfit data when they can, and by how evolution works the same way on a much slower timescale.
Show notes: