
Thomas and I discuss the role of recurrence in visual cognition: how brains excel with so few “layers” compared to deep nets, how feedback recurrence can underlie visual reasoning, how LSTM gate-like processing could explain the function of canonical cortical microcircuits, the current limitations of deep learning networks such as adversarial examples, and a bit of history of modeling our hierarchical visual system, including his work on the HMAX model and his interactions with the deep learning community as convolutional neural networks were being developed.
Show Notes: