Thomas and I discuss the role of recurrence in visual cognition: how brains excel with so few “layers” compared to deep nets, how feedback recurrence can underlie visual reasoning, how LSTM gate-like processing could explain the function of canonical cortical microcircuits, the current limitations of deep learning networks such as adversarial examples, and a bit of the history of modeling our hierarchical visual system, including his work on the HMAX model and his interactions with the deep learning community as convolutional neural networks were being developed.
Show Notes:
Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and...