
Thomas and I discuss the role of recurrence in visual cognition: how brains somehow excel with so few "layers" compared to deep nets, how feedback recurrence can underlie visual reasoning, how LSTM gate-like processing could explain the function of canonical cortical microcircuits, the current limitations of deep learning networks such as their susceptibility to adversarial examples, and a bit of history of modeling our hierarchical visual system, including his work on the HMAX model and his interactions with the deep learning community as convolutional neural networks were being developed.
Show Notes: