Nicole and I discuss how a signature of visual memory can be carried by the same population of neurons known to encode object identity, how the same coding scheme arises in convolutional neural networks trained to identify objects, and how neuroscience and machine learning (specifically reinforcement learning) can join forces to understand how curiosity and novelty drive efficient learning.
Jay's homepage at Stanford. Implementing mathematical reasoning in machines: the video lecture and the paper. Parallel Distributed Processing by Rumelhart and McClelland. Complementary Learning Systems Theory and Its Recent Update. Episode...
Show Notes: Federico’s website. Federico’s papers we discuss: Conflicting emergences. Weak vs. strong emergence for the modelling of brain function. From homeostasis to behavior: balanced activity...
Mazviita and I discuss the growing divide between prediction and understanding as neuroscience models and deep learning networks become bigger and more complex. She...