
In this second part of my discussion with Wolfgang (check out the first part), we talk about spiking neural networks in general and the principles of brain computation he finds promising for building better network models. We then briefly survey some of his recent work applying these principles: models with biologically plausible learning mechanisms, a spiking-network analog of the well-known LSTM recurrent network, and meta-learning using reservoir computing.