
In this second part of my discussion with Wolfgang (check out the first part), we talk about spiking neural networks in general and the principles of brain computation he finds promising for building better network models. We also briefly review some of his recent work applying those principles: models with biologically plausible learning mechanisms, a spiking-network analog of the well-known LSTM recurrent network, and meta-learning using reservoir computing.
Show notes:
BLAM (Brain, Learning, Animation, and Movement) Lab homepage: http://blam-lab.org/
BLAM on Twitter: @blamlab
Papers we discuss: Neuroscience Needs Behavior: Correcting a Reductionist...
As some of you know, I recently got back into the research...
Omri, David, and I discuss using recurrent neural network (RNN) models to understand brains and brain function. Omri and David both use dynamical systems...