In this second part of my discussion with Wolfgang (check out the first part), we talk about spiking neural networks in general and principles of brain computation he finds promising for implementing better network models, and we briefly survey some of his recent work applying these principles: models with biologically plausible learning mechanisms, a spiking-network analog of the well-known LSTM recurrent network, and meta-learning using reservoir computing.
Show notes: DeepMind. The papers we discuss: Neuroscience-Inspired Artificial Intelligence. A nice summary of the meta-reinforcement learning work. Learning to reinforcement learn. Prefrontal cortex...
Show notes: Follow Konrad on Twitter: @KordingLab. Konrad's lab website. The paper we discuss: Bioscience-scale automated detection of figure element reuse.
Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes and join the Discord...