In this second part of my discussion with Wolfgang (check out the first part), we talk about spiking neural networks in general and the principles of brain computation he finds promising for building better network models. We then briefly survey some of his recent work applying these principles: models with biologically plausible learning mechanisms, a spiking-network analog of the well-known LSTM recurrent network, and meta-learning using reservoir computing.