Mazviita and I discuss the growing divide between prediction and understanding as neuroscience models and deep learning networks become bigger and more complex. She describes her non-factive account of understanding, which, among other things, suggests that the best predictive models may deliver less understanding. We also discuss the brain-as-a-computer metaphor, and whether it's really possible to ignore all the traditionally "non-computational" parts of the brain, like metabolism and other life processes.
Show notes:
This is the first in a series of episodes where I interview keynote speakers at the upcoming Cognitive Computational Neuroscience conference in...