BI 215 Xiao-Jing Wang: Theoretical Neuroscience Comes of Age

Brain Inspired

Jul 02 2025 | 01:52:02
Show Notes

Support the show to get full episodes, full archive, and join the Discord community.

The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.

Read more about our partnership.

Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.

To explore more neuroscience news and perspectives, visit thetransmitter.org.

Xiao-Jing Wang is a Distinguished Global Professor of Neuroscience at NYU.

Xiao-Jing was born and grew up in China, then spent 8 years in Belgium studying theoretical physics topics like nonlinear dynamical systems and deterministic chaos. As he tells it, he arrived from Brussels to California as a postdoc, and in one day switched from French to English, from European to American culture, and from physics to neuroscience. I know Xiao-Jing as a legend in non-human primate neurophysiology and modeling, paving the way for the rest of us to study brain activity related to cognitive functions like working memory and decision-making.

He has just released his new textbook, Theoretical Neuroscience: Understanding Cognition, which covers the history and current research on modeling cognitive functions from the very simple to the very cognitive. The book is also somewhat philosophical, arguing that we need to update our approach to explaining how brains function, to go beyond Marr's levels and enter a cross-level mechanistic explanatory pursuit, which we discuss. I just learned he even cites my own PhD research, studying metacognition in nonhuman primates - so you know it's a great book. Learn more about Xiao-Jing and the book in the show notes. It was fun having one of my heroes on the podcast, and I hope you enjoy our discussion.

0:00 - Intro
3:08 - Why the book now?
11:00 - Modularity in neuro vs AI
14:01 - Working memory and modularity
22:37 - Canonical cortical microcircuits
25:53 - Gradient of inhibitory neurons
27:47 - Comp neuro then and now
45:35 - Cross-level mechanistic understanding
1:13:38 - Bifurcation
1:24:51 - Bifurcation and degeneracy
1:34:02 - Control theory
1:35:41 - Psychiatric disorders
1:39:14 - Beyond dynamical systems
1:43:44 - Mouse as a model
1:48:11 - AI needs a PFC


Episode Transcript

[00:00:03] Speaker A: So it becomes actually really a puzzle. [00:00:06] Speaker B: Right. [00:00:07] Speaker A: If you assume that, you know, different areas are made of the same stuff and they all talk to each other in a dense network, how do you get differentiation of function? Really, you can think about bifurcations as a mathematical machinery to create novelty. Now let me make one more sweeping statement. [00:00:35] Speaker C: I'll make four more. Come on. [00:00:37] Speaker B: Yeah. [00:00:47] Speaker C: This is Brain Inspired, powered by The Transmitter. Xiao-Jing Wang is the director of the Swartz Center for Theoretical Neuroscience at NYU, New York University. Xiao-Jing was born and grew up in China, spent eight years in Belgium studying theoretical physics areas like nonlinear dynamical systems and deterministic chaos. And as he says it, he arrived from Brussels to California as a postdoc and in one day switched from French to English, from European to American culture, and from physics to neuroscience. Okay, so I know Xiao-Jing as a legend in non-human primate neurophysiology and modeling, theoretical neuroscience. And having paved the way for the rest of us to study brain activity related to cognitive functions like working memory and decision making, he has just released his new textbook called Theoretical Neuroscience: Understanding Cognition, which covers things that textbooks cover. In this case, he covers the history and the current research on modeling cognitive functions, from the very simple to the very cognitive. The book is also somewhat philosophical, arguing that we need to update our approach in neuroscience to explaining how brains function, essentially to go beyond Marr's levels, the famous David Marr levels, which we discuss in the episode, and instead to enter into what Xiao-Jing refers to as a cross-level, mechanistic explanatory pursuit, which again, we discussed that as well. I just learned he even cites my own PhD research studying metacognition in non-human primates, and also my postdoctoral research studying response inhibition, right there in the citations. So, you know, a great and worthy book. You can learn more about Xiao-Jing and the book in the show notes. It was fun having one of my heroes on the podcast and I hope you enjoy our discussion. Xiao-Jing, I have the book, Theoretical Neuroscience: Understanding Cognition. That's a slightly ambitious subtitle. So. So there's a lot in this book. Were you asked to write this book or did you decide it's time to write this book? [00:03:25] Speaker B: Hi Paul. [00:03:27] Speaker A: It's great to be on a podcast. I decided to write this book actually for some time now. You know, I guess, you know, probably it's a good time for a new textbook in the field. As you know, neuroscience has developed tremendously over the last maybe two decades or so in terms of two developments. [00:03:53] Speaker B: Right. [00:03:54] Speaker A: So you have new technological methods and as a result, you get a lot of big data of all kinds. So there's this recognition that perhaps we need theory and modeling that go hand in hand with experimentation to help accelerate discoveries. At the same time, the computational neuroscience field has matured, so maybe this is a good time for a new textbook. [00:04:24] Speaker C: So if you don't mind me saying, at least from my matriculation in computational neuroscience, you already were a legend in the field when I began. And so I suppose you're still a legend in the field. And this is a single author book.
Most textbooks are by multiple authors. Right. So that kind of stands out in that respect. And I want to kind of understand, you just alluded to how things have sort of progressed and now we're in a regime of big data. Back when I was beginning, we were recording single neurons. You would lower a single electrode and get one or two or maybe three neurons for a recording session while your animal was performing a task. And that's sort of what I knew you from as well, studying working memory. And you were really early on in the computational neuroscience of working memory and decision making. I would just like to ask you to reflect a little bit about what's different today, besides just that it's bigger and more. What's different today than from earlier on in your career? [00:05:41] Speaker B: Right. [00:05:44] Speaker A: Well, as you said, back then, we're talking about maybe 30 years ago, 25 years ago, most of the recordings from behaving animals performing an interesting task were limited to one cell at a time. And perhaps, I'm sure you agree, you are from the field, that it's probably fair to say that studies of cognitive functions like working memory and decision making roughly started around the turn of the century. Before that, I would say most of the efforts in our field were about sensory coding, sensory information processing, or movement, like central pattern generators in insects, for example. [00:06:32] Speaker B: Right. [00:06:34] Speaker A: And what's happening in between, a kind of more flexible, deliberate process, at a single cell level really started around, before or after, the turn of the century. [00:06:49] Speaker B: Right. [00:06:49] Speaker A: You're thinking about selective attention, by the work of Bob Desimone, for example. [00:06:54] Speaker B: Right. [00:06:55] Speaker A: Action selection by Jeff Schall, perceptual decision making by Bill Newsome and Mike Shadlen, and economic decision making by people like Daeyeol Lee and Paul Glimcher. So they already happened roughly around that time. [00:07:12] Speaker C: That's true, yeah. [00:07:14] Speaker A: When I think about that. [00:07:15] Speaker B: Right. [00:07:18] Speaker A: And that, frankly, in my mind has not really been covered in a systematic way, in a pedagogical way, in a textbook. So that's what I try to do in this book. [00:07:32] Speaker C: I'm not sure what percentage of textbooks have quoted Dr. Seuss, but the last chapter in your book, you know, has a Dr. Seuss quote: with your head full of brains and your shoes full of feet, you're too smart to go down any not-so-good street. Man, I'm not a good Dr. Seuss reader. But having said that, you have this table in the last chapter that lists everything that you were just talking about, sort of in a hierarchical order of all the tasks, many of the tasks, that have been used to study what you're alluding to here with the cognition. So it's obviously a very thorough and modern book. But so why did you feel. Okay, so you felt like they just hadn't been put together, our modern understanding of cognitive type tasks and theoretical notions? [00:08:37] Speaker A: Yeah. For computational neuroscience, that I think is something really important. You just mentioned tasks. Actually, being trained as a physicist, for me it's also really a big change. When I started in neuroscience, even when I tried to build a model for working memory, I didn't really know much about behavioral psychology.
So I didn't really pay too much attention to behavioral performance. [00:09:10] Speaker B: Right. [00:09:12] Speaker A: Gradually, I think that's true for many people from physics or mathematics. [00:09:17] Speaker B: Right. [00:09:17] Speaker A: You say, oh, this is a neural network model, which is some kind of physical dynamical system. So I'm interested in scale-free dynamics, I'm interested in oscillations, dynamical phenomena, without thinking too much about tasks. [00:09:34] Speaker B: Right. [00:09:34] Speaker A: So that was, for me personally, a really interesting change, gradually coming to appreciate the tradition in psychology. [00:09:48] Speaker B: Right. [00:09:49] Speaker A: And how people design tasks to really get to the specific questions about how the brain works. And I do think, as you know, computational neuroscience is very cross-disciplinary. [00:10:03] Speaker B: Right. [00:10:05] Speaker A: You know, each time I teach a course, there are always some students from physics or mathematics who didn't know anything about the brain. Or of course also a few experimentalists trained in biology with relatively weak math backgrounds. So, you know, I thought it's good to. [00:10:25] Speaker B: Right. [00:10:27] Speaker A: Kind of inspire them, in a way, in teaching how important it really is to think about behavior. [00:10:35] Speaker C: Yeah, you highlight that a lot in the book. So do these students come in from physics these days and have the same appreciation that you did? Like, where there's sort of an ignorance of the behavior, or not a focus on the behavior and the psychological aspects of these things, and you try to hammer that into them? [00:10:54] Speaker A: That depends. [00:10:54] Speaker B: Right. [00:10:55] Speaker A: These days, maybe more students are interested in our field because of AI, because of machine learning, but I would say it's probably still the case. It's still common, at least. Right. People from quantitative fields may not have had exposure to this richness of behavior. [00:11:18] Speaker B: Right. [00:11:19] Speaker A: And brain functions. [00:11:22] Speaker C: But people that are interested in AI might be more interested in benchmarks, which is different than behavior. I'm not sure if that's a nuanced difference. What do you think about that? [00:11:37] Speaker A: Yeah, sure. I mean, that is about performance, right? [00:11:41] Speaker B: Performance. [00:11:42] Speaker C: That's a very general term, but yes. Okay. [00:11:44] Speaker A: Performance of what? [00:11:46] Speaker B: Right. [00:11:46] Speaker A: So I guess different people are interested in different things. And that's the other thing I found interesting. Perhaps we can discuss, you know, in this dialogue. Namely, in some sense, we're a bit schizophrenic when we talk about the brain. And I'll tell you what I mean. Okay. [00:12:10] Speaker C: Yeah, please. [00:12:11] Speaker A: Of course, we all know, right, that different mental processes, different functions, somehow depend on different parts of the brain. We know that for vision, we know that for audition. We know that for motor behavior. But very often when we talk about neural computation generically, for example, when we talk about the relationship between the brain and AI, we kind of throw that away. [00:12:45] Speaker B: Right. [00:12:45] Speaker A: We kind of don't really emphasize functional specialization. [00:12:51] Speaker C: Modularity. [00:12:53] Speaker A: Like. Modularity. [00:12:54] Speaker B: Right. [00:12:54] Speaker A: So.
[00:12:55] Speaker B: Right. [00:12:56] Speaker A: You know, you can, for example, you can maybe do a survey. [00:13:00] Speaker B: Right. [00:13:01] Speaker A: For the general public or even for AI researchers. How much do they know about the differences between the, you know, the ventral stream of the visual system, which was the original inspiration for convolutional neural networks, and the dorsal stream? [00:13:21] Speaker B: Right. [00:13:21] Speaker A: What are the differences and why? How to explain the differences? I bet very few people really, really think hard on that. So that's what I mean. But we know that they're different. But still, perhaps it's good to. [00:13:39] Speaker B: Really. [00:13:40] Speaker A: Discuss, dive into more depth. [00:13:42] Speaker B: Right. [00:13:44] Speaker A: How different parts of the brain really can subserve different functions. That's the question of modularity. Does that make sense? [00:13:52] Speaker B: Go ahead. [00:13:53] Speaker A: Does that make sense to you? [00:13:55] Speaker C: Yeah. Yes. I mean, so I immediately want to ask you now about working memory. So I sort of think of you as being famous for working memory and. And then revisiting a lot of your historical work. You're famous for a lot of things, but one of the things that you talk about is that there are lots of circuits, lots of areas, that have what you would think of as working memory types of activity within the circuitry throughout the brain. And this speaks to the modularity that you were just discussing. Is that a good example, do you think, to illustrate the modularity? [00:14:35] Speaker A: Yeah, definitely. So, as you know, in neuroscience, usually we get hints from patients with brain damage. Right. And then later on, you know, you could do careful lesion studies, and those studies point to the role of, say, the prefrontal cortex in working memory. And that led people like Fuster and Patricia Goldman-Rakic to put electrodes into the prefrontal cortex in behaving animals performing a working memory task. It's very nice because all you do in a delayed response task is not allow the animal to respond right away to a stimulus; it has to hold the stimulus in mind, in working memory, maybe do something about it, during a delay period before it can perform a memory-guided response. [00:15:37] Speaker B: Right. [00:15:37] Speaker C: And we should maybe define working memory as a sort of temporary memory, but also information processing within that temporary hold. [00:15:46] Speaker B: Right? [00:15:46] Speaker A: Yeah. The key is that it's not enslaved by the environment, so it's something internal. [00:15:52] Speaker B: Right. [00:15:53] Speaker A: So when you see some working memory signals, they are not directly driven by external input. It's during a time when you kind of hold something in your mind. And so that's really interesting, in contrast to, say, primary sensory neurons responding to stimulation. [00:16:15] Speaker B: Right. [00:16:16] Speaker C: But our entire brain is highly recurrent. So these types of signals are basically kind of everywhere. There are some areas where it's more pronounced, I suppose. So. Speak to that for a moment. [00:16:31] Speaker A: Yeah, well, so there are multiple parts. You know, in addressing this puzzle, I would say it's still really a puzzle, actually. It's becoming, I would say, one of the really key central questions today. And I'll explain to you why. Unpack it a little bit? Right.
So because these days, in contrast to what we said just earlier, that you could record from one cell at a time: today, with the advances of tools like Neuropixels, people record from thousands of single neurons in multiple brain regions at the same time, from animals doing some task. [00:17:15] Speaker B: Right. [00:17:16] Speaker A: And so people then analyze those data in different ways. For example, you can decode what's encoded in neural activity in different parts of the brain. And people found that you can decode, say, working memory signals in many brain regions. And that's very interesting. So suggesting that working memory is distributed. [00:17:44] Speaker B: Right. [00:17:46] Speaker A: It also, I would say, creates some kind of confusion. [00:17:50] Speaker B: Right. [00:17:51] Speaker A: You know, we can ask a bunch of questions. We don't really know the answers to those questions. Why is it that you see some neural signal in an area? [00:18:03] Speaker B: Right. [00:18:03] Speaker A: You can ask, you know, how does that happen and why? [00:18:07] Speaker B: Right. [00:18:07] Speaker A: For example, as you said, right, areas really interact with each other. If you just look at the connectomic measurements in the primate cortex, for example, right, about 67% of all possible connections are there. What I mean by this is that if you have N areas, right, for each area you have N minus 1 possible long-distance connections. [00:18:33] Speaker B: Right. [00:18:34] Speaker A: So altogether you have N times N minus 1 possible connections. [00:18:38] Speaker B: Yeah. [00:18:39] Speaker A: Out of that, about 67% of all connections are there. I mean, the proportion is even higher in mice. People say 95%, 97%. [00:18:50] Speaker C: I did not know that. [00:18:51] Speaker A: Yeah. [00:18:52] Speaker C: Huh, that's interesting. Wait, so that means it's more modular, quote unquote, at higher levels in the taxonomic tree, like in non-human primates and human primates. That would make it more modular. Right. So it would make the mouse brain more homogeneous, maybe. [00:19:11] Speaker A: Yeah, but, but so you can, in principle, right. There are other things, but just, just look at these connectomic measurements. You'd say you can go from anywhere to anywhere else by one or two synapses. [00:19:24] Speaker B: Right. [00:19:25] Speaker A: Okay, so, so is it possible, for example, just to take an extreme example. [00:19:31] Speaker B: Right. [00:19:32] Speaker A: You know, maybe in the end, working memory depends on only a few areas, but by virtue of interactions, some other areas, you know, have, you know, some kind of working memory signals as a result of being receivers. [00:19:48] Speaker B: Right. [00:19:49] Speaker A: As receivers of projections from the core. [00:19:54] Speaker B: Right. [00:19:54] Speaker A: For example. [00:19:55] Speaker B: Right. [00:19:56] Speaker A: That's a possibility in principle. And then you can of course discuss: functionally, is that a bug, or is that useful? [00:20:07] Speaker B: Right. [00:20:07] Speaker A: I think the brain would find ways to use signals, depending on behavioral demands. [00:20:16] Speaker B: Right. [00:20:17] Speaker A: So those are the questions that people are starting to study, motivated by new data. [00:20:24] Speaker C: Right, right. That is kind of interesting, that redundancy. I mean, von Neumann wrote about this as a feature of the brain. [00:20:34] Speaker B: Right.
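For readers who want the arithmetic spelled out: with N areas there are N times (N minus 1) possible directed connections, and the density is the observed count divided by that total. A minimal sketch, with hypothetical numbers chosen only so the ratio lands near the 67% macaque figure quoted above:

```python
def connection_density(n_areas: int, n_observed: int) -> float:
    """Fraction of possible directed inter-areal connections observed.

    With n_areas areas there are n_areas * (n_areas - 1) possible directed
    connections (each area can project to every other; self-loops excluded).
    """
    n_possible = n_areas * (n_areas - 1)
    return n_observed / n_possible

# Hypothetical example: 30 areas -> 870 possible connections;
# 583 observed gives ~0.67, the macaque density quoted in the episode.
print(connection_density(30, 583))
```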
[00:20:34] Speaker C: Redundancy, and how basically, if you send a bunch of copies, then you reduce the noise. That's a very short way of saying it, but what you were just saying reminded me of how power-efficient brains are, and that it might not be such a cost, actually, if you have that extra stuff outside the core. [00:20:59] Speaker A: Right, but like, if you have memory signals in V1, how do you use that? [00:21:09] Speaker B: Sure, yeah. [00:21:10] Speaker C: I was going to ask you, where is working memory then, by the way? [00:21:14] Speaker A: Actually, I'm not so sure. I mean, redundancy certainly is desirable for several reasons. But I also want to say that what I just told you could be understood a bit in terms of this old notion of equipotentiality by Lashley. Remember, he kind of envisioned, you know, bigger and bigger problems, bigger and bigger. [00:21:37] Speaker C: Parts of the brain. Right. Yeah. [00:21:39] Speaker A: It's more or less doing the same thing. I actually don't believe that. [00:21:42] Speaker B: Right. [00:21:42] Speaker A: So that's actually the puzzle we try to study and try to kind of propose some solutions to. [00:21:48] Speaker B: Right. [00:21:49] Speaker A: So you take two premises. One is that you have these long-range connections. By the way, that's also, in my mind, a fundamental difference between neural circuits and physical systems. In physical systems, interactions are local. [00:22:05] Speaker C: What do you mean, physical systems? [00:22:07] Speaker A: Well, like molecules, you know. [00:22:10] Speaker B: Right. [00:22:11] Speaker A: So interactions are by collision or chemical reactions. They are local interactions. [00:22:17] Speaker C: Okay. [00:22:18] Speaker A: And that's very different in brain circuits, where you have these long-range connections that allow you to go to different places. [00:22:28] Speaker B: Right. [00:22:29] Speaker A: So this is number one, the number one premise. The second premise is the notion of canonical circuits. [00:22:37] Speaker C: Right. Which is, so, Mountcastle's canonical microcircuitry in the cortex. [00:22:41] Speaker A: Exactly. So, and you know, David Hubel, Torsten Wiesel. [00:22:49] Speaker B: Right. [00:22:50] Speaker A: And later Kevin Martin, Rodney Douglas. [00:22:54] Speaker B: Right. [00:22:54] Speaker A: And it's a very elegant principle. [00:22:57] Speaker B: Right. [00:22:57] Speaker A: So basically the cortex is made of the same stuff, right. It's just more and more repeats of the same local canonical circuit. [00:23:07] Speaker C: And if we figure out what one of those little microcircuits is doing, then we have solved the cortex. Yeah, that's the goal. [00:23:14] Speaker A: Right, exactly. And also across species, right. From rodents to monkeys and humans. So that's a very powerful, elegant idea, and people love it. But then if you take those two premises, it becomes even more kind of non-trivial. How do you explain modularity, functional modularity? By the way, people use the term modularity in different ways. People who use graph theory, for example. When you think about a graph applied to neural networks or even the brain connectome. [00:23:47] Speaker C: The physical structure. Yeah, you mean? Right, the physical structure. [00:23:52] Speaker A: Yeah. [00:23:52] Speaker B: Right. [00:23:54] Speaker A: Network science. [00:23:55] Speaker B: Right.
[00:23:56] Speaker A: When you see a graph, actually, implicitly you kind of think about nodes being all the same. [00:24:03] Speaker B: Right. [00:24:05] Speaker A: And what differs is inputs and outputs. [00:24:09] Speaker C: Well, also, I mean, there are connectivity differences, right. Small-world networks, et cetera. [00:24:15] Speaker B: Yeah. [00:24:15] Speaker A: That's inputs and outputs. Right. So each node has different inputs, different outputs. [00:24:20] Speaker B: Right. [00:24:22] Speaker A: But the nodes are all the same. So in that sense it's a bit like the canonical local circuit idea. [00:24:27] Speaker B: Right. Okay. [00:24:29] Speaker A: So of course people use the words module and modularity in different ways. So I want to just be clear what I mean by modularity. I mean functional modularity, like: working memory is a module dedicated to working memory. So it becomes actually really a puzzle. [00:24:51] Speaker B: Right. [00:24:52] Speaker A: If you assume that different areas are made of the same stuff and they all talk to each other in a dense network, how do you get differentiation of function, really? [00:25:05] Speaker C: I mean, I know that you have multiple answers to this, and it actually doesn't take much. Well, perhaps you could enlighten us how that would work, then. [00:25:18] Speaker A: We don't know the answer. We do try hard over the last years to build models. [00:25:27] Speaker C: But in principle, I think that we do know the answer. [00:25:30] Speaker B: Right. [00:25:31] Speaker C: I mean, if the answer is that not all cortical columns are the same, because they express different proteins, and even these nuanced differences then give rise to different function within the cortical column. [00:25:45] Speaker B: Right. [00:25:47] Speaker C: And then the modularity itself, the connectivity itself: one cortical column is receiving inputs from these 14 different areas, and another cortical column is receiving inputs from 12 of those and one of these. And so there is differentiation just with the input-output structure. Just as you were saying. I thought that's what you were going to talk about. [00:26:11] Speaker A: Yeah, great. You know, that certainly is. Some other people say, you know, maybe the canonical circuit idea is appealing but not sufficient. [00:26:24] Speaker B: Right. [00:26:24] Speaker A: So maybe it's actually not true that different parts of the cortex are the same. Of course we see all kinds of differences, and these days, with the new transcriptomic data, connectomic data, all kinds of data point in that direction. [00:26:40] Speaker B: Right. [00:26:42] Speaker A: So you say, oh, it's just not true. There are heterogeneities. I don't know if you know this. For example, in primates, of course, in the cortex, there are excitatory neurons, pyramidal cells, and inhibitory neurons. In V1, about 15% of neurons are inhibitory, 85% of neurons are excitatory. In the prefrontal cortex, the PFC, there are actually twice as many inhibitory neurons. [00:27:16] Speaker C: Which, if you think of the brain as a feed-forward input-output system, the prefrontal cortex is at the top of that hierarchy in the cortex. [00:27:24] Speaker A: Yeah. So the prefrontal cortex, or PFC, is the part of the cortex right above your eyes, like in the front. And that has been kind of mysterious for a long time. And only in recent decades have we started to realize how important it is for cognitive functions and executive control of behavior.
Sometimes it's called the CEO of the brain. [00:27:51] Speaker C: Right, right, right. [00:27:53] Speaker A: So I actually want to contrast, sometimes, you know, PFC with a primary sensory area like V1. [00:27:59] Speaker B: Right. [00:27:59] Speaker A: And just to say the difference is quite marked. [00:28:03] Speaker C: Well, one big difference is just the proportion of inhibitory neurons, as you just said, is 15% to. How high is it in prefrontal cortex? [00:28:12] Speaker A: 30%. [00:28:13] Speaker C: 30%. So, you hear all the time that excitation-inhibition balance is key to neural functioning. So what does that difference mean functionally between those areas? [00:28:29] Speaker A: Well, I guess people are still trying to figure it out. We've done some research trying to address this question. In fact, if you don't mind, I can dive into a bit of detail. I think it's pretty interesting for your audience for several reasons. So we talked about working memory, right? So when we tried to build a model for working memory, we were worried about how you can ignore distractors. When you try to hold something in your mind, it's really hard to ignore intruding signals that are not relevant. That can be external stimulation; it can also be some internal thoughts that you should ignore. This was like 20 years ago; we published a paper in 2004. By chance, at that time, I was talking to an anatomist from Hungary called Tamás Freund. We used to think about inhibitory neurons as targeting excitatory neurons, controlling excitatory neurons. He was telling me that they had just discovered a group of inhibitory neurons that avoid pyramidal cells. They don't target E cells. That was a big surprise to him. That was in hippocampus. Then I read more of the bits of anatomical evidence, and it turns out that those guys that avoid pyramidal cells target another class of inhibitory neurons, a. [00:30:18] Speaker C: A different class than their own. [00:30:19] Speaker A: A second class of inhibitory neurons. [00:30:22] Speaker B: Right. [00:30:22] Speaker A: And that second class of inhibitory neurons targets the dendrites of pyramidal cells. Okay. So they're actually controlling input flow to pyramidal cells. So if this second type is very active, it will just block, gate out, input flow to pyramidal cells. [00:30:46] Speaker C: Is that for sparsity? Is that for an efficient coding kind of scheme? You mentioned hippocampus, and you need sparsity in parts of the hippocampus. [00:30:54] Speaker A: Well, they first found that kind of cell, the ones that avoid pyramidal cells, in the hippocampus. But I thought that may be a way of gating. Let me give some names. The ones that Freund discovered can be labeled by some biomarkers: VIP, or calretinin as a marker, and those are called interneuron-targeting interneurons. [00:31:27] Speaker C: Okay. [00:31:28] Speaker A: And then the second type can be labeled by somatostatin or calbindin. And they target dendrites. Again, if they're active, they can just block inputs to pyramidal cells. But if for some reason the first type, the VIP neurons, are active, they would suppress the SOM interneurons, thereby opening the gate. [00:31:57] Speaker C: Yeah, this is a concept like disinhibition, which is. I still. It's hard to even conceptualize. I mean, it happens in the basal ganglia circuitry a lot. [00:32:10] Speaker A: That's true. [00:32:11] Speaker C: Disinhibiting something. For some reason, for my human brain.
I have to pause and think about it for a second. [00:32:19] Speaker B: Yep. [00:32:19] Speaker A: So, and this one is more about dendrites, right, about inputs onto the pyramidal cells. So we built a model with actually three types of inhibitory neurons: those two, and then a third one that in turn controls the spiking output of pyramidal cells. So you have the ones that are labeled by SOM that control the inputs. [00:32:50] Speaker B: Right. [00:32:50] Speaker A: And the other ones, PV neurons, that control the spiking outputs of pyramidal cells. And so this motif with three kinds of neurons, I don't know how many people really know this, was first proposed theoretically in a modeling paper. [00:33:11] Speaker C: I'm sorry, which. So I can link to it in the show notes. This was 2003, you said? 2004? [00:33:18] Speaker A: 2004. [00:33:18] Speaker B: Okay. [00:33:18] Speaker A: Yeah. And then nowadays, with genetic tools, people have established this motif basically universally across the whole cortex. But then, just back to your point, right? Maybe different parts of the cortex have different needs for output control and input control. Okay, right. [00:33:44] Speaker B: Yeah. [00:33:45] Speaker A: Now think about the primary visual cortex. Maybe there are few sources of inputs, right. But in PFC, there's this huge convergence of inputs onto cells in the PFC. So maybe you need more input-controlling inhibitory neurons in PFC than in early sensory areas. That's exactly what you find. And so the proportion of input-controlling versus output-controlling interneurons is very different from area to area. [00:34:19] Speaker C: Okay, sorry to backtrack a little bit. So the proportion of inhibitory neurons goes from 15% in primary visual cortex, which is the early visual cortical area, up to 30% in prefrontal cortex. Does it increase linearly along that gradient? One thing that we know, right, is that there are longer and longer timescales as you go from early sensory areas to prefrontal cortical areas, which tracks, I suppose, with the percentage of inhibitory neurons. But is there a jump in there, or does it just increase along that axis? [00:35:04] Speaker A: Yeah, that's a great question. So when people say there are a lot of heterogeneities, it's interesting to quantify. [00:35:12] Speaker B: Right. [00:35:12] Speaker A: So, you know, is it just random heterogeneity, some kind of high-dimensional distribution? [00:35:19] Speaker B: Right. [00:35:20] Speaker A: Or, alternatively, are there some systematic changes along some low-dimensional axis? So we and others like John Murray have used different kinds of data to address that question, including the inhibitory neuron proportion and all kinds of things, like even transcriptomic data. And we found that the answer is the latter. So in fact there are systematic changes of biological properties along certain low-dimensional axes, which we now call macroscopic gradients. So, you know, by the way, there are kind of two slightly different things here, right? One is the proportion of inhibitory neurons relative to excitatory neurons. The other one is, among the inhibitory neurons, what's the proportion of neurons that control inputs and what's the proportion of neurons that control the outputs? [00:36:22] Speaker B: Right. [00:36:23] Speaker A: But anyway, they all show pretty systematic macroscopic gradients, which is nice because you quantify them. So you have numbers, right?
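To make the gating logic of this three-interneuron motif concrete, here is a minimal rate-model sketch: VIP cells suppress SOM cells, SOM cells gate the dendritic input to a pyramidal cell, and PV cells control its spiking output. All weights and time constants are made-up round numbers for illustration, not parameters from the 2004 paper:

```python
def relu(x):
    return x if x > 0.0 else 0.0

def simulate(vip_drive, ext_input=1.0, n_steps=400, dt=1.0, tau=20.0):
    """Integrate a toy rate model of the VIP -> SOM -> dendrite gating motif."""
    e = som = pv = vip = 0.0
    for _ in range(n_steps):
        vip += dt / tau * (-vip + relu(vip_drive))          # VIP driven externally
        som += dt / tau * (-som + relu(1.0 - 2.0 * vip))    # VIP suppresses SOM
        gated = relu(ext_input - 2.0 * som)                 # SOM gates dendritic input
        pv += dt / tau * (-pv + relu(0.5 * e))              # PV tracks local excitation
        e += dt / tau * (-e + relu(gated - 1.0 * pv))       # PV controls spiking output
    return e

print("gate closed (VIP silent):", round(simulate(0.0), 3))  # ~0: SOM blocks the input
print("gate open   (VIP active):", round(simulate(1.0), 3))  # >0: input reaches the cell
```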
So then you can build a large-scale model, which is, I think, still very new but promising. So basically you say, I don't invent connectivity; we just use connectomic measurements to build a multi-regional, large-scale model of, say, the macaque cortex. And then you introduce gradients of synaptic excitation or inhibition that ensure that you have some graded differences across cortical areas. Mathematically, it means that you use the same equations for each local area. That's the canonical part. And then the heterogeneity part is to say, well, even if you use the same equation, the values of the parameters may change. And so it's kind of a disciplined way, as it were, to try to build a biologically constrained model. [00:37:40] Speaker C: So I'm sorry to take us on an aside, but as you're speaking. [00:37:45] Speaker B: I. [00:37:45] Speaker C: Realized you were so early in. So, computational: when I think of neuroscience now, I think, oh, it's all computational neuroscience. When you began, it was not. There was very little computational neuroscience. And you were one of the original people who ushered in theoretical ideas, mathematical models, into neuroscience. I mean, can you reflect just a moment? Because do I have it right that now it seems like it's all computational? Do you feel justified? Did you come into neuroscience and think, oh, these biological stamp collectors, I'm going to fix this? What was that like back then? Did you feel alone in your endeavors? [00:38:31] Speaker A: Thank you. I guess so. When I teach a course, I sometimes say that in any field there are always pioneers, right? I'm thinking about people like Hodgkin and Huxley, who built the Hodgkin-Huxley model, and then applied mathematicians like John Rinzel, Wilfrid Rall, Bard Ermentrout, Nancy Kopell. [00:38:58] Speaker B: Right.
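Returning to the large-scale modeling recipe described a moment ago (the same local equation everywhere, one gradient-scaled parameter per area, and connectome-based long-range weights), here is a minimal sketch of how those pieces fit together. The connectome and gradient below are random placeholders, not measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
n_areas = 30

# Placeholder "connectome": random weights with ~67% of directed pairs present.
W = rng.random((n_areas, n_areas)) * (rng.random((n_areas, n_areas)) < 0.67)
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)          # normalize each area's long-range input

# One scalar per area, standing in for a measured gradient
# (e.g. spine counts or transcriptomic markers); low = sensory, high = association.
gradient = np.linspace(0.2, 1.0, n_areas)

def step(r, inp, dt=0.1, tau=1.0, w_local=1.2, w_long=0.4):
    # The same equation for every area (the canonical part); only the
    # gradient-scaled local excitation differs (the heterogeneity part).
    drive = w_local * gradient * r + w_long * (W @ r) + inp
    return r + dt / tau * (-r + np.tanh(drive))

r = np.zeros(n_areas)
for t in range(2000):
    inp = np.full(n_areas, 0.8) if t < 100 else np.zeros(n_areas)
    r = step(r, inp)

# After the input is removed, activity settles into a graded pattern that
# rises along the gradient: areas with stronger recurrence hold on to more.
print(r.round(2))
```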
[00:40:57] Speaker B: Right. [00:40:59] Speaker A: So I came from the background of dynamical system theory and statistical physics. But just by chance, in a way, I got into the prefrontal cortex very early on. [00:41:11] Speaker C: What do you mean, by chance? [00:41:14] Speaker A: It's a bit accidental. So when I switched to neuroscience, I was in the experimental lab of Walter Freeman at UC Berkeley. I don't know if you know that his father was the infamous person who performed frontal lobotomy. [00:41:34] Speaker C: Oh, no. Okay, I did not know that. Infamous? [00:41:40] Speaker B: Yeah. [00:41:40] Speaker A: He actually performed thousands of cases. [00:41:43] Speaker C: He would drive through the eyes, ice pick. Frontal lobotomy. [00:41:47] Speaker B: Yeah. [00:41:47] Speaker A: He was the one who would drive his old car from state to state and offer, you know, as a treatment. And he was the one who performed frontal lobotomy on Rosemary Kennedy, a sister of John Kennedy. [00:42:03] Speaker B: Right. [00:42:03] Speaker C: He performed that. And that was. Walter Freeman was his son. [00:42:06] Speaker A: Yes. [00:42:07] Speaker C: Gosh. Do you know how Walter Freeman thought about all that? [00:42:11] Speaker A: I don't remember if we had really depth conversation. Anyway, I heard about pfc and maybe it's important to understand in psychiatry, although. [00:42:24] Speaker C: That'S hilarious, that you came to PFC because we were lopping off PFC and someone's dad was getting rid of psc, the thing that's not important. And you thought maybe that's important. [00:42:36] Speaker A: Well, it's true. [00:42:37] Speaker B: Right. [00:42:37] Speaker A: It's terrible. [00:42:38] Speaker B: Right? [00:42:38] Speaker A: So like Rosemary Kennedy, she had procedure when she was like 23, and that failed miserably, so she could not function afterwards. [00:42:50] Speaker C: Yeah. I don't know much about her, but I've heard that it was bad, particularly for her. [00:42:55] Speaker A: Yeah, she lived in an institution for more than five decades. Was a terrible example I just showed. We didn't know anything about what PFC is good for. And then actually in Pittsburgh, I actually started my first faculty position in Pittsburgh, I got to know David Lewis. He told me about PFC in a more scientific way. And that's actually how I learned that PFC is important in psychiatry because apparently all major psychiatric disorders somehow involve abnormal pfc. And it was him, he introduced me to Patricia Gome Rakesh. And so that's how I got into. [00:43:43] Speaker C: I see, yeah. So just to pause there, Patricia Goldman Rakish. I mean, this is the way I was taught about working memory. Also recorded neural activity in the prefrontal cortex of monkeys performing a working memory task. And the old story was that you would have these single neurons that when the monkey is holding something in its working memory, would have this persistent activity. And so you'd have a single neuron that would start becoming active when the monkey, when there was no stimulus. And the monkey is trying to hold something in mind. And it would be active until the monkey was cued to respond to indicate the answer of its working memory content. And that was the story of working memory back then. [00:44:33] Speaker A: Yeah. Again, there were others. [00:44:36] Speaker B: Right. [00:44:36] Speaker A: Like Fuster and Watanabe in Japan. 
But Konamari Kish, I think was kind of unique because she really tried to use multiple approaches to get to the mechanistic understanding across levels. She. So not only in her lab, they did single unit recording in the working memory task as you described, but also she did brain imaging, she did in vitro slice, and she trained as an anatomist. Also did a lot of anatomical analysis. So I guess that's her really unique contribution, really trying to use multiple approaches to try to understand the mechanisms. [00:45:28] Speaker C: Okay, well, you've just cued me now to go back to David Marr. And so let's talk about cross level mechanisms, which is one of the unique things about your textbook is that it's not just a textbook, it's sort of a calling for ways to approach the brain. And so we've talked about David Marr a lot on this podcast. But just to sort of recap, and you mentioned David Marr earlier. His approach became popular in the neurosciences, thinking about how to understand brains, how to explain brain activity and brains. And there are three famous Mars levels. Right. You have your computational level, computational functional level, which is like, what is the task? What is the object of the cognitive activity, you have the algorithmic slash representational level. What are the steps that the brain has to do to get to accomplish that task? And then you have the implementation level, which is just how do the neurons act? How do the neurons carry out the function to carry out these algorithms to carry out the task? And one of the things that you argue very early on in the first chapter of the book is that we should go beyond Mars levels. Now why is that? [00:46:51] Speaker A: I guess historically, maybe. Let me talk a bit about sociological aspect of the VMAR first. Right, historically. Back then in early 80s when he proposed the three levels. By the way, he actually initially proposed three levels together with Tommy Pojo. [00:47:12] Speaker C: I'm going to have Tommy on at some point. [00:47:15] Speaker B: Yeah. [00:47:16] Speaker A: Yes, please ask him. I had a conversation with him. So it's interesting, I guess there are kind of two motivations. One was that back then neurobiologists didn't really think too much about the behavior, frankly. [00:47:37] Speaker B: Right. [00:47:37] Speaker A: So a lot of people try to understand how single neurons work, how snap transmission work, and for that you can do a lot of in vitro slice studies. Of course, there's no behavior in the slides. [00:47:53] Speaker B: Right? [00:47:54] Speaker C: That's true, I guess. I mean, you could call that kind of stamp collecting. But I'm correcting. [00:48:01] Speaker A: It's just not about behavior. [00:48:03] Speaker B: Right. [00:48:04] Speaker A: And then even when you record in vivo back then, very often you record from anesthetized animals. [00:48:12] Speaker C: Yeah, but you're not asking, so you're not asking anything about behavior, you're trying to figure out how the stuff works, which is a very valid and necessary thing. [00:48:21] Speaker A: Absolutely. Yeah. But I guess David Ma and his friends feel like as a complementary approach, maybe you can study behavior without worrying too much about implementation. So they say, okay, vision is really hard. How do I understand the stereo vision? Let's first define stereo vision quantitatively. And that's the function part. [00:49:00] Speaker B: Right. [00:49:01] Speaker A: And then let's try to propose some mathematical algorithm. 
And that's the software part. [00:49:08] Speaker B: Yeah. [00:49:09] Speaker C: How to accomplish it despite whatever, whatever the stuff is made of underlying it, despite the. Whether it's brain material, computer material. [00:49:18] Speaker A: Yeah, yeah, that's the third part. [00:49:21] Speaker B: Right. [00:49:22] Speaker A: Hardware. [00:49:23] Speaker B: Right. [00:49:23] Speaker A: Okay. So there's this function, software and hardware. [00:49:27] Speaker B: Right. [00:49:28] Speaker A: Actually, by the way, the second motivation was a bit sociological because apparently from Tommy back then, people think that all you need to know is molecular biology. [00:49:40] Speaker C: Yeah, right. Well, I don't. I mean. Okay, yeah, from Tommy, that was his perspective. [00:49:47] Speaker A: Yeah, that's a bit of that. And they say, you know, David Marr said, you know, we can Study behavior on its own merit. [00:49:55] Speaker B: Right. [00:49:56] Speaker A: And that makes sense. But I think over time we have to read his original writing actually. [00:50:03] Speaker B: Right. [00:50:03] Speaker A: Over time, maybe by some people, maybe naively, somehow David Mara's three levels are kind of understood or perceived a bit naively as unidirectional hierarchy. [00:50:20] Speaker B: Right. [00:50:20] Speaker A: So the most important is function, then software. You know, if you are interested at the end, you could worry about implementation in hardware. [00:50:30] Speaker C: That is the sort of common way. But do you think, are we misreading that? Is that just folklore now? [00:50:36] Speaker A: I don't think you remember that when you described it, you actually used the word just for hardware. [00:50:42] Speaker C: Well, I think culturally I've been inundated with that sort of. [00:50:46] Speaker B: Yeah. [00:50:50] Speaker A: Well, for one thing, actually like to make an observation. Right. So if say someone really just cares about function. [00:50:58] Speaker B: Right. [00:51:00] Speaker A: Let's say AI. [00:51:01] Speaker B: Right. [00:51:02] Speaker A: It's good to notice the workhorse of today's AI systems is convolutionally neural network. Deep neural networks. [00:51:12] Speaker C: Well, I think it's transformers now, but sure. The last generation was convolutional neural networks. Yeah. [00:51:20] Speaker A: But deep neural network to start with, deep nets. So that's a software thing. Yeah. But it was initially inspired by what we learned about the hardware or the visual system. [00:51:35] Speaker C: That is so underappreciated, at least in the AI world. [00:51:38] Speaker A: It's hardware that we learned. [00:51:40] Speaker B: Right. [00:51:41] Speaker A: From the visual system. That was an inspiration for people like Yann Luke. [00:51:46] Speaker B: Right. [00:51:46] Speaker A: To develop this kind of architecture and Fukushima. Yeah. So I can give you examples. I would advocate that maybe we are not done yet by learning from neuroscience and especially say the prefrontal cortex or dorsal stream. [00:52:06] Speaker B: Right. [00:52:06] Speaker A: Of the system. So I don't. And that's the other thing that I actually, I'm not so sure it's so productive. That is, people kind of believe that you can implement the same software with all kinds of different hardwares. [00:52:21] Speaker B: Right. [00:52:22] Speaker C: The functionalist viewpoint. [00:52:24] Speaker A: Yeah. And that in part is because of the drawing analogy between the brain and the computer. [00:52:31] Speaker B: Right. 
[00:52:31] Speaker A: So you can say, oh, a Turing machine can be implemented by the old fashioned vacuum tubes or by chips. [00:52:42] Speaker B: Right. [00:52:42] Speaker A: Okay. And I'm not so sure if that really is true. In fact. [00:52:50] Speaker C: Isn'T it true in principle that is true. [00:52:52] Speaker A: No, no. I mean about the brain functions. About brain functions. I don't know to what extent that's really true. I also don't know how fruitful, how productive that view is in brain research. [00:53:07] Speaker B: Right. [00:53:08] Speaker A: Or maybe that part of the joint analogy with a computer is not so interesting. So useful. [00:53:17] Speaker C: Did you always think that way or because you're an early computational neuroscientist and your bread and butter is thinking of the brain as a computer. [00:53:26] Speaker B: Right. [00:53:26] Speaker C: In some sense. [00:53:29] Speaker A: More as a dynamical system, actually. [00:53:30] Speaker C: Well, that's true. [00:53:32] Speaker B: Yeah. Yeah. [00:53:35] Speaker A: I can tell you a bit more. How is that different from computer in my view at least. [00:53:41] Speaker B: Right. [00:53:42] Speaker A: But in any way, if you do care about how the brain works. [00:53:46] Speaker B: Right. [00:53:47] Speaker A: I do think that we are in need of a new framework. That's what I try to describe. Although actually kind of briefly because of the lack of space. Happy to expand that view in the future. Namely we do have enough data and I think it's fruitful to try to go across levels. Maybe 30 years ago in David Marr's time it was not possible, but now I think it is possible and we have the obligation, really try to benefit from big data in transcriptome, in connectome. [00:54:27] Speaker B: Right. [00:54:28] Speaker A: To understand function. [00:54:29] Speaker B: Okay. [00:54:30] Speaker C: I mean that's sort of the dream. Right. So despite whether you go top down, you know, from function to implementation, which is the classic sense in which we in writ large interpret David Marr or if you go bottom up, the whole thing with David Maher was that, well, you keep these levels distinct and there's no need to go across levels. And so you're arguing that now is the time to do this cross level mechanistic understanding. [00:55:06] Speaker A: I don't know if he said the word there's no need, but I think it's reasonable for him to say you can study one level on its own merit. [00:55:16] Speaker C: Okay, That's a better way to put it. [00:55:18] Speaker A: Yeah. So that's I think valuable and important. [00:55:22] Speaker B: Right. [00:55:24] Speaker A: But indeed, again, maybe over time people misunderstood or naively took it in a too simplistic way. [00:55:34] Speaker B: Right. [00:55:34] Speaker A: It's like one directional hierarchy which I think is not so necessarily fruitful. Now it depends on your goal. [00:55:45] Speaker B: Right. [00:55:46] Speaker A: I'm just saying it is now really doable. Actually, I just give you an example. [00:55:54] Speaker B: Right. [00:55:54] Speaker A: So ignoring distractors in working memory is a functional thing. [00:55:58] Speaker B: Right. [00:55:59] Speaker A: And I kind of say that in fact this disinhibited motif of three kinds of interneurons is a not just implementation, but a kind of specific circuit mechanism that potentially can explain gating. [00:56:19] Speaker B: Right. [00:56:19] Speaker A: So then you can go across levels in that sense, so at least it's doable. [00:56:26] Speaker B: Right. 
[00:56:26] Speaker A: So depending on your goal, I think very often it's worth trying this way. Let me give you another example, if you don't mind, for my own example, that is about building a model for working memory. Okay. So you may. I don't know, wonder how do I get into decision making? [00:56:49] Speaker B: Right. [00:56:50] Speaker A: And that's in a way also is by accident how that happened. [00:56:55] Speaker C: Not lobotomies. You didn't get there by lobotomies. [00:56:58] Speaker A: It's basically our struggle, right? So. Oh, this is my struggle. So the idea of working memory signals representation, right. Is that it's maintained internally by recurrent interactions. So it's not driven by external stimulation. Like neurons in V1, for example, like you and me, right? I talk to you, you talk to me. If our voice is loud enough, not whispering too loudly, we can keep going like what we're doing now without external input. [00:57:35] Speaker B: Right. [00:57:36] Speaker A: So suppose that we are neurons. So that's the idea of reprisation, right. So if we were neurons, well, through this recurrent excitation, we can maintain some activity that can maintain working memory, right. You know, freed from the environment, so to speak. [00:57:54] Speaker B: Right. [00:57:55] Speaker A: Okay. And that's a key, right. That we try to test quantitatively when we build a model. You have to crank up the strength of excitation in a model. [00:58:06] Speaker B: Right. [00:58:07] Speaker A: Okay. So we either didn't get precise activity, so we could not have a working memory circuit, or everything blows up. [00:58:17] Speaker B: Right. [00:58:18] Speaker A: When the excitation is too powerful, you have this one away excitation, you say, oh, well, maybe you should also crank up inhibition. Yeah, Balance, Right. [00:58:32] Speaker B: Okay. [00:58:33] Speaker A: When that happens now, time and dynamics come in. Back then, people assumed that excitation is very fast. And so if you take into consideration this little biological detail in the model, that excision is faster than inhibition, you still cannot fix it. Because it's like an engineering system. If you are from engineering background, if you have very strong positive feedback that's very fast and you have strong negative feedback that's slower, you can really stabilize your device. For months I struggled, struggled, didn't work. I tried all kinds of things. Short term synaptic facilitation or depression, you know, adaptations, things like that. Didn't work. [00:59:22] Speaker C: What do you mean? For months. Because you could just tune the knobs pretty quickly, right. And then run it and. [00:59:26] Speaker A: See, that's the thing that sometimes I talk to. I've been fortunate to have collaborations with many experimentalists like Pat Gonakish. Sometimes they half jokingly to tell my collaborators that modeling takes time. [00:59:45] Speaker B: Right. [00:59:47] Speaker A: It's not like really just turning the knob and there you go. [00:59:49] Speaker C: I know, coming from an experimental background, I was always jealous of people who did models because it seems so fast. But I guess it's not as fast as it seems anyway. [01:00:00] Speaker A: So I end up Saying, oh, okay, maybe you don't have this problem if your excitation is slow, slower than inhibition, right? So if this slow, gradual reverberation and then you have fast inhibition, that keeps you in check all the time, quickly, efficiently. Right, right. And so that worked. 
And so, you know, that led to the idea that slow reverberation depends on the NMDA receptors. [01:00:31] Speaker B: Right. [01:00:31] Speaker A: Okay. Now it turns out that it's exactly this slow reverberation that you need to explain physiological observations related to decision making. So, you know, there's this famous experiment by Mike Shadlen, right? Roitman and Shadlen. They found that when an animal is doing a very difficult decision task, there's this gradual ramping activity; that is how neurons and neural populations accumulate evidence in favor of different options. [01:01:15] Speaker B: Right. [01:01:16] Speaker C: Just to let the less informed listeners know: this is the famous, what's sometimes called the dots task, the random-dot motion task. So you can imagine looking at a screen that is filled with some dots. A proportion of those dots are moving in one direction. And you can vary how easy it is to tell which direction they're moving; there are various ways to do it, but just one way is by how many of them are moving in that given direction. And your job is to report which direction this somewhat random collection of dots is moving. So this is the random motion coherence task. Sorry to interrupt. I just want to make sure that people are on board. [01:02:02] Speaker A: Exactly. So I guess the beauty of it is that you can parametrically change the task difficulty. Right. By changing the fraction of dots that move coherently in one of the two directions. [01:02:14] Speaker C: So it goes from super easy to you can't tell at all. And you can vary it in very minor steps. And the thing that has been used to explain this in neural activity is this ramping of neural activity toward a threshold. You're accumulating evidence toward a threshold, and there's neural activity that looks as if it is doing this sort of computation. [01:02:40] Speaker B: Exactly. [01:02:41] Speaker A: So when you make the task more and more difficult, with less and less coherence, you see this gradual ramping activity over time, at the single-cell level and also, these days with Neuropixels, at the population level. By the way, apparently that's the same kind of algorithm that Alan Turing used to decipher the Enigma code in the Second World War. [01:03:10] Speaker B: Right? [01:03:12] Speaker A: Okay. [01:03:13] Speaker C: He didn't know about NMDA. [01:03:17] Speaker A: Well, it turns out, and frankly it's really a shock, that we just took the model designed for working memory off the shelf and applied it to this dots perceptual decision-making task, and you can pretty much explain everything observed in that experiment. And that's because you have this duality. So you have this gradual ramping by slow reverberation, which is kind of slow transient dynamics. [01:03:48] Speaker B: Right. [01:03:49] Speaker A: Not an attractor or anything. But at the same time you also have a winner-take-all leading to a categorical choice. So I thought this is another example, I guess, of how you could try to go across levels. Because in the end, what explains the behavior is this emergent collective population dynamics described by dynamical systems theory. But then you can go down and ask what's the cellular, even receptor, mechanism on one hand, and on the other hand you can really compare the model performance with the monkey's psychophysics. So it's possible now to do this. [01:04:34] Speaker C: So is there a. Well, I want to ask you about bifurcation. Eventually, but.
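A schematic reduction in the spirit of the attractor model being described, though not the published implementation and with all parameters made up: two populations with slow self-excitation and mutual inhibition receive noisy inputs whose mean difference is set by coherence. It reproduces the qualitative picture, ramping to a threshold, a winner-take-all categorical choice, and slower, less reliable decisions at low coherence:

```python
import numpy as np

rng = np.random.default_rng(1)

def decide(coherence, dt=1.0, tau=100.0, w_self=1.05, w_inh=0.8,
           noise=0.02, bound=0.6, t_max=3000):
    """Run one trial; return (choice, time-to-bound in steps)."""
    r = np.zeros(2)                                 # rates of the two pools
    for t in range(t_max):
        inp = (0.3 + np.array([1.0, -1.0]) * coherence
               + noise * rng.standard_normal(2))    # evidence plus noise
        drive = w_self * r - w_inh * r[::-1] + inp  # self-excitation, mutual inhibition
        r += dt / tau * (-r + np.maximum(drive, 0.0))
        if r.max() > bound:                         # threshold crossing = commitment
            return int(np.argmax(r)), t
    return int(np.argmax(r)), t_max

for c in (0.0, 0.05, 0.2):   # higher coherence -> faster, more reliable choices
    print(f"coherence {c}: (choice, RT) = {decide(c)}")
```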
All right, so there are inhibitory neurons in primary visual cortex, not as many as in prefrontal cortex. There's high recurrence in primary visual cortex, maybe less so than in prefrontal cortex. So what's qualitatively different? Why wouldn't you have working memory in primary visual cortex? Because you have the same sort of thing, but on a faster timescale. Is there a cutoff point where it is released from the sensory activity? [01:05:15] Speaker A: Yeah, I guess you are asking about bifurcation. [01:05:18] Speaker C: I guess I am, yeah. [01:05:20] Speaker B: Yeah. Right. [01:05:24] Speaker A: So by the way, I'm writing a piece for the Transmitter. [01:05:28] Speaker C: Oh, you are? [01:05:29] Speaker B: Okay. [01:05:31] Speaker A: The title is something like the missing half of dynamical systems theory. [01:05:36] Speaker B: Oh, okay. [01:05:37] Speaker A: And the missing half is bifurcation. [01:05:39] Speaker B: Right. [01:05:39] Speaker A: So these days dynamical systems theory is becoming really common and popular in neuroscience. [01:05:50] Speaker B: Yeah. [01:05:50] Speaker C: And that's where you came from. And then did it kind of go away for a while and re-emerge? What is your viewpoint on the popularity of the dynamical systems view? [01:06:05] Speaker A: I guess it's really driven by data. [01:06:07] Speaker B: Right. [01:06:08] Speaker A: So basically, again, when you have just one spike train from a single cell at a time, you probably tend to focus on time series analysis. And now if you do recordings from thousands of neurons at the same time, what do you do? [01:06:25] Speaker C: It goes out the window. [01:06:26] Speaker B: Yeah. [01:06:26] Speaker C: What do you do? [01:06:27] Speaker A: So one thing pioneered by people like, actually, Gilles Laurent with small species, and then Krishna Shenoy and his collaborators. Nowadays you see neuroscience journals full of papers on trajectories in the state space, dimensionality reduction, subspace communication, manifold discovery. And so that's why. Right. And they are really, really important. [01:07:03] Speaker B: Right. Again, it's still early days. [01:07:09] Speaker A: It's cool to see that. [01:07:10] Speaker C: You think it's early days? [01:07:12] Speaker A: I think it's kind of the early days, because you're going to see more data and we're not done yet trying to understand manifolds, things like that. But also because, frankly, it's still kind of descriptive. It's a way to look at data. It's not mechanistic understanding. [01:07:40] Speaker C: Well, there's some mechanistic understanding that sneaks in there when you're talking about dimensionality, for example. Right? If you have a system and you can't explain it with low dimension, that means that it has to remain in a high dimensional regime, and in some sense, is that not mechanistic? [01:08:03] Speaker A: No, no. It's really important knowledge. Sure. It's some understanding. [01:08:12] Speaker B: Right. [01:08:14] Speaker A: I guess about dimensionality, one example is that you mentioned sparse coding. I think there's some evidence that sparse coding is really desirable in sensory systems. Whereas again in PFC, together with Stefano Fusi we proposed that you actually want to have kind of high dimensional representations. That's functionally desirable, with the use of what we call mixed selectivity. So yeah, it certainly sounds.
[01:08:54] Speaker C: Yeah, I'll just say mixed selectivity refers to the idea that neurons can respond to lots of different signals. It's not a single function, essentially; a neuron can have activity related to lots of different functions. So it has mixed selectivity. It's selective to a mix of things. [01:09:14] Speaker A: Yeah. And maybe that's really important for flexibility in that circuit, and that requires high dimensionality of representations. So I agree with you, that certainly is very important. But by mechanistic I guess I mean circuit mechanism, and across levels, again. [01:09:39] Speaker B: Right. [01:09:40] Speaker A: So for example, when people say communication between areas is done through subspaces. [01:09:46] Speaker B: Right. [01:09:49] Speaker A: We would like to know. That's interesting. That's a discovery. [01:09:53] Speaker B: Right. [01:09:54] Speaker A: By itself. But maybe we want to understand how that happens. [01:09:58] Speaker B: Right. [01:09:59] Speaker C: It's also just a slippery notion. I've used the terminology myself enough that I feel comfortable with it, but I don't really know what I'm talking about when I say subspace. [01:10:13] Speaker A: Maybe that's a very interesting topic to get into some other time, perhaps. But again, let me give you an example, if you don't mind, to go into a bit of technical detail. This again has to do with PFC-dependent behavioral flexibility. Brenda Milner, the psychologist, pioneered this paradigm as a test of normal prefrontal function, called the Wisconsin Card Sorting Test. So you are given a deck of cards, and you're supposed to sort the cards in one of three ways. On each card you have a certain number of colored shapes. For example, this card has three green triangles and another card has, say, four red squares. So you are sorting the cards either by color or number or shape. So if the rule currently in play is color, you sort all the red cards in one pile, green in another pile, et cetera. Without being told, the rule can change suddenly. So in principle, when the rule changes, you now have to sort the same cards in a different way. [01:11:45] Speaker C: Learn the new rule just by observation. [01:11:48] Speaker A: Just by observation and feedback. So you're told that you did it right or wrong. People with schizophrenia, for example, or frontal lobe damage have real difficulty performing this task, especially switching the rule. They tend to perseverate, just keep following the old rule, even though they are given negative feedback. So we built a model for something like this, which in principle requires internally maintaining the rule across a long stretch of time and then switching the rule in one trial. What we found, just cutting to the conclusion of this modeling study, is that the representations according to different rules correspond to different subspaces of neural population activity. They are more or less orthogonal. So if you follow the color rule, you are representing or processing information in this subspace. And if you're following the shape rule, everything is in here, in this other subspace. Now, we designed the model to test the idea that dendrite-targeting inhibitory neurons are doing the gating. These are the somatostatin neurons. So first we found these orthogonal subspaces of representation. And then in the model, you can do a lot of things that you like.
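As an aside for readers who, like the host, find "subspace" slippery: here is one standard way to make the claim concrete, using principal angles between the subspaces occupied by population activity under the two rules. The data below are synthetic and the construction is hypothetical; the recipe itself (a PCA basis per rule, then an SVD to get the angles between the two bases) is generic, and not necessarily the exact analysis used in the study being described.

```python
import numpy as np

rng = np.random.default_rng(1)

def top_subspace(X, k=3):
    """Orthonormal basis (neurons x k) for the top-k PCs of trials-by-neurons data X."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T

def principal_angles(Q1, Q2):
    """Principal angles (degrees) between subspaces with orthonormal bases Q1, Q2."""
    cosines = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
    return np.degrees(np.arccos(np.clip(cosines, -1.0, 1.0)))

n_neurons, n_trials, k = 100, 200, 3

# Toy population data: activity under each rule lives in its own random
# k-dimensional subspace, plus observation noise.
B_color = np.linalg.qr(rng.standard_normal((n_neurons, k)))[0]
B_shape = np.linalg.qr(rng.standard_normal((n_neurons, k)))[0]
X_color = rng.standard_normal((n_trials, k)) @ B_color.T \
          + 0.1 * rng.standard_normal((n_trials, n_neurons))
X_shape = rng.standard_normal((n_trials, k)) @ B_shape.T \
          + 0.1 * rng.standard_normal((n_trials, n_neurons))

angles = principal_angles(top_subspace(X_color, k), top_subspace(X_shape, k))
print("principal angles (deg):", np.round(angles, 1))
# Angles near 90 degrees mean near-orthogonal rule representations. Note that
# two random low-dimensional subspaces of a high-dimensional space are close
# to orthogonal by chance, so real analyses compare against that chance level.
```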
So in the model, we can "optogenetically" manipulate them, you know, "inactivate" them. [01:13:31] Speaker C: Why are you using air quotes with optogenetics? [01:13:34] Speaker A: Because it's a model simulation, right? [01:13:36] Speaker C: Oh, yeah, yeah. [01:13:37] Speaker B: Okay. [01:13:39] Speaker C: Right. [01:13:41] Speaker A: In the model. [01:13:42] Speaker B: Yeah. [01:13:42] Speaker A: And then the two subspaces collapse, you know, and the performance is gone. [01:13:48] Speaker C: Performance is gone. [01:13:49] Speaker B: Yeah. [01:13:49] Speaker A: So that's what I mean by how you go from a descriptive manifold picture to a circuit mechanism. That's a specific prediction. So you can go to someone who is training mice to do this kind of task, for example, and really use real optogenetic manipulation. I hope that illustrates what I mean by going between. [01:14:16] Speaker B: Right. [01:14:18] Speaker A: This kind of description and the circuit mechanism. Now, back to the missing half. [01:14:25] Speaker B: Yeah. [01:14:25] Speaker C: Bifurcation, shall we? [01:14:28] Speaker B: Yeah. Right. [01:14:31] Speaker A: So bifurcation may be familiar to many people, but not to everyone. But it should be well known. In fact, one example everybody should know, actually, is a single neuron. Think about how you learned about single neuron responses to a current injection, described by the Hodgkin-Huxley model. If your input current is weak. [01:15:05] Speaker B: But. [01:15:06] Speaker A: Positive, the neuron's membrane potential would go up a bit and reach a steady state. And that's what you see also in the experiment. [01:15:13] Speaker B: Right. [01:15:14] Speaker A: So in the slice, you disconnect neurons, you just watch one neuron at a time, you inject the current. Weak current gives you depolarization of the membrane. [01:15:23] Speaker C: Slight depolarization. Yeah. [01:15:26] Speaker A: So in the Hodgkin-Huxley model, that corresponds to a steady state, a fixed point, so to speak. [01:15:32] Speaker B: Right. Okay. [01:15:34] Speaker A: And then you just gradually increase the intensity of your current injection. [01:15:39] Speaker B: Right. [01:15:40] Speaker A: At some point, suddenly you don't get a steady state anymore. [01:15:45] Speaker B: Right. [01:15:46] Speaker A: Instead you start to see action potentials that repeat at some frequency. [01:15:55] Speaker B: Yeah. [01:15:55] Speaker C: Injecting a steady, high enough current into the cell. [01:15:59] Speaker B: Yeah. Okay. [01:16:02] Speaker A: And that's no longer a steady state. [01:16:04] Speaker B: Right. [01:16:04] Speaker A: And that's mathematically described as an oscillation, technically called a limit cycle in the state space. It's an attractor in the sense that if you perturb it briefly, it will go back to the same trajectory in state space after the perturbation. You just circle like this in the state space of neural activity, the activity space of a single cell, in this case. [01:16:34] Speaker B: Right. Okay. [01:16:35] Speaker A: So all you did really is just gradually increase something, in this case a. [01:16:43] Speaker C: Current, let's say, linearly. You could do it linearly, right? [01:16:48] Speaker B: Yeah. [01:16:48] Speaker A: And then that linear, gradual, quantitative change can lead to a qualitative change of behavior. So if you want to describe that rigorously, mathematically, it's called a bifurcation. [01:17:06] Speaker C: Okay. [01:17:09] Speaker A: So I'm sure you heard about.
You're very knowledgeable about attractor networks applied to things like head direction cells, place cells, and in our case, working memory. And they really can be described as emergent collective phenomena, produced by changing something in a modest way. That's kind of the beauty of it. And that something, in this case, could be the strength of recurrent excitation, back to what we talked about earlier. So comparing V1 and PFC, all you do is start with a generic canonical local circuit and just crank up the strength of recurrent excitation. Right. And suddenly you start to see attractor states. [01:18:03] Speaker C: So. [01:18:04] Speaker A: And that was, by the way, you know, the initial insight. Sorry, I didn't mean to interrupt you, so maybe you should first finish your sentence. [01:18:13] Speaker C: No, no, I was going to ask you about the. Because before, you said the word emergent, and I was going to ask you about the relation between bifurcation and emergence, because it's a qualitative change. But you should finish your thought. [01:18:25] Speaker B: Yeah. [01:18:25] Speaker A: I just want to acknowledge that this type of idea was proposed early on in neuroscience by people like John Hopfield, which is of course a well known name, but also Daniel Amit, who did a lot of work pioneering this attractor network paradigm. And so that, I think, is one example where the idea of bifurcation is useful in neuroscience, and it's not as well known as it should be, I think. [01:19:03] Speaker C: So do you think that that's missing from the current zeitgeist of the dynamical systems approach? You think it's not appreciated enough? How is bifurcation the missing other half? [01:19:23] Speaker A: Well, it's like. [01:19:26] Speaker C: Sorry, and I'm sorry to interrupt, but how is it related to a phase change in state space? [01:19:33] Speaker A: It is related to phase change. [01:19:35] Speaker B: Right? [01:19:35] Speaker A: So in some sense, at some general level, it's like a phase transition in physics, right? Like you have water, you increase the temperature to 100 degrees, and at that point, suddenly you see vaporization, right, of H2O. So it's similar to that. Okay, so how do you go from quantitative changes to a sudden transition? Right, okay. And that's, I think, what should be more widely known in neuroscience. [01:20:12] Speaker B: Right. [01:20:13] Speaker A: So we use this idea to understand the emergence of modularity. So I call that emergent because. [01:20:20] Speaker B: Right. [01:20:20] Speaker A: It's not something you build in. It's not even built in as modules in the sense of graph theory. Okay. It's really emergent, bottom up, through dynamics. It's through dynamics. The connectome, by the way, is really important, and it's really exciting to see. [01:20:41] Speaker B: Right. [01:20:43] Speaker A: All these new databases coming out. A beautiful example is the connectome-based model of the navigation system in Drosophila flies. But it's not enough to explain dynamics and function. [01:20:59] Speaker B: Right. [01:21:01] Speaker A: You know, one example I like to give in the book is to start with two neurons that connect to each other through mutual inhibition. [01:21:09] Speaker B: Right? [01:21:10] Speaker A: So what do you get? Right, you can get, you know, say, a half-center oscillator, right? One is active, the other one is not active, and then they switch, like this. And that may be a motif for a central pattern generator.
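A minimal sketch of that two-neuron motif, with one loud assumption: a slow adaptation variable is added to each unit so that the current winner eventually fatigues and the pair alternates in anti-phase. Adaptation is just one standard way to get the switching (synaptic depression or post-inhibitory rebound would also work); all parameters are invented for illustration.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# Two rate units coupled by mutual inhibition, each with slow adaptation.
I_drive, w_inh, g_adapt = 1.0, 2.0, 1.5
tau_r, tau_a = 0.01, 0.2          # fast rates (10 ms), slow adaptation (200 ms)

dt, T = 1e-4, 2.0
steps = int(T / dt)
r = np.array([0.6, 0.4])          # slight asymmetry to break the tie
a = np.zeros(2)
trace = np.empty((steps, 2))

for i in range(steps):
    inputs = I_drive - w_inh * r[::-1] - g_adapt * a   # each unit inhibited by the other
    r += dt / tau_r * (-r + relu(inputs))
    a += dt / tau_a * (-a + r)                         # adaptation tracks activity slowly
    trace[i] = r

# The "winner" alternates repeatedly: a half-center rhythm.
winner = np.argmax(trace, axis=1)
print(f"winner switched {np.count_nonzero(np.diff(winner))} times in {T:.0f} s")
```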
[01:21:28] Speaker B: Right. [01:21:30] Speaker A: But surprisingly, actually, under some conditions on the dynamics of the synaptic interactions, this mutual inhibition can produce perfectly synchronized oscillations in the system. So you need dynamics to really explain the function. And so, you know, I guess this bifurcation in space, I think, is in my mind a way to say: well, the idea of bifurcation is useful if we want to understand the emergence of modularity. Functional modularity. [01:22:12] Speaker B: Right. [01:22:13] Speaker A: In a large scale, multi-regional system. Let me give you another example, just to make it maybe really clear what we mean by functional modularity. We talked about decision making, like the random dots task, right? [01:22:29] Speaker B: Yep. [01:22:30] Speaker A: Imagine that in a difficult example, in a difficult trial, the dots are really random. The evidence is very weak, right? Say it is in favor of, you know, leftward motion. [01:22:44] Speaker C: Barely in favor of leftward motion. [01:22:46] Speaker A: Yeah, but your judgment is rightward motion. You say, I think it's rightward. [01:22:52] Speaker B: Okay. [01:22:54] Speaker A: Now what happens in your brain, right? So you'd say the retina, right, encodes faithfully the physical stimulation. And the retina should have more evidence in favor of leftward motion. Maybe it's the case with V1. Maybe it's the case for MT, which is a specialized visual system for motion. [01:23:20] Speaker C: Information processing along that dorsal stream that you spoke of. [01:23:25] Speaker A: Yeah, but then where, suddenly, do you see the signal about the subjective decision? [01:23:33] Speaker B: Right? [01:23:34] Speaker A: So there are certain areas that are really coding the physical stimulation. [01:23:39] Speaker B: A, A, A. Right. [01:23:41] Speaker A: And then there's some other area that says: no, no, I think it's. Well, maybe it doesn't know A; it says it's B. [01:23:48] Speaker B: Right. [01:23:48] Speaker A: So that really is responsible for your subjective choice. [01:23:54] Speaker C: And so then eventually you get down to the motor neurons that are actuating the action. Right. And at some point in between those. [01:24:05] Speaker B: Yeah. [01:24:07] Speaker A: So that's, you know, I would say that's a question of functional modularity about subjective choice. Let's say, okay, let's define a functional module responsible for subjective choice. [01:24:19] Speaker B: Okay. [01:24:20] Speaker A: And then, repeating what we said earlier, right, you have a canonical circuit, you have all these dense interactions. You can go anywhere by one or two synapses. [01:24:29] Speaker B: Right. [01:24:30] Speaker A: So how do you get this emergence of modularity dedicated to subjective choice? [01:24:36] Speaker B: Right. [01:24:37] Speaker A: So we think that this bifurcation in space could be a way to explain that. [01:24:44] Speaker C: So that makes it sound like bifurcation is a very sensitive thing. You mentioned Eve Marder earlier and her work with rhythmic patterns in the stomatogastric system of things like lobsters. And one of the take-home messages of her work is that there are a lot of different ways to get the same result.
There's a lot of degeneracy, there's a lot of multiple realizability, and I'm trying to think about how that sits with the idea of bifurcation. What you're saying is that bifurcation is a super valuable thing in your system, to be able to change, to flip the state space, to change phases, essentially. Right. But her work shows that you can be in the same phase under lots of different regimes, even if you try to bifurcate it. So how should we think about that? [01:25:41] Speaker A: No, that's interesting. Right, so it's a bit, I guess, more related to the question of redundancy. [01:25:52] Speaker B: Right. [01:25:53] Speaker A: So you want some function to be robust. [01:25:57] Speaker B: Right. [01:25:58] Speaker A: You know, in spite of changes of environment, you know, in the face of perturbations. So actually some people took Eve's findings in terms of multiple realizability. [01:26:16] Speaker B: Right. [01:26:17] Speaker A: The same function can be realized in different ways. That turns out to be a bit too simplistic. So her own work later showed that, in fact, you know, the crabs or lobsters live in different environments. [01:26:32] Speaker B: Right, right. [01:26:33] Speaker A: The temperature changes because of climate change. She actually had a paper on climate change, you know. [01:26:38] Speaker C: Yeah, yeah. [01:26:38] Speaker A: And how they adapt, you know. [01:26:39] Speaker C: She's been on my podcast and she's passionate about that. And she observes it in the ocean where she lives. [01:26:47] Speaker A: Yeah, yeah, exactly. So that's about the usefulness of redundancy. [01:26:54] Speaker B: Right. [01:26:55] Speaker A: Okay. But here, let me put it in a kind of radical way, though I actually believe there's a kernel of truth to it. You can think about bifurcations as a mathematical machinery to create novelty. [01:27:14] Speaker B: Okay, right. [01:27:16] Speaker A: Functional novelty, functional capabilities. So how do we really explain functional novelties? Of course, biological evolution is the ultimate answer. [01:27:29] Speaker B: Right. [01:27:30] Speaker A: But if you want to understand, if you are given the same kind of stuff, so to speak, canonical neural circuits, how can you really explain different functional capabilities in different parts of the brain? Well, maybe all you need is quantitative differences of some biology. [01:27:55] Speaker C: Same stuff. More is different. [01:27:58] Speaker A: Exactly. But different. More. [01:28:02] Speaker B: Right? [01:28:03] Speaker C: Yeah, but it's slightly different. Yeah, exactly. [01:28:08] Speaker A: So if that's true. [01:28:10] Speaker B: Right. [01:28:10] Speaker A: If we can really, as a field, not just my lab, push this idea to see if it really is an interesting idea. [01:28:20] Speaker B: Right. [01:28:21] Speaker A: Maybe we can have a general way to try to understand different kinds of functions. But let me just say one more thing about the robustness. So the tricky, interesting thing about bifurcation in space is that it actually does not require fine tuning. Just as a contrast, if I go back to the single neuron example, you do change your current injection intensity in a careful way until you see this bifurcation. So if you are on one side or the other, you miss it. So in that sense you need fine tuning to really get to the transition point. [01:29:10] Speaker B: Right.
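To see that single-neuron example in miniature, here is a sketch that sweeps the injected current in a FitzHugh-Nagumo model, a two-variable reduction in the same family as Hodgkin-Huxley (the swap to the reduced model is ours, for brevity; the parameters are the classic textbook ones). Below a critical current the neuron sits at a fixed point; above it, a limit cycle, that is, repetitive firing, appears.

```python
import numpy as np

# FitzHugh-Nagumo neuron: v is the fast voltage-like variable, w the slow
# recovery variable. Classic parameters; time is in dimensionless units.
a, b, eps = 0.7, 0.8, 0.08
dt, T = 0.01, 400.0

def oscillation_amplitude(I):
    """Peak-to-peak amplitude of v after transients, for injected current I."""
    v, w = -1.0, -0.5
    n = int(T / dt)
    vs = np.empty(n)
    for i in range(n):
        v += dt * (v - v**3 / 3.0 - w + I)
        w += dt * eps * (v + a - b * w)
        vs[i] = v
    tail = vs[n // 2:]               # discard the first half as transient
    return tail.max() - tail.min()   # ~0 at a fixed point, large on a limit cycle

for I in np.arange(0.20, 0.51, 0.05):
    amp = oscillation_amplitude(I)
    label = "repetitive firing" if amp > 1.0 else "steady state"
    print(f"I = {I:.2f}: amplitude {amp:.2f} -> {label}")
```

With these parameters the transition falls between I = 0.30 and I = 0.35, and locating it precisely requires exactly the careful tuning of the injected current described above.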
[01:29:11] Speaker A: So somehow the experimenter has to change something by hand. [01:29:16] Speaker B: Right. [01:29:17] Speaker A: Okay. Now bifurcation in space is something that happens somewhere in the cortical tissue. [01:29:26] Speaker C: Is that necessarily true? Does it happen somewhere? Because it could be just distributed. [01:29:30] Speaker B: Right. [01:29:32] Speaker C: Do you think it happens like in this much of cortex or in this much of cortex? Okay, maybe it's a moot point. [01:29:40] Speaker A: No, no, it's not. It's something, I think, that's still not well understood, by the way. It's right now still a theoretical proposal, I would say. [01:29:52] Speaker B: Right. [01:29:53] Speaker A: We have to come down to specific predictions that should be testable experimentally. [01:29:58] Speaker B: Right. [01:30:00] Speaker A: And so if you think about the cortex as a two dimensional spatial system. [01:30:06] Speaker B: Right. [01:30:07] Speaker A: Where would the bifurcation transition be located? I don't know. What's the contour of the transition? If that's what you're asking. [01:30:18] Speaker B: Right. [01:30:18] Speaker C: Yeah. [01:30:19] Speaker A: But the key thing is that somehow it's localized, because it has to be able to separate, say, those areas that are engaged in working memory from those that are not. I guess, alternatively, you'd say everything is everywhere. [01:30:36] Speaker B: So. Right. [01:30:36] Speaker A: Working memory is everywhere. So I'm actually taking the opposing view. Let's say you do have a module for working memory; it's not the same everywhere. So take that view, at least, and see how far it's compatible with, and can explain, the data. [01:30:56] Speaker B: Right. [01:30:57] Speaker A: Then there's some localization in space that separates this module, that defines this module. Let me just say one more thing, because you asked about fine tuning. So that transition in space is very robust when you change any parameter in your model, for example. I would say in the real brain system, maybe exactly where that transition occurs may be shifted. Okay, but the phenomenon, this transition itself, will not require any fine tuning of parameters. [01:31:37] Speaker B: Right. [01:31:38] Speaker A: Does that make sense? [01:31:40] Speaker C: Yeah, yeah. I mean, this goes to, like, how you define an area, and what you're saying is the borders can kind of shift, and you don't need to be so precise with whether it's 100 neurons or 101 neurons that are doing the task, implementing the function. [01:31:58] Speaker B: Yeah. [01:31:59] Speaker A: Or maybe even at the level of areas, I don't know. So depending on behavioral demand, for example, you may want certain areas to be engaged in working memory. In some other tasks you may not. So that boundary can shift according to behavioral demand, in principle. I'm doing pure speculation now. [01:32:20] Speaker C: Yeah, yeah. But what you're saying is, like, by boundary you mean different modules active together, like different subsets of areas active and co-active. [01:32:33] Speaker A: Right, exactly. Yeah. So, I mean, that certainly is the case in the model. So in that sense, we're still wrapping our minds around it, because bifurcation in space is so new that I think we need to do more work to really understand it. And we are talking to experimentalists, trying to test some specific predictions from that.
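Here is a cartoon of that "bifurcation in space" idea in code, under loud assumptions: a row of local circuits that are identical except for a smoothly graded recurrent strength, each simulated independently (the actual proposal involves an interconnected large-scale network; the uncoupled version, reusing the toy working-memory circuit sketched earlier, is just for clarity). Every area receives the same brief cue; only areas past the critical recurrent strength hold it as persistent activity, and nudging a global parameter shifts the boundary without abolishing the sharp transition.

```python
import numpy as np

def f(x, a=270.0, b=108.0, d=0.154):
    y = a * x - b
    return y / (1.0 - np.exp(-d * y))

def final_rate(J, I0, tau_s=0.1, gamma=0.641, dt=1e-3, T=6.0):
    """Firing rate long after a brief cue, for one circuit with recurrent strength J."""
    s = 0.05
    for step in range(int(T / dt)):
        t = step * dt
        I_cue = 0.10 if 0.5 <= t < 0.7 else 0.0   # the same transient cue everywhere
        r = f(J * s + I0 + I_cue)
        s += dt * (-s / tau_s + (1.0 - s) * gamma * r)
    return r

J_gradient = np.linspace(0.20, 0.44, 16)   # smooth sensory-to-association gradient

for I0 in (0.30, 0.29):                    # nudge a global parameter between runs
    memory = ["X" if final_rate(J, I0) > 10.0 else "." for J in J_gradient]
    print(f"I0 = {I0:.2f}:", "".join(memory))   # X = holds the cue, . = forgets it
# A sharp X/. boundary appears where the gradient crosses the bifurcation;
# changing I0 shifts where the boundary sits, but the transition stays sharp,
# with no fine tuning of any parameter.
```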
[01:33:03] Speaker C: So when you say it's early days in dynamical systems usage to explain brains, you mean because we're halfway there, because we need to explain bifurcations and study them. [01:33:19] Speaker A: Yeah. Again, I think bifurcation should be better known and maybe useful for answering certain questions. [01:33:30] Speaker B: Right. [01:33:30] Speaker A: So, if I try to be specific, I'm actually really trying to stay away from sweeping statements. [01:33:40] Speaker B: Right. [01:33:41] Speaker A: But still. And again, I already made a sweeping statement. Maybe this is one way to create a novel functionality. That's pretty sweeping. [01:33:51] Speaker C: That is pretty sweeping. [01:33:55] Speaker A: Maybe it's worth also mentioning why maybe we should stay away from bifurcations. The brain is so complicated, we should embrace different perspectives. And it's important for us to have many different approaches to understand how the brain works. One approach is control theory. [01:34:22] Speaker B: Yeah. [01:34:23] Speaker A: Okay. If you really think about the brain as a machine, you know, from the engineering perspective, like you do in motor control. [01:34:32] Speaker B: Right. [01:34:34] Speaker A: You really want to stay away from bifurcation. [01:34:37] Speaker B: Right. [01:34:38] Speaker A: Because that always implies some instability. [01:34:41] Speaker B: Right. [01:34:42] Speaker A: Something that's unpredictable. [01:34:44] Speaker B: Right. [01:34:46] Speaker A: So for certain things, we probably rightly try to stay away from instability and bifurcation. [01:34:56] Speaker B: Right. [01:34:56] Speaker C: Oh, that's interesting. Like the cybernetic view. And maybe cybernetics is coming back too. But maybe for, like, your sensory and motor systems, you really want to be cybernetic, you really want to be non-bifurcating. But for everything else cognitive, everything that we value as our high human cognitive abilities, maybe bifurcations are actually beneficial. Is that another sweeping statement that you're going to make? [01:35:20] Speaker A: It's one way, certainly one way to try to understand it. [01:35:26] Speaker C: That's interesting. [01:35:27] Speaker A: Now let me make one more sweeping statement. [01:35:31] Speaker C: Make four more. Come on. [01:35:32] Speaker A: Yeah, well, one more serious statement is: how do you explain psychiatric disorders? Let me give you an example. [01:35:44] Speaker B: Right. [01:35:46] Speaker A: In spite of enormous efforts in neuroscience and in clinical research. By the way, there's a new book that just came out by Nicole Rust. You should get her on your podcast. [01:36:02] Speaker C: It'll be released next Wednesday, her episode, which I recorded recently. [01:36:07] Speaker A: Ah, indeed. And her book is about how basic research in neuroscience should really better meet the challenge of mental health. [01:36:22] Speaker B: Right. [01:36:23] Speaker C: And she embraces dynamical systems and complexity in the book, yeah. [01:36:28] Speaker A: Did she mention bifurcation? [01:36:31] Speaker C: Did she? In the book? I don't believe she did. [01:36:33] Speaker A: Okay, so here's the thing. So we can ask the question, right? If you really compare, quote unquote, normal subjects and people afflicted by some disorder like schizophrenia, right, are you expecting quantitative differences in biological abnormalities, or are you expecting qualitatively totally different kinds of things?
[01:37:03] Speaker C: Doesn't it depend on the disorder? Because there are gradients in lots of disorders, right? [01:37:08] Speaker A: Maybe. Yeah. [01:37:09] Speaker B: Right. [01:37:10] Speaker A: But at least it's a possibility, right? We're in the realm of speculation here, right? There's a possibility that all you need is quantitative differences in biology to explain qualitatively different behavior. [01:37:30] Speaker B: Right. [01:37:33] Speaker A: So maybe there's not enough control, and then people become impulsive. [01:37:39] Speaker B: Right. [01:37:40] Speaker A: Things like that. So that could also potentially, at least, be a useful thing to think about. [01:37:49] Speaker B: Right. [01:37:50] Speaker A: When we try, you know, there's this nascent field called computational psychiatry. [01:37:54] Speaker B: Right. [01:37:55] Speaker A: And maybe that's one, you know, angle at least. Right. To think about. [01:38:01] Speaker C: So that's another sweeping statement. And I like that you cautioned against it, the way Claude Shannon cautioned against applying information theory to everything. Because, you know, right after he invented information theory, everyone was applying it to everything. And he said, no, no, this is a very specific thing. It shouldn't be applied to everything. And you made a few sweeping statements about bifurcation, but you also cautioned against it being the solution to everything, essentially. [01:38:28] Speaker A: Of course, yeah. There are definitely, again, different perspectives. [01:38:34] Speaker B: Right. [01:38:35] Speaker A: Different angles. But I think it's part of the theory of dynamical systems. [01:38:40] Speaker B: Right. [01:38:41] Speaker A: By the way, just to mention a detail, you will never get bifurcation if your system is linear. [01:38:49] Speaker B: Right. [01:38:50] Speaker A: It has to be a nonlinear dynamical system. So it's part of the nonlinear dynamical systems perspective, and I just feel like it's, you know, worth mentioning that bifurcation needs to be more widely known and appreciated. [01:39:08] Speaker C: I was just at a conference and we were talking about human cognition and how we can scale brain data to better understand human specific cognition. And I was asking someone, what do you think, if you added more, what's beyond our current brain evolution? If you expanded it further, what would that do? [01:39:28] Speaker B: Right. And. [01:39:29] Speaker C: And I'm not asking you that. What I'm asking you is, let's say, so you come from the dynamical systems background, you've seen it now flourishing and very embraced. And let's say we add bifurcation, and that explains a lot. What would be. And I've been swimming in this, you know, manifold. Everything's a manifold, everything's subspaces. And so now I'm thinking in the same way of, like, information theory. And we were just talking about bifurcation. It seems like everything's a manifold, and that's not right. So what would be beyond dynamical systems thinking? I guess we don't have it yet. [01:40:08] Speaker A: Yeah, it's a good question. I don't know the answer to it. I think we should be humble and be modest. [01:40:15] Speaker B: Right. So. [01:40:19] Speaker A: The truth is that we don't know many things about the brain. About PFC. PFC used to be called a riddle in psychology. I think it's still kind of mysterious. By the way, I tend to think that the book that I wrote really covers very elemental cognitive building blocks. [01:40:48] Speaker B: Right.
[01:40:49] Speaker A: Those are the ones. Even the list of 26 tasks I summarized in the last chapter really are the ones that can be studied with non-human animals, without language, in relatively simple settings. [01:41:08] Speaker B: Right. [01:41:11] Speaker A: And what's really exciting is that most of them can actually now be studied in mice even, not just primates. That's great. But the question is how far we can go in that direction. I hope that at least what we learn is going to be useful to understand more and more complicated mental processes, and even fluid intelligence. So what is fluid intelligence? [01:41:52] Speaker C: Raven's matrices. [01:41:54] Speaker A: Raven's matrices, for example. Right. But I guess one thing that I'm becoming more and more interested in is compositionality. That's the idea that you learn some primitives, some building blocks. [01:42:11] Speaker B: Right. [01:42:12] Speaker A: And then you learn some rules, some grammar. [01:42:15] Speaker B: Right. [01:42:16] Speaker A: Okay. And then, depending on what problem you try to solve, you flexibly combine different building blocks according to some syntax to create arbitrarily complicated sequences of things. I don't know if you know this. It's interesting. People who have frontal damage have difficulty cooking a meal. Because when you cook a meal, you have to be creative, compositional. [01:42:51] Speaker B: Yeah. Right. [01:42:53] Speaker A: You kind of plan, and then you organize your task into subtasks, and you have a goal in your mind all the time. [01:43:05] Speaker B: Right. [01:43:06] Speaker A: You go through the steps of a subtask, then go back to the task, go to the second subtask, et cetera, et cetera. Very often things don't happen the way you planned. Then you have to come up with new ideas to solve new problems until you reach the goal. [01:43:26] Speaker B: Right. [01:43:27] Speaker A: And people who have frontal lobe damage couldn't do this kind of task. [01:43:32] Speaker B: Right. [01:43:32] Speaker A: So if we could really understand how the brain does it, I think that would be amazing. [01:43:40] Speaker C: Okay, so you mentioned mice earlier, and a lot of the tasks that you list in the last chapter can be assessed using a mouse model. I'm currently working with mice, but I come from a non-human primate background, and the reason why I was in a non-human primate lab is because I wanted to study something as close as I could to subjective experience or consciousness. And then I realized, well, it's a fool's errand, basically, but that's what my PhD is. But what do you think about. So mice have become popular, more popular again, in terms of, like, oh, mice can do these cognitive tasks. But how far do you think we can go using rodents in general to study and understand and explain these higher cognitive functions that we want to know about? Like, is decision making in a mouse the same bifurcation as it is in a human brain, for example? What's your viewpoint on using mice for these cognitive studies? [01:44:47] Speaker A: See, I try to avoid, you know, the subjective mind kind of question, because you're bright and I'm. [01:44:52] Speaker C: I'm not. [01:44:53] Speaker B: Yeah. [01:44:54] Speaker A: Let me actually say something about it. So, you know, I don't expect the mice to cook a meal for us. So that's out of the question. Right. But I do think that certain mental, subjective, you know, experiences can be studied with mice.
You asked about decision making. So the simplest decision making is actually detection. So you show a visual stimulus, and all you need to do is to say, I see it or I don't see it. But the trick is that you change the contrast of your stimulus. So there's this psychometric function. [01:45:40] Speaker B: Right. [01:45:41] Speaker A: As a function of the contrast. When the contrast is low, you don't see it. When the contrast is high, it's very easy to see, and it's nonlinear. So it's kind of like a sigmoid curve. [01:45:51] Speaker B: Right. [01:45:52] Speaker A: So if you are right at the detection threshold, with the same physical stimulation, the same photons onto your retina, sometimes you see it, sometimes you don't see it. [01:46:04] Speaker B: Right? Right. Okay. [01:46:07] Speaker A: Now that's subjective, I guess, you know, awareness of the stimulus. [01:46:13] Speaker B: Right. [01:46:14] Speaker A: And people do this kind of experiment. You had Pieter Roelfsema on your show. He did a very interesting monkey experiment using exactly that paradigm. And sometimes they don't show the stimulus. And most of the time the animal says, I don't see it. But occasionally the animal says, I see it. That's a false alarm, right? [01:46:38] Speaker B: Yep. [01:46:39] Speaker A: So what they found is that early sensory neurons essentially reflect the physical stimulation. Hit trial, miss trial, more or less the same response; no response when you don't show the stimulus. But PFC seems to reflect subjective awareness, with a similar kind of activity level, you know, in the hit trials and in the false alarms. [01:47:12] Speaker B: Right. [01:47:14] Speaker A: So, you know, if you agree that's a simple way to look at subjective awareness, I think that can be done with mice. [01:47:25] Speaker B: Yeah. [01:47:25] Speaker C: But their prefrontal cortex and, for example, the gradient of inhibitory neurons from visual cortex up toward their frontal cortex is different. Right? It's like the bifurcations are going to be all different, even though it's made of the same stuff. That's the worry, right? That we're. [01:47:48] Speaker A: Well, it could be different. [01:47:50] Speaker B: Right. [01:47:50] Speaker A: In the details. But again, there should be a module of areas somehow responsible for subjective choice. [01:48:03] Speaker B: Right. [01:48:04] Speaker C: So you think if artificial intelligence has a prefrontal cortex, it'll be subjective? This is what we'll end on. I know you're hesitant to talk about this, but before we do that, is there anything else that I haven't asked you that you want to share? [01:48:25] Speaker A: No, I think we covered quite a bit. [01:48:27] Speaker B: Right? Yeah. Okay. [01:48:29] Speaker C: Perhaps we'll end on this, if you're willing. [01:48:34] Speaker A: Well, frankly, I have not thought about the question of awareness in machines. [01:48:40] Speaker C: Yeah, that was kind of a joke. But you wrote to me, AI needs a prefrontal cortex. So why would that be? [01:48:48] Speaker A: Right. So that's more about flexibility and fluid intelligence. Okay. So, for example, machines are not so great at multitasking. [01:48:59] Speaker B: Right. [01:48:59] Speaker A: So these days, if you try to train, say, a robot, or software, to do more than one task, usually what you do is just add, you know, different cost functions. You add them up. [01:49:13] Speaker B: Right. [01:49:13] Speaker A: For different tasks, one for each.
That doesn't work. [01:49:17] Speaker B: Right. [01:49:18] Speaker A: So PFC is actually crucial for multitasking. If we learn about how PFC does it, perhaps there are some new insights that we can translate, through computational modeling, to build smart machines capable of doing multiple things. I would argue, actually, that thinking, in a way intelligence, solving new problems, is a bit similar to generating complicated sequences of events, except that they are not motor acts; they are internal events in our mind. So if we really understand this kind of flexible generation of sequences, with recursiveness, with compositionality, I think we can go a long way in thinking about smart machines. [01:50:23] Speaker C: Okay, Xiao-Jing. It is a joyous and also deep, historical and modern view and recounting and perspective on theoretical neuroscience, and it's an excellent resource. It has it all. It reminds me how long I've been in this field and how little I still know. For one thing, you'll be modest. [01:50:49] Speaker B: Yeah. [01:50:50] Speaker C: Okay. So anyway, great to see you again. Thank you for coming on, and it's been a pleasure having you on. [01:50:56] Speaker A: It was a real pleasure. Thank you. [01:51:06] Speaker C: Brain Inspired is powered by the Transmitter, an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. If you value Brain Inspired, support it through Patreon to access full-length episodes, join our Discord community, and even influence who I invite to the podcast. Go to braininspired.co to learn more. The music you hear is a little slow, jazzy blues performed by my friend Kyle Donovan. Thank you for your support. See you next time.
