Episode Transcript
[00:00:03] Speaker A: My view is that real neurons are smarter than the McCulloch and Pitts neurons.
So sometimes I refer to a new conceptualization of neurons as the smart neuron.
Maybe neurons' outputs don't just predict their inputs; maybe they can influence their inputs.
The real million dollar question here that I think you're alluding to is how to go between levels.
[00:00:39] Speaker B: Oh, God, yeah. I mean, that's the dream.
[00:00:42] Speaker A: That's the dream.
And of course, I don't have an answer here. I think this is probably the most fascinating question in neuroscience, because.
[00:00:59] Speaker B: This is Brain Inspired, powered by The Transmitter. Since the 1940s and 50s, back at the origins of what we now think of as artificial intelligence, there have been lots of ways of conceiving what it is that brains do or what the function of the brain is. One of those conceptions, going back to cybernetics, is that the brain is a controller that operates under the principles of feedback control. This view has been carried down in various forms to the present day. Also, since that same time period, back at the origins of artificial intelligence, when McCulloch and Pitts suggested that single neurons are logical devices, there have been lots of ways of conceiving what it is that single neurons do. Are they logical operators? Do they each represent something special? Are they trying to maximize efficiency? And so on. Dmitri Chklovskii, my guest today, who goes by Mitya, runs the Neural Circuits and Algorithms Lab at the Flatiron Institute. Mitya believes that single neurons themselves are each individual controllers. They are smart agents, each trying to predict their inputs, like in predictive processing, but also functioning as an optimal feedback controller. So we talk about the historical conceptions of the function of single neurons and how Mitya's account differs. We talk about how to think of single neurons versus populations of neurons. Some of the neuroscience findings that seem to support Mitya's account. The control algorithm that simplifies the neuron's otherwise impossible job of implementing this feedback control, and other various topics. We also discuss Mitya's early interests. He has a background in physics and engineering, and the way he got into neuroscience was an interest in figuring out how to wire up our brains efficiently, given the limited amount of space in our craniums. Obviously, evolution produced its own solutions for this problem. So this pursuit led Mitya to the study of the C. elegans worm, because its connectome was nearly complete.
Actually, they thought it was complete. Turned out it was nearly complete. And Mitya and his team helped complete the connectome so that he would have the whole wiring diagram to study. So we talk about that work and what knowing the whole connectome of C. elegans has and has not taught us about how brains work. As always, I link to Mitya's work and his lab and his information in the show notes at braininspired.co/podcast/205. Also, as always, thank you to my Patreon supporters. Consider supporting this podcast if you value what I do here and want access to the full archive, all the full episodes and so on. So thank you, Patreon supporters. Okay, here's Mitya.
[00:04:10] Speaker A: Okay, just one technical issue to get out of the way. You know how to pronounce my last name, right?
Yeah. So think of it as sh rather than ch.
[00:04:25] Speaker B: But what is Chklovskii? What's the background?
[00:04:28] Speaker A: Oh, there is a town in Belarus called Shklov.
[00:04:32] Speaker B: Oh, cool. So you're of that town.
[00:04:35] Speaker A: Yeah, well, some very long time ago, probably.
[00:04:38] Speaker B: Yeah, sure. Okay, cool.
All right, so I'm going to start here.
[00:04:44] Speaker A: All right.
[00:04:46] Speaker B: You are, what percentage of modern computational neuroscientists do you think come from a physics background?
[00:04:55] Speaker A: That's a very good question. I have not formally done the statistical analysis, but certainly among the people that I hold in high regard, probably 50% do.
[00:05:11] Speaker B: Oh, wow, okay. So you do have that background in physics. And do I have this right, that was it, your second postdoc when you started getting into neuroscience? How did that come about?
[00:05:21] Speaker A: Yeah, that's true. My undergraduate was a combination of physics and engineering studies, and then I did a PhD in theoretical physics. And I was always torn between, you know, the rigor and the intellectual challenges of theoretical physics and sort of the desire to solve some real world problems. And after I did a first postdoc in physics, I actually liked it very much because of, you know, the freedom I had to do research, so much that I actually turned down faculty offers in physics and took another postdoc in neuroscience, which was my way to continue, you know, doing research without, you know, constraints. And so then I kind of ended up as a faculty in neuroscience, and I never regretted the switch.
[00:06:12] Speaker B: But why neuroscience? You could have gone anywhere, right?
[00:06:15] Speaker A: That's a good question. I actually looked at a lot of fields and basically I decided that neuroscience was exciting enough because, you know, I was motivated by solving big questions. And, you know, how does the brain work? Seems like one of those.
[00:06:35] Speaker B: How, how does the universe work? Is another one. We still haven't figured that out, have we?
[00:06:39] Speaker A: Well, that's absolutely true, but the problem was, and still is, in physics, that sometimes those studies become very theoretical, because we are limited in the kind of experiments we can do. And that has not changed very significantly in physics since I switched. But in neuroscience, I would say the progress has been immense. I remember, you know, when I was a postdoc in neuroscience, we were sitting around having beer and talking about, you know, what if I could do this experiment? What if I could record at the same time from 10 neurons.
[00:07:13] Speaker B: Oh, my God.
[00:07:15] Speaker A: Yeah. And. And, you know, then we could really understand how it works. And in the 30 years that passed, you know, basically, like these days, from my perspective, you know, they can do any experiment you want. The big question is, like, what experiments should be done.
[00:07:31] Speaker B: Right.
[00:07:32] Speaker A: That's my perspective at least.
[00:07:34] Speaker B: I'm kind of interested. You know, there are so many physicists who come into neuroscience over the years, over multiple decades now. And I've told this story multiple times before, but I cannot source the conference that I was at. But I remember the opening keynote was a physics or no, was a molecular biologist, I think, and he was saying, we need to give up and let the physicists come in and solve this for us because we are at the edge of our capabilities here.
And that was 15.
Yeah, 15 or so years ago. Maybe 15 to 20 years ago. And of course, that raised the hair on the back of my neck, like, how dare the physicists come in? You know. But do you kind of agree with that? And what. What approach do physicists in general bring that us wet-lab, more biological sciences folks have been traditionally missing out on? And then where do you sit in that?
[00:08:33] Speaker A: Yeah, that's a really good question. I think that.
What, what do physicists bring? I mean, on the lighter side, you know, physicists think they can solve any problem.
[00:08:43] Speaker B: Is that true? That's what I think. Is that true?
[00:08:45] Speaker A: Well, they're very arrogant, but they're right. Right. As a physicist, I can say that. But the problem is that physicists are trained to figure out how nature works, and they're not trained to figure out how to build things. That has been sort of the domain of engineers historically, and I think.
So the strength of physicists is that they can really come in and understand a different field and formulate a research problem that can be solved in terms of gaining understanding.
[00:09:27] Speaker B: But that's from a particular approach. The physicist's approach is a particular approach. And you think it's the right approach to understand anything?
[00:09:35] Speaker A: Well, I think that's the strength of the approach, that knowing what understanding means in this sort of scientific sense. But the shortcoming of physicists, and again, as a former physicist, I think I can criticize us, is that physicists are not trained to build stuff that works.
[00:09:55] Speaker B: But you're an engineer by background also.
[00:09:57] Speaker A: And engineers are okay. And I actually, you know, as I was telling you, you know, my undergraduate studies were half engineering, and so I kind of have this interest in building things. And in the end, you know, Richard Feynman, you know, one of the famous physicists of the 20th century, said, I don't think that you really understand something until you can build it.
[00:10:23] Speaker B: Yeah, what I cannot build, I do not understand. No one gets the quote exactly right. Which is crazy because it's written on his chalkboard and people can look it up really quickly. Yeah, right.
[00:10:33] Speaker A: So basically, I completely agree with this. But physicists, when I was trained as a theoretical physicist, at least, I mean, experimental physicists are different. Okay, they have to build equipment. But theoretical physicists, they're not trained this way. And that's where things kind of become tricky. Because I think the brain was built by evolution to solve some practical problems.
And to understand how to do that in a robust way that would withstand an adversarial environment is an important consideration.
And so I think that the best approach, in my mind, is some kind of fusion of a physics background with engineering skills. And of course, it has to be heavily grounded in biology. I don't believe in this approach, and that's where the arrogance comes in, unfortunately, right, that a physicist can just come in and say, okay, so what is the problem I need to solve? No, I think that the right way is to really understand the biology as well as biologists do, or better, and only then can you formulate the problem and solve it.
[00:11:52] Speaker B: So what would your advice be to a biologist who wants to. I mean, do they need to go back and get a theoretical physics degree and an engineering degree? Like, who wants to sort of join that approach, that framework for thinking?
[00:12:07] Speaker A: Well, I, you know, I think. I think biologists are doing okay. You know, I think there are a lot of things to do for biologists. They don't need this. But if you're talking about, like, solving the brain on the level that would allow you to build, you know, an artificial version of it, then I think you need a fusion of those three approaches. Biology, engineering, and physics.
[00:12:32] Speaker B: All right, so since you are talking about building a brain, we're going to be talking about your conception of single neurons, essentially what you've come to. But before we get to that, just to round out your sort of approach and conceptual overview on how things stand and where we are and where we need to go. You didn't mention computer science, and maybe that's all you need to build AI these days. Right. So my question is, you know, twofold: what do you see that might be missing in current AI, and/or in current neuroscience, from this perspective?
[00:13:11] Speaker A: Yeah, so that's actually a very important question, I think, and I'm a big fan of the recent developments in AI and I'm a daily user of ChatGPT and I'm in awe of this technology.
And more than that, if you had asked me, or my more computer science accredited friends, five years ago, I don't think anyone would have predicted that we would have something like this today. This is really living in the future.
It's a fantastic technology, but the question at hand is, is this emulating how the brain works?
And that I think, is a completely different question.
And with all due respect to computer scientists, I think they are trained to build things that work.
[00:14:12] Speaker B: Again, they're more on the engineering side.
[00:14:14] Speaker A: On the engineering side. Less so to, you know, understand how living things work.
[00:14:21] Speaker B: But does it matter that what they're building does not emulate brain function except on the most abstract level? Does it matter, to whom?
[00:14:31] Speaker A: Right. I mean, you know, it seems like companies, you know, like OpenAI and Anthropic are doing pretty well, so for them it probably doesn't matter. For me it matters, because my goal is to understand how the brain works.
[00:14:47] Speaker B: I wanted to talk a little bit about C. elegans, the, what is it, 302-neuron organism? 302 for the females, is that right?
[00:14:53] Speaker A: Yes, yes, exactly.
[00:14:55] Speaker B: And so we have the complete connectome for C. elegans, thanks to people like you. So you've worked on connectomes and all the structural stuff.
Did you at one point think, well, when we have the structure, we'll know, we'll understand it? And then you came to realize that we need something beyond structure? Or how did your thinking about connectomes evolve to how you think about them now?
[00:15:24] Speaker A: Yeah, glad you asked this question. You know, I think that my, my path in biology has been somewhat winding and I, I started out as a physicist, as we discussed, and I just wanted to see what I could do.
And you know, one of the things is that I was fascinated by evolution and, you know, how can you do something related to evolution? Well, since we don't have the equations that describe like how the brain really thinks, maybe we can come up with a simpler framework of equations that explain the structure of the brain.
Sort of just scratching on the surface of the function. And for a few years I was working on the topic which people call wiring economy, which is basically the idea that evolution had to build the brain, which is a very highly interconnected structure. So it has a lot of wires, axons and dendrites. And under certain constraints, right. There is a volume constraint, you know, to the brain, you know, you have to be born. And there are metabolic constraints, time constraints and so on. And so to solve the sort of packing problem, to arrange all the wires in the brain, is actually very difficult. It's akin to arranging components on a semiconductor chip, like transistors. And this is like a multi billion dollar industry, how to arrange the elements of a semiconductor chip in the most optimal way to economize on wiring. And so basically what I was doing is trying to understand the layout of brain structures, the shapes of neurons, from the perspective of economizing on wiring.
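To make the wiring economy idea concrete, here is a minimal sketch, assuming a quadratic wire-length cost (an illustrative assumption; the cost functions in the published work may differ). With a quadratic cost, the one-dimensional layout minimizing total wiring, under constraints that rule out the trivial everyone-at-one-point solution, is an eigenvector of the connectome's graph Laplacian:

```python
# Minimal sketch of the wiring economy idea: place neurons so that total
# connection-weighted squared wire length is minimized. The quadratic cost
# and the toy connectome are illustrative assumptions, not the published model.
import numpy as np

def optimal_layout_1d(W):
    """Positions x minimizing sum_ij W[i,j] * (x[i] - x[j])**2, with fixed
    scale and zero mean (ruling out the trivial all-in-one-spot layout).
    The solution is the Fiedler vector of the graph Laplacian L = D - W."""
    W = (W + W.T) / 2                       # symmetrize the wiring diagram
    L = np.diag(W.sum(axis=1)) - W          # graph Laplacian
    _, eigvecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    return eigvecs[:, 1]                    # second-smallest eigenvector

# Toy "connectome": a chain of 5 neurons plus one long-range connection.
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = 1.0
W[0, 4] = 0.5
print(np.round(optimal_layout_1d(W), 3))    # strongly coupled pairs end up close
```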
And I was at a place called Cold Spring Harbor Laboratory, and one of my biologist colleagues said, well, you know, this is all just so theoretical. Why don't you just test if this wiring economy thing is true? And I said, well, I would love to test it, but you know, there is no circuit for which we know the full connectome. Of course, the word connectome then wasn't used, but the wiring diagram, okay. And he said, well, actually there is one. It's called C. elegans, it has 302 neurons. And I said, really? Yeah. And there are these people in Cambridge that reconstructed the wiring diagram, published in the 80s, so you can use it. So I had this great student, Beth Chen, and I said, you know, Beth, why don't you go get the connectome and optimize the layout and see whether you can explain where the neurons are actually located in the worm.
And she comes back and she says, well, they don't have a full connectome, because, you know, I'm trying to use their wiring diagram and all the, you know, neurons collapse. One majority collapses towards the head and the other collapses towards the tail.
And then I looked it up, and the connections were not finished.
[00:18:33] Speaker B: They had the. So they had, like, the skeleton, but not the, what, the wires?
[00:18:39] Speaker A: They basically did 90% of the work, and the most interesting part was the head and the tail, where the majority of synapses are. But they didn't go all the way through the body and link them up.
[00:18:52] Speaker B: I see, okay. They weren't concerned with wiring length. Is that the part of the issue or.
[00:18:58] Speaker A: No, no, they just, they were doing really hard work, and it was all manual in those days, you know, with, you know, electron micrographs on film and, you know, tracing them by hand. So it was very, very difficult and hard work. And they'd been doing it for many years, and they did the most essential part. And then the part that is kind of boring, which is like the body segments (there are no real segments, but a more or less stereotypical structure along the body), they did not complete. And at that time DNA sequencing became feasible, and so the majority of them switched to DNA sequencing.
[00:19:37] Speaker B: Okay.
[00:19:38] Speaker A: And so they just, like, published what they had. And in the C. elegans world, most people didn't actually realize that it wasn't finished. And so my student, Beth Chen, spent more than a year completing their work, basically using their micrographs, which happened to be archived in New York at Albert Einstein College of Medicine, where she would go, you know, almost every day. And, you know, some of the materials were even there, but they were so old and brittle that the samples would deteriorate while they were being imaged under the electron microscope. Sort of like, you know, quantum information, right? It would deteriorate once you scan it. Anyway, she finished, and that's how the first complete connectome was assembled and published.
[00:20:35] Speaker B: That's crazy to think about. I mean, yeah, I mean that wasn't even that long ago and that's almost child's play compared to what they're doing today. But it was so much work.
[00:20:44] Speaker A: Yeah, of course.
[00:20:46] Speaker B: So you came to that through your ideals of optimization and efficiency and well, I guess just through that lens, huh?
[00:21:00] Speaker A: Yes.
[00:21:01] Speaker B: Okay, so then you. So now you've completed the C. elegans connectome, and then your career is done, right? That's what you think.
[00:21:09] Speaker A: Yeah.
Ready for retirement. So, you know, of course, now that we reconstructed the connectome, the arrogant physicist in me is like, oh, okay, now we can figure out how it works.
[00:21:25] Speaker B: Okay.
[00:21:26] Speaker A: And it was a very interesting process, and, you know, this work is highly cited. We analyzed the C. elegans connectome to death with all the possible approaches of what I think was, or maybe still is, called network science, you know, these methods of statistical analysis of networks.
And we applied every method that we could find to analyze the C. elegans connectome, and we made statistical discoveries for sure.
And some of them have been since replicated in other species including, you know.
[00:22:04] Speaker B: Mammalian brains and some scale free things. Yeah, that's right.
[00:22:07] Speaker A: Scale-free properties, network motifs, the distribution of synapses, the log-normal distribution of synaptic strengths and stuff like that. So there have been a lot of statistical discoveries. But on the other side, I don't think we made any real progress in terms of understanding how C. elegans computes and how it generates behavior.
[00:22:32] Speaker B: So that's still kind of. I'm not sure how you feel about this, but it's not, I won't say it's the butt of jokes about neuroscience, but it kind of is a dig at neuroscience. People all the time say, oh, you know, we have the complete connectome of C. elegans and we still don't know how C. elegans works. So neuroscience, you know, is still lagging in terms of figuring out how structure relates to function.
So how do you feel about that? That that's the go to that people often use, right?
[00:23:02] Speaker A: Absolutely. It's a fair criticism, and I spent a lot of time perplexed by that very question, because I felt that, you know, I thought as a physicist, you know, C. elegans is the hydrogen atom of neuroscience. Right. And we're just going to discover the real principles here. And we didn't, not just based on the connectome. And so, you know, the kind of arguments that people give for why it is so hard for C. elegans are like, you know, physiology actually is very difficult in C. elegans, just because it's enclosed in this cuticle and it's hard to penetrate with electrodes without blowing it up.
[00:23:42] Speaker B: And so the neurons don't spike, right?
[00:23:45] Speaker A: The neurons. Well, we have to be careful here. There are no sodium action potentials, but there are calcium spikelets. Okay, not everywhere, I think, but there is a combination of graded potentials and calcium spikelets in neurons, but it's very difficult to record. So at the time when we were doing our work, there was very little physiology, period. Now there is a lot more, but most of it is optophysiology, like, based on calcium imaging. And so, you know, it's better, but it's still just calcium. Right. It's not voltage really. And people are starting to do voltage dyes and so on. So the situation is slowly improving. But this is a big detriment.
And, you know, just to give you an example of why this is so, so hard: you know, in vertebrate neuroscience, we used to think that each neuron produces a unique output spike train that is then distributed to all of the downstream neurons. Right. So there is one signal that is communicated downstream. And that seems to not be true in C. elegans, because there is no clear separation of its neurons into axons and dendrites.
[00:25:19] Speaker B: Does C. elegans have small world motifs? I mean, there's the famous small world kind of thing, but there's so few neurons that it's. Can you even consider it?
[00:25:27] Speaker A: Right, so that's where we first discovered motifs in neuroscience, you know, with the small number of neurons. But what I'm trying to get to is that the nodes of those motifs are thought of as, you know, neurons. In vertebrates, that's a reasonable node, because it has one output. In C. elegans, it's actually not a single kind of node. Each neuron consists of multiple sub compartments, and each of them can communicate downstream its own signal.
[00:26:03] Speaker B: Oh, it's almost like dendritic outputs or something.
[00:26:08] Speaker A: Exactly. Because there is no separation into axons and dendrites in C. elegans, generally, basically, the output synapses can happen on the same branches where the input synapses are. We know for a fact now, thanks to great experimental results obtained by calcium imaging, that neurons are separated into sub compartments, and each of them computes a different thing. And there is perhaps no place where everything gets integrated together and output, like in vertebrate neurons.
[00:26:43] Speaker B: Oh, okay.
[00:26:44] Speaker A: So each neuron in C. elegans is actually mapped onto several neurons in, you know, in a vertebrate, several vertebrate neurons, so to say.
[00:26:55] Speaker B: So, and I mean, there are these results that every single neuron, like a regular neuron that you think of, can actually be modeled as a neural network itself. But what you're saying is that it's even more so in C. elegans, because every neuron is almost multiple neurons.
[00:27:11] Speaker A: Right, right. It is multiple neurons, but there is no single output that sums up this network. In a vertebrate neuron, if you think of the dendrites as its own neural network, in the end it's all summed up and there is one output. In C. elegans, no, that doesn't have to be the case. And so having the connectome in this case doesn't get you too far, because you don't know what's inside the node. And moreover, the outputs of the node, if you look at the wiring diagram that, you know, White et al. produced, or the groups after us, it doesn't mean that the same signal is communicated downstream along all those outputs, which is of course what normal people would assume.
[00:27:55] Speaker B: Does that give you any pause in. Okay, so, like, not every neuron is alike. Not every brain is alike. Right. So what we want to do is say, okay, here's this unit of function, the neuron, and then we want to apply that same abstraction, whatever abstraction we take, whether it's a McCulloch-Pitts kind of point neuron model or whatever, and just implant it in all the species to understand all brains, and then we're going to understand all brains the same way. Does this make you think that every brain is, for lack of a better term, special or unique to its species, and that we need to have different understandings for each species? Or how does this change your view of understanding brains? Which is interesting, given what we're going to get to, the way that you conceptualize neurons right now.
[00:28:41] Speaker A: Right, right. So, you know, I have to put on my biologist hat at this point. Oh no, yes, of course, you know, every species has something special. Right.
But, you know, I think that C. elegans is much more extreme in this way, in that, you know, for spiking neurons, for example, it's very difficult to have multiple signals communicated downstream, because the spike is such a, you know, global,
global over the whole neuron, event.
[00:29:20] Speaker B: Yeah, it's an event.
[00:29:20] Speaker A: Right. And so it's kind of hard to have independent outputs in a spike in neuron. So once we get to spiking neurons in evolution, then, you know, things become kind of simpler actually.
[00:29:34] Speaker B: Okay. If you buy that everything is, you know, there are electrical signals, chemical signals, there's all sorts of signals. But eventually there is this unit event that is sent, that is the spike.
[00:29:48] Speaker A: Yeah, yeah, yeah. So, you know, this is also true. Actually, I should have mentioned that in C. elegans, one other response coming from biologists, as to why we can't understand it even having the connectome, is that, well, there is communication along the so-called wireless connectome.
Right. Because there are these neuromodulators that are transmitted, what is it called, extrasynaptically, that don't require a synapse that you can identify in electron microscopy to communicate between neurons. And that connectivity is actually rather intricate. It has been studied now, and you can look it up. But it's at least the same level of complexity as the sort of synaptic connectome.
And so why should we be able to ignore it? And, you know, in a sort of bigger brain, I think the answer, now putting on my physicist hat, I would say, is the separation of timescales. Because those extrasynaptic interactions are mediated by diffusion, and diffusion is slow over long distances. So if you're concerned about physiological properties that appear on fast timescales, right, like, you know, motion coordination or memory recall, those have to occur through electrical means, where, you know, the only diffusion you have is across a very, very thin synaptic cleft.
Otherwise you just don't have the time.
But then if you're worried about, say, memory formation, that could be a different matter that takes a longer time. So I think by carefully choosing which problem you address, you can match it to your level of description.
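The separation-of-timescales argument can be put in numbers with the standard diffusion-time estimate t ≈ L²/2D. A back-of-the-envelope sketch, where the diffusion coefficient is an assumed typical small-molecule value, not a measured one:

```python
# Back-of-the-envelope diffusion times, t ~ L**2 / (2 * D): why diffusion-based
# (extrasynaptic) signaling is too slow for fast physiology but plausible for
# slower processes. D is an assumed typical small-molecule value.
D = 5e-10                                  # m^2/s, assumed diffusion coefficient
for name, L in [("across the synaptic cleft (~20 nm)", 20e-9),
                ("across a bigger brain (~1 mm)", 1e-3)]:
    print(f"{name}: ~{L**2 / (2 * D):.1e} s")
# cleft: ~4.0e-07 s (sub-microsecond); 1 mm: ~1.0e+03 s (tens of minutes)
```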
[00:31:51] Speaker B: So keeping your physicist hat on for a second, when you look at the messy complexity of all of these communication type systems, do you see them just as little sub problems to isolate and understand on their own, or do you see any hope for an overarching understanding of their interactions, in all their complex glory?
[00:32:14] Speaker A: Yeah, so as a physicist, I believe that there should be a set of principles. Yeah, I call them algorithmic principles. And that's where we kind of go back to engineering, principles that describe what the brain does on multiple levels.
[00:32:33] Speaker B: Is that the brain's version of laws in physics?
[00:32:37] Speaker A: I think so. I think so. I like to make a connection with sort of computer chips that you can model, of course, electron conduction through wires in a chip and through semiconductors and so on.
But to really understand how it works, you need to go to a different level of abstraction, where you think about logical gates and registers and so on. And that's the level which is central to the function. And so I think we're lacking that level of description in the brain.
[00:33:16] Speaker B: Okay, this is the main reason why I invited you on today is your work conceptualizing single neurons as controllers in a very particular way. And I've had multiple guests on who utilize control theory as a conceptual basis for understanding brains in general, the whole person in general. But you have taken it, taken it down to the single neuron level. So we should talk a little bit about how single neurons have been conceived of throughout history and then how your conception of them as controllers differs from that in certain ways.
[00:33:58] Speaker A: Yeah, so, you know, I don't know how far back we can go.
You know, I think that the modern age starts probably with McCulloch and Pitts.
[00:34:12] Speaker B: Right. Which is the modern age of, of artificial intelligence also.
[00:34:16] Speaker A: Exactly, exactly. Who, you know, all those people, McCulloch, Pitts, Rosenblatt, Hebb and so on. You know, they're scientific heroes, there's no question about this. You know, that they were able to abstract those simple models from whatever they heard about brain research is just incredible.
Right?
And their model is obviously very influential because with minor tweaks, it's the same model that is being used in almost all artificial intelligence systems today.
[00:34:56] Speaker B: It's astounding. It's really astounding.
[00:34:57] Speaker A: Including ChatGPT. Right. What's under the hood in the end? The individual units are those McCulloch-Pitts, Rosenblatt, Hebb units that were conceptualized in the 40s and 50s and 60s.
[00:35:11] Speaker B: Just a few minor tweaks. Yeah.
[00:35:13] Speaker A: Right.
And so that was, of course, a very influential discovery. But since this is a brain inspired podcast, we have to acknowledge that neuroscience wasn't standing still over these last 70 years.
[00:35:32] Speaker B: Right, but let me stop you, because you said discovery, and I would say it was an engineering feat rather than a discovery, the single point McCulloch-Pitts neuron. I mean, am I quibbling with semantics here?
It was a modeling conceptualization.
[00:35:50] Speaker A: Well, true, but, you know, as a physicist, I would say they conceptualized the model. Right. So it's like a discovery; they discovered a model.
[00:36:04] Speaker B: Okay, sort of.
[00:36:06] Speaker A: My analogy would be that they discovered the math.
[00:36:11] Speaker B: I'm hung up on this discovery term. I'm sorry. So they more or less discovered the mathematical principles underlying an abstract mathematical conceptualization of what neurons might be doing based on physiological data.
[00:36:25] Speaker A: Yes.
[00:36:26] Speaker B: Yeah, sorry, I don't mean to hang us up on this point here, but you said discover.
[00:36:33] Speaker A: Okay, I'm willing to negotiate on the exact words.
[00:36:37] Speaker B: It's fine. It's just terms.
[00:36:39] Speaker A: You know, model conceptualization. Fine.
[00:36:42] Speaker B: Yeah. As you're saying, neurophysiology didn't stand still over those. Over these past, you know, 80 years or whatever.
[00:36:49] Speaker A: Exactly. And, you know, now I think from a biology perspective, we know that this is a rather primitive way of looking at neurons. And, you know, it's important to say now that at least my goal is not to include all the biological details. Right. Of which there are a lot. And people have discovered a lot of amazing things at the level of ion channels and even protein signaling and gene regulation, and all those things that are, of course, necessary for brain function. But in my analogy to computer chips, I don't want to go to the level of, you know, conduction of electrons in semiconductors and the band theory of solids and so on. I want to go to the algorithmic level, like logic gates. And that's where there hasn't been much progress.
But it's clear that there are several ways in which the McCulloch and Pitts and Rosenblatt and Hebb view is a major oversimplification.
[00:38:08] Speaker B: Well, you. So I'll just pause here, because you kind of just transitioned from the messy details about how neurons work, their biological implementation, the implementation level, if you will, of Marr's famous levels. And then you went to logic gates. Because while you were speaking, I was thinking that there is the implementation stuff, but then there's the question about what neurons are doing, what their function is. And then you went to the logic. And I was going to say, well, that's McCulloch-Pitts, that they are logicians.
[00:38:43] Speaker A: Yeah.
[00:38:44] Speaker B: That they are producing binary logic signals. And that's kind of the McCulloch-Pitts approach, ones and zeros originally. And so that was kind of the original functional story about what neurons are doing. And that's what you're saying has not advanced much.
[00:38:59] Speaker A: Exactly, exactly. And so, you know, what are the missing parts? Of course, we never really know until we get something that works.
But generally my view is that real neurons are smarter than the McCulloch and Pitts neurons. Okay. So sometimes I refer to a new conceptualization of neurons as the smart neuron. Okay. And in which ways? So one of the things that I think most biologists, or physiologists, will immediately agree with me on is that, you know, the McCulloch-Pitts neuron is an instantaneously responding device. So if I provide certain kinds of inputs, it instantaneously computes an output by weighted summation and then a nonlinearity, and outputs it. Right. And we know, of course, in neuroscience that the neuron does not process inputs instantaneously. It has all kinds of timescales in its dynamics, and there are short term, long term memory effects. And you can characterize them in a variety of ways, such as measuring the linear temporal filter by sort of spike triggered analysis.
And this is, I think, very important, because it's telling us that the neuron cares not only about correlations between its inputs, between different upstream neurons, but also about the temporal correlations in the same inputs it gets.
And that's completely missing from the, you know, McCulloch-Pitts inspired units.
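A toy contrast between the two kinds of unit being described, a sketch with made-up weights and an assumed exponential filter, not a model of any particular neuron:

```python
# Toy contrast: an instantaneous McCulloch-Pitts-style unit vs a unit whose
# response depends on input history through a linear temporal filter (the kind
# of filter spike-triggered analysis estimates). Weights and the exponential
# filter are made up for illustration.
import numpy as np

def mcculloch_pitts(x, w, theta=0.0):
    """Instantaneous: weighted sum of the current inputs, then a threshold."""
    return float(w @ x > theta)

def filtered_unit(x_history, w, tau=5.0, theta=0.0):
    """Temporal: each input stream is first passed through an exponential
    filter, so temporal correlations in the inputs affect the output."""
    T = x_history.shape[0]
    kernel = np.exp(-np.arange(T)[::-1] / tau)   # recent samples weighted most
    return float(w @ (kernel @ x_history) > theta)

w = np.array([0.5, -0.3])
x_now = np.array([1.0, 1.0])
x_hist = np.tile(x_now, (20, 1))                 # 20 time steps of history
print(mcculloch_pitts(x_now, w), filtered_unit(x_hist, w))
```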
[00:40:58] Speaker B: So even McCulloch and Pitts in their original paper, you know, drew these loops. Right. So the sort of classic story is that they conceptualized everything as a feedforward network only, but they actually drew the loops, and they alluded to how hard it would be to determine, you know, how to incorporate those recurrent loops within their models. Right. So they knew it was a problem and knew it would have to be addressed.
[00:41:26] Speaker A: Yes.
[00:41:27] Speaker B: And that's what you're talking about, is this historical recurrent context that neurons have to deal with.
[00:41:33] Speaker A: Right. So that was actually my second item, the existence of those loops. Which, actually, I agree with you that they realized that. And you can look at the figures in their papers, you know, that they realized the loops were there. But I think that's part of their genius, that they ignored them. Right?
[00:41:53] Speaker B: Oh, yeah.
[00:41:54] Speaker A: Network that, you know, that originated from them.
We're ignoring this. And that's why it was possible to make progress.
[00:42:04] Speaker B: Oh, that's interesting. I've never heard anyone celebrate them ignoring it. I like that.
[00:42:09] Speaker A: Well, I mean, that's how we do it, right, in physics, when you build a model. All models are wrong, some are useful.
[00:42:15] Speaker B: Right?
Yeah. Okay.
Right. So that's one of the things. And let me just also bring in, as we discuss this, historically, speaking of function, there have been different conceptions about what the job of a neuron is. Right. And you write about this in your recent paper that I'll link to. You know, there's the efficient coding hypothesis. There's the idea of, you know, the grandmother neuron, that an individual neuron represents an individual thing in the world. You know, things like that. There's predictive coding. So as we're talking, maybe you can situate the neuron as a controller within those contexts as well. Or here's a different way to approach it: how did you come to the conception of the neuron as a controller? Like, you know, there must have been some train of thought leading up to that.
[00:43:09] Speaker A: Yes. So I was basically very influenced by two things.
One is I was. I'm a big fan of efficient coding theories, and I spent a lot of time thinking about those and predictive coding in particular.
And I still think this is very important.
But the problem with those approaches, as I'm sure has been brought up by other people, is that efficient coding kind of works at the early sensory stages, because it makes sense that you want to represent the world. Right. And so those theories have been successful in explaining many properties of neurons, such as, you know, the temporal and spatial receptive fields of retinal neurons, even, you know, the edge detectors in V1.
But it failed when people wanted to march further into the brain.
[00:44:16] Speaker B: Yeah. And the history of neuroscience is mostly dominated with sensory cortex.
[00:44:22] Speaker A: Exactly. And then, of course, you know, the primary reason for that was experimental. It's an easy stimulus; you can control the stimulus. And so, you know, you can do reproducible experiments. But in terms of theory, efficient coding and predictive coding appealed to me as a physicist. You know, it's such a beautiful theoretical foundation that you can use, you know, the information bottleneck. But then it kind of fails as you want to march deeper into the brain. And if you start thinking why, of course, well, you want to get closer to action generation, to motor control, decision making. So it's not just about efficient coding. So you have to have other things in mind in terms of the objective. And then from connectomics, of course, I knew that loops are everywhere. Right. That each neuron, even in C. elegans, you know, almost each neuron, belongs to a multitude of loops.
Its output can get back to its input by going through, you know, 1, 2, 3 synapses.
[00:45:32] Speaker B: Was cybernetics any influence on you in this regard also? Or did you. Was that something that came in later or.
Because I'm getting at the control theory aspect of it. Yeah, because that's all about agency and control and sort of motor output, what you're talking about, right?
[00:45:51] Speaker A: Yeah, probably was, because I read some of this work when I was in school. But the issue is that if you have loops, then the dynamics changes completely. Right. Because there is a danger of getting runaway excitation and instability.
And so it's important to have a framework that pays particular attention to that. In my mind, that's the domain of control theory.
So control theory, on the one hand is a way to sort of generate certain desired output. On the other hand, it is a.
[00:46:35] Speaker B: Way to deal with loops, deal with the feedback.
[00:46:37] Speaker A: Deal with feedback, Right? That's correct. That's the correct word. And so from those two considerations, it seems like a good framework to apply to the neuronal circuits.
[00:46:50] Speaker B: What is it that a neuron is trying to do in your conception? Every single neuron is trying to do this.
[00:46:59] Speaker A: Right. So, very good. So the neuron as a controller framework basically takes the efficient coding ideas a step further. Because efficient coding would say, well, the neuron processes its inputs to represent them in an efficient way, encoded in its output.
[00:47:21] Speaker B: So that's sort of a transformation of information that's coming in.
[00:47:24] Speaker A: Exactly.
[00:47:25] Speaker B: And this is what deep learning is based off of. Oh, I have a representation coming in, I'm going to transform it to pass it on to the next level or to some motor output or whatever. And that's really about the information transformation of what came before and what I'm.
[00:47:40] Speaker A: Exactly. And predictive coding takes it maybe a step further. It's not just a representation of the past inputs, but it's an attempt to predict, maybe predict or encode, information relevant to future inputs.
[00:47:58] Speaker B: Yeah, right.
[00:47:59] Speaker A: And so again, you're trying to predict the future inputs and that's what you encode in your outputs.
[00:48:04] Speaker B: Oh, so that must have appealed to you quite a bit.
[00:48:07] Speaker A: Right. And so I spent a lot of time working on that. And you know, I think there is some validity to those ideas.
But then what does the feedback do, and how do you leverage this to get to action generation? And that's where control theory comes in. Because then maybe neurons' outputs don't just predict their inputs; maybe they can influence their inputs.
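The progression being described here, represent the input, predict the input, influence the input, can be sketched as three toy objectives for one unit, assuming a hypothetical scalar "environment" with made-up coefficients (a sketch, not anyone's published model):

```python
# Three toy objectives for one unit, given an assumed scalar environment
# x[t+1] = a*x[t] + b*y[t] with made-up coefficients.
a, b = 0.9, 0.5          # assumed environment dynamics
x, r = 1.0, 0.0          # current input and a desired reference input

y_code    = x                      # efficient coding: represent the input
y_predict = a * x                  # predictive coding: predict the next input
y_control = (r - a * x) / b        # control: choose output so x[t+1] lands on r

print(y_code, y_predict, y_control)
print(a * x + b * y_control)       # 0.0: the unit has driven its own input to r
```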
[00:48:42] Speaker B: I mean, I'm going to ask you about this eventually, but, you know, active inference, right? Active sensing. This is a direct analog of that, which is a more whole brain view of what brains are doing. We have a goal of what sensory input we want, and the action that we're taking is to get that sensory input. And this is like the predictive brain idea as well, but that's on the whole brain level. And so you're saying that every neuron is doing this?
[00:49:14] Speaker A: Yes. And actually, the reason I think it seemed like a good idea to me is because when I started out in neuroscience, I understood the idea of understanding how the brain works to be figuring out how information gets transformed from its inputs to its outputs. Right. And only later, you know, because of the work by other people, such as, I don't know, Yehudahsar and Paul Cisek, maybe others, did I appreciate that, you know, it's all feedback loops, it's all active sensing. Right. That we do things, that we perform actions, and the effects of those we can observe.
[00:50:06] Speaker B: So what, what are brains for?
Are they for moving? Some people say they're for moving, some people say they're for sensing, some people say they're for subjective awareness. You know, do you have that control theoretic perspective on whole brains, is essentially what I'm asking.
[00:50:24] Speaker A: Right. Well, so, you know, for whole brains, I mean, it depends which level you want to ask it at. From the evolutionary perspective, right, you know, evolution maximizes fitness. Right. And of course, you know, you can play with the genome and improve the fitness by tinkering with the genome, but eventually what happens, as has been argued by many people, is that genes kind of operate on a very slow timescale. Okay, becoming a physicist again. Right. Genes only find out, you know, whether you survived or not once years have passed. Right. But you want to have some feedback loops on a shorter timescale.
[00:51:07] Speaker B: Okay?
[00:51:08] Speaker A: Right. And that's why the genes, in the course of evolution, invented brains: because, you know, they needed some kind of stand-in to modulate feedback loops on a shorter timescale during the lifetime of each organism.
[00:51:28] Speaker B: They're like, I'm too slow here, guys, help me out, help me out while I'm in the background slowly changing.
Okay, I think so. All right, so then let's bring it back, because we're going to kind of go up and down scales here. So let's bring it back to the single neuron level. So the conception is that every neuron is doing its own control process. So maybe describe that a little bit more. And then I know that there are problems with the sort of common way to approach control problems; the neuron has to do too much. And that's where this idea of, what is it, direct data-driven control comes in. So we'll get there, but tell us more about what the neuron is doing and what it needs to do in order to accomplish it.
[00:52:14] Speaker A: So, you know, it's easy to accept that the brain acts as a controller in the feedback loop with the environment.
[00:52:20] Speaker B: Right, right.
[00:52:21] Speaker A: And it's certainly a good view, but from my perspective, again as a physicist, I like to focus on a simpler problem first. And the brain as a whole is very complex, even in C. elegans. So how far down should we go? And that, I think, is a personal choice. Right. There is no universal level. But for me, the level of neuronal physiology has always been very appealing, because there is just so much data; people have worked on this so much.
And my postdoc in neuroscience when I made a switch was in the synaptic physiology lab of Chuck Stevens. And so this level is very appealing to me.
So I thought, well, maybe on this level I can think about these issues just as well. And actually it turns out that there is a lot of data that I can leverage. I don't think there is anything special about the neuron being a controller. I think that, you know, if the whole brain can be viewed as a controller and a neuron can be viewed as a controller, there are intermediate levels, you know, brain areas, nuclei, that are controllers too, of course. And this has been famously argued by people like Robinson and such; you know, sort of the eye control system, you know, this is well known.
So there are controllers on multiple levels. And actually, going a level below, I think that you can think of each synapse as being a controller.
[00:54:03] Speaker B: Okay.
[00:54:04] Speaker A: Right.
[00:54:04] Speaker B: You're just wild about control. It's just everywhere.
[00:54:08] Speaker A: Yeah, I think evolution is wild about control, and I've just determined that that's the case.
[00:54:14] Speaker B: Okay. All right, well, so then what does a neuron have to do? So it has, you know, this output, and then there's the loop. It goes to one neuron, and it sometimes comes directly back on itself from that one neuron. It can also go to three neurons and then come back on itself. It can also eventually end up affecting motor behavior, and then it comes back through the senses when we get new sensation. So there's this hugely rich set of feedback signals coming into this single neuron. So how does it cope with that? And then what is the neuron's objective function? Like, what is its goal? What is it controlling?
[00:54:52] Speaker A: Yeah, so these are all good questions, and I don't have all the answers.
And I should also say, you know, in just full disclosure, that the neuron as a controller is still a hypothesis. Although I think that there is some evidence that this could work, there is no smoking gun experiment that would confirm it. Right.
[00:55:17] Speaker B: Why not?
[00:55:21] Speaker A: Because it hasn't been done.
[00:55:22] Speaker B: Oh, you don't mean it's impossible. You mean it just hasn't been done.
[00:55:26] Speaker A: I don't think it's impossible. And I think there are ideas floating around in the neuroscience community. And, you know, one of my favorite ones to test this idea of the neuron being a controller is the following. You know, you want to somehow perturb the feedback loop.
[00:55:46] Speaker B: Yeah.
[00:55:46] Speaker A: You want to cut the feedback loop and see the consequences of that. But how do you cut the loop? Well, you could just silence the neuron and see what happens. But that is a little bit, I think it's too much of a perturbation. If you silence a neuron, it will know that something is off and it will kind of go crazy, even if it's just a feedforward device.
[00:56:15] Speaker B: Yeah. Okay.
[00:56:16] Speaker A: So you want to kind of break the feedback loop in a way that the neuron doesn't really know about.
[00:56:25] Speaker B: You want to nudge it.
[00:56:26] Speaker A: So you want. Yeah, so you want to, like, tinker in the back there so that the neuron doesn't notice it.
But, you know, it will see this. If there is feedback, if the neuron is listening to its own output through the feedback loop, through the circuit, then it will start adapting itself. And so one of the great ideas, I think, of how to do that is to silence not the neuron, but the synaptic transmission downstream. Right. So there are now amazing molecular genetics tools that allow you to silence just the synaptic output of one neuron in the circuit.
[00:57:09] Speaker B: Okay, so you're talking kind of. So you want to keep it relatively simple then? Because I immediately think of super complex brains where the signal branches out almost infinitely, you know. And so then, this is the classic problem in recurrent neural networks also: how much of the influence of its output is actually going to come back to it? How can it possibly know?
How can it calculate some error correcting mechanism or some control signal based on such a diluted feedback signal?
[00:57:41] Speaker A: Exactly. So it's not a given that it's done. Right. Like, I think on the theoretical level it's possible. But this kind of experiment would perhaps provide some evidence one way or another. Because if we were able to silence the output synapses of one neuron in a circuit, then, okay, the spike generation of the neuron wouldn't know about that manipulation. Right.
[00:58:09] Speaker B: So are you talking about, you have your neuron of interest, and then let's say it branches to three neurons, and then those neurons branch to three others, and then it comes back or something, kind of a fairly simple circuit. Are you saying you want to silence the activity of one of its three targets and then see how that affects it, or the output of the neuron that you're studying?
[00:58:33] Speaker A: Yeah, you could do it either way. I think a more precise and more subtle manipulation would be just to silence the outgoing synapses of that one neuron.
[00:58:45] Speaker B: Okay, okay.
[00:58:46] Speaker A: And to monitor how its response properties are changed over time because then it's.
[00:58:52] Speaker B: It'S not affecting anything downstream.
[00:58:55] Speaker A: Right?
[00:58:56] Speaker B: Right. Okay.
[00:58:57] Speaker A: So something like maybe a hippocampal pyramidal neuron, right, that has a place cell receptive field, and you silence its outputs. Are we going to see changes in its response? Is its place field going to go wild and start searching for greener pastures? Right.
[00:59:16] Speaker B: It's so subtle. Do we have sensitive enough techniques to measure? I mean, I could see a scenario given the degeneracy of brains where we don't detect anything, but perhaps there is something going on.
[00:59:30] Speaker A: Well, there is always the possibility of a negative result. I mean, that's the nature of experimental research. Right. But if we were to see unexpected changes in the receptive fields of that neuron on certain timescales, then we would say, aha. And then, of course, we can rescue the synaptic transmission and see the receptive field restored, or become more sane. Right.
[00:59:58] Speaker B: Or change in a different way. Change in a different way.
[01:00:01] Speaker A: Right. So I think it's important to have an experiment where you actually introduce a perturbation.
[01:00:10] Speaker B: Okay, I guess you would, would you do this in a dish? Would you do this in vitro or.
[01:00:16] Speaker A: One could, but why not just do it in.
[01:00:19] Speaker B: In a mouse? Yeah. Or, yeah, some organism. Or C. elegans, perhaps?
[01:00:23] Speaker A: Yeah, yeah, yeah. I mean it's. There are many options, but.
[01:00:29] Speaker B: Okay, so then conceptually, sort of the common conception, and the thing that you address with this direct data-driven control, is that you take the burden off of the neuron of modeling everything that's going on in that external loop, and simplify things. So how does direct data-driven control simplify the neuron's task?
[01:00:55] Speaker A: Right, so the way I thought about this is, you know, it's so natural, at least for me, to think about the neuron as a controller. So why hasn't anyone said that before? You know, there are a lot of smart people who thought about neurons before me. So what's going on? And, you know, knowing how control theory is taught, it's actually not that surprising, because most of traditional control theory is model based, which means that you start by describing the environment, or the plant, as they call it in control theory, with some kind of dynamical system description, a state space model.
We assume that this model is known and then we add feedback control to change its properties. For example, that model could be unstable.
That would be bad news in any real world application. So we add a stabilizing feedback.
[01:01:59] Speaker B: Gotcha.
[01:02:00] Speaker A: So that's usually the negative feedback case. And so then you compute, using one of the methods of control theory, what would be the right feedback dynamics that makes an unstable system into a stable one.
And to think that a single neuron could do this operation, which actually for linear systems has a closed form solution, but it involves really complicated matrix multiplications and inversions. How can a neuron do that?
I mean, I don't even think a neuron could represent the sort of state space model of a dynamical system like that. So that, I think, is why people never thought of a neuron as a controller.
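For a sense of what that model-based computation involves, here is a minimal sketch: the plant model (A, B) is assumed known, and a stabilizing feedback gain K (with u = -Kx) is computed by iterating the discrete-time Riccati equation, exactly the kind of matrix arithmetic it is hard to picture a single neuron doing. The plant and the cost matrices below are made up for illustration:

```python
# Sketch of the model-based route: the plant model (A, B) is assumed *known*,
# and a stabilizing feedback gain K (u = -K x) is computed by iterating the
# discrete-time Riccati equation. Plant and costs are made up.
import numpy as np

A = np.array([[1.1, 0.4],          # spectral radius > 1: unstable on its own
              [0.0, 0.9]])
B = np.array([[0.0],
              [1.0]])
Q, R = np.eye(2), np.eye(1)        # assumed state and control costs

P = Q.copy()
for _ in range(500):               # value iteration on the Riccati equation
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

print(np.abs(np.linalg.eigvals(A)).max())          # > 1: unstable open loop
print(np.abs(np.linalg.eigvals(A - B @ K)).max())  # < 1: stabilized closed loop
```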
[01:02:46] Speaker B: Just asking it too much.
[01:02:48] Speaker A: That's right, that's right. That's too much work to lay on a neuron. But what I realized, and this is a relatively new development, is that there is now another way to do control theory, which is called data-driven control, which is very much in the spirit of, I would say, data science and, you know, machine learning.
[01:03:11] Speaker B: Oh, yeah, listen to the data.
[01:03:13] Speaker A: Yeah, listen to the data. Right. Where you go directly from the observations and map them onto control signals without going through constructing the model. And of course, you have to learn that mapping somehow. But again, that mapping is learned based on the prior history of the paired observations and control signals.
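A minimal sketch of that flavor, with an assumed scalar plant whose coefficients are hidden from the controller. Strictly speaking, fitting a one-step predictor by least squares shades into system identification (which comes up again below), and full data-driven methods such as DeePC avoid even that, but the spirit is the same: the control law is read off a history of paired observations and control signals rather than off first-principles plant equations:

```python
# One simple flavor of going "directly from data to control": regress the next
# observation on past (observation, control) pairs, then read the control law
# off the fitted map. The scalar plant coefficients are assumed for
# illustration and hidden from the controller.
import numpy as np

rng = np.random.default_rng(1)
a_true, b_true = 1.05, 0.7                   # unknown to the controller

# 1. History of paired observations and control signals, under random probing.
xs, us, xnexts = [0.5], [], []
for _ in range(200):
    u = rng.normal()
    x_next = a_true * xs[-1] + b_true * u + 0.01 * rng.normal()
    us.append(u); xnexts.append(x_next); xs.append(x_next)

# 2. Least-squares map from (observation, control) to next observation.
H = np.column_stack([xs[:-1], us])
theta, *_ = np.linalg.lstsq(H, np.array(xnexts), rcond=None)

# 3. Control signal chosen so the predicted next observation hits reference r.
def control(x, r=0.0):
    return (r - theta[0] * x) / theta[1]

x = 5.0
for _ in range(10):
    x = a_true * x + b_true * control(x)     # plant responds to our control
print(round(x, 4))                           # driven close to the reference 0.0
```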
[01:03:42] Speaker B: So you have all of the data incoming. The cell's job is to map that incoming data onto its set desired reference signal. Right. The control signal. And then the neuron's job is to change the control signal to produce outputs, sorry, to produce eventual re-inputs, that better map the inputs to the control signal.
[01:04:13] Speaker A: Yeah. I would slightly rephrase it by saying that the neuron produces the control signals, based on its inputs, that drive the environment towards a desired state.
[01:04:26] Speaker B: Okay, good. So it's. So it's the neuron's job to change the environment so that it's getting the right input and it knows it's changed the environment in a certain way based on the inputs it's receiving from its activity.
[01:04:37] Speaker A: Exactly.
[01:04:38] Speaker B: How does it. I'm just going to jump to this, and I'm not sure that there's an answer. So I had Henry Yin on, and one of his big things is that the control theory perspective on brains in general has it wrong in neuroscience, because the reference signal is always, like, outside the brain.
So how does the neuron get that objective? How does it get its reference signal? Where does that come from? Is that from the DNA? How does it know what it wants to hear?
[01:05:07] Speaker A: Yeah, so that's a very good question. And I'm afraid I don't have full answers for that. And I think these are really important questions.
So at the moment we have two examples where I think we partially understand how this can work. And one example is the issue of stability, which I already mentioned.
So once you have a feedback loop, it has to be stabilized. It cannot be unstable, otherwise it blows up.
And much of control theory is about making an unstable plant stable.
So just this kind of desideratum already produces some predictions that I think, you know, neurons may have to respect.
[01:06:03] Speaker B: Is this where. Because you've. There are three or four different
[01:06:05] Speaker A: Yes.
[01:06:06] Speaker B: sets of experimental data whose results this approach explains. Is that what you're referring to?
[01:06:13] Speaker A: Yes.
[01:06:13] Speaker B: Yeah.
[01:06:14] Speaker A: Yes. So anyway, stability is a must; any feedback system must ensure it.
And so we can already generate some predictions. Of course, that's not the only goal, because it would be boring if it just wanted stability. We could just lie down and die; that's very stable.
But that's of course not the only thing. We have to do what brains are delegated to do by the genes, whatever that is.
And so that needs to be figured out. We were talking about feedback loops, closed loops and open loops. And the interesting thing about a spiking neuron is that most of the time it's in open loop, because if the neuron does not produce an action potential, there is no release in the synapses. Well, if you ignore the mini events. Right. And then basically it's an open loop. So whatever input the neuron gets is just held there until there is an action potential.
[01:07:17] Speaker B: You mean it's a closed loop because.
[01:07:19] Speaker A: It's an open loop because nothing comes out of the neuron until there is an action potential. Right, so that's where the loop is broken.
[01:07:27] Speaker B: That's where the loop is closed and therefore open. Right. Well, okay, wait, I'm thinking of a wiring circuit. When you close a switch, then it's an open loop, right?
[01:07:38] Speaker A: Yes. Okay. All right, but control theorists would say a closed loop when.
[01:07:42] Speaker B: Oh, okay, I'm sorry, my mistake. It's the terminology.
[01:07:45] Speaker A: No, no, it's good to clarify those things, because sometimes, you know, the colloquial meanings collide with the technical ones.
[01:07:51] Speaker B: Oh, so if the switch is open, the circuit can't run.
[01:07:53] Speaker A: That's right. So okay, so when the neuron is silent, right? There is no action potential, the loop is open. Right.
[01:08:01] Speaker B: That was my rookie mistake. I mean, I even know that and I just switched them. I'm sorry.
[01:08:07] Speaker A: Yeah, yeah, no, no, it's good to clarify. And so, but for that one millisecond, the duration of the action potential, the loop gets closed, and that's when you have the feedback. Okay, so that's a spiking neuron. It's actually very interesting that it goes between those two states, because if I think about the neuron as a data driven controller, it actually has to solve two tasks. It has to solve a control task, whatever the desideratum is, but it also has to solve the system identification task, which in traditional control is how the model of the plant is generated.
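As a cartoon of that alternation, not Mitya's actual algorithm: a spiking unit sits in open loop, logging inputs it could use for system identification, and only closes the loop for the instant it fires. Weights, threshold, and inputs below are arbitrary placeholders.

```python
# Cartoon of the open-loop/closed-loop alternation in a spiking neuron.
# Illustrative only: weights, threshold, and inputs are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(3)
w = 0.3 * rng.normal(size=8)    # synaptic weights (placeholder values)
v, threshold = 0.0, 1.0
log = []                        # inputs logged while silent: material for system ID

for t in range(50):
    x = rng.normal(size=8)      # presynaptic input at time t
    log.append(x)               # open loop: nothing leaves the neuron, just listen
    v += w @ x                  # integrate the input
    if v >= threshold:          # the ~1 ms of the action potential
        print(f"t={t}: spike, loop briefly closed, control pulse emitted")
        v = 0.0                 # reset; back to open loop
```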
[01:08:49] Speaker B: Right, so what is that task? Is that task to say when is it open, when is it closed and when to.
[01:08:55] Speaker A: In traditional control theory, we start with the model of the plant, and then we figure out how to get it to the state we want it to be in. But then someone may ask, well, what if you don't know what the right model of the plant is? Which is what happens for a neuron. Of course, it's not reasonable to expect that the genes program into the neuron exactly the right model of the environment, because the environment may actually be changing over time and the neuron has to adapt. Right. And so basically the control theorists would say, wait, but that's a different problem. Right. Not my department. To get the model, you have to use another subfield, which is called system identification.
[01:09:43] Speaker B: Okay.
[01:09:44] Speaker A: Where you use prior observations, perhaps along with the record of control signals that accompany them, to build a model.
So the data driven controller performs both of those tasks at once without constructing the model explicitly.
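For reference, classic system identification in its simplest form looks like this: a least-squares sketch, on simulated data, that recovers a state space model from logged states and controls. This is the explicit model-building step that the data driven controller folds into its operation rather than performing separately.

```python
# System identification: recover a plant model x_{t+1} = A x_t + B u_t from
# prior observations and the controls that accompanied them.
# A least-squares sketch with simulated (made-up) data.
import numpy as np

rng = np.random.default_rng(1)
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
B_true = np.array([[0.0], [1.0]])

# Simulate a logged trajectory (in real system ID this is recorded data).
T = 200
X = np.zeros((T + 1, 2))
U = rng.normal(size=(T, 1))                       # sufficiently rich inputs
for t in range(T):
    X[t + 1] = A_true @ X[t] + B_true @ U[t] + 0.01 * rng.normal(size=2)

# Stack [x_t, u_t] and regress x_{t+1} on it to recover [A | B].
Z = np.hstack([X[:-1], U])
AB, *_ = np.linalg.lstsq(Z, X[1:], rcond=None)
A_hat, B_hat = AB[:2].T, AB[2:].T
print("A estimate:\n", A_hat, "\nB estimate:\n", B_hat)
```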
[01:10:06] Speaker B: I see. Okay.
[01:10:07] Speaker A: Okay.
So basically the reason a spiking neuron is such a great idea, I think, is that it's hard to do system identification in closed loop.
[01:10:20] Speaker B: Oh, okay. I see. So it's almost like.
So I'm just going to restate this in a very layman's way, I suppose. So it's almost like it doesn't want to open it. Sorry. It doesn't want to close the loop much because closing the loop makes it more difficult to adjust its control signal.
[01:10:41] Speaker A: Exactly.
[01:10:42] Speaker B: Okay.
[01:10:43] Speaker A: So think about it like a radar that has to shut down its amplifiers the moment it generates a pulse.
[01:10:54] Speaker B: Yeah. Or like when I'm yelling, I can't hear someone talking to me.
[01:10:58] Speaker A: Exactly.
[01:10:58] Speaker B: Yeah, exactly. There we go. I got the layman's there a little bit more.
All right, so talk just a little bit about how this approach accounts for some of the experimental findings in neuroscience that have been accounted for individually in various ways. There's a collection of findings that you've been searching for an account of, and now you say this approach accounts for them.
[01:11:26] Speaker A: Right.
So there are several things that we think fit the control theory view.
None of them is, of course, a proof.
In general, in terms of scientific methodology, it's usually impossible to prove that the theory is correct. You can only falsify it. Right.
[01:11:50] Speaker B: I want to ask you about that next, though, like how you would falsify it. But.
[01:11:54] Speaker A: Yeah, right.
But there is some evidence that I think supports this view. I already cited, of course, the ideas of action generation and the existence of feedback loops in the brain as supporting it. But what was interesting is that we were able to derive, from the idea of a stabilizing controller, learning rules that can potentially map onto something like spike-timing-dependent plasticity.
[01:12:44] Speaker B: Is this something where you thought, okay, here's the conceptual idea, now I need to find some data that supports that idea, and here are some open questions? Or how did you stumble upon applying this to spike-timing-dependent plasticity?
[01:13:03] Speaker A: I actually did not know beforehand that this would account for spike-timing-dependent plasticity.
But this is one of the neuroscience facts that I'm obsessed with, because it seems so counterintuitive. Maybe just as background: spike-timing-dependent plasticity is a learning rule where the synaptic strength changes depending on the relative timing of the spikes of the postsynaptic and the presynaptic neuron.
[01:13:40] Speaker B: Yeah, okay.
[01:13:42] Speaker A: And so if the presynaptic spike precedes the postsynaptic spike, then the synapse gets stronger.
[01:13:49] Speaker B: Precedes it by a very short time period.
[01:13:52] Speaker A: Yeah, exactly. And that's key, right? It's a very, very time sensitive mechanism.
But one can say, well, that this part, the potentiation window of STDP, can maybe be viewed as an extension of the Hebb postulate. Right. If one neuron repeatedly causes the spiking of another neuron, then the synapse gets stronger.
Right? It makes sense. It kind of represents some kind of a causal interaction.
But there is another side to STDP, which is the depression window: if the presynaptic spike follows the postsynaptic spike in time by a very small delay, as you said, then the synapse gets weaker.
And this part is very difficult to explain. It certainly doesn't follow from Hebb, because of course people would say, yes, if you just facilitate the synapse, there has to be some kind of normalization. Right, but why does that normalization have to be so time sensitive?
Right? If it's just an issue of stability, there is no reason for that time sensitivity.
So I'm really obsessed by this observation, because it has the appearance of an anticausal interaction.
The presynaptic spike, which has no way of influencing a postsynaptic spike that has already happened, still affects the synapse, which is a causal mechanism. So why would that be? The control theory view, because you have a feedback loop, offers an immediate resolution: this interaction that seems anticausal can actually be viewed as causal. You just have to traverse the loop outside of the neuron. You go from the postsynaptic neuron back to the presynaptic side along that loop, and that is of course a perfectly valid causal interaction.
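The two windows being described here are captured by the standard pair-based STDP model. A sketch with illustrative amplitudes and time constants, not measured values:

```python
# The two windows of spike-timing-dependent plasticity (STDP) as a weight
# update. dt = t_post - t_pre. Amplitudes and time constants are illustrative
# assumptions, not measured values.
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # window time constants in ms (the cerebellar
                                  # and hippocampal cases discussed below would
                                  # use much longer constants)

def stdp_dw(t_post, t_pre):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre precedes post: causal pairing, synapse strengthens
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    elif dt < 0:  # post precedes pre: "anticausal" pairing, synapse weakens
        return -A_MINUS * np.exp(dt / TAU_MINUS)
    return 0.0

print(stdp_dw(15.0, 10.0))   # pre -> post by 5 ms: potentiation
print(stdp_dw(10.0, 15.0))   # post -> pre by 5 ms: depression
```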
[01:16:11] Speaker B: How does the neuron know what time window to pay attention to? Right. So it has this spike timing component. And then it's going to say, well, if I get this feedback signal at this time, it means I need to adjust in a certain way. But then three minutes, not three minutes, you know, but 100 milliseconds after that, I need to adjust in a different way. And this is how I know. I mean, I guess that's where the data driven approach solves that problem.
[01:16:42] Speaker A: Well, so you're not asking mechanistically how does a synapse keep track of the spike?
[01:16:47] Speaker B: No, no, no, no. But there's the problem of the timescale window: like, when do I pay attention, in order to adjust my signal?
[01:16:55] Speaker A: Right. And that is of course crucial. And that comes to the issue that you already brought up, what the goal of that neuron is.
And if you assume that the neuron has to care about the stability of very short loops, basically one or two synapses to traverse the whole loop, then the timescales are of the order of milliseconds, which is close to the spike-timing-dependent plasticity timescale. Right. Of course, there are other loops, both through the brain and going through the environment, that take longer to traverse.
Then the prediction would be that if the neuron cared about spike alignment based on traversing those loops, the window of spike-timing-dependent plasticity would be much longer.
And amazingly, there are cases when this has been observed physiologically.
So there are examples. One is in the cerebellum, in the work of Jennifer Raymond, where they see an STDP window delayed by around 100 milliseconds. Another example is in the hippocampus, where, I don't remember the exact value, but it's a tens-of-milliseconds window. And so the control theory view would be that maybe there is a feedback loop that takes 100 milliseconds to traverse, and this tunes the synapse.
[01:18:36] Speaker B: So we talked about a handful of things that the direct data driven control, feedback control, accounts for. One of the ones that I enjoyed is that this approach requires stochasticity, or variability in the firing rates, to sample the space, essentially to carve out, I don't know if manifold is the right word, but to carve out a plane in the dimension so it knows where it can go. Maybe you can simplify that; that was very abstract, what I just said.
[01:19:08] Speaker A: Yeah, absolutely. So this harks back to the old exploration-exploitation trade-off that, you know, reinforcement learning is built on.
But here it has a very simple mathematical formulation, which is basically: if the dynamics of the environment, or the plant, is linear, then once the control law settles and is fixed, usually also linear,
so the control signal is a fixed linear function of its inputs, then the whole system, the whole feedback loop, ceases to explore the dynamical state space.
[01:20:04] Speaker B: And that removes its ability to readjust within that state space.
[01:20:09] Speaker A: Exactly. And this has been realized by control theorists. The foundational insight that enabled data driven control is to avoid this problem by maintaining what they call persistency of excitation, which means keeping the control sufficiently diverse that it can still probe the environment. Okay, and this goes back to what we already mentioned, which is that the data driven controller is called a controller, but it also has to solve the system identification problem at the same time. And by converging on a fixed control law, you give up the ability to solve the system identification problem.
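Persistency of excitation in miniature, as a sketch: a toy scalar plant under a fixed linear law produces perfectly collinear data, so identification becomes ill-posed, while a little exploration noise restores it. The plant and gain values are arbitrary assumptions.

```python
# Persistency of excitation in miniature. Under a fixed linear law u = -k * x,
# the logged (x, u) pairs are perfectly collinear, so regressing to recover the
# plant is ill-posed; noise in the control restores identifiability.
# Toy scalar plant x_{t+1} = a x_t + b u_t with assumed a, b.
import numpy as np

rng = np.random.default_rng(2)
a, b, k = 1.2, 1.0, 0.9
T = 300

def run(noise_std):
    x, X, U = 1.0, [], []
    for _ in range(T):
        u = -k * x + noise_std * rng.normal()   # exploration noise, if any
        X.append(x); U.append(u)
        x = a * x + b * u
    Z = np.column_stack([X, U])
    # Condition number of the data matrix: astronomically large means
    # no excitation, no identifiability.
    return np.linalg.cond(Z)

print("fixed law, no noise:", run(0.0))   # huge: x and u collinear
print("with exploration:   ", run(0.1))   # modest: plant identifiable
```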
[01:21:03] Speaker B: So in this conception, is that why we have variability in spiking? So there's lots of different theories on stochasticity, quantum indeterminacy, you know, sampling, exploration versus exploitation, like you mentioned. But in this perspective it is a desired engineering principle, essentially that you'd want to build in.
[01:21:24] Speaker A: That's right, that's right. So that sort of tells you that for this data driven controller to operate, there has to be some source of variability.
[01:21:34] Speaker B: Yeah. So noise is classically an engineer's nightmare. Right. So it's something that you want to avoid, but in this case you want to build it in.
[01:21:42] Speaker A: Exactly, exactly. And that, I think, is a natural fit to biology, because everything in the brain is very noisy. Almost everything. Right. Whichever level you look at. Synaptic physiology tells us that the probability of synaptic transmission per presynaptic spike is low. You know, it could be 10%.
[01:22:07] Speaker B: Right.
[01:22:08] Speaker A: For central synapses. Then, you know, spike generation could in theory be very precise, but it is very strongly driven by the exact inputs.
[01:22:20] Speaker B: We talked about the history of neuroscience, and the history of conceiving of single neurons, back to McCulloch and Pitts. Right. And even before that. And then you had, with Barlow and folks like that, the neuron doctrine, which among other things gave a lot of import to the function of a single neuron. I mentioned grandmother cells. Right. Or Jennifer Aniston cells. So every single neuron is representing something; that's, in a nutshell, one version of the neuron doctrine. These days, with the advent of recording lots and lots of neurons at one time, even more than 10, like you alluded to earlier, we're in the population doctrine era, where a lot of people think that abstracting at the single neuron level is a level too low, and we need to think of everything in terms of populations of activity and the dynamics that those populations give rise to. And what you said earlier would make me think that you're okay with that, because you see every scale, from single neuron up to whole brains, in the control theoretic perspective.
But the population doctrine folks might say, well, we don't need to worry too much about the function of single neurons, because they are cogs in this larger machine, sorry about that analogy, and we don't need to worry about their implementation, how they're doing things and why they're doing them that way, because we just need to pay attention to the population. So where do you sit on that? Because this is very much at the single neuron level.
[01:23:59] Speaker A: Yeah, I mean, I agree with everything you said. You know, it's important to understand how populations of neurons function, but I don't see that as necessarily very distinct from what a single neuron does.
You know, as I said, you can think of each synapse as being a controller, and somehow together they combine into a neuron that does something useful. And the reason I'm focused on the neuron as a controller is just that there's a lot of data and I know how to conceptualize it at this point. I know what the inputs and outputs are and how to describe them, and so on. So that's why I'm doing it. But again, you can think of a brain nucleus or cortical area or brainstem part as being a controller made out of multiple neurons. So I think similar methods would work on different levels.
But there is a million dollar question here that I think you're alluding to is how to go between levels.
[01:25:11] Speaker B: Oh God. Yeah. I mean, that's the dream.
[01:25:14] Speaker A: That's the dream.
And of course I don't have an answer here. I think this is probably the most fascinating question in neuroscience. I already brought up the intersection of physics, engineering, and biology, and here we get, I think, an intersection with things like game theory, economics, and mechanism design, where you have independently operating agents.
[01:25:43] Speaker B: Oh, you said agents. We're going to move on to that in a second. Go ahead.
[01:25:46] Speaker A: Yes, yes, right. And so why do neurons have to be agents? Well, because there is no central authority that tells each neuron what to do. Right.
The brain is decentralized. So they have to pursue kind of their own objectives, their own desiderata, yet they have to work together towards a common objective, because in the end, the brain has to produce one behavior.
[01:26:16] Speaker B: So they're individual agents, every single neuron, and yet they're in a symbiotic agreement with all the other agents.
I mean, do you think of it that way, or do you think that an individual neuron is just selfishly doing its own thing, surviving, and then it happens through development, evolution, that they've come together and our own whole person agency is an emergent property of those individual components doing their own thing? Is that how you conceive of it?
[01:26:49] Speaker A: Yeah, that's my way of thinking about it.
I think a valid analogy is a market economy: you have a bunch of agents that pursue their own objectives. Yet if you set up the system correctly, and that's where mechanism design comes in, you have rules by which, even though each individual pursues its own objective, they act, as you said, symbiotically together to generate some common good, towards the achievement of some common goal.
[01:27:32] Speaker B: This is kind of a silly question, but has this changed your perspective on, sort of, your respect for the single neuron, you know, single cells?
[01:27:41] Speaker A: Right.
[01:27:41] Speaker B: Because they're each individual, living components. And we tend to think of the brain as being composed of these things that are doing things in the service of us as a whole person. Right. But this way of viewing it almost gives a little more reverence to the individual neuron.
[01:28:01] Speaker A: Well, yes, it does. But you know, as with most multicellular organisms, deep inside each cell there is the same DNA. Right.
[01:28:16] Speaker B: Mostly. Most of the same.
[01:28:17] Speaker A: Most. Mostly the same. Yeah. So. So there is a shared kind of rule book.
[01:28:23] Speaker B: Oh, brothers and sisters around me.
[01:28:25] Speaker A: Exactly. They have to follow it. Right. And so, yes, the genes, again, cannot control each elementary action, each spike that the neuron makes. But the genes write the rules by which each neuron generates its spikes.
[01:28:47] Speaker B: Mitya, why does a computer scientist working on AI need to pay attention to any of this?
[01:28:54] Speaker A: Well, I mean, they don't have to, right? As you said, if you measure in billions of dollars, they're doing really well.
It's a tough sell. But I think that the question is, where do we want to get to eventually? And if the goal is to achieve AGI, as we mentioned before, and I define AGI as equaling or exceeding human.
[01:29:24] Speaker B: Intelligence, what the hell does that mean?
[01:29:27] Speaker A: Whatever that means.
[01:29:29] Speaker B: Okay.
[01:29:31] Speaker A: But my argument is that maybe there are various paths to AGI. Right?
But taking the brain inspired approach is the only path for which we have the existence proof.
[01:29:53] Speaker B: Right. It could be a better way of doing it, maybe.
[01:29:56] Speaker A: I don't know. But it may also be a dead end.
Yeah, right. I mean, when they talk about building data centers close to, you know, a nuclear power station because of the energy demands, and a human brain operates just fine consuming 20 watts, then you start asking questions, you know, is this really the best path?
[01:30:26] Speaker B: What are you hung up on? Are there limits or obstacles in your way, limits to the control theoretic perspective of neurons? Or, you know, what are you spending all your time thinking about these days?
[01:30:40] Speaker A: So the big question for us, you know, aside from, like, how do you combine many controllers together to do something useful.
[01:30:50] Speaker B: You should be able to just put them together and, given their objectives, it should be an emergent property, right?
[01:30:57] Speaker A: Exactly. But we don't know what the objectives really are.
[01:31:00] Speaker B: Okay, that's a big.
[01:31:01] Speaker A: Yeah, that is a real difficulty for us. And we are of course trying to find some objectives to work with.
But while we're searching for objectives, we realized that there is a way to skirt this problem a little bit by working on the sensory periphery again, or early sensory processing, which is what efficient coding people got so much mileage out of. Because if you think about a retinal neuron, like a retinal ganglion cell, it's not very useful to think about it as a controller. I mean, on some level it is a controller, right, because it tries to control the downstream part of the brain to perform certain actions.
But the question is, does it get the feedback through this whole big feedback loop, including the rest of the brain and getting back the visual signal? That's the issue, because there is a big time delay. Can you even do credit assignment over this timescale? So that's certainly a question there.
But amazingly, the control perspective helps in this case, where there is no obvious plant to control. Because we think you can think of a retinal ganglion cell as being fooled into thinking that it actually controls its inputs.
[01:32:41] Speaker B: Oh my God. Okay. Talk about anthropomorphization. Yeah, okay, good.
[01:32:45] Speaker A: Exactly. And that actually led us to a very nice re-examination of the predictive coding and efficient coding frameworks, one which, because of the control view, places much more emphasis on the dynamics.
And we view those sensory neurons as analyzers of the dynamics of the external world.
And that kind of changes the perspective for us. So even though it's not control per se, it's a very control-inspired view of what the neuron does.
[01:33:33] Speaker B: Okay, so this conception of single neurons as controllers is itself variable across different brain areas, depending on the cognitive functions needed of different brain areas and neurons.
[01:33:52] Speaker A: Yes, yeah, so that is of course different. If you think about a motor neuron, it's very clear that it's heavily involved in control. It immediately controls some muscle and may get feedback on its performance through proprioceptive mechanisms or whatever.
And as you march back towards the sensory system, it's much more about analyzing the inputs, much more heavily involved in the system identification side of data driven control and less in controlling things. And that view, I think, aligns very nicely with recent experiments where people see motor signals in the primary sensory areas, like in V1.
And so, you know, that is aligned with the perspective from control theory that the brain kind of performs a transformation from the sensory input slowly into the control dimension.
[01:35:17] Speaker B: I see.
So this has been a lot of fun for me. Is there anything that we didn't cover that you wanted to discuss?
[01:35:27] Speaker A: Let's see.
I think this is a lot.
[01:35:34] Speaker B: Yeah, that's what I hear all the time about my podcast. It's a lot.
[01:35:38] Speaker A: Yeah. Well, I should also say, actually, since you asked about being a lone practitioner: it is true that, to my knowledge, we are the only ones who have suggested the neuron as a controller.
[01:35:55] Speaker B: But that's crazy.
[01:35:57] Speaker A: There are multiple people now who think about control theory approaches in the brain. So on that level, there is a small community of people, to the extent that we are having a two-day workshop at Cosyne 2024.
[01:36:12] Speaker B: Oh, nice.
[01:36:14] Speaker A: Devoted to the dynamics and control theory applications in the brain. I don't remember the exact title, but that's what we'll spend two days arguing about.
[01:36:26] Speaker B: Oh, cool. Maybe the last thing I'll ask you is: given that it is a small community, when you're describing what you're interested in or how you think about these things to people, maybe even in the neuroscience community, what is most difficult for them to understand?
What do people get hung up on?
[01:36:52] Speaker A: I mean, I think it depends on what subfield of neuroscience the person comes from. Because for the motor control people, of course the neuron does control.
[01:37:05] Speaker B: It's all control. Yeah.
[01:37:06] Speaker A: Then people who study, you know, the midbrain, the hippocampus, maybe they could see that, if there are loops. Yeah. But then the sensory neuroscientists: what are you talking about?
Yeah.
[01:37:20] Speaker B: Okay.
[01:37:21] Speaker A: So, you know, you have to fool the neuron into thinking it's a controller. Right. That is, you know, a lot to take on.
[01:37:31] Speaker B: All right, Mitya. Well, perhaps we've fooled billions and billions of neurons through this conversation. We'll see. But thank you so much, and continued success in this line of work.
[01:37:44] Speaker A: Thanks, Paul, for having me. It was a pleasure.
[01:37:52] Speaker B: Brain Inspired is powered by the Transmitter, an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. If you value Brain Inspired, support it through Patreon to access full-length episodes, join our Discord community, and even influence who I invite to the podcast. Go to BrainInspired.co to learn more. The music you're hearing is Little Wing, performed by Kyle Donovan. Thank you for your support. See you next time.