Support the show to get full episodes and join the Discord community.
Henry and I discuss why he thinks neuroscience is in a crisis (in the Thomas Kuhn sense of scientific paradigms, crises, and revolutions). Henry thinks our current concept of the brain as an input-output device, with cognition in the middle, is mistaken, and he points to the failure of neuroscience to explain behavior despite decades of research. Instead, Henry proposes that the brain is one big hierarchical set of control loops, each trying to control its input so that it matches an internally generated reference signal. He was inspired by control theory, but points out that most control theory applied to biology is flawed because it fails to recognize that the reference signals are internally generated. Instead, most control theory approaches, and neuroscience research in general, assume the reference signals are externally supplied… by the experimenter.
Henry 00:00:03 So how come neuroscientists have failed so badly at explaining behavior? Obviously they've been trying to do this for at least a century, and a lot of smart people have worked very hard, but the result is, I would say, very disappointing. I think it's healthy in the sense that there are people who are perhaps mainstream and believe that the paradigm is healthy, who want to maintain the status quo, and others, people like me, perhaps in the minority, who think there's a crisis and would like to start a revolution. I think it's exciting and I'm quite optimistic.
Speaker 0 00:00:54 This is Brain Inspired.
Paul 00:01:08 Hello, it's Paul. And today I bring you Henry Yin. Henry runs his lab at Duke University, where he studies learning and behavior in rodents using techniques like optogenetics and electrophysiology. But that's not why he's on the podcast today. He's on the podcast because he's written a few pieces in which he argues that we need a new paradigm in neuroscience to explain behavior: that essentially we are barking up the wrong tree trying to study the brain like an input-output device which creates representations of objects in the world and then programs the body to act accordingly. Instead, Henry looks to control theory and suggests that the brain is basically one big hierarchical set of control loops, each of which is trying to control its input so that the input matches a set of internal reference signals. Control theory came out of the early cybernetics work, but Henry argues that they made a key mistake due to their engineering-like approach.
Paul 00:02:10 And the mistake was that they failed to consider that the reference signal, the signal the system controls its input to match, is generated internally by our autonomous biology. The cybernetics approach, and much of the rest of neuroscience, Henry argues, places the reference signal outside of the body, in the hands of our experimental control, but we need to be looking inside. Okay, that will make more sense when Henry explains it more. I link to his work in the show notes at braininspired.co/podcast/119, where you can join the awesome group of Patreon supporters as well. Thank you, Patreon supporters. Also, a quick announcement: I am finally going to be releasing my Brain Inspired course, which is all about the conceptual landscape of neuro-AI, so all the topics that we talk about on the podcast, but in the form of a series of video lessons.
Paul 00:03:05 So next week on my website, I'll be releasing a limited-time video series. In those three short videos, I'll discuss why this marriage between neuroscience and AI is so exciting, show some examples of what to expect from the course and some of the topics it contains, and then give the details of the full content of the course and how to purchase it. The videos will only be available from November 17th to November 20th, so if you're interested, head to braininspired.co/bi-workshop, that's B-I dash workshop, and I'll put that link in the show notes for this episode. It's during that time span, November 17th through 20th, while the video series is live, that the course will actually be available to purchase. So check that out if it sounds interesting. All right, here's Henry. Henry, thanks for being here. So I know that by day you are not a crisis counselor, but the main topic, I suppose, that we're going to talk about today comes from a chapter you wrote in a book on perceptual control theory, and the title of the chapter is "The Crisis in Neuroscience."
Paul 00:04:15 There's a lot to talk about, actually; it's really interesting stuff. But by day, can you tell us a little bit about what you do on the empirical side?
Henry 00:04:25 Yes, by day I'm a systems neuroscientist. I work on the role of basal ganglia circuits in behavior, in particular instrumental, goal-directed behavior, and I use mice for the most part in my research. So I'm an experimental neuroscientist.
Paul 00:04:47 Was it your research that brought you to think about these things that you write about? You've been writing about these topics since at least 2013; I'm not sure if you wrote about them earlier as well.
Henry 00:05:01 Yeah, I've been thinking about these topics probably since graduate school, so for 20 years or so. And I started writing about them, as you said, in 2013 or a little earlier than that. So it's been a while.
Paul 00:05:25 Okay, well, let's not wait any longer then. There's a lot to unpack, so I don't expect you to summarize the entire chapter here, but can you give us the overall picture of the crisis in neuroscience that you write about?
Henry 00:05:40 So I used the word crisis in a Kuhnian sense, based on Kuhn's book The Structure of Scientific Revolutions. The idea is that, as we all know, there is a scientific paradigm, which is a set of common assumptions that most scientists in the field take for granted. When you have a crisis, it's usually due to discrepancies between new observations and the accepted model, these assumptions that everybody accepts. So then the idea is that you can either maintain the status quo or you can start a scientific revolution. That's the nature of a crisis in science in general, and the question that I raised in the chapter was: how come neuroscientists have failed so badly at explaining behavior? Obviously they've been trying to do this for at least a century, and a lot of smart people have worked very hard, but the result is, I would say, very disappointing: after a century of work, there is no accepted model of any behavior.
Henry 00:07:14 The things that we have learned about the brain don't seem to explain how behavior works. So that is disappointing and surprising, in my opinion. I think the reason, of course, is not that the brain is too complicated. It's not, as people normally say, that the brain is the most complex object in the universe and therefore it will take forever to understand how it works. I don't think that's the reason, although it's the common excuse. I think the problem is that the accepted paradigm in neuroscience and in psychology is wrong. I will call this paradigm the linear causation paradigm, in which essentially you accept that the organism receives inputs and generates outputs; the input is sort of sensory in nature, and the output is motor.
Henry 00:08:29 So the output is behavior, and there is a causal relationship such that the inputs are somehow responsible for the outputs. And so the goal in neuroscience is simply to discover the function that will link the inputs with the outputs. The input is the cause, the output is the effect. According to this paradigm, the brain or the nervous system is somehow responsible for a sensorimotor transformation. It will compute various things, it will probably take many steps, but somehow the product is your behavior. So I argue in the chapter that this assumption is basically wrong, and that's the reason people have failed to explain behavior. It's not because the brain is too complex.
Paul 00:09:33 How did you get into control theory?
Henry 00:09:37 Yeah, good question. So I talked about control theory as an alternative explanation. That is the model I would use to explain behavior, because there is only one class of systems in the universe that does not obey this kind of linear causation model. Cause-and-effect explanations do not really apply when you have a closed-loop, negative feedback control system. And that's why I talked about control theory.
Paul 00:10:22 That's interesting, and we'll talk more about the control theory approach. You do an analysis of the cybernetics of old, Norbert Wiener and company, and describe what they got wrong. I mean, cybernetics, it seems, is having kind of a comeback, but I suppose you're worried that it's coming back and it's still wrong. What's the difference between that old cybernetics approach to control theory and yours? What did they get wrong? What were they missing that you argue for?
Henry 00:10:58 Well, let's start with the basic control loop model. The basic control loop is quite simple; in fact, I think its simplicity is part of the problem, because everybody assumes that they understand it when they don't. The basic control loop has essentially three components: an input function, a comparison function, and an output function. The comparison function takes the input, compares it with some reference signal, and generates the error signal, which is really the discrepancy, the difference between these two signals, and then that error signal is used to drive the output. If there is negative feedback, then the output will have a certain effect on the input, and so that closes the loop. I think the problem with cybernetics, with Wiener's model, is that he was actually under the influence of the linear causation paradigm.
Henry 00:12:14 His approach was very much the standard engineering control theory approach. The problem is not with the mathematics, with the equations of control; the problem is what I would call a system identification problem, where they are accustomed to treating the reference input as the input to the controller. In other words, if you are the user of some servo system, let's say you're using a thermostat, then of course, as the user, you set the temperature, and it seems like that setting is the input to the system, and the output is whatever the AC does to control the temperature. Now, this is quite misleading, because the reference signal in a biological organism is actually inside the organism; literally, it's inside your brain. It's not something that you can inject into the system from the outside.
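To make the loop Henry just described concrete, here is a minimal sketch (my own illustration, not code from the chapter): an input function, a comparison function, and an output function, with the reference signal generated inside the system rather than supplied from outside.

```python
# Minimal negative-feedback control loop (illustrative sketch, not from the chapter).
# Note that what the loop controls is its *input* (the perception), not its output.

def input_function(env_value):
    # Perceptual signal: here simply the sensed value of the controlled quantity.
    return env_value

def output_function(error, output, gain=2.0, dt=0.01):
    # A simple integrating output function: the error signal drives the output.
    return output + gain * error * dt

reference = 1.0        # internally generated reference signal (not set by any "user")
environment = 0.0      # state of the controlled quantity in the world
output = 0.0
disturbance = -0.5     # an external push; the controller never senses it directly

for _ in range(2000):
    perception = input_function(environment)
    error = reference - perception        # comparison function: just a subtraction
    output = output_function(error, output)
    environment = output + disturbance    # environmental feedback closes the loop

print(round(perception, 3))  # ends up near the reference despite the disturbance
```

The perception converges on the internally set reference even though the disturbance never stops pushing on it, which is the property Henry returns to later with the test for the controlled variable.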
Paul 00:13:33 But we do generate those signals, those reference signals. In a sense we autonomously generate them.
Henry 00:13:40 So the key is autonomy. The key is that the reference signals are extremely important in control systems, and they must be generated within the system. And according to my model, at least, that's the job of the nervous system. Essentially what you have is a hierarchy of neural control systems, which can generate different reference signals at each level. And these reference signals are usually changing all the time, with the exception of a few, or relatively few, homeostatic control systems, which are important for things like body temperature.
Paul 00:14:33 Yeah. So I guess the big picture is that you conceive of the brain, or brains in any species, I suppose, as a set of hierarchical control systems, with each level having its own reference signals, right? That's straightforward to understand for homeostatic mechanisms like the one you mentioned, our internal thermometer, our temperature reference signal. But then you extend it to behavior, right? So you make the comparison with the classical neuroscience model, which you've already described, where we see something happen, we have some sort of internal representation or model of that thing, and then we act on it. But in your scheme we have no internal model of what we're acting on, or do I have that wrong? Do you have room for a model that's generated through these hierarchical control processes, or is it control processes all the way down?
Henry 00:15:41 Great question. I think first we have to be clear on what a model is. If we're just talking about representations, then yes, we have representations; we need representations. But I think what you're talking about is very common in the field of motor control, where they use all these models, which in my opinion are completely unnecessary, and their models are actually very detailed models of the external environment. That's due to a mistake in the analysis of the interaction between the system and the environment, which I mention in the chapter. So this is a direct consequence of the cybernetic model, this need for computing the environmental properties. I think it's imaginary, and it doesn't work very well in practice. For example, in robotics, if you rely on this kind of inverse and forward computation, the computational challenge quickly overwhelms you. And that's why we still don't have very good…
Paul 00:17:07 Robots, especially when the wind blows.
Henry 00:17:11 Yes, exactly. Especially in any kind of unpredictable environment, when the disturbance cannot be computed ahead of time.
Paul 00:17:24 So we've talked about homeostatic mechanisms, and you've talked about the motor domain. I guess my question is, well, let me back up. I know you said the brain is not too complicated, but it is pretty complex, and it seems like there's room for a lot of different kinds of representations and a lot of different algorithms being run. I'm wondering how much of the brain you see as devoted to control systems?
Henry 00:17:59 Well, first, the principle is that in a control system, what is controlled is the input and not the output. And if you accept that, you have to understand that you can basically only control what you're able to sense. That means that to the extent that you need to control some complex variable, you must have a fairly good representation of it, and that is the job of the input function. I mentioned earlier that you have three components; I would say the input function is by far the most complex part of a control system in the brain. The comparison function is relatively trivial, because you're just doing a subtraction.
Paul 00:18:52 Yeah. You found that there's a linear relationship.
Henry 00:18:55 Right. And for the output function there could be some complexity, but for the most part it would probably involve some integration, or maybe differentiation, or a combination. But the input function is tricky, because there you do need to represent whatever variable it is you're trying to control. The simplest example, of course, is temperature, and that's not a big deal. But if you're trying to control something more complex, for example if you're trying to follow someone, then you have to represent that person somehow, and that's not as easy, because the sensory representations and the higher-level, invariant object representations are needed. Then, essentially, you just perform control on that variable.
Paul 00:19:57 And then, as that person ducks around the corner, you actively try to... I guess your reference signal would be to have them at a zero-degree line of sight, directly in front of you, let's say, and when they turn a corner, your brain's job as a control system is to move your body so that it returns them to a direct line of sight.
Henry 00:20:20 Exactly. And then you would have problems if, for example, there's something blocking your view. You might need some memory.
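As a rough illustration of how the hierarchy Henry described earlier might handle this following example, here is a sketch (my own, with invented gains and a one-dimensional world): a higher-level loop controls the perceived distance to the person, and its output is not a motor command but the reference signal handed to a lower-level loop that controls perceived speed.

```python
# Two-level control hierarchy sketch (hypothetical illustration, 1-D world).
dt = 0.05
target_pos, my_pos, my_speed = 10.0, 0.0, 0.0
distance_ref = 2.0                 # higher-level reference: stay 2 m behind the person

for _ in range(400):               # 20 simulated seconds
    target_pos += 1.0 * dt         # the person walks away at 1 m/s

    # Higher level: perceive distance, compare to its own reference,
    # and emit a reference signal for the level below.
    perceived_distance = target_pos - my_pos
    distance_error = perceived_distance - distance_ref
    speed_ref = 5.0 * distance_error          # this output becomes the lower loop's reference

    # Lower level: control perceived speed toward the reference it was handed.
    speed_error = speed_ref - my_speed
    my_speed += 4.0 * speed_error * dt
    my_pos += my_speed * dt

print(round(target_pos - my_pos, 2))  # settles near the 2 m reference as the person keeps moving
```

Neither loop models the person's trajectory; each just keeps its own perception near its reference, and the higher level acts only by adjusting the lower level's reference.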
Paul 00:20:33 Yeah, right. Well, first of all, watch out people: if Henry's following you, it's really just a scientific experiment, don't be creeped out. But you just mentioned memory. And what about the case, and I don't want to badger you all day with this kind of questioning, but what about the case where you're not actually following the person, but you have to imagine where they might be going based on their history? Say they love ice cream, so you imagine they're going to the ice cream shop. Do you not then have to represent, let's say, the path to the ice cream shop in order to close in on that person? Is that still a control problem? Do you see that as just a hierarchical version of it?
Henry 00:21:25 I think what you're suggesting is: can you predict where they're going to be in the future and then act accordingly? Of course it's a control problem, but I think the difference is that you're not getting direct sensory input; instead you're trying to predict, based on your experience and learning, where they're going to be. That's a slightly different problem, but in principle the control problem is still the same. As for the prediction problem, it's not excluded from the model. I would say that in the control hierarchy you do have so-called imagination, which is basically when a control loop is able to send its output to its own input function without going through the external environment. That's called imagination. And then, for these higher functions, you do need memory, as I said. I think these can be viewed as additional functions that you add to the control hierarchy to help you control. But to be honest, I'm not currently concerned with these questions, because I think they are sort of advanced; they're not necessarily difficult, but I think it's more important to understand the basic function of the nervous system, which I argue is to control various inputs. That's my perspective.
Paul 00:23:10 Yeah. So in some sense this is a unified grand theory of the brain, I suppose.
Henry 00:23:16 Uh, yes, indeed.
Paul 00:23:18 Okay, all right, great. So you mentioned the word prediction, and you were talking about that for a moment there. I want to go ahead and interrupt us and play you a question from a listener, actually. This is the person who recommended you come on the podcast. So you can answer the question, and then we'll get back on track here.
Henry 00:23:39 Okay.
Speaker 3 00:23:41 Hi, Dr. Yin, my name is Jeffrey Short, and I'm a mechanical engineer who's just started in the field of neuroscience. I really appreciate the thought-provoking perspective you shared in your chapter as I try to get oriented to the field. My question is around the potential role of prediction in the hierarchical control system model you describe. As I'm sure you know, there are other models involving minimization of error resulting from comparison of top-down and bottom-up signals, but many of the other ones I've seen so far emphasize prediction. For example, Paul recently had Anil on the podcast, and they spoke about a predictive-control-based model. I didn't see any mention of prediction in the model you put forward in the chapter, though. So can you comment on why you favor a model that doesn't emphasize prediction, and whether there are any experiments that could be, or have been, done to lend credence either way? Thank you.
Paul 00:24:25 Do you feel like taking a stab at that?
Henry 00:24:28 Yeah, I suppose I can do that. As I said, I'm not against prediction. I think there's a role for prediction in this type of model, but what people often call prediction is actually not prediction, or at least it's not achieved by predicting the future. What people are usually talking about can be achieved by controlling a different set of variables. In other words, through learning, what you're trying to control changes. You're trying to control another aspect of the environment that is perhaps causally related to the variable you were initially trying to control. The classic example is Pavlovian conditioning, where you have, let's say, a bell and food, meat powder. There, I think what's happening is that you are reorganizing the input function of the control system, so that you're no longer only trying to control for the impact of the food in your mouth, the dry meat powder, which is normally why you have to salivate; instead, the input function now incorporates the auditory input. And so whenever the auditory input, the formerly neutral stimulus, is presented, you turn on this kind of meat-powder-controlling system.
Henry 00:26:14 That's considered the classic example of prediction, and people traditionally have viewed Pavlovian conditioning as a simple example of prediction, right? There are a lot of models that attempt to explain Pavlovian conditioning that way. But according to my analysis, it really represents an attempt to control a different aspect of the environment. So I'm not against prediction, but I think there is a very important alternative that people have not really thought about, which is just online control of a different set of sensory variables that happen to be predictive. That's, for example, when you see a dark cloud and you turn on all your avoidance control systems in order to avoid the rain. But that's after learning the causal relationship, the predictive relationship, between the cloud and the rain.
Paul 00:27:21 So in essence, you spend your life learning, and a large part of learning is generating new reference signals and/or adjusting your reference signals. Do I read you correctly?
Henry 00:27:35 Yes, that’s correct.
Paul 00:27:37 So in the chapter, you talk a little bit about how we can move forward, without giving a full-blown research program, for instance. And I know that in your own research you're using these principles and applying them to study behavior. So I'm wondering if you could just summarize what you think the way forward is.
Henry 00:28:02 Well, you ask very difficult questions. So, my vision for the future of neuroscience, in other words?
Paul 00:28:12 Well, because you outline first steps in the chapter, right? Some principles that you could follow. Essentially you give three steps for what we'll need to be looking for and how to test for controlled variables, right?
Henry 00:28:27 Yes. I think to begin with, we first have to identify the controlled variables, and I would start with very basic variables that are not learned, or perhaps don't require too much learning, because they're easier to study. Then you have to apply the test for the controlled variable in order to study those. And of course, you would then have to discover the different components of the control system and how they're implemented by the brain. The test for the controlled variable is simply a test that is mandatory when you're analyzing biological control systems. You first have to come up with a hypothesis about what the controlled variable might be. And because we know that the output of a controller will systematically mirror the environmental disturbance, once you know what the controlled variable might be, you can introduce disturbances to that variable and see whether there's any resistance from the control system. If you're correct, then you will see compensatory outputs that resist the effect of the disturbance. So basically you're applying disturbances that would affect the controlled variable if it were not under control, and if you observe some sort of compensatory output, then that is probably the right controlled variable. If not, you have to start over and repeat the whole process with a different hypothesis about the controlled variable.
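In code, the logic of the test for the controlled variable might look something like the sketch below (my illustration; the "organism" is a toy integrating controller, and all names and numbers are invented): disturb the hypothesized variable and check whether something in the system systematically opposes the disturbance.

```python
import random

def organism_output_step(perception, reference=0.0, gain=0.5):
    # One step of a toy integrating output function: opposes error in the
    # hypothesized controlled variable.
    return gain * (reference - perception)

def settle(disturbance, variable_is_controlled, steps=100):
    # Expose the hypothesized controlled variable to a known disturbance and
    # see where it ends up once the loop has settled.
    output, variable = 0.0, disturbance
    for _ in range(steps):
        variable = disturbance + output          # environment combines output and disturbance
        if variable_is_controlled:
            output += organism_output_step(variable)
    return variable

random.seed(0)
for d in [random.uniform(-2, 2) for _ in range(3)]:
    uncontrolled = settle(d, variable_is_controlled=False)
    controlled = settle(d, variable_is_controlled=True)
    # If the variable is controlled, the output mirrors the disturbance and the
    # variable barely moves; if it is not, the disturbance shows up in full.
    print(f"d={d:+.2f}  uncontrolled={uncontrolled:+.2f}  controlled={controlled:+.2f}")
```

If disturbing the candidate variable produces no such compensation, the hypothesis is rejected and you try a different candidate variable, as Henry describes.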
Paul 00:30:28 So, yeah, initial steps toward a whole new neuroscience. But one of the things that you write in the chapter, toward the end, is that if the above analysis is correct, then a disturbingly large proportion of work on the neural substrates of behavior will have to be discarded. So is the chapter being well read, and if so, what kind of feedback are you getting from the neuroscience community and/or other communities?
Henry 00:31:03 Oh, no. First of all, I don't think many have read the chapter, but obviously you have. And, you know, I get this uncomfortable feeling that maybe after this, more people will read it.
Paul 00:31:17 Likely so. But you wrote it, so it can't be that uncomfortable, right?
Henry 00:31:22 Right. But you know, these book chapters are not usually read by a lot of people. So far I haven't received much feedback from other neuroscientists; I'm not sure most of my colleagues even know that I wrote this. So it's hard to anticipate what people might say. I don't know. I mean, what is your reaction? Once upon a time, you were a neuroscientist.
Paul 00:31:52 Right. And we could use some of my own work as an example of work needing to be discarded under this proposed paradigm. I have multiple thoughts. That's why I asked how much of the brain you think is devoted to this control aspect, because it's hard to reconcile, for example, what I consider my own rich subjective experience, my thoughts and my imagination, with a control system approach. It seems like there needs to be something more, and I don't know how you get from a hierarchical control system to what I consider my fairly rich subjective experience. Do you see a path forward through that? And that's just one example; there are of course other examples, like different areas of the brain being devoted to different cognitive functions, et cetera. But to you, these are all in the service of control.
Henry 00:32:58 Right. I'm not sure exactly which aspects of your subjective experience are...
Paul 00:33:06 Well, like right now I can...
Henry 00:33:08 ...control. I'm sure you're talking about a lot of sensory experience. Like, you see that desk over there; you're not actively trying to control it. But I will say that your sensory system can provide you with a lot of options, and each of those perceptions might in principle be controllable. Remember, the principle is that you can only control what you can perceive. So of all the perceptions that you have right now, and it could be a very rich subjective experience, I'm not you, of course, so I don't know for sure, some of them could be controlled, and of course we can demonstrate that. The question is really what happens when you try to control one of these perceptions. For example, if I don't like that desk, if it's offensive, I can turn around or I can walk out; or if the temperature in the room is too low, if you're cold, you can leave, or you can put on a sweater. These are all behaviors that I think are generated by control systems.
Henry 00:34:30 But I'm not sure that the richness of your subjective experience per se is incompatible with the control hierarchy, anyway.
Paul 00:34:42 What about, let's say, okay, I know that these words are fraught, but the concept of mental representation, right? I can close my eyes, and we talked about imagination earlier, I can close my eyes and imagine my future house, a giant mansion on a hill in Costa Rica or something like that. So there's that kind of subjective experience as well. It feels like I have a rich representation of not only my immediate perceptual experience, but of possibilities, and memories, et cetera. Those feel like they are mental representations. And the concept of representation, I know, is philosophically tricky as well.
Henry 00:35:33 Actually, I don't think representations are tricky at all. I think they're just literally true, because you have signals in your brain that represent things, including this big mansion. For example, if that's a real goal that you have, and you're working very hard, you're interviewing all these people, and let's say your podcast becomes the most popular show, then of course you can reach that goal, right, if you were actually doing this for that house. So in that sense, yes, I think goals, especially in humans, can be relatively abstract and fancy. But that in itself is sort of independent of whether you can exert control over it. In fact, some goals you obviously do try to control, and that's the definition of goal-directed behavior; the behavior is just a control process. We say that because you are always comparing your ongoing inputs with your desired state. So let's say you imagine there's this nice house that you like, but your current house is too small, and that's something you're working towards; that's what I mean by a control process. Whether you can imagine something 40 years from now, or have fantasies about anything in the world, I don't think that's so relevant, because that in itself does not falsify any control model.
Paul 00:37:20 I guess my recurring question is just how much of the brain's functioning to think of as devoted to control processes.
Henry 00:37:30 As I said, I think the input function is the most complex part, and all these rich representations that you mentioned are really part of the input function. Even when you imagine things, you are using the perceptual channels; they're just vague sorts of perceptions whose source is not the external world but your own brain. That's the major difference. And that's why imaginary inputs and actual perceptual inputs will compete: because they use the same perceptual channels at the higher levels. That's why, when you're daydreaming, you can't perceive what's in front of you. So I think that actually supports the idea that even imaginations can be used as a sort of input to a higher-level control system.
Paul 00:38:29 Actually, I buy that. Before we move on: you drop the word teleology in the chapter, and I believe you've used it in the past as well. My last guest, Johannes Jaeger, talks a lot about how we need it, and a lot of other people seem to be talking about this, although that's my bias, I suppose, as my own interests have taken me down a path that is crowded with teleology advocates. But can you talk a little bit about why we need to reinvigorate the notion of teleology and accept it as a valid scientific concept?
Henry 00:39:17 Yes. I think teleology simply means goal-directed. Of course there's a long history of teleology, but the telos is basically the goal or end state, and that has always been a dominant concept used to explain behavior. But I think something happened after Galileo and Newton. In physics, the modern scientific revolution, the first scientific revolution, the findings of Galileo and others appeared to falsify this notion of teleology, because the Newtonian physical laws do not contain any element of the final cause. The final cause is "that for the sake of which." For example, Aristotle's example was: I'm running in order to become healthy. What follows "in order to," in other words the state of being healthy, is the goal or telos. That's the purpose of your behavior, and your behavior is explained by this purpose. Now, according to modern physics, that can't possibly be true, because, again, as I mentioned, there's linear causation, F equals ma; there's no final cause there.
Henry 00:40:56 So it seems like everything in the universe can be explained by these simple physical laws, and you don't need teleology to explain behavior, and therefore people reached the conclusion that you have to abandon teleology. In fact, the whole history of modern psychology and neuroscience is a history of various attempts to abandon teleology, both in the vocabulary and in the mechanistic explanations. In my chapter, I argue that this is simply wrong. It's a huge mistake, because, very simply, teleology is the main property of a control system. Everything in the universe basically follows Newtonian laws, okay, with some minor exceptions. But there is this class of systems called feedback control systems, which are sort of the exception, in that you have to use circular causation to describe their properties, because at the same time that the input is affecting the output, the output is also affecting the input.
Henry 00:42:21 And the way that the output affects the input is quite different, but they are certainly simultaneous. Because you have these two equations, and things are changing simultaneously, you can't use linear causation to describe the properties of this type of system. So that's the exception. Basically, that means that in physics you study open-loop things, things with no feedback, if you will, while in biology everything has feedback; everything is teleological. So I would say that, yes, in that sense Aristotle was right: there is a final cause, as long as you're talking about control systems. Of course, he didn't know how it worked. And that's the distinction, because the way it was used by people like Aquinas and many others in history was to make what is really a religious argument, right?
Henry 00:43:30 It became the basis for how God is all-knowing and knows the purpose of everything on earth, so the reason the rock is falling is that God intended the rock to fall. That's another type of misunderstanding, in my opinion. And that's also why there is this conflict between the so-called scientists and people who believe in teleology: because teleology is considered unscientific. So I'm not advocating teleology per se. I'm just saying that the properties of teleological systems are basically the properties of control systems. And if you think that the nervous system is a control hierarchy, then obviously you have to agree that it's teleological, because, well, it's literally true: the way these things run is that you need this internal state, this internal reference signal, to be there first, before you can generate the right behavior to reach the desired state.
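The circular causation Henry describes can be written compactly (a standard control-theory sketch in my own notation, not the chapter's): with perceptual input $q_i$, output $q_o$, reference $r$, and disturbance $d$, two relations hold at the same time,

$$
q_i = f_{\text{env}}(q_o) + d, \qquad q_o = f_{\text{org}}(r - q_i),
$$

where $f_{\text{env}}$ is the environmental feedback function and $f_{\text{org}}$ is the organism's output function. The input depends on the output at the same moment the output depends on the input, so neither can be singled out as the cause of the other.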
Paul 00:44:49 Okay. So the telos is not the reference signal per se, but it is the end result of controlling for the reference signal. Is that fair to say?
Henry 00:45:00 I think before you understand control systems, you always get confused about the consequence and the purpose, as if they were the same thing. But of course they are not, because one is just a signal inside your brain, and you might fail; it's not like you're guaranteed to succeed in your attempt to control. And of course, this also explains the difference between accidental and intentional behavior and all that. So, yes, traditionally people get very confused about the concepts of purpose, of consequence, of goal, but once you understand control systems, it's not a big deal; it's very straightforward. Anyway, that's just my take. I know a lot of philosophers will have a problem with this.
Paul 00:45:53 Yeah. All right, great. I like the attitude. I want to ask you about AI in a bit, but one more thing on the neuroscience side, or at least one more thing we can talk about if you like. Thinking about this circular causation in biological autonomous agents, one of the things that you advocate is that instead of studying, say, 40 different animals and averaging their behavior, or looking for effects that way, it would be more fruitful to study one individual, but to do it for a long time and to study it continuously, on a continuous timeframe, because of the circular causation: because the inputs are affecting the outputs and the outputs are affecting the inputs in the closed-loop control circuit.
Henry 00:46:47 Yeah, that's a tough question. So traditionally, as you know very well, in, for example, monkey work...
Paul 00:46:53 Two is the golden number in monkeys. Yeah.
Henry 00:46:57 Yeah. It's funny, because a lot of neuroscientists like to criticize monkey research because the N is too low. I hear that a lot: they use two animals, or maybe three monkeys, and how can you believe the data when there are so few animals? I think that's completely misguided, because it's not really the number of animals; it's the amount of data that you collect and, more importantly, the quality of the data. In the traditional analysis, you're basically doing some sort of input-output analysis. You're manipulating the input, because the input is the so-called independent variable, and then you have the behavioral output, which is the dependent variable. So you're always testing the effect of variable X on measure Y, essentially, and what you're trying to identify is the function that connects the two.
Henry 00:48:01 So: if I vary the amount of reward, what happens to the firing rate of the cell? Or if I manipulate the attentional demand of some task, what happens to the firing rate? That sort of research. And this is difficult. If what I suggest is correct, then all this work is not worth your time. The problem, of course, is that the variable you're manipulating is not necessarily the input. Usually people are trying to identify some effective stimulus, but the effective stimulus, as traditionally defined, is something that will reliably produce the behavior you'd like to study. In reality, it's not the input from the perspective of the organism; it's the inferred input from the perspective of the scientist observer, the third-person perspective.
Henry 00:49:18 And that's very dangerous, because there is an illusion, what I think Bill Powers called the behavioral illusion. Basically, if you treat the disturbance, which is the input from the eyes of the observer, as the input, and the behavior as the output, then it looks like you have identified the organism function, or the neural function, expressing the behavioral or neural output as a function of the input. But that's the illusion; it's not true. In reality, this function does not describe a property of the organism. It actually describes, or mirrors, the environmental feedback function. So when the disturbance is considered the independent variable and the output the dependent variable, the function that you discover is not the real input-output function; whenever there is control, it actually reflects the inverse of the environmental feedback function. I know that's not very easy to understand, but basically, what you think is a property of the nervous system, if you use this approach, is actually a property of the environment. So this is probably the most vicious...
Henry 00:50:50 Yes, the most vicious trap in the history of neuroscience.
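A linear worked example may help here (my sketch, following the usual presentation of Powers' behavioral illusion rather than the chapter's own equations). Suppose the environmental feedback function is linear, $q_i = k_e\,q_o + d$, and the loop has enough gain that the controlled input stays near the reference, $q_i \approx r$. Then

$$
q_o \;\approx\; \frac{r - d}{k_e} \;=\; -\frac{1}{k_e}\,d + \frac{r}{k_e},
$$

so the relation an observer measures between the disturbance $d$ (treated as the stimulus) and the output $q_o$ (treated as the response) has slope $-1/k_e$: it is the inverse of the environmental feedback function, and says almost nothing about the organism's own input-output properties.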
Paul 00:50:56 All right. So, anything else from the chapter that we didn't cover? You actually give a lot of examples from history. You talk about Sherrington and his experiments, and Adrian, and lots of other people from history as well, and give examples of how some famous people got it wrong from this perspective. There's a lot more in the chapter. Did we miss anything that you think we should cover here, or do you think you've dug yourself a deep enough hole?
Henry 00:51:32 Oh, I think one of the things I suggested, if I remember correctly, for future research is this concept of using continuous measures. Right, I think you mentioned that. And, sorry, I have to use the monkey experiment again. In the monkey example, as you know, you do chair training; the monkey is restrained, and usually only a limited set of behaviors are measured, let's say hand movements, pressing a button or moving a joystick, or eye movements, saccades. But the most important problem, the most important limitation, is that the measures are discrete events; they are timestamps. And then what you do is acquire your single-neuron activity, you get the single units, and you plot these peri-event histograms. I'm sure you did this; I'm quite familiar.
Paul 00:52:33 Yes.
Henry 00:52:35 And so there's this strange assumption that essentially the only things that matter in behavior are these timestamps, these events, which are actually created by the scientist. They're not, I think, a reflection of the actual behavior of the animal; they're whatever the scientist considers important or relevant in this particular behavioral task. And then what you look at is the neural activity before, or after, or peri this event, and then you reach conclusions based on various manipulations. I think that is very problematic, and this has nothing to do with control theory or anything like that. I'm just saying this is a clear limitation of the experimental approach: you're not even attempting to measure behavior. So I think that's a big problem, because traditionally, whenever you look at the relationship between neural activity and behavior, you use this kind of approach, and your conclusions, I think, are going to be very limited, because you're not measuring behavior continuously.
Henry 00:54:03 You might be recording neural activity continuously, though. For example, in our work, one of the things we found was that when we measured behavioral variables continuously, for example kinematics, and we actually allowed the animal to move, there is a remarkable linear relationship between the neural activity, the firing rate, and the kinematics, and this kind of correlation is much higher than anything ever reported in the history of neuroscience. So I think that in itself is a major discovery, the nature of this correlation, because it's completely unknown. You understand that for many decades, neuroscientists have been trying to find a relationship between neural activity and behavior, and for the most part they failed. Whenever they come up with a correlation, or coding, or encoding, so to speak, the relationship seems, let's say, subtle. I mean, there is no clear relationship.
Henry 00:55:11 The correlations are low, and in part because of these failures, they have largely given up. But our results suggest that, in fact, whenever you measure behavior properly, there is a remarkable linear relationship between certain behavioral variables and the neural activity. And this is not that surprising, because behavior is continuous. Even though we might represent behavior as discrete events at a very high level, for example, that might be what you're consciously aware of, that's not the case when you're measuring the actual behavior generated: there is a duration, it starts at some point, it takes some time, and then it stops. So calling this a discrete event, I think, is misleading. At the least, our results show that you can get very interesting data if you simply attempt to measure the behavior. And once you get these novel results, you have to explain them, right? How do you explain the fact that you have neural activity that slightly precedes the kinematics achieved by the body, and is basically a direct representation of something that hasn't been achieved yet but, with a short lag, is being achieved by the body? How is that possible? How do you achieve the desired positions if these signals are not literally the descending reference signals?
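For readers who want a feel for the kind of analysis Henry is describing, here is a toy sketch (entirely synthetic data and invented numbers, not the lab's pipeline): correlate a continuously measured kinematic variable with a continuously estimated firing rate across a range of lags, and ask whether the neural signal leads the movement.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 60, 0.01)                               # 60 s sampled at 100 Hz
velocity = np.sin(0.5 * t) + 0.1 * rng.standard_normal(t.size)
rate = 20.0 * np.roll(velocity, -5) + 40.0               # synthetic rate leads velocity by 50 ms
rate += rng.standard_normal(t.size)

def lagged_r(x, y, lag):
    # Pearson correlation between x at time t and y at time t + lag samples.
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return float(np.corrcoef(x, y)[0, 1])

lags = range(-20, 21)                                     # -200 ms to +200 ms in 10 ms steps
best = max(lags, key=lambda k: lagged_r(rate, velocity, k))
print(f"peak correlation at {best * 10} ms (positive means the firing rate leads the kinematics)")
```

With real recordings, the same logic applies: a continuously measured firing rate that tracks the kinematics with a short lead is at least consistent with reading it as a descending reference signal rather than a response to the movement.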
Paul 00:57:07 So, Henry, by the way, I love the chapter, and of course I recommend it to everyone else. Can I ask you how this relates to current artificial intelligence? On the one hand you have reinforcement learning. And I know that you've made robots, or a robot, using this kind of control theory approach, and I'll link to that paper as well; the robot, I know, is made of very cheap parts, but actually performs really well in this continuous manner, and it's a system of hierarchical control processes. What I'm curious about is how this kind of approach could help inform artificial intelligence.
Henry 00:57:52 It's a great question, but it's too big; it would probably require a separate session. I guess the short answer is that I don't think current AI is very useful, and the main problem is actually the same problem I talked about before. For example, reinforcement learning is just another example of the classic paradigm; it's an attempt to explain teleology without using teleology, and that's why the concept of reinforcement is circular. That's actually sort of my background, I did learning theory, reinforcement learning, so we can talk about that in the future, maybe. So yes, obviously there are limitations there. I will say that what people don't realize is how bad these systems are, how bad current AI is, how bad reinforcement learning is. And that's because they never think about the computational power, the energy consumption, and things like that.
Henry 00:59:09 Obviously there has been progress, so it's better than, let's say, 20 years ago. But I think a lot of the progress is just in computational power; if you were forced to use computers from 20 years ago to run current AI, it just wouldn't work. And I don't think, for example, that the biological brain has a lot of computational power. It's significant, but it's not even close to what these digital computers can do. So I think, in a way, the current approaches in AI and robotics are irrelevant. But again, that's why I don't know if I'm comfortable talking about this; it's a really big question, it's complicated, and I don't want to offend anybody. Of course there are AI people who care about efficiency, but they just don't have enough constraints.
Paul 01:00:24 Do you know whether there are AI competitions that respect power usage, power consumption, and sort of normalize for that? Not that you could aim for a system that uses the same amount of power as the human brain, for instance, or something like that, but there could be...
Henry 01:00:47 Honestly, I think they should try to do that. If you have such a constraint, then you probably come up with smarter designs, at least something in the ballpark, I would say. Which is interesting, because everybody cares about energy these days, right? But in AI, they don't care about electricity. So let me put it this way: in terms of AI and robotics, you can ask any expert in robotics whether it would be trivial to build, let's say, a robot with more than 50 degrees of freedom, and I guarantee you they will say that it's extremely difficult, if not impossible; to my knowledge, nobody's done it. But using our approach, it would be trivial. That's the major difference. And it doesn't even require much computational power; it doesn't require anything significantly different from what we used in the published paper. That much I can tell you. But you can ask an AI expert or a robotics expert how difficult that would be, and I imagine they would say it's impossible.
Paul 01:02:18 All right, Henry. There's been a lot of, I'll say, pessimism, but as a last point I want to come back to Kuhnian revolutions and crises, right? Because on the one hand, crisis sounds bad. On the other hand, I hear a lot of this sort of talk in neuroscience for one reason or another; yours is a specific, unique take, actually, which is why it's so interesting. But it's also potentially a sign of a healthy field, right? Because people are turning inward and thinking, oh, we're doing this wrong, and what happens after a crisis is the revolution, and then a new paradigm. So what I'm wondering is whether you feel optimistic about the future, or whether you feel like we're going to be mired in this crisis moving forward for another century or so.
Henry 01:03:17 I would say that overall I'm very optimistic. I think there is going to be a revolution; that's the short answer. On the other hand, I do think there are a lot of obstacles, in part because a lot of people don't think there's a crisis. A lot of people are also optimistic, but for the wrong reasons: they think the current paradigm is good, and now that we have all these new techniques in neuroscience, you just have to use them and you can generate big data. Obviously the brain is so complex, so we can map everything, we can map all the connections, all the synapses, we can record all the activity from every cell, that sort of thing. As I mentioned in the chapter, I think that's a misguided approach; you never make progress in science that way. For the same reason, Galileo did not measure every object in the universe.
Henry 01:04:19 He didn't drop every stone in the world and measure how long it took them to fall, right? So I don't think that's the right approach. But I think it's healthy in the sense that there are people who are perhaps mainstream and believe that the paradigm is healthy, so they want to maintain the status quo, and there are others, people like me, perhaps in the minority, who think there's a crisis and would like to start a revolution. I think that's healthy, because there can be competition, and we can see who gets there first. So yeah, I think it's exciting and I'm quite optimistic.
Paul 01:05:09 That's a great place to end it, Henry. Thank you for coming on the show, and thanks for your thoughtful work.
Speaker 0 01:05:17 Okay. Thank you. Thanks for having me.
0:00 – Intro
5:40 – Kuhnian crises
9:32 – Control theory and cybernetics
17:23 – How much of brain is control system?
20:33 – Higher order control representation
23:18 – Prediction and control theory
27:36 – The way forward
31:52 – Compatibility with mental representation
38:29 – Teleology
45:53 – The right number of subjects
51:30 – Continuous measurement
57:06 – Artificial intelligence and control theory