Speaker 1 00:00:03 The bane of neuroscience has been that most methods record from the cell body. And so I think we are all somatocentric, I like to say, which means that we all think that the spike at the cell body is somehow more meaningful than anything else that happens in the neuron. And I think from a computational point of view, it's the least interesting part of the neuron. The question is, if having just seen a movie and somebody recorded all your action potentials, if they replayed all the action potentials very faithfully at every neuron in your brain, would you have the experience of seeing the movie again?
Speaker 0 00:00:50 This is Brain Inspired.
Speaker 2 00:01:03 Welcome to Brain Inspired. I'm Paul. "Are dendrites conceptually useful?" That's the title of a recent perspective piece from Matthew Larkum, whose voice you just heard. Matthew runs his lab at Humboldt University of Berlin, where his group studies how dendrites contribute to computations within and across layers of the neocortex. Over the past few years, Matthew has published many theoretical proposals and many experimental results that argue for a better appreciation of the role of dendrites in our cognition, like perception, memory, consciousness. All of these ideas fall out from the unique structure of pyramidal neurons in our cortex. Pyramidal neurons are the majority of the neurons in our cortex. And although we name them by where their cell bodies are in the cortical layers, like layer 5 pyramidal neurons or layer 2/3 pyramidal neurons, Matthew argues it's more useful to consider which layers their dendrites occupy, because their dendritic trees stretch out in different directions to receive incoming signals from different areas of the brain.
Speaker 2 00:02:14 For example, layer 5 pyramidal neurons have a set of dendrites at their base, which receives mostly feedforward projections from earlier brain areas. And by earlier, I mean closer to sensory areas. So as early visual cortex projects forward, those projections land on the basal dendrites of the later area. Those same layer 5 pyramidal neurons have dendrites that project up into layer 1 of the cortex, called apical dendrites, and those dendrites largely receive feedback projections from later brain areas. That's simplistic, of course, because there are other lateral projections and connections across layers of the cortex and so on. But the story remains that these two sets of dendrites from the same neuron are receiving signals from fundamentally different brain areas. One of Matthew's early key findings was that these two sets of dendrites are electrically separated. And depending on what's receiving signals at any given time, the neuron will either be silent,
Speaker 2 00:03:13 if only the apical dendrites are receiving signal, or will fire at low levels if only the basal dendrites are receiving signal, or fire in a bursting mode if both are simultaneously receiving signal. This led Matthew to realize this coincidence-detection-type mechanism makes for a great way to associate feedforward, sensory-like information with feedback, memory- or context-like information, and might therefore be a fundamental principle of how the cortex operates, a big question in neuroscience. So we discuss a handful of the ideas and experiments that fall out from that, like I said, related to learning and memory and consciousness and more. We also discuss how Matthew's work has made him appreciate how a bottom-up approach, or examining the implementation level in terms of Marr's famous levels of analysis, can inform what algorithms and computations might be possible, as opposed to starting with the computation and figuring out how it must work in the brain.
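To make that coincidence-detection idea concrete, here is a minimal toy sketch of the three firing regimes just described. This is not Larkum's biophysical model; the function name, thresholds, and regime labels are illustrative assumptions chosen only to reproduce the qualitative behavior (apical input alone: silent; basal alone: low-rate firing; both together: burst).

```python
def pyramidal_output(basal_drive: float, apical_drive: float) -> str:
    """Qualitative firing regime for a layer-5-like cell (toy sketch)."""
    BASAL_THRESHOLD = 1.0   # assumed threshold for somatic (sodium) spiking
    APICAL_THRESHOLD = 1.0  # assumed threshold for an apical calcium event

    basal_active = basal_drive >= BASAL_THRESHOLD
    apical_active = apical_drive >= APICAL_THRESHOLD

    if basal_active and apical_active:
        # Coincidence: feedforward and feedback arrive together,
        # triggering a dendritic calcium plateau and a somatic burst.
        return "burst"
    if basal_active:
        return "low-rate firing"
    # Apical (feedback) input alone is too electrically remote to drive
    # the soma, so the cell stays quiet.
    return "silent"


for basal, apical in [(0.2, 1.5), (1.5, 0.2), (1.5, 1.5)]:
    print(f"basal={basal}, apical={apical} -> {pyramidal_output(basal, apical)}")
```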
Speaker 2 00:04:15 We talk about how the principles derived from dendritic computation might improve deep learning. You may remember, many episodes ago I had Blake Richards on, that was episode 9, and Blake used Matthew's work as inspiration for his neural network models to solve backpropagation in the brain, for example. And toward the end, we discuss a recent thought experiment Matthew and a couple of colleagues offered, asking whether action potentials, or more broadly brain activities, cause consciousness. Oh, and there are a few guest questions from Mac Shine, a previous guest on episode 121. Okay. Sorry for the long intro, but I thought it would be useful to set up the background a little, uh, even though we cover it in more detail in the episode. Find links to Matthew's lab and the papers we discuss in the show notes at braininspired.co/podcast/138. If you find value in this podcast, consider supporting it for just a few bucks through Patreon, or if you wanna dive deeper and learn more about the conceptual foundations of a lot of what's discussed on the Brain Inspired podcast, consider taking my neuro-AI course, which you can learn more
about at braininspired.co. All right.
Speaker 2 00:05:26 Here's to the dendrite, and enjoy. Matthew, pleasure to have you on. Thanks for being here.
Speaker 1 00:05:35 It's wonderful to be here. I've really enjoyed the podcast. It's been for me a revelation. My, my PhD student put me onto it, I think around about the time you interviewed György Buzsáki. And since then, I've, I've, I've just faithfully listened to it nearly
Speaker 2 00:05:50 Every week. Oh, that's so funny that, uh,
Speaker 1 00:05:51 For me it's a fantastic thing.
Speaker 2 00:05:52 Oh, well, thanks for saying that. It's funny that, um, I, I hear that quite often, that someone's PhD student put them onto it, <laugh> just as you said. So I guess it's working in that respect. Um, I was trying to figure out how to describe what it is that you do, and one way to describe that would be that you are wreaking havoc on the notion, uh, of the primacy of the neuron cell body and the action potentials produced there, and, and championing the role of dendrites. Uh, that's a very simplistic way to say it, but how, how would you say it?
Speaker 1 00:06:29 Oh, I'm glad you put it like that, actually, cuz I would like that message to come across. Um, I don't know where to start in, in, in the trajectory of explaining why, how you could come to that, to that opinion. Well, from a historical point of view, from a methodological point of view, I think that the bane of neuroscience has been that most methods record from the cell body. And so if you think of, like, an extracellular recording, it'll pick up spikes that are near the tip of the electrode. And obviously if you do a targeted whole-cell patch recording or, or even a, uh, a juxtacellular patch, or, or if you, if you use other methods like calcium imaging, which is now one of the main methods in neuroscience, they all are much better at picking up signals from cell bodies.
Speaker 1 00:07:19 And, and so I think we are all somatocentric, I like to say, which means that we all think that the spike at the cell body is somehow more meaningful than anything else that happens in the neuron. And I think from a computational point of view, it's the least interesting part of the neuron. And I think that, I mean, if you would do this exercise, if you would take, uh, if you would collapse all of the branch points in a neuron, including the cell body, and you'd make them just tiny nodes, right? So that, so there's no big sack of, of lipid, which represents the cell body anymore, but, but just little junctions where things, uh, where things meet, obviously now your electrodes would not think the cell body is special in any way. Uh, the, the axon is where you could say the output is.
Speaker 1 00:08:08 And it's also where the output is generated, um, which happens to come out of the somatic node, but who'd know that if it was as small as every other junction point in the, in the neuron? And now you would be saying this entity, the, the collapsed neuron, would be transforming inputs that come more or less everywhere else but the cell body, not, not completely, but, but to a first approximation, and transforms it in the axon to outputs that also go all over the place. You'd never mention the cell body, and, and, and yet this is the place that is central to, to describing a neuron and to describing the computations that neurons make. So I think in the first instance, it's, it's almost irrelevant.
Speaker 2 00:08:51 Well, I know that, uh, I, I've heard you talk about how Ramón y Cajal, um, appreciated the role of, well, the, the potential role of dendrites, uh, in, in his beautiful drawings. You know, it's interesting, I, I don't know if in neuroscience it's still taught, um, with the sort of cell body as the primary focus, but when you look at his drawings, you know, of course there are all these different, beautiful structures of dendrites and axons, and, um, you, you've quoted him in, um, some of his work appreciating and, and sort of speculating that there might be, uh, a computational kind of role for those dendrites. So do you know, is, is it still taught that, uh, I don't know if you teach neuroscience, uh, to (I do. I do. Yeah.) Well then, I know that, I'm sure the way, the way that you teach, uh, sort of diminishes the role of the cell body, but I guess writ large in textbooks, it's still, uh, somatocentric, as you said.
Speaker 1 00:09:45 That's right. When I'm, when I get to the subject, um, I start with a slide showing about 10 different pictures from textbooks, and, and typically almost all of them have a, a sizable cell body, looks like an orange or something, and it's got little, tiny, little sort of hairs coming out of it, uh, which are the dendrites, but they're really deemphasized. And then if you're lucky, it's got one myelinated axon coming out and then, and then a sort of hand structure at the end, which represents what could be the processes of the axon. And, and basically the dendrites, you could forget them if you didn't look carefully at, in, in the textbook. And yeah, I, I guess I, I complain, usually in, in my lecture, that this is my favorite subject and the dendrites are missing, but there's usually a few titters, and then we get on with just describing the way a neuron operates, at, at least if we're talking about basic neuroscience. If I give a higher-level course, then I, I get the chance then to, to do all my, uh, propaganda on, on what I think the neurons are really doing.
Speaker 2 00:10:56 So you're just disgruntled for a few moments there and then move on. Huh
Speaker 1 00:10:59 <laugh> so,
Speaker 2 00:11:00 Yeah, uh, we were talking before I hit record that the, the arc of the story of your research is, is quite long, and I'm afraid that we're not gonna be able to get to a lot of the things. Maybe we could start, um, I'd like to start before 1999, but, uh, a lot of the, when you start, often, I think that you send people the 1999 paper in which you're recording, mm-hmm, <affirmative>, uh, you know, from the apical dendrites and, uh, from the basal dendrites as well. Uh, so maybe we could start with just what the dendrite hypothesis is. And then I would love to hear how you came about searching for these sorts of questions, doing these experiments and asking these questions.
Speaker 1 00:11:44 You know, I think it's gonna work better the other way around, because the, the only way the dendrite hypothesis makes sense in my mind is to, is to step back and, and look at the cortex the way the cortex came to look, yes, right, the way I came to look at the cortex. Um, first of all, actually, in the end, um, we, we, I think more recently, at least in my lab, are exploring the way the cortex interacts with subcortical structures. And, and so it's perhaps a bit rash to, to just say cortex, but nevertheless, I think it's quintessentially a, a hypothesis about the cortex.
Speaker 2 00:12:21 Is that how you began thinking about it, or is that where the, the recordings and experiments came from is thinking about cortex and the, the multilayered structure, et cetera.
Speaker 1 00:12:30 Right. Absolutely. Um, so I found myself in a, in a laboratory, Bert Sakmann's laboratory in Heidelberg, uh, which I arrived at in 1997 for, as, my first postdoc. And so this is the, the guy who invented patch clamp and, uh, won the Nobel prize for it. And, uh, and by that point, the, uh, the lab was humming with, with, they had, uh, almost a decade at this point of, of fabulous findings after being able to patch dendrites for the first time, at the same time that they could record from the cell body. So, so basically doing the first experiments into how signals were generated in the dendrites and how they, they propagated around the neurons. Um, so I came into sort of a second generation of researchers looking at this after the first amazing, revolutionary experiments that had been done. And, and back then we, we took slices of cortex from, from rats
Speaker 2 00:13:33 Mm-hmm
Speaker 1 00:13:33 <affirmative>. And so my first confrontation with the whole thing was to look at the cortex side-on, in the dish, as it were, under a high-powered microscope. And, and what, what you see in that situation is mostly pyramidal neurons. And, and if you've done a nice preparation, you see the dendrites quite nicely. And, uh, and so what you are effectively looking at is a forest of tall trees, uh, where the, the cell bodies have roots going down, um, and they're all oriented the same way, like a forest. And, and then there are tuft dendrites at the top. And I don't think you can look at that structure and not ask why the hell the cortex is made of these really peculiar but specialized neurons. Um, and I think it's natural to, and Cajal, as you mentioned, also did the same thing.
Speaker 1 00:14:30 He looked at these neurons and said, whoa, there's gotta be something special about these neurons. But by then we were finding out that not only were they morphologically special, but, but physiologically, or biophysically: they have a whole complement of specialized channels that are distributed in specialized ways that make this a, a much more complex machine than the sort of neurons you find to this day in either models of the cortex or artificial neural networks and so on. So they're just obviously, on their face, much more complex, but of course you don't know, you dunno a priori what, what this complexity serves and, and, and, uh, and what it's doing. So, so when I got to the lab, the, the, the talk of the town was the latest recordings and publications of Jackie Schiller and Greg Stuart that had shown that there was not just a spike in the axon, but there, there was a, a, a special dendritic spike.
Speaker 1 00:15:34 Um, we subsequently found there was more than one type, but at the time there was a so-called calcium spike, or sometimes called a calcium plateau potential, in the dendrite at the top of this tree, if you can think of it like a big oak tree or something, that, that mm-hmm <affirmative>. Um, right at the top, just before the branching out into the tuft, there's a, there's another zone, an initiation zone for an incredible spike that's more like the spike you get in heart muscle, and it's actually due to the same channels. It's, it's the long-lasting calcium channels that sustain a plateau, uh, in this dendrite. So it's altogether more amazing and incredible than, uh, than the spike that we all know and love that comes out of the, the axon, the, the so-called sodium spike. And, uh, and at the time people were, uh, in particular Sakmann, I think, was the, so, so the, the boss, as it were, um, was skeptical about whether or not the, the calcium spike would be of any use, because it had a high threshold and, and, uh, it was difficult to imagine what would actually make it fire.
Speaker 1 00:16:42 And so it, it, there was, it wasn't completely clear whether it was, it was just an artifact or a byproduct. Yeah. Byproduct. Yeah. Um, and I, I remember I was explaining to my wife, who is, as I say, a musician, and it sort of takes some time to get through these, these nutty details. Um, and, and I had to explain to her, uh, on a long walk once, um, what, what I was doing and, and what the point of this calcium spike was. And, and, uh, and the other point that had been discovered just before that was that this sodium spike that everyone knew went down the axon and signaled the next neuron also went back up the, up the big trunk of the tree, and
Speaker 2 00:17:27 Also a byproduct. Right. That's the idea
Speaker 1 00:17:29 Maybe, yes, yes. Right. Um, and, and people were speculating what the good of that was. And so it all hit me in an instant that it, it could be that the, the, the signal that's generated at the bottom of the tree basically primes the top of the tree, such that you can reach threshold for inputs coming into the, effectively, the leaves at the top, the, the, the dendritic tufts.
Speaker 2 00:17:52 That idea just came to you all of a sudden?
Speaker 1 00:17:55 In, in, yes. Yes, it was. (Wow.) So, a Eureka moment on a Sunday, as we were walking through the forest in Heidelberg, and, and I, by Monday afternoon, I had the first result, which was that indeed this is the case. And, and it's even more exciting, because what it does is halve the threshold for, for the, the big plateau potential, the calcium spike, in the dendrites. And that in itself leads to a kind of ping-pong exchange between the top of the neuron and the bottom of the neuron that causes it to burst fire. And, and so effectively you can, you can arrange it such that innocuous inputs that appear to be doing nothing on their own make the whole neuron explode. And, uh, and, and all of a sudden, I think it was clear to me that this was an associative, uh, device.
Speaker 1 00:18:44 The, the, the neuron was basically associating whatever came to the top and the bottom of the neuron. So mm-hmm, <affirmative>, the next question was, what goes to the top and the bottom of the neuron? Right. Um, and to some extent we are still trying to work that out, but I think in broad strokes, the, uh, to a first approximation, you can say that the top of the neuron receives a lot of long-range input that's, that's predominantly feedback in, in, in some hierarchical sense, meaning that, that the, uh, if you've got something coming from a, a higher, let's say, secondary visual cortex or secondary somatosensory cortex, going to primary cortex, it'll, it'll target the top of the cortex, and higher-order thalamus targets the top of the cortex, um, and the superior colliculus targets the top of the cortex. We've now subsequently learned that lots of memory structures target the top of the cortex. Um, and, and in the, in the event that you, if you saw that architecture in the first instance, without knowing about this physiological property, you'd have to wonder what the hell it's doing going to the top of this tree, particularly when you do the actual electrical recordings and you find that it has no influence at the cell body. It's, it's basically a flat line. If you, if you put in input at the top of the tree, there's basically nothing to see at the bottom of the, of the neuron. And, and
Speaker 2 00:20:07 That's because it's electrically decoupled, essentially, because of the long, uh, dendritic tree structure.
Speaker 1 00:20:14 That's right. That's right. I mean, this word decoupled comes into the picture, right, in the most recent stories that we've had, because it seems, it turns out, and this is jumping right to the end now, that, that the, the brain has a handy little switch to, to couple the top to the bottom of the neuron. And that, that means that it's possible, given that, I should have, I should have completed the story and said that we think the bottom is receiving predominantly feedforward, although, um, let's just, let's just leave it in simplistic terms for the moment. Um, mm-hmm, <affirmative>, there's, there's complexity in everything, and, and, and exceptions to everything. But to a first approximation, you've got, you've got a feedforward drive to the bottom end and feedback to the top. Um, and, and if you would, if you, if you would have the neuron decoupled, uh, such that the top can't influence the bottom, then you've basically taken feedback out of the system with one fell swoop.
Speaker 1 00:21:14 And, and that's what the cortex has as a handy switch. Um, and, and something that would be fantastic to try out in a model. Um, and, uh, and I guess it's too early at this point, because that's a fairly recent discovery, but in, in the dish, so in vitro, they tend to be decoupled. And when you put them in, in vivo, in the awake state with neuromodulation and so on, they tend to be coupled. But coupled here, so the way the, this term coupled is, is gonna be confusing at this point of the story, because it's still the case that the, that the bottom can signal the top and cause a kind of, uh, um, cooperative interaction between the two, the two regions. And that I think is still a fundamental way of explaining what the purpose of the cortex is and what it does, which may sound like a bold claim, but I, I'd have to deconstruct that, that claim to, to try to convince anyone that that's the case. But now I'm, I'm completely convinced at this point that, that the key to understanding the cortex is understanding this neuron and the, and the architecture of long-range connectivity.
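Since he mentions this would be fantastic to try out in a model, here is a minimal sketch of what that "handy switch" might look like as a single gating parameter. The function name, units, and numbers are assumptions for illustration, not anything measured or proposed in the papers discussed.

```python
def somatic_drive(basal_drive: float, apical_drive: float,
                  coupled: bool, apical_gain: float = 1.0) -> float:
    """Effective drive reaching the axon initial segment (arbitrary units)."""
    if not coupled:
        # Decoupled: apical (feedback-like) input still arrives at the tuft,
        # but has no influence on the output end of the neuron.
        apical_gain = 0.0
    return basal_drive + apical_gain * apical_drive


# Same inputs, opposite consequences for the output:
print(somatic_drive(0.6, 0.8, coupled=True))   # feedback boosts the output
print(somatic_drive(0.6, 0.8, coupled=False))  # feedback is effectively removed
```

Flipping `coupled` off for every such unit in a network would remove all feedback influence in one step, which is the intuition behind the anesthesia results discussed later in the episode.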
Speaker 2 00:22:25 But so this neuron is a layer 5 pyramidal neuron, and the cortex is made up of multiple layers. Why, why this neuron? Because there are other layers that, of course, you know, um, stretch out, um, among other layers in the columns, stretch their dendrites and axons out, and of course have lateral connections. So, so why, why the specific, cause I know that you're also looking at layer 2/3, um, pyramidal neurons, which also have that kind of structure, but the, but why the layer 5?
Speaker 1 00:22:54 Yeah, I mean, I, so I would, in the first instance, I'd say it's the pyramidal neuron type that should be used with this description. They, they're all subtly different from each other. Um, the main point being, what we call the layer 5 neuron is the neuron with the cell body, the soma, in layer 5, which, by the way, if my claim is right that the cell body's not relevant, then there's nothing layer 5 about this neuron in particular. But, um,
Speaker 2 00:23:21 No, let's, let's, let's pause there, because you, you kind of reformulate that. That's, that's kind of the way that we think of, traditionally, the cortex, or the way I was taught, is, um, in layer 5 there are the cell bodies of the pyramidal neurons. But you're, you're kind of reformulating a, a, a perspective on how to think about the cortex, because all of the action taking place is essentially outside of the layer, well, a lot of it is outside of the layer in which the cell body exists. So that's right. Yeah. Maybe, I don't know, can you elaborate on how, how your perspective on the cortex differs then?
Speaker 1 00:23:54 Sure. So, so I mean, I, I think in, in terms of describing what a neuron does, I think the easiest thing to say is that it's transforming inputs into outputs. Um, that's probably maybe too boring, but, but if that's the case, then you have to look at where the inputs are and where the outputs are. Um, and maybe the, the last thing you wanna ask is where the transformation is happening, which could be everywhere. Um, I'm claiming that the last place you wanna look is the cell body for this transformation, but, but, um, but in principle, the transformation starts where the inputs arrive and, and involves the entire neuron, because signals are going up and down, back and forth, and, uh, and causing different kinds of signals in different places, um, in a very, very complex way.
Speaker 1 00:24:44 And none of which has anything really to do with where the cell body happens to be. And then in the end, the axon itself typically goes all over the place. Um, and, and you can't, you can't easily predict which layer, or even whether or not, the, the output remains in the cortex. So, so actually there's no, there's no particular layer that this element, the neuron, is, is confined to. If you, well, okay, so, so if you, if you ask yourself what is the, um, what is the function of this neuron, you're gonna have to essentially find out what are all of the interactions that can occur within the dendritic tree, um, mostly within the dendritic tree, in principle there's some things that could happen in the axon. And then you're going to have to, uh, describe this in terms of the long range, so where, where the inputs to the cortex arrive and, and what they represent, what kinds of, um, what classes of information this represents.
Speaker 2 00:25:46 You know, I, I, I realize we, we're talking about, um, a smaller scale than when, when people usually formulate hypotheses about what the cortex is doing, right. They talk about the cortical column as if it's a unit, uh, operation, mm-hmm <affirmative>, and then there are different, you know, high-level theories about what each column is doing, and then that's repeated, uh, over all of the different columns essentially, and just specialized based on where the column is. Um, so is your idea less about the cortical column, or maybe you can correct me. Um, I'm trying to think of, like, how, how your ideas fit within the, you know, higher-level theories of, of cortex, so to speak. Yeah, because we're talking about a very specific kind of cell structure and the way that it is decoupled and, or coupled, depending on which kinds of inputs, and I'm kind of jumping the gun here, I know, but
Speaker 1 00:26:46 I think it fits well into the view of the cortex, uh, the sort of Mountcastle view, that, that there are lots of vertically oriented columns that basically, uh, are all, dare I say, feature, um, encoders or, or, or detectors. Um, and very much with a, a kind of hierarchical view of that, such that some of these, some of these cortical columns are receiving more primary information, that means, in the stream of information from the outside world to the inside world, they're early on, and then there are other cortical columns that are higher up in that hierarchy, meaning further along the processing line, and then, and then with lots of information going in the other direction. And, and so in that scheme of things, if we just stick to primary sensory cortex for a minute, then you would be claiming, about the role of the pyramidal neurons in this sense, first of all, the, the layer 5 pyramidal neuron is the only neuron that spans all of the layers.
Speaker 1 00:27:53 So it collects input from all of the layers. And it's the only neuron that goes completely out of the cortex, uh, with the exception of some layer 6 neurons that also go to the thalamus. But the layer 5 is, is an output neuron in the sense that it's projecting both within the cortex, long range, but also outside the cortex, very long range. They're the neurons, for instance, that would, that would go to your spinal cord and, and directly, um, synapse onto motor neurons. So that there, there's some kind of, uh, encapsulation, if you like, of what the column is doing, which you'd like to think anyway. Um, it's hard to imagine that, that that's not the case. And, and so, one, I, I presume that the circuit of a column at a minimum is serving to, uh, encapsulate some feature that's, that's, uh, either in sensory space or, at higher levels, some more complex feature.
Speaker 1 00:28:49 And that, that if you would record from the layer 5 neuron, you'll have the best idea of what, of, what, what the brain or cortex currently thinks is, is a feature in, in cognitive space, let's say. Um, and, and in that world, if that's, if that's a good way of, of looking at the cortex, I think there are categorically two kinds of information, at least, and, and two really important types of information. One would be the feedforward stream, which would represent the information coming from the external world. And one would be the feedback stream, which would represent information that's been generated in the cortex itself. Um, and, and it would make sense of why you want to have neurons which are elongated and have a large set of dendrites collecting inputs at both ends. Um, and essentially at least two compartments, you can imagine two boxes or, or two neurons, in fact, at both ends.
Speaker 1 00:29:54 And, uh, and that one is predominantly collecting one kind of information and the other's predominantly collecting the other kind of information. And in fact, since, since these are very difficult words to use, feedback and feedforward, or top-down and bottom-up, and, and typically this is the stumbling block for many conversations, you could just take the attitude, why not talk about basal and apical, apical being the description of the top of this neuron and, and basal being the bottom of the neuron. And then we could worry about what, what terms we want to use later on for this. But essentially you could say this neuron is allowing you, first of all, to separate those two streams of information, and then bring them together in special ways, under special conditions, and, and have a, a way to, to manipulate those conditions and the way, the way you combine them.
Speaker 2 00:30:49 Well, yeah, I, I was gonna ask, because it's, it's kind of easy to wrap your head around it when you're talking about, let's say, early visual cortex, right, where you're having mostly purely sensory data coming in, and you're looking at, you know, features of a tiger or something. Yeah. I think you often use a tiger in your examples. And then, so that's coming into the basal, uh, dendrites, and then all these, and, and, um, again, I'm jumping the gun, but these, I'm gonna say feedback, um, connections coming into the, uh, apical dendrites, which are, you know, memories of tigers and, uh, models that the brain has built, et cetera. Context is, uh, what you refer to, mm-hmm <affirmative>, mm-hmm <affirmative>. So it's, it's kind of easy to think about it like that. But then as soon as you go up, uh, another hierarchical layer, that feedforward information is not sensory anymore, or it's less sensory, but now it's still feedforward, right? But then it's, it's like the sensory information that's been transformed, uh, by the context and memories. And we'll talk maybe about memories in a little bit. So then it's, like you said, if those are tricky terms, feedforward and feedback, but all of a sudden you're already in a very kind of complex, uh, model <laugh>, as opposed to, like, the simple sensory-versus-feedback, um, story.
Speaker 1 00:32:12 I agree, but I think that's the beauty of it. I mean, I, the, the point would be that what is context for one region could be data for the next, and, and, and vice versa. And, and this will depend on where you are in the, in the processing, but, but it could easily be that, that this means that this is loops upon loops in some sense, mm-hmm <affirmative>, and that you, although it, it, although there would be a way to describe, let's say, a direction of flow of information, in the end this is all gonna be very much, um, this is gonna get very complicated very soon in terms of how you would describe what any given column that's not at the top or the bottom is actually doing. I, I think the other thing that, that maybe is occurring to you, and occurs to most people, is, um, what happens at the top? Because it turns out that the other thing to say, which I should have started with, really, is that, that the, the whole cortex is full of these neurons, ubiquitously, everywhere. Um, and they're not in other structures. They're like 70 to 80% of the neurons in the cortex, and you don't find them elsewhere. So, so there's something special about this neuron, for sure. And, and they're ubiquitous throughout the cortex, and you find them at the top of the so-called hierarchy just as much as you find them at the bottom. And, and with this simplistic way of describing it in terms of feedforward and feedback, you know, what's, what's feeding back to the apical tufts of neurons at the top of this hierarchy, you might ask, right?
Speaker 2 00:33:47 Oh, I just, I just drew, I was drawing this out and I, and I have a big question mark right there in my diagram.
Speaker 1 00:33:53 Right. That's what I, so, I think when we get to this point, um, that's where I think the words fail us and where the, where the language is really just a hindrance. What I'd rather say is that we've got a, that the cortical column, architecturally speaking, at least from the point of view of the neurons and the, the, you know, the, the six layers and, and so on, is more or less the same at the top of the hierarchy as it is at the bottom of the hierarchy. And, and I presume, well, we know that the properties of these neurons are, if not identical, roughly the same. And, and so we do know that input coming to the top of neurons in prefrontal cortex, uh, is doing roughly the same as, as input to the top of neurons in sensory cortex. So that's where I say it'd probably be better to reframe what a cortical column does with, with respect to the top and the bottom of these neurons, which ends up being particular layers, rather than, rather than trying to find an English word to describe the, this, the information categorically, such that it's always the same.
Speaker 2 00:35:02 There's probably a good German word for it. No,
Speaker 1 00:35:05 <laugh> Probably, yeah, no, well, actually, when it comes to these words, I think they, uh, they use the English. I'm just trying to think whether, whether there's a specialized German word for, for feedback and feedforward.
Speaker 2 00:35:19 Don't they just stick words together and
Speaker 1 00:35:21 Yes, they do. Yes. Yeah. Yeah. We could probably, yeah, definitely. Yeah. In any case, the, the, um, I, I, I think the fact that it's, um, that it's the same everywhere is, in a sense, the guiding principle for, for claiming that, that there's something special about this neuron, about what it's actually doing, um, in the first place. And then, then you can ask yourself, well, I mean, essentially, the word you brought up, the word context, is the best English word I think one can find to describe what might be coming to the top. In other words, if you imagine that the, that the output of this neuron is, is encoding some either low-level or high-level feature, that feature is, is everything that you're trying to encapsulate and propagate, and, and context is everything other than that that relates to it.
Speaker 1 00:36:21 So when, when, if you're talking about the tiger, for instance, which is what I always go back to, um, then, then if you're talking about the color orange, you might want to associate, so you are the, you are the orange column, let's say, or let's say you are talking about the neuron that is, that is giving information about whether or not orange is in your cognitive space. Um, then other things that are not orange but relate to orange, particularly in the case of a tiger, um, can then be, uh, linked, if you like. And, and if you asked where they are linked, I would say they're linked at the top of the neuron, that, that the top of the neuron can get other, other things that tend to happen when, when orange is in your cognitive space. If you ever think about tigers, then you'll have learnt to associate stripes and, uh, and growls and movement and, and anything else that pertains to that.
Speaker 1 00:37:10 And, and I would expect to find that kind of information affects the top of the orange column in your brain. And, and then if we, if we now transfer that to, to higher cortical areas, it's still the same operation. So, say you're doing some complex decision, there'll be some output that relates to the decision and the type of decision you're trying to make and so on. And there'll be other things that often occurred during that decision that are not the decision itself, that, that you could call context, that come to the top of that neuron.
Speaker 2 00:37:41 Is this a good time, do you think, to talk about consciousness and the dendritic integration theory? Because, you know, we're, we're talking about, you've used the word feature, right, where each, um, that's what gets passed along. But then when we think of our experiences, um, coupling the features or the, the input with the context, and then a feature is passed along, uh, and that's happening at every hierarchical level. And as we just said, mm-hmm, <affirmative>, there's a big question mark about the top and how it's the wrong way to think about it. Yeah. But one of the things that you guys have found is that under anesthesia, uh, it's, it's decoupled, the, the pyramidal neuron, uh, is decoupled, and when we're awake and in a conscious state, it's coupled. However, if we're in a conscious state, the coupling would be, you know, all over the, the cortex, right? Everything is potentially at least coupled. So then I, you immediately think, well, how does this, well, maybe you can explain what the dendritic integration theory is, and then, uh, maybe we could discuss, like, how, how to wrap our heads around where our experience arises. <laugh> Essentially, you know, this is terrible, it's a podcast and we have to use language, and language is no
Speaker 1 00:38:55 Good. Right, right. <laugh> But I ought to be able to do this. Um, I'm glad you started with the, with the anesthesia, because I think in a, in a way, we, we, we don't, it's a very nascent theory of consciousness. I, I, I think it's more a theory of, of loss of consciousness at the moment. Um, and it's, it's giving us some, hopefully some information about consciousness, and, and the sense in which we are proposing a theory of consciousness is not so much that we think we've solved consciousness, but we are looking at the, the other leading theories of consciousness and saying, how does this mechanism that we found, that clearly is, is at least correlated to loss of consciousness, if not the mechanism for the loss of consciousness, how, what does that actually inform us, and, and how does that relate to the, to leading theories of consciousness?
Speaker 1 00:39:43 So yes, it's, as you say, if you, if you record from the, the, the cell body, or so close to the axon, of, of a layer 5 neuron in an awake, in this case, mouse, and you, you optogenetically, uh, depolarize or, or excite the, the top of the neuron, you see what we see under many situations in vitro, where the, the, the top signals to the bottom and causes it to burst fire, as I was saying before. And, and, uh, and, and this is very clear to see. And then if, if you anesthetize the animal with, with different kinds of anesthetics, uh, you find that it flatlines, that, although you continue to, to excite the top, because we are, we are basically imposing that optogenetically, uh, that's to say, we, we express channelrhodopsin in the neurons and put light specifically on the top of the neuron,
Speaker 1 00:40:42 then, under the anesthesia, you get no effect at all, um, in terms of cell firing of, of this neuron. And, well, that was for me astonishing in the first instance, because, I mean, you might think, well, anesthetics are anyway subduing the network and so on, and, and sort of, maybe you should expect that. But, but this is a biophysical claim, because we, we are taking single neurons and we are forcing depolarization at the top of the neuron, and we're saying that depolarization is not doing what it did just a minute ago. It's not getting to the bottom of the neuron. So it's a claim, a biophysical claim, about the neurons. Of course, we do think that, that circuit elements are impinging on that neuron and, and preventing the, the coupling from the top to the bottom. But, but what in that instant we could see is that, well, after a few more experiments, maybe I should say, because, um, let me just, uh, flesh that out completely there: the, the, the, um, we also found that if you suppress the higher-order thalamus that projects to that particular region that you're recording from, um, that, that has the same effect on the coupling, meaning if you shine light at the top of the neuron and you record from the bottom of the neuron at the same time that you're suppressing higher-order thalamus, then it's also decoupled, mm-hmm <affirmative>.
Speaker 1 00:42:06 Um, and then we found that, uh, so we knew from Murray Sherman's work and several others that, um, that the higher-order thalamus, when it does project to the cortex, projects both to the top and to the middle of the cortex, and that there's a lot of metabotropic, uh, receptor activation involved, with about 50% of the work being done by, by the thalamic inputs being through metabotropic receptors, which nobody really knew how to interpret. And, uh, we therefore just tried blocking the metabotropic receptors, in this case both glutamatergic and cholinergic receptors, and blocking either of those receptors, um, had the same effect of, of decoupling the neuron. So information that you were, that you were impinging or, or, um, imposing on the top of the neuron ceased to influence the bottom of the neuron. Um, and, and I, I suppose the, the upshot is that we, and, and I, I suppose I should also say that these are the neurons, or most of these neurons, have a, a large input to higher-order thalamus.
Speaker 1 00:43:10 And, and so there's a loop between these neurons, the higher-order thalamus, and back again, um, involved in controlling the coupling of these neurons. And, and so if you would break that loop anywhere, what we would, uh, claim now is that it's, it's actually taking away cortico-cortical feedback, because those things that come to the top of the neuron from other areas of the, of the cortex now have no influence, because they get to the top of the neuron but have no influence on the bottom of the neuron, from which the axon comes. So basically it's a handy switch, as I was saying in the beginning, for taking away all, in, in this case, ubiquitously across the whole cortex, all feedback. So once you've got to that point, then you could posit that that's possibly the mechanism for anesthesia. We've now got a five-year plan to, to see if we can prove that that's true.
Speaker 1 00:44:06 But, um, but on the other hand, we are already speculating that if, if that is the case, then, then you could, you could speculate that consciousness is in fact the reintegration of feedback via these neurons somehow, and that engages the thalamocortical loop. Um, or at least, I shouldn't say it is, but, but requires it. And, uh, and then, and then you can look at the different kinds of claims about what consciousness is in the major theories. And I think there are, there's basically, uh, well, three main genres, maybe four main genres. I think Anil Seth, who you interviewed recently, just had a, a, um, a review paper on, on the different categories of theories of consciousness. But one, one sort is these, the sort of interconnectivity-type, um, theories of consciousness, like the integrated information theory by Tononi, or the global neuronal workspace theory by Dehaene and Changeux.
Speaker 1 00:45:06 Um, and these are theories that posit that there's, there's long-range interactions going on, and, and that interconnectivity in some sense is the be-all and end-all of, of consciousness. And you get to a certain level of inter-, interaction, which in Tononi's IIT would be described by a number, phi, and in, in the global neuronal workspace theory would be some threshold ignition point where, where you get, now, uh, let's say, uh, an explosion of, of activity around the, the brain. Um, in this case, I think it's clear that either of those two theories would be well explained, if you like, the mechanism would be well explained, as being a decoupling of the pyramidal neurons, because this would instantly, uh, either it would instantly lower phi, or it would instantly bring you below the ignition point of the global neuronal workspace theory.
Speaker 1 00:46:10 Now, another class of theory is the higher-order theories of consciousness. And, and these are more, shall I say, content-driven, in the sense that, that you posit that there's some higher-level process going on that, um, is responsible, that exists in, um, in cognitive space somehow, that, that, uh, supervenes, if you like, on the, on the rest of the low-level activity that's going on. Um, and, and it would, I think, be easy to posit that, that that's exactly what's embodied in feedback, uh, that, that if, if it's gonna be embodied anywhere, it's gonna be in the, in the kind of information that you generate, as opposed to the kind of information that you are receiving, um, from, from the rest of the world. Although, I think the, the interesting claim going on here is that it's neither one nor the other; in the end, perception, or, or what the cortex is actually doing, is, is comparing your internally generated information with the external information. And it's not until those two things match in the correct way that you have, that you have a perception. And, and, and I guess, if, if it claims anything, dendritic integration theory claims that you perceive nothing when you don't combine these things, and if you would decouple them, you would stop perception. And that would be, that would be like being unconscious.
Speaker 2 00:47:46 Okay. So when, when you're unconscious, they're decoupled; when you're conscious and you have content, they're coupled. What about when you're in a mindful, meditative state?
Speaker 1 00:47:57 Oh, nice. Yeah. <laugh> Or, or you could ask when you're dreaming, or, or, sure. Um, uh, and, and we'd like to know the answer to all of these, so, so I can only speculate at this point. And, and I, I guess the, the other really close question would be what happens when, when you, when you take some kind of psychedelic drug and so on, right? And, and another related question would be what happens in, in the various kinds of pathologies where, where you see cognitive changes and so on. I think they all can be, you can think, that's, in a sense, the beauty of looking at the, the cortex through these glasses, let's just say through the dendritic integration theory glasses, that you can now say, well, what will happen if we, if we, you've got basically two dials, you've got the, uh, what would you say, the receptiveness or the activity state of the top of the neuron and the bottom of the neuron, and you can turn it up or down, let's say. So you've got, um, you've got four degrees of freedom, if you like, or, or two, two knobs that you can turn in both directions.
Speaker 1 00:49:03 And, uh, you could imagine, for instance, that, that when you are in a meditative state, that you are, you are, uh, you, you are turning down your receptiveness to the outside world and, and turning up your, your receptiveness to the inside world, that you're basically, um, or, or vice versa, it'd be
Speaker 2 00:49:23 The opposite, right? Yeah. Mindful, at least mindful, you know, I'm not an expert or anything, but you you're turning off your internal world and, and it's like pure perception, right. Without judgment or without,
Speaker 1 00:49:35 Right, right. Yes, yes, yes. Right. I, I guess, at, at this point, I, so I don't know the answer particularly, and we haven't tested the answer, but, but, um, but I think that's the nice way, that it simplifies the question enormously. What might, um, turn out is that when, when, for instance, you are doing something, uh, like meditation, that it might be a particular part of you, you might be switching off your frontal regions, and, and, uh, let's say turning down the, the top on your frontal regions and turning up the, the, the bottom on, on the sensory regions, who knows. Um, but there, there could be that kind of nuance to the question. But, but essentially it's still a much simpler question all of a sudden, and, and it seems tractable to me. First of all, it's tractable because you could describe what you expect really easily.
Speaker 1 00:50:30 And secondly, it's tractable because we have the tools to explore this now. So, with, with, at least in, at least in rodents, we can, we can now basically answer the question. We can use various different tools to, mm-hmm <affirmative>, to work out what's going on. In humans, this of course is more difficult, uh, because we, we can't get, we can't do the same kinds of sophisticated things like optogenetics and so on in, in a human. I still feel like there's, there's a path to, to getting to explaining this in humans. Uh, I think it would start with, with doing this in, in mostly rodents, but, but perhaps other animals, and then, uh, looking for noninvasive ways to see the clues of, of what you're seeing in, in animals, and then take these noninvasive approaches to the human case. And we've started this in many different ways. And, and I think it, uh, it, it looks promising to me at this point, albeit at the beginning of a long, long road to, to get there.
Speaker 2 00:51:31 So Matthew, I, this is kind of, um, a pause or orthogonal, but we've just been talking about high-level concepts like consciousness, and we may go on to talk about the ideas about memory and learning in layer 1 and, uh, kind of these big ideas. But, um, this all started from recording, the nitty-gritty recordings in the dendrites, you know, of pyramidal neurons. So, so, you know, sort of bottom-up, right? And then you've extrapolated to, uh, the larger ideas. So I solicited a few guest questions, and I wanna make sure that I, I play them for you. They're both, uh, by the same person here. So I'm gonna play this question for you with that, um, background that I just said, and then, and then we'll, uh, move on to other higher-level topics.
Speaker 4 00:52:17 Hi, Matthew, this is Mac. Nice to see Paul getting a few more Australian accents on the podcast, the way it should be. So one of the things that I really love and admire about your work, Matthew, is your ability to conduct really precise technical experiments at the micro scale, but then extrapolate the importance of those experiments and the outcomes of those experiments to a broader functional interpretation. And as someone who typically starts from the other end, at the systems level, and tries to peer down into the microcircuits, I'm curious whether you have any tips or tricks for people working on the macro scale that can make the results of our experiments more palatable and more profound to people like yourself who are working down at the microscopic level and trying to extrapolate out to the macroscopic.
Speaker 2 00:53:09 That's Mac Shine, asking for tips and tricks.
Speaker 1 00:53:12 Well, thanks, Mac. Um, yeah. Uh, great question. It's, it's, it's so ironic to get that question, cuz I'm normally thinking about the difficulty I'm having talking to people at, at Mac's level, which I also really admire, um, and, and trying to convince them that they should care about the, the low-level features. And, and I, I guess Mac is, is asking the other way around, how would he get somebody at the low level, like me, to be interested in, in the high level? Um, actually that seems to be a theme of your podcast going through. I love the, the, the interviews you did with John Krakauer and, and various people like this, who I think are, are starting more or less where, where Mac starts, and, and they
Speaker 4 00:53:52 Hate the low level,
Speaker 1 00:53:53 Hate it. Right. They hate the low level. Yes. Right. <laugh> Um, and, and, and it's, so I'm, I find myself defending in the other direction, mm-hmm <affirmative>, um, in, in that context. But if, if you do wanna get through, if, if you were a John Krakauer of this world and you wanted to, um, to, to interest somebody at, at the low level, well, I mean, I, I, I listened to, to his argument, and, and I'm totally convinced that you, you definitely need to imagine what the purpose of all of this is. But, but I know that a lot of, a lot of people get lost in, in their world. In fact, I, I'm, I'm probably in one of those fields where, when we get together in, in our conferences, you know, conferences that are specifically about dendrites tend to focus on such minutiae that, if you were an observer at such a conference, you would wonder why anybody would be worried about this, that, or the other.
Speaker 1 00:54:53 And, and it gets down to all sorts of details that are hard to, to know if they should matter at all. Um, and I, I think it's also true that a, a, a significant fraction of the people really don't care about whether or not it matters at these, these higher levels. Um, I, I guess that I, I feel, intrinsically, if, if, uh, if it wouldn't matter at the high levels, I wouldn't care about it at the, mm-hmm <affirmative>, at the low level. So I'm not interested in just the properties of dendrites for their own sake. I, I, if, if it doesn't matter at the high level, then I don't think so. But what I guess I'm arguing is, and, and this was in an opinion piece that, that came out recently, um, I'm arguing that, as, as much as it's, it's important to ask the question, what is the, uh, what are the consequences, if you like, at the high level?
Speaker 1 00:55:50 I think the, the low level is in fact instructive of the kinds of, it gives you a, a framework or parameters to, to guess what's going on at the high level. So, for instance, uh, when, when we look at the, the, the, the question, where we, we recently looked at memory, for instance, and we've, for me, a light came on when I saw that the, the memory structures tend to go to layer 1 of the cortex, as they, it's one way or another they feed back information, be it the hippocampus of the medial temporal lobe, or the, um, or, uh, let's say the basal ganglia through higher-order thalamic regions, uh, or the amygdala just going directly to, to, to layer 1 of the cortex. That, for me, that I, I immediately say to myself, well, there must be a reason for that.
Speaker 1 00:56:40 And of course, with the, with the kind of goggles that I have on, and looking at this in terms of the, the, uh, the cortical column and, and what the pyramidal neurons are doing, for that I'm asking, well, I'm saying to myself, that must be influencing the top of the pyramidal neuron. And, and why would this be true? And so on. Out of that pops the hypothesis that drove a whole, you know, five years of research, which is, maybe the top layer of the cortex, is, cares about memory, if you like, um, or, or needs to get signals that things need to be remembered. And if so, it would imply that the thing that you want to remember is context, um, which actually, the more you think about it, makes perfect sense. In fact, when I was saying this to a psychologist, the psychologist said, we already knew that, you know, that, yes, you, you remember context. That's to say, if I, if I tell you my phone number, you, you don't remember particularly, you don't wanna perceive the number six any different afterwards, if you have a six column, or I dunno, you, you, you, you want to associate it with me and, and, uh, and probably a telephone number's the worst example there. But in any case, the, the, if you want to remember something, you want to remember the, the features of something and how they, uh, and what they relate to.
Speaker 2 00:57:59 You're now in my tiger context, because of the examples that you use. Right. So <laugh>
Speaker 1 00:58:03 Right. Exactly. Yes. And, and so, so how would I get the, uh, high-level person to, to interest the low-level person? I would say that whatever your high-level claim is, it has to manifest at the low level. And if you are, if your low level is, is, if you are steadfastly going to ignore any possibility for this connection, then, then you will, you'll constantly be in your own bubble, as it were, which I think actually a lot of neuroscience is. And, um, it's also, I think there are a lot of exceptions, and a lot of them are, you tend to, I think, collect that kind of a mindset for your podcast. So I, I mean, I can think of notable exceptions of people who, who really don't stay in their bubble, people like, um, Buzsáki, for instance, who, who, who doesn't wanna be trapped in, in, in that bubble. But where I come from, at, at the level we, we look at the brain, I'm very used to that milieu, where, where, where it seems really important to talk about, you know, what, what subtype of what channel is in which, which oblique dendrite, right? And so on.
Speaker 2 00:59:18 Which makes the ilk you were just describing roll their eyes, right? But let me read something. You mentioned the review that you wrote, "Are dendrites conceptually useful?", where you make these kinds of arguments. I'll read a quick quote from it, because it has to do with what we're talking about: "There's every reason to suspect that better descriptions of sophisticated single-cell computation will lead to better descriptions at the network level, blurring the distinction between Marr's algorithmic and implementation levels." So people, you mentioned John Krakauer, he's one example, right, who don't care about the single-neuron level, let alone dendrites, which is even worse, because the argument is that we have to stop looking at how these neurons are connected as box-and-arrow kinds of diagrams, because they're not telling us anything about the higher-level cognitive functions, essentially. Are you fighting an uphill battle in the current climate, not amongst your low-level dendrite friends <laugh>, but just in the neuroscience world at large? Have you found yourself fighting an uphill battle as a sort of bottom-up experimentalist?
Speaker 1 01:00:39 Yes. I think it's always very difficult to take somebody who hasn't considered these problems and even get them to pay attention, let alone take it seriously. And I guess the claim I'm making in the quote you just read is that it's not just that there has to be a through line from the implementation to the algorithm to the computation, but that the implementation is very suggestive of what types of algorithms and, eventually, what types of computations are being carried out. And that sounds like an unreasonable claim, I'm sure, to the John Krakauers of this world; it doesn't seem plausible. But I think what we've just been going over is a case in point. There are things we are suggesting, such as, for instance, that perhaps the top of the cortex is the place you should look for semantic memory, that you can only come to by a synthesis of this bottom-up approach with an eye to the top-down types of questions.
Speaker 1 01:01:51 And in the end, I think that's really instructive to somebody coming from a psychological point of view, somebody who wants to understand, let's say, the computational level in Marr's terminology. But in terms of getting back to looking at high-level, cognitive features of the way the brain operates, it not only suggests how it operates but why it operates that way, and it makes a lot of sense. I think that if you want to remember the way things are associated, or the context for things, then it would be really useful if you could separate that out as a compartment, handle it separately, and then reintegrate it into the larger cognitive domain. And hey presto, that's what the low level is telling you is happening.
Speaker 1 01:02:46 And all of a sudden it's giving you clues about how you should talk about what's going on at the high level. And it also tells you why this is semantic information, by the way; this is essentially semantic. Meaning is a really difficult notion; how do we describe it in the end? It tells you that, if this is the right way to look at it, then semantic means nothing other than the distillation of context. And so if you can distill the context of something, then you know the meaning of it. Yes, that's a bold thing to claim, but I'm claiming it on the basis of the low-level description. I'm saying that it appears that way from the low-level description, so let's posit it as a hypothesis at the high level.
Speaker 2 01:03:40 Well, so I guess a pro-computational-level perspective would say, that's fine, but you didn't start with dendrites. You actually had memory in mind, a high-level concept, right? A computation in mind, to then make a hypothesis about what the dendrites might be doing. So to me it seems silly to say we need to go this way or that way, because we're all working at all levels. Well, not me, since I'm retired, but
Speaker 1 01:04:12 <laugh> I couldn't agree more. That's exactly right. This should not be a fight about which direction to think of things. It should be a call to arms for people at all levels to start talking to each other and to see how this all fits together. And that's what I admire a lot in Mac Shine's work, because he's doing that. There are very few people who do that, and a lot of people who are resisting it <laugh>.
Speaker 2 01:04:38 Since I mentioned that paper and read a quote, "Are dendrites conceptually useful?", how has it been received? Have you gotten feedback, positive and/or negative? Did it do its job?
Speaker 1 01:04:51 It's too early to say; it's only a few weeks since it came out, right? I've not had any really negative feedback, but maybe it's still arriving. I do feel a little bit conflicted about this paper, because it's a little bit polemic, at least compared to what I'm used to saying; I'm usually a little more tentative. But what it's trying to say, essentially, is that there may be things we've yet to understand about the way the brain operates that can only be approached if we include some of the possibilities we could learn from dendrites. And the analogy I have, which is in this paper, is that I contrast the revolution in neural nets with a normal digital computer.
Speaker 1 01:05:44 And I think it's fair to say that neural networks are a conceptual advance for neuroscience, no matter whether you think they're the be-all and end-all; certainly deep neural networks are doing a lot of work at the moment in terms of our understanding of how networks might work. And I think this is at a conceptual level, in the sense that we run these artificial neural networks on digital computers, which are nothing other than Turing machines. Maybe I should add that the pushback you often get as a dendrite researcher is, well, you know, it's not really necessary to consider these dendrites for neural networks, because we can always build a more complex neural network
Speaker 1 01:06:33 that includes all the properties of your dendrites, and so they're not relevant, are they? We'll be able to solve everything with point neurons, because if we needed something that the dendrites do, we would just add a few more point neurons. And I'm saying, well, you could have said that about neural networks in the first place. You could have said, look, I don't need a neural network to solve this, because all I need is a few more states and a longer ticker tape to hold more numbers; see, it's just a Turing machine after all, and I can prove it because I can run it on this digital computer. And in that instant you'd have lost all the power of what you'd collapsed it into.
Speaker 1 01:07:15 What I think is the real insight of a neural network is that you can do this fantastic statistical learning with basically three ingredients: you have a learning rule, you know the cost function, and you have a particular architecture, you've figured out your connectivity. Armed with those facts, plus a method for communicating the deviation from the goal back to all of the connections, you're done. You can beat the world's best chess player with these three principles, essentially, with a few tweaks. So obviously I think that's a conceptual advance, and for me it is absolutely not an argument against this that you can run it on a digital computer.
Speaker 1 01:08:14 I'm saying: so what? It's still a conceptual advance. So the argument would be, we haven't spent enough time looking at the kinds of insights you might have over and above the things we now know about, this framework of looking at learning through the neural network. We haven't looked at dendrites closely enough to know whether or not there are more principles at those levels. And the first principle I would go for, actually, is the separation and reintegration of two fundamental, categorically different streams of information, and messing around with those kinds of principles. But I'm not claiming that's the be-all and end-all; I'm just saying it seems useful and intuitive to me at the moment. And you'd want to be really sure before you just threw away what biology has spent hundreds of millions of years playing around with and said, well, that's irrelevant, I think I'll stick with point neurons. That seems to me to be an incredibly stupid thing to do.
Speaker 2 01:09:17 <laugh> Yes. Okay, I like that you said stupid. I recently had Elena Galea on, talking about astrocytes, for example, thinking about things in the brain that are not neurons. And of course there are neuromodulators, and I've had people on to discuss those. But the more I think about it, the more I learn about some of the lower-level things that were carved through evolution, the sillier it seems that the deep learning world is based on these point neurons, right, that are essentially from the fifties, forties, and earlier. By looking at the brain and deciding that neurons, from the single neuron doctrine, et cetera, were the be-all and end-all, that's where artificial intelligence, or the modern version of it, the birth of deep learning, et cetera, started. And it just seems silly that it's so archaic, based on the technology we had at the time and our experimental capabilities based on those technologies. So I'm with you that it seems useful to at least try out what evolution has suggested to us.
Speaker 1 01:10:29 Right. So I guess I want to just reiterate the claim I'm making. It's not that I imagine, in the end, that you're going to need a sophisticated, realistic description of dendrites, and that unless you had a really complex machine with dendrites you wouldn't get a fully functioning brain <laugh>. I'm claiming that there are insights you can get from dendrites, and until you understand what they're really doing, you'll miss those insights, in the same way that if all you had was a finite state machine and a ticker tape, you're probably never going to stumble across the idea of a neural network. If you only have neural networks with point neurons, you're probably not going to stumble across some of the insights that you can see with these higher concepts, whatever they may be.
Speaker 1 01:11:17 And I guess, to the John Krakauers of this world, I would say that's an example of where we may be confusing implementation with algorithm. I'm not quite sure how I would describe the point-neuron network in terms of where it fits among Marr's levels of description, but I think it's clear that it blurs that description, because you can get from a neural network now to beating a chess player <laugh>, and face recognition, and God knows what will come around the corner next. There are insights we're getting from that, and they're crossing these boundaries, as Marr would put them; the lines between them are not so simple to see. In fact, in a way, this is perhaps more the problem we're facing, because when you want to beat the world's best chess player and you want to improve your network, here comes the rub: you can train it on a hundred million chess games and it's going to be fantastic.
Speaker 1 01:12:30 But if you want to improve it past that, you suddenly realize you don't understand how this neural network in front of you is solving the problem, at least in the sense that you don't know how to tweak it to improve it beyond giving it more information and training it more. But I think that's where dendrites are likely to come in and be useful, because, and this is a hypothesis, or a wild claim if you like, point neural networks, deep neural networks, are autistic savants. These are machines that can collect information and synthesize it close to perfectly, such that you get close to the best statistical description of the input-output function you're seeking.
Speaker 1 01:13:23 And in that sense, you can't do better. What you are missing is context. When we're talking about general AI, what you're missing is a way to gain insights in a directed way that allows you to do what is difficult for autistic savants: to put some meaning to it all. So you might be able to count how many matchsticks fell on the floor, but you have no idea what you're going to do with that information, and you can't put it in other contexts that you haven't been trained for, and so on. Whereas what we clearly can do with one or two presentations of the data, humans in particular, but I think mammals in general, is generalize from situations and make conclusions that are startling by comparison.
Speaker 1 01:14:19 And at the moment we are all super impressed by that, as we would be; if you meet an autistic savant, you're also super impressed by what they can do, and I think your first impression is, I wish I could do that. But I think it comes at a cost, and I think the low-level implementation is telling you why it comes at a cost. Let's say you could train this whole network in a feedforward sense by just using the bottom of every neuron, and now you would be putatively perfect, or at least as good as possible, at framing problems. But if you want to have context, it comes in at the other end of the neuron and is reintegrated into the neuron.
Speaker 1 01:15:07 There's only one output from the neuron, so essentially the top-down signal is adulterating your perfect statistical calculation, and you can't have it both ways: you can't be perfect in your statistical analysis of something and have your context at the same time. So you're going to have to put up with that. And my claim would be that, from an evolutionary point of view, you want to be something like the six-layered cortex in a mammal, which is imperfect, because every now and again you're going to come across life-threatening situations where you're going to have to think on your feet and use all your knowledge to avoid them. And I think we come across them not just every now and again but all the time; say you're driving a car, and today somebody dropped their hat in the road, or there's some strange obstacle in front of you.
Speaker 1 01:16:04 And let's say an automatically driving car would just say, well, that's the first time I've seen that, I'll add it to my database, and run over the little kid or whatever it is. And you say, no, this is strange; I've not seen this before and I don't know how to handle it, but I can see that something's different, and all of a sudden I can work out that all of these other things pertain to the problem. Although you've not seen it before, you can deal with it. I should have framed this in terms of the driver dying; you could easily frame it in terms of
Speaker 2 01:16:42 Children. You're talking about children dying. Nice.
Speaker 1 01:16:45 <laugh> Oh, sorry about that. But my main point is that you would come across situations throughout life where, if you didn't have this kind of ability to think on your feet and interpret them, you are going to die. Let's suppose I told you that this autonomous vehicle was so good that in ten years of driving it'll be a hundred times better than you are at driving the car, but once in ten years it'll drive into a truck. You don't get into that car, because you're going to die in the next ten years, and you don't want to die in the next ten years. You want to survive, even if that means making hundreds of little errors that are not optimal. You would prefer to be ready for the divergence that is hard to predict and that you can't learn statistically. So that's what I'm guessing is the reason you need this kind of general intelligence, from an evolutionary point of view.
Speaker 2 01:17:43 I had a thought, and you can tell me why it's a ridiculous thought, but thinking about our mammalian cortex, well, our human cortex relative to other mammals, it is thicker, and there are descriptions of these layer 5 pyramidal neurons, for example, having longer apical dendrites; they're more decoupled, essentially. And I was trying to think, well, under the dendrite hypothesis, why would that be? And I actually thought maybe one evolutionary advantage, and sorry, your talking about the savant made me think of this, is to actually prevent us from, well, you said it for me, from being a savant; it's to prevent us from learning too well, so that we have the capacity in those situations over a mouse, let's say, or in this case a savant. I guess I'm just repeating what you said, but I thought of it in terms of reducing what we're learning in given situations, to allow us to have a higher capacity to learn when we need to learn.
Speaker 2 01:18:55 Does that make sense?
Speaker 1 01:18:57 <laugh> Yeah, I really like that. I hadn't actually thought of it, but that's really nice. So, in other words, the thickness of the human cortex would have a tendency to make it able to be savant-like, because it's more decoupled, is what you're saying?
Speaker 2 01:19:13 Well, the thickness would prevent the savant, right? It would, because you wouldn't just learn everything without the imperfection, without the feedback, right? The feedback in some sense is gating your ability to become a savant, is what you actually said, right? But now I'm just, you know, it was
Speaker 1 01:19:30 Just a, yes, that's right. But the reason I extrapolated to that from what you said is that the first findings in vitro from human neurons are that they're less coupled than the rodent neurons, and that would seem counterintuitive at first. And
Speaker 2 01:19:49 That's what I'm saying, it's to prevent the learning, essentially.
Speaker 1 01:19:52 Right. But then you would need some way to reintegrate that. And I'm betting that the latest thing we saw under anesthesia is giving us the answer here: that essentially different kinds of neuromodulation affect the coupling. I'm betting that if you took a human neuron in a situation more sophisticated than the in vitro situation, where there's no neuromodulation, you've got all sorts of ways to reintegrate the top and the bottom. And as you say, you could then be doing learning with and without it. I guess where I was going with that, which I also only saw the first time <laugh>, I didn't realize you weren't saying it, but now we can own this idea together, is that you would also in principle have the advantage of being able to decouple it under certain situations. So if you just wanted the statistical part of this, if you just wanted to, let's say, cram a lot of information in, in principle, and I don't know exactly whether it happens or how you would do it, but if you've got a mechanism to decouple the top from the bottom, and you were in control of that, you could decouple it for whatever time it takes. And maybe that is what it means to really focus on throwing a football or something. Maybe that's
Speaker 2 01:21:18 What Ritalin does or something. Yeah.
Speaker 1 01:21:20 Right, exactly. And then you could putatively work up the statistical information and then reintegrate the context. I guess the way we had thought this happens is in two stages, and at this point this is really speculative, but one way you could imagine it is that the critical period during the development of a mammal is a period where you are decoupled. By the way, at least in a rodent, during the days from when you are born up until just after weaning, that's a few weeks, three or four weeks, you don't have a calcium spike, you don't have this nonlinearity. Then it starts to kick in at around about four weeks, and then up to about eight to twelve weeks it's really fully developing, and then you have really large plateau potentials. So that could correspond to an early phase where there's a critical window where you're learning just the features of your environment,
Speaker 2 01:22:41 The statistical regularities,
Speaker 1 01:22:43 Right. And you basically shut down the top of the neurons while you learn the statistical regularities. Now you've got the features in place in your cortex, you're grounded in the world, all your columns are grounded, and now you can start putting them together, associating them with each other. And I was positing that that's when the critical window ends and you stop being so good at learning statistically. That would explain, for instance, why it's difficult to learn a new accent
Speaker 2 01:23:15 Instrument,
Speaker 1 01:23:16 You know, or an instrument. <laugh> If you haven't got that grounding with the world by that point, it gets slowed down deliberately, because now you don't want to be bothered with that. But, as I say, why not? Maybe you need Ritalin, or you prevent feedback, or something. In any case, you could imagine that, again, this is a handy conceptual tool for trying to understand the link between the implementation and these large-scale concepts, in this case learning statistical information versus contextual information. So whichever way you look at it, I might be wrong, but if it's wrong, at least you've got a framework for talking about it and testing it. And for me that's worth something <laugh>; it gives you a way to treat what otherwise seems like an intractable mess of possibilities.
Speaker 2 01:24:20 So I've heard you, in your talks, and this is going back to artificial neural networks now, talk about how we don't need to model dendrites, and you've just been talking about that as well: we don't need to model all the exact details of dendrites to create a useful artificial intelligence, for instance; instead we can extract the correct principles from the dendrites, just like we've been talking about. How much detail do you need to build in? Are dendrites the lowest level we need to consider, or are there principles we can extract by looking at lower-level things like, I don't know, ion channels or whatever? I don't want to plant a seed in your head, but are dendrites the bottom level?
Speaker 1 01:25:07 I don't know the answer. And when we're talking specifically about dendrites, there are levels within dendrites too; you could be talking about spine heads, or little branches, or whole arborizations, and so on. This is a well-known issue, actually. I don't know if you've talked with Mike Häusser before, but he and Bartlett Mel have a really good paper, now about two decades old, asking exactly this question of what level is appropriate. It wasn't clear then, and I don't think it's clear now. I mean, I've got my favorite intuitions, but I don't think that's what you're asking. I think the larger question is, how would you know, and what's the principle for deriving this? And
Speaker 2 01:25:57 You have to take a walk with your wife and it has to come to you in a
Speaker 1 01:26:00 Flash. <laugh> Exactly. No, well, yes, actually, if there's one principle to come out of that, it's that you need a really naive person to say, well, wait a minute, and to not stop asking, to keep coming back to something you thought was obvious so that you keep having to explain why you think it's obvious. That's actually been my experience: trying to explain to a really persistent, naive person why something is obvious.
Speaker 2 01:26:32 So marry a naive person, is what you're saying.
Speaker 1 01:26:35 Something like that. But actually I think that's why teaching students is really useful too, because often they're asking you naive questions, and then you find yourself thinking, actually, I don't really know the answer; I thought I knew the answer to that. And very often you realize that even the student doesn't realize what they're asking <laugh>. It's not necessary that the naive person has the insight you're having in that moment for you to suddenly realize that you don't really understand something. And I wouldn't claim that anything I've said now is categorically true. The good news, I think, is that if you start with the mindset that there's something to be learned from, let's say, bottom-up descriptions, then you can always test it.
Speaker 1 01:27:27 Well, you can mostly test it <laugh>; you've always got at least a conceptual way to test something. And that's worth gold dust, I think, in a scientific context, because if you're starting from the really high-level end, you may or may not be able to test a high-level theory; particularly if you're on the wrong track, it's going to be very difficult to get any proof at all, hard to get in there and actually test it. So I think that's worth a lot. But the bad news is that it's hard to stumble across the high-level concepts that way. And so that's why I think you were right before: everyone should be talking to each other, and we shouldn't be doing one or the other.
Speaker 2 01:28:16 Okay, Matthew, speaking of naivete and relying on our assumptions and questioning our assumptions, let's get to the action potential and consciousness thought experiment piece: do action potentials cause consciousness? This really challenges our assumptions about what's important, and the primacy, again, of action potentials and, I guess, somato-centric thinking in a sense. Would you like to explain the thought experiment, maybe at a high level? And then I also have a question from Mac Shine, and then we can discuss more.
Speaker 1 01:28:56 Just before I do, the caveat on the description of this is that, um,
Speaker 2 01:29:02 That you hate neurons, you hate cell bodies and action potentials
Speaker 1 01:29:06 <laugh> Well, first of all, the question as you put it is supposed to be provocative, yes. And even more provocative, I think, I would say, is: does brain activity cause consciousness? And I guess I would be hoping that my interlocutor, like you, would have the spontaneous response: of course, of course. In a sense <laugh>, thank you, that's where the starting point should be. But then the thought experiment starts. And I should first of all say that this was put forward in the philosophy club in our lab by a really fabulous, now senior, postdoc, Albert Gidon, who proposed it one day.
Speaker 1 01:29:53 And it sounded so similar to what I'd heard in other kinds of contexts that at first I thought it wasn't new, but there's something really important about it. And more recently I found that there are a few other people who've either said similar things, or even the same thing. But here it is. Imagine that, first of all, you take a subject and you give them some experience. I think it's easy to do this with, let's say, a movie, so you get them to watch five minutes of a movie. And while they do this, you use some modern technology to record from every neuron in the brain.
Speaker 2 01:30:39 So future technology,
Speaker 1 01:30:40 Some future technology, yeah <laugh>, right. The kind that, if you go to some conferences, you'd like to believe we are closing in on, but probably we're still a long, long way from. In any case, you record from every neuron, so you know how every neuron fired during the movie at every instant, and you can, just for the sake of this thought experiment, also record from retinal neurons and any neurons in your spinal cord and so on, so that you know the whole deal. But in principle we're talking about what you were thinking when you saw the movie and the experience you had when you saw the movie. Okay, and now imagine that you had a device to replay that to the person. And actually, although to some people this sounds really futuristic, that part isn't futuristic; what's futuristic is the number of neurons you can do at the same time.
Speaker 1 01:31:38 Right. But getting one neuron to repeat its exact activity in the form of action potentials is not even hard. All you need is a really good amplifier and a good recording of the initial conditions, and you can do what's called dynamic clamp on the neuron and literally recreate the same voltage at the point where you're recording, which unfortunately, as I say, is typically at the cell body. But nevertheless, you can make the cell body do exactly what it did before. And since that is actually a nexus point, because most of the dendrites feed into that point, and it's just before the axon, that's really going to dictate what the output along the axon of that neuron is, if you can absolutely recreate it. And not only that, but if you do the dynamic clamp in a continuous way over the whole period of this movie, it won't just dictate what that cell body does
Speaker 1 01:32:37 when it fires an action potential, but every other part of its activity too, every sub-threshold deviation as well. In other words, the cumulative effect of all the inputs, as seen at the cell body, you can recreate with this so-called dynamic clamp. And now, okay, it's futuristic, but let's say you do this at every neuron in the brain. The question is: if, having just seen a movie, somebody recorded all your action potentials, and they replayed all the action potentials very faithfully at every neuron in your brain, would you have the experience of seeing the movie again?
Speaker 2 01:33:14 And, and do
Speaker 1 01:33:15 You ever
Speaker 2 01:33:16 Do you well,
Speaker 1 01:33:17 Can I challenge you with that?
Speaker 2 01:33:18 <laugh> Well, because I've kind of vacillated, so I want to say sort of <laugh>, because to me there are questions, right? Well, I've ended up on no, but I think that's because I've come to think of, let's say, our subjective experience as not being due solely to spiking activity. So I was already kind of a no on this, but I definitely used to would have said yes. Very pro-spiking.
Speaker 1 01:33:53 Okay, let's just take your "used to" persona, because there are a couple more steps. So if you would say yes, of course, I
Speaker 2 01:34:01 Wait, wait, what would you say?
Speaker 1 01:34:03 I would say what you just said, actually. I'm agnostic, and I think I would jump off the train at the very first step here and say no, but I'm very conflicted; I don't know the answer, and for me this is torturous. But I think a lot of people would say, of course: you know, Penfield used to stimulate the brain and you had experiences, so if you made the neurons fire, you'd just have the experience that gives you. And really important to this thought experiment is that, although we are not controlling everything else that's going on, including the glial cells and the neurotransmitters and so on and so forth, we're not interfering with it either. So there's no reason not to posit that when the action potential goes down the axon, it will lead to the same release of transmitter and so on at the other end. Of course, that's a stochastic thing, and maybe that's important in time, yeah.
Speaker 2 01:34:59 That's homeostasis and all sorts of things. Yeah.
Speaker 1 01:35:01 But on the other hand, suppose homeostasis were coming in; in every subsequent instant of this you are also making the neurons do what they're going to do. So you're never going to know what homeostatic changes actually occurred.
Speaker 2 01:35:18 Yeah. I mentioned the astrocytes and glia and all the other parts of the brain, right. Yeah.
Speaker 1 01:35:23 Right, right. Okay, I concede that, if you wanted, you could extrapolate this thought experiment to clamping everything, but that seems too unnatural for the time being. So let's just stay where
Speaker 2 01:35:37 Take us through the next step, then. And then, 'cause I need to play this before you answer it already.
Speaker 1 01:35:41 Oh, okay. All right. So the next step is, what would happen if you now blocked all the sodium channels? So you block any chance for the action potentials to propagate down axons, but nevertheless you make every neuron fire the way it was firing. If you got to this point, I think you should be saying something like, well, it didn't matter before whether or not the action potential influenced the next neuron, because my electrode is telling the next neuron what to do, not the action potential from the previous neurons. So in principle those connections, the action potentials going down the axons, are irrelevant, because what makes a neuron fire is no longer the influence from the previous neurons. But nevertheless, you would ask, would that cause you to be unconscious? And then the next step would be blocking neurotransmitter.
Speaker 1 01:36:38 And so you could put in, let's say, drugs that blocked the receptors, or, as we did in the thought experiment, do it in a more sophisticated way with optogenetics and so on, just so that <laugh> you could get around some of the tricky problems. Well, it's not so much that it's more believable, actually; it gets a little more fantastic that way, but I think it avoids some of the obvious complaints somebody might have on a sort of philosophical level, and does it more cleanly than just cutting out transmission. And then the next step in the argument is, what if you take each of the neurons... oh, first of all, what if you make an incision in the brain and you cut one part of the brain from another, but you're still clamping every neuron?
Speaker 1 01:37:25 So essentially the information is getting around the brain in the sense that you are imposing it just like it would have happened in the original brain, but the parts are physically not connected anymore. But then they weren't connected anyway, due to your blocking of neurotransmission, so why should that matter? And then the last step is to take every neuron in the brain to a Petri dish, spread them around the world, have different neuroscience researchers in different laboratories replay at their neuron what it had to do, and then some set of neurons that used to be your brain does exactly what it did before. And I think most people at this point are starting to object; they can't imagine that would be conscious.
Speaker 2 01:38:09 Not panpsychists, though, probably. Right?
Speaker 1 01:38:12 Right. And so then you ask yourself, well, if you decided that at some point in this set of steps you went from being conscious to unconscious, you have to say where and why. That's where we get to, and that's very challenging, I think, because now, first of all, you have to have some better reason, or explanation, or let's say imagination, for what would make you conscious. But also, and this is where I think it's an important thought experiment, whereas, say, the Chinese room does something like this, and various other ways of looking at what computation does and different ways to simulate what the brain does are very often good thought experiments.
Speaker 1 01:39:01 And actually John Searle, the originator of the Chinese room argument, I think does this. When challenged, he says, well, I know I'm conscious, so there must be something about wetware that makes you conscious. And it's like the last retreat: I don't know why I'm conscious, but I know that I am conscious, and I know that I'm a set of neurons, so it must be something special about a set of neurons. And basically this thought experiment is saying, you can't escape that way, because it's still your neurons doing this. In fact, in the most idealistic way of stating it, in the first step, when you replay all this activity, you can posit that all your neurons did exactly what they did before. And if you want to say, well, no, they didn't, there's this difference or that difference, you now have to say why that should be the seat of consciousness. So I'm just saying that you can no longer retreat to there being something special about the way it's framed. And yet I still retreat to: yes, brain activity causes consciousness. Because you're being asked, why does brain activity cause consciousness? And,
Speaker 2 01:40:13 And that's what I was going to say is the frustrating part of this, because I feel comfortable, I guess, like you, jumping off at the beginning, but then I cannot articulate why, which is the important part. What else would it be then, et cetera, right? Okay, I'm going to play this question from Mac, and I'm not sure whether you'll want to answer it or what you'll think of it. Alright, here, I'll play it.
Speaker 4 01:40:41 Hi, Matthew. This is Mac. You, Albert, and Jaan recently had a quite thought-provoking thought experiment, and in the spirit of playfully manipulating thought experiments, I wonder if you have thought about some of its other aspects. I was imagining a situation, as someone who doesn't do a lot of cellular neuroscience myself, where some of the patch clamping could go awry, and I started wondering just how much of that patch clamping could become impaired, or could be inaccurate, for an individual to still have that same conscious experience. In other words, how robust do you think our conscious awareness of an individual moment is to individual variation in the firing, or the activity patterns, or the calcium dynamics of the cells distributed around our brain? Okay.
Speaker 1 01:41:31 Oh, thanks, Mac. I love that question. And, I don't know if you were triggered by it, but that was sort of what you were just alluding to before: the action potential that causes release is a very stochastic process, and so that's one of the things you're not in control of in this thought experiment. But I like the question because it's framing something I've been playing around with. I love to listen to Dan Dennett and his theories of consciousness; by and large I find myself agreeing with him in a sort of matter-of-fact way. But he also talks about free will, and that's another thing: I'd never linked free will and consciousness in particular before.
Speaker 1 01:42:21 But I'm starting to see that there's some link between these two topics. Famously, there are lots of neuroscientists who think free will doesn't exist, but nevertheless you're conscious. And one thing in particular that struck me about something Dennett talks about, and I forget the originator, is it John Austin or somebody else?, is the putt example, where a golfer comes to the green with his colleague, tries to putt the ball into the hole, misses, and says, damn, I could have got that. And then there's this big philosophical article about what you mean by "I could have got that." Do you mean that if every particle in the universe was exactly as it was before, you could have got it on one rerun of time and not on the next, which is effectively the same as saying, can you break the causal chain of the universe?
Speaker 1 01:43:29 And if you can't, then maybe you don't have free will. And I think the same thing could apply as one way of framing what consciousness is about, and it comes back, I think, to the dendrite theory. Because one way of framing what's going on with the architecture of the cortex is that, in order to have any perception, you don't just receive information from the outside world. In fact, to a first approximation it's almost the other way around: you make a guess about what the outside world is telling you, and you compare this to what the outside world actually tells you, and it's in that interaction that you have perception, that you have a sensation, let's say. And given that the neurons that are eventually going to represent this also project all the way down through your sensory space,
Speaker 1 01:44:27 they also go, in some cases, all the way back to your sensory organ. Certainly in hearing, for instance, you're actually using your cortex in some way to modify the outer hair cells and the gain of your ears and so on, so you're affecting the input. So you could imagine that the sensation you are having of hearing, and why it's different to vision, is exactly because of the question that you pose to the distal parts of your dendrites, where you say, is this the case, do I get what I expect in this situation? And if you're expecting something auditory, you not only have to expect an auditory type of sensation, or effect, in the feedforward direction; your expectation will become output that will interfere with the whole loop of information coming from your ears and so on.
Speaker 1 01:45:23 So that's a first-pass way to explain how this view of the cortex encompasses why seeing red is different from hearing middle C on the piano, say. And now, getting back to the variability: in this framework, for you to perceive, it has to be that there's some way that you could be wrong <laugh>. Or, let's say, it's in the deviation from what you expect that you actually make the important perception; the extent to which this is or isn't what you expected is the extent to which you assess what you perceive, and the extent to which, in the end, you have the experience. So the experience is all in the noise, if you like, not noise so much in the sense of stochasticity, but noise as in the deviation from expectation.
Speaker 1 01:46:26 So that would be a way... I don't feel confident enough to claim that, and I'm not a philosopher, so I suppose I should farm that one out to people more experienced than I am. But I say it for two reasons, one being this thought experiment, and the other this framework, the goggles that I have for seeing the cortex. That's maybe an escape route: that you can frame all conscious experience in terms of this interface, which might actually be at a sort of meta level, let's say, not in the nuts and bolts of which action potential is fired and so on, but in this higher-order, what should you call it, okay, I'll say higher-order question that you're asking of an ensemble of columns around the brain. It has meaning, first of all, because you are grounded in the world and you're asking questions of grounded columns, and second of all, because it could be otherwise; it doesn't have to be the case.
Speaker 1 01:47:46 And you're asking this question. So in a world where everything is just a repeat, you will never see any deviation, so everything will just be your simulation, and therefore it won't be an experience anymore. And if it's not an experience, you won't be conscious of it. So <laugh> that's the,
Speaker 2 01:48:05 That's fantastic. But
Speaker 1 01:48:07 Yeah, maybe. I kind of feel like a philosopher will catch me out somewhere in that loop, but I guess I'm positing it. This is actually a nice way to close the loop, because you were asking, how do I convince the John Krakauers to see things from the bottom up? I'm saying that I'm now asking a really high-level question, probably more high-level than he wants to go for, but I'm nevertheless looking at the implementation that I see and asking, well, is it at least plausible at the implementation level, given this broader perspective of why the implementation is the way it is? And I at least get a loop that's plausible. I agree that one needs to look at this more closely, but, A, I came to this with some inspiration from the implementation level, and B, I claim that if I'm ever on the right track, I've got some things to test here. From my point of view that's an important statement to be able to make. I might well be wrong, but at least I've got a way to check it, or at least theoretical ways to check it,
Speaker 1 01:49:37 conceptual ways to check it, let's say. Let's hope that we get the appropriate devices going forward.
Speaker 2 01:49:44 Well, Matthew, it's time for us to close our loop. This is a long episode and I still had plenty of things to ask you, but I appreciate you letting me put your goggles on for a little while. I've really enjoyed reading all your work, and continued success. I know you have a ton of experiments ahead of you, a lot of ideas to test, and in some sense it's only the beginning, right?
Speaker 1 01:50:10 Yes, yes. Well, I guess it's never-ending, but it's such fun that that's probably a good thing, not a bad thing, right? So, yeah, we're looking forward to the next five years.
Speaker 2 01:50:21 Oh yeah. You have your five year plan. All right. Thank you, Matthew.
Speaker 1 01:50:25 No, thank you. It's been great.
Speaker 2 01:50:32 Brain Inspired is a production of me and you. I don't do advertisements. You can support the show through Patreon for a trifling amount and get access to the full versions of all the episodes, plus bonus episodes that focus more on the cultural side but still have science. Go to braininspired.co and find the red Patreon button there. To get in touch with me, email paul at braininspired.co. The music you hear is by The New Year. Find them online. Thank you for your support. See you next time.