In this second part of our conversation, David, John, and I continue to discuss the role of complexity science in the study of intelligence, brains, and minds. We also get into functionalism and multiple realizability, dynamical systems explanations, the role of time in thinking, and more. Be sure to listen to the first part, which lays the foundation for what we discuss in this episode.
Notes:
David 00:00:01 Is there something special about brain-mind-like phenomena that are completely different from the history of scientific and logical discovery, so they'll always be outside of our reach? I can't understand where that belief would come from.
John 00:00:17 I don't think they're out of our reach at all. I'm just saying that the coarse-grained objects we use to describe mind phenomena will not feel the same way as they do for people who like to look at eye movements and the stretch reflex, and even the cerebellum, where they feel like they can couch their understanding of the behavioral output in terms of the circuitry. And all I'm saying is, if that's what you want, you're not going to get it.
David 00:00:43 There are on the table here three positions, at least. One is, let's call it unfairly, the sort of microscopic reductionist who says it has to be as low as you can go, which
Paul 00:00:56 Is what John thinks I am.
David 00:00:59 That person in the end just has total physics envy and wants to do quantum mechanics.
Speaker 4 00:01:12 This is brain inspired.
Paul 00:01:25 Welcome everyone to the second part of my conversation with David and John Krakauer. I'm Paul Middlebrooks. This second part picks up right where we left off in the first part, and I highly recommend you listen to that first part to best absorb this second part. In this episode, we talk more specifically about brains and minds, how complexity thinking can help, what it might look like to attain a satisfying understanding of various mental processes, and how, or whether, that understanding will include any account of brain processes. You'll hear my own inability to communicate what would serve as a satisfying account of the relation between brain and mind, but thinking clearly about these things is its own goal. For me, that's maybe the main goal, and I'm going to keep pushing forward until hopefully I get there. Speaking with David and John is a wonderful exercise toward that goal, and the mode of thinking they execute makes it feel like we're headed in the right direction. It makes me optimistic I'll get my own thinking to a satisfying place. We'll all get there, won't we? Enjoy.
John 00:02:45 You know, it's interesting. I don't know, Paul, if you've read the new history of neuroscience, Matthew Cobb's book, The Idea of the Brain.
Paul 00:02:52 No, but isn't it just a list of metaphors? I've not read it, so, yeah.
John 00:02:57 Well, no, I think it actually works as a scaffold for thinking, and it's very good. I love the history part and the recent past. I think once it gets into current neuroscience and prediction of the future, it gets more impoverished, but I don't know whether that's Matthew Cobb or whether the field itself sort of forced that on him. But it is a good book; I really do recommend it. It's got lots of delicious, rich stuff in it, and he's done a good job — it's not easy to synthesize all that material. But I tell you what's fascinating about it: he has a section at the end of the book where he talks about the future, and it's very interesting that he begins by talking about emergence, but then drops it like a bad smell, right? I think he said something like: emergence is the unsatisfactory explanation you give before you get to the real explanation, right?
John 00:03:57 And then he moves on to where he feels like the real progress is to be made: let's get back down to the circuits and the neurons themselves, let's study cognition in a fly, where we have the sort of Sherringtonian connectivity map, and then we'll do some sort of extrapolation to cognition in humans. In other words, you see this tension in the field between not really wanting to talk about coarse-graining and psychological terms and derived measures, and saying, surely we can avoid that awful fate for our field by going into a fly or a worm, where we can have the same level of connectivity detail and intuition as we did for the stretch reflex — but now we can apply that understanding to something that we call cognition, and then somehow extrapolate from that higher up the neuraxis. In other words, you see that there's this tension that just won't go away. And it's like, David, it would be silly to do Navier-Stokes worrying about the details.
Paul 00:05:05 But mind is a historically, fundamentally mysterious thing, because it's there. Okay, so let me see if I can articulate my own internal struggle with this sort of mapping. You know what I want? I want some sort of — it can be coarse-level, but I just want a way to think about the mapping between them. It doesn't need to come from the circuit level, but it does need to connect them. And one of the things I was going to ask you both about is whether complexity holds promise for connecting between these levels — or if complexity, like you just mentioned, is a sort of liberation of levels, for us to somehow be happy with understanding things at different levels without the...
David 00:05:58 No, I don't think we should be happy with that. John and I talk about this a lot, and that gets to these two kinds of emergence camps. There's one group, probably the one I'm in, that's very interested in how you connect them, and there's a camp that John may be in that says: what's the best description to use at any given level? And I think both are necessary. I mean, for us, you know, the gold standard was the derivation of the ideal gas laws of thermodynamics from statistical mechanics — so that once you've got that equation, you don't have to worry about the individual particles, because it has that right property of sufficiency. But you didn't know why, and it was useful to know why. I mean, some of us want to know about that — the origin of levels; that's sort of what I work on. And I think both are necessary, so I wouldn't forfeit one for the other. I hope we do get into brain-mind, because I have my own totally quirky ideas that I'd like to air. And I've never understood — I don't know if now is appropriate, or if John wants to jump in, but I would like to say a couple of things about that.
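A note for readers: the "gold standard" David refers to here is the textbook kinetic-theory result, in which the macroscopic gas law falls out of averaging over particles. A minimal sketch of that derivation:

```latex
% Pressure as average momentum flux from N non-interacting particles
% of mass m in volume V:
%   P = \frac{N m \langle v_x^2 \rangle}{V}
% Equipartition gives \tfrac{1}{2} m \langle v_x^2 \rangle = \tfrac{1}{2} k_B T, hence
\[
  P V = N k_B T ,
\]
% a macroscopic law in which individual trajectories no longer appear:
% the averages are sufficient, which is the "sufficiency" David means.
```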
Paul 00:07:12 Go ahead — I mean, we're going on our own course. There's no...
David 00:07:17 Okay. So I do want to talk about this, because listening to people, I'm always amazed that they don't do the empirical thing, which is to look for prequels or precedents. And I want to mention two. I don't have an answer at all, but I just want to point out an insight — historical, yes, or a parallel in other fields, things that look like it. Okay. And I sort of want to argue for a triadic perspective, which has come to the rescue of two other areas that suffered from the same problem. The first one — the one I know best, because I work on the evolution of intelligence — is evolution, and the problem was: how do you relate physical matter, the structure of physical matter we'd call an adaptation, to fitness? And for the longest time, God was invoked. In other words, it was impossible — there was no way to explain the structure of matter in its relation to function other than invoking an omniscient, omnipotent being. Okay. And Darwin came to the rescue by introducing a third principle, which was natural selection, and natural selection mediated the interaction between physical matter and fitness, or replication, or success. Okay.
David 00:08:38 The second is hardware and software. Okay. Hardware: physical matter, right. Software: functional things in the world — it adds numbers together, it allows you to type, it allows us to have this conversation online. And what mediates them is the algorithm: algorithms, or the operating system, configure physical matter to allow it to be functional. Now, David Marr sort of was getting there when he recognized the three levels, right — the physical level, the functional level, and the algorithmic level — but he didn't talk about it in terms of threeness as the resolving element. And you'll see what the two cases have in common, right: the third party is a means of configuring the matter to achieve the function. Natural selection is not present in the organism; it's present in the environment. The programmer is not present in the machine, but in the environment. And so I just want to say: I don't think mind emerges from brain; mind is emergently engineered by an environment. And that's the thing that I've always found missing in mind-brain discussions — the third part. I think it's pointless to talk about mind without talking about environment, in the same way that in evolution you couldn't talk about adaptation and fitness without talking about selection. And I find that quite promising as an avenue. I don't know how it would play out.
Paul 00:10:07 Is this like a Wittgensteinian, uh, you must have someone to speak to or else you can't speak?
David 00:10:13 Somewhat. It's interesting — the private language issue. It is somewhat related. His was much more along the lines of the scandal of induction, right: you don't know what I'm pointing at. But I do think it's social — not social in the sense of human to human, but in the ecological sense of organism to environment. And I don't know what John thinks about that, or what you think about that, but that's the bit that I've seen missing from a lot of the philosophy of mind.
John 00:10:42 It's such a huge area, right — you know, cultural production of mind, embodiment, environment. I definitely agree that the computation might be distributed far more than you think. But I do feel like we have to worry about the brain fundamentally when it comes to the most impressive cognitive feats that we see — for example, prospective memory, where you can make a cup of coffee with interruptions and know where you are in the sequence; you know the sequence you have to go through to get an envelope and a stamp, and then put the letter in the envelope, and then you go out with the envelope and you walk to the post office and you put it in the postbox. I know this sounds very old-fashioned, but these abilities are very much associated with the prefrontal cortex.
John 00:11:39 You know, Steve Wise and Richard Passingham have spoken about the fact that the granular prefrontal cortex only exists in primates, right? And they talk about one-shot learning and prospective memory and all the kinds of cognitive operations and the ability to model the world in a very elaborate way that you see in primates. So in other words, even if it's true that there are all these — I mean, in the Matthew Cobb book, Cobb says about Darwin that he wasn't really interested in how the brain and the mind connected; he actually admitted that. But he wanted to know how you could have gotten there gradually, through evolution, right? Darwin deliberately, explicitly tabled how you got mind from brain, but he just said you have to get it from there, because selection has to operate on physical stuff. Right? So, in other words, I would say all this embodiment stuff and all this —
David 00:12:35 Different.
John 00:12:35 I’m just saying it doesn’t preclude the fact that if we’re going to really understand things like cognition, as we define it, we’re going to have to understand the prefrontal cortex.
David 00:12:48 Well, let me just — it's a very interesting example of that. Just to connect your mole example to your brain example: for you, it was sufficient; you didn't want to talk about the genetics or the musculature of the mole limb, you just wanted to talk about selection pressures. Whereas a developmental geneticist would say, you know, you have to talk about the development of those limbs, and the conserved structures in these regulatory kernels. And you're doing the same for the brain. And I think you're right — both are required.
John 00:13:21 And just to be clear about where the analogy goes: I would say that the prefrontal cortex, what it does, depending on how you coarse-grain it, is like the claws, the fur, and the snout on the mole, right? And there are theories that have been given as to why one-shot learning and other such things had to be developed on sparse savannas, where you just wouldn't have a chance to learn slowly — with associative learning you'd just die, right? So you have to come up with a way to quickly learn and flexibly make choices. And so I just think in the end, we're going to have to describe that behavior, have a computational theory of that behavior, and then just get confirmation, I think, through correlation with properties of the prefrontal cortex. We're never going to look at all those millions of connections in prefrontal cortex and go: ah, one-shot learning, prospective memory. You're just not going to derive it there, any more than from a deep neural net you're going to work out what it's doing. But I think it will have some confirmatory role for your algorithmic explanation of how you do those cognitive operations.
David 00:14:32 I think I agree with all of that. I'm just saying, again, by analogy, that the dualities of matter and fitness, and hardware and software, actually pose the same quandary — like, how does software work, exactly? — resolved by introducing the third element, the environment. And it's hard for me to imagine a concept of mind without using — and it's not so much about embodiment — without using environmental, social concepts. That's the sense in which I'm saying that you need to introduce this third element to bridge the two.
Paul 00:15:21 So is it environment as constraint, or as an organizing principle?
David 00:15:26 No, it's actually as a selection principle. I mean, it's the way in which mind, in some sense, programs brain, right? There has to be something that mediates causally.
John 00:15:45 That just seems to me very similar to the Lillicrap and Kording view, that we just want to know what was operating on the network to get it to its final consolidated performance. I don't see how what you're saying is going to get us to say: I understand how prospective memory works, I understand how the ability to task-switch works.
David 00:16:15 No, it doesn't help with that, it's true. I agree with that. It's a different point. It's just, I don't think the phrase "how does mind emerge from brain" is complete. That's all I'm saying. It's just not a meaningful sentence to me without the third element. But you're right.
Paul 00:16:31 Yeah. Do you think that you would have detractors of that view? It's almost tautological, right? Because you have to have environment.
David 00:16:40 Well, it's odd that people say it so often, isn't it? I agree — it's hard to imagine that there would be, but people use it all the time: how does mind emerge from brain? And all I'm saying is, I don't find that meaningful.
John 00:16:53 Well, they do. I mean, I think, David, they want there to be something just like: how do you get stretch reflex behavior from a circuit? In other words, you see a neurologist bang on someone's tendon and the arm moves or the knee moves, and they want to ask, how did that movement arise from spinal tissue? And people will then say, well, there are these neurons, which connect. And I think what they mean is that they want to know how the organization of parts, through their interactions, leads to a behavior. And in this case, they want the parts to be neurons and their configuration to be their connectivity into a circuit, and they want that configuration of connectivity through the parts to lead to the behavior. And they want to have an explanation like that all the way up to what the prefrontal cortex might do.
John 00:17:53 I've offered a compromise by saying that if you think about it as trajectories through state spaces, derived from millions of neurons through some sort of dimensionality reduction, that you can visualize like a Feynman diagram, then you can have a functional-flavored explanation that uses words and uses a neurally derived object. And that is about as good as it's ever going to get. If you want to do it in terms of connectivity — and you could argue that the neo-Sherringtonian project of people like Olaf Sporns is really to use connectivity metrics between macroscopic areas the way that Sherrington talked about neurons connecting in a reflex arc — that is just not going to work, in my view.
David 00:18:39 No. And again, everything you just said, I agree with. I think I'm addressing a slightly different question. So: no one says, how does software emerge from hardware?
Paul 00:18:49 No one says that. But if you had hardware and you shocked a particular part of it, and the software told you that it just had an out-of-body experience — or you shocked a part of it and the software made an eye movement and said, I intended to make that eye movement, or it experienced a phosphene, or something that we consider a mind process — where does the...
David 00:19:14 But that's exactly the point, Paul. You're right — that's exactly what would happen. If you perturb the hardware, you perturb the software. That's exactly right. But we don't use that language — that software is emergent from hardware — because we know how we make software, and we know how we make hardware, and we know how programming works. And that's, I guess, what I'm saying: that language doesn't feel correct because there's a missing third element. And the question I guess I'm asking is: would the causal efficacy of the environment in mediating mind-brain lead to a similar change of language? It wouldn't feel right to say that it emerges from mechanism, even though John's narrative just now sounds totally reasonable at this point in time. I mean —
Paul 00:20:00 And I agree with that, but then we're also living in the age of the brain-computer metaphor. This is just the most recent metaphor.
David 00:20:06 I don't think it's a metaphor — people have said this to me before.
John 00:20:09 So I disagree with it entirely.
David 00:20:11 I don't think that's the right point. What's interesting, by the way — and I guess it's an interest of mine — about what computers do, and it's interesting partly because we built them, is that they show how physical matter can give rise to properties like teleology, agency, and function. And it's one of the first significant devices in the history of human beings that has those properties. And so I don't mind if the steam engine was an earlier metaphor for some element of agency, right? It's just that the computer has it in spades. And so it's a useful one. I don't think calling something a metaphor discredits it, because the computer does possess so many of these properties we care about.
Paul 00:21:04 But you're mapping it onto hardware and software, and I don't know that that is correct.
John 00:21:09 That may be true. A lot of people have said that it's incorrect, inasmuch as they're mixed inextricably in biological tissue.
David 00:21:18 As they are in the machine, as they are in a computer.
John 00:21:20 Right. But I also get annoyed when people conflate computational with computer. In other words, of course — and I agree with Gary Marcus, who makes very strong statements about this — cognition is computation over representations. Now, you can be in the camp of extreme embodiment, like Paul Cisek, who's in denial about cognition and just tries to define it away, you know, and all those people who just want to somehow chuck it away and deny it and turn it into some sort of sensory-motor affordance. I mean, I think you had — was it Michael Rescorla on your show?
Paul 00:22:03 Michael Rescorla, yeah.
John 00:22:04 Very smart guy. And I agree with him that any attempt to do away with representation is an utter failure, right? And so once you accept that you have to represent things — and we can have a discussion about what that means; David and I have talked about that a lot — you just say that you compute over representations: you take symbols, and you operate on those symbols and change them.
Paul 00:22:26 That have semantic content,
John 00:22:29 You know, the numbers are semantic, the numerals are syntactic, right? And I just don't know how else you can think about it. You operate over representations and you transform them. Okay.
David 00:22:43 It's interesting to point out, just to both of you, that these terms you're using — representations that are transformed — come out of logic, which is basically what we're talking about when we talk about computers. Don't get carried away with the particular hardware device that we're typing on, right? What a computational device, at least in the Turing sense, has to do is this: do you have sufficient input, do you have the appropriate sequence of transformations of physical matter, to arrive at an answer that's correct in finite time? And that's true if I'm reaching for an orange. There's a much more general concept of what we mean by computation; it shouldn't be confused with the particular implementation that we happen to be operating on today.
John 00:23:32 I also think there's something that I struggle with, but I think is fundamental — there's probably an ontological reason. I mean, the other nice thing about the Matthew Cobb book is he shows you that very thoughtful people, going back to the Greeks and onwards, worried about the mind-brain divide. They were never worried about the equivalent divides in their legs or their arms, right? In other words, there was always a sense that there was something else going on — Adrian versus Kenneth Craik, they had this debate. So the sophistication of the discussion of the difference has not increased. I think the only real insight is that algorithms are, by definition, substrate-independent. Okay? That's what an algorithm is. It's a series of steps that abstract away from how they're physically instantiated: an abacus, a calculator, your fingers. Right.
David 00:24:33 But notice again, it's interesting that the one concrete, tangible example we have of the interface between the logical, seemingly immaterial, and the material is that one. I think John's absolutely right. So when you talk about algorithms, they have precisely the property that we're trying to pursue. It's not that hardware is brain and software is mind, not at all. It's just that they give us a vocabulary and a set of fairly well-understood real physical devices that have some of the properties that we're pursuing.
John 00:25:12 And the interesting thing is that the more mind-like the phenomena you care about, Paul, the more substrate-independent and algorithmic you can sound. In other words, you're not going to write a poem or a story about the stretch reflex, and you're not going to necessarily come up with a substrate-independent description of the stretch reflex either. But the more complex the behavior becomes, the more one can begin to use a vocabulary that floats free of the substrate. Right? It's back to what we were talking about before. Why is that? Why is it that you can get more and more free of the actual substrate, and more algorithmic, the more cognitive and mind-like you become?
Paul 00:26:07 Well, one answer to that is our cognitive limitations. Hmm.
John 00:26:13 No, because we actually do quite a good job, just like William James did. I'm just saying, you say that because you'd like to have a neural connectivity story about it.
David 00:26:24 I mean, this cognitive limitation thing is tricky, right? Certainly in relation to this question — you know, are we smart enough to be able to understand what mind is, et cetera. Again, I was thinking in terms of empirical precedent, and it's important to point out: the example I gave of functional states of matter was not resolved until the 19th century. That's quite recent. So there's a temporal nature to limits, right? Einstein's theory of general relativity came in 1915 — we didn't understand the nature of space-time until the early 20th century, and we couldn't have done it without Riemann, which happened in the 1850s. So there is a temporal aspect to this; that's very important. The question is, is there an absolute limit? Is there something special about brain-mind-like phenomena that are completely different from the history of scientific and logical discovery, so they'll always be outside of our reach? I can't understand where that belief would come from.
John 00:27:29 I don't think they're out of our reach at all. I'm just saying that the coarse-grained objects we use to describe mind phenomena will not feel the same way as they do for people who like to look at eye movements and the stretch reflex, and even the cerebellum, where they feel like they can couch their understanding of the behavioral output in terms of the circuitry. And all I'm saying is, if that's what you want, you're not going to get it.
David 00:27:54 Right. That’s absolutely right.
Paul 00:27:56 Yeah, I think that might be right as well. But I wonder if there is a happy medium. Going back to — and I don't mean to perseverate on this, but, you know, just from a very selfish standpoint — I still would like a mapping. It doesn't have to map onto the circuit, but I want just a way of formulating the question, you know?
David 00:28:17 Well, let me ask — it's interesting, though. Do you feel that way about the concepts of temperature and heat?
Paul 00:28:24 Well, I was going to say — you said "now that we understand space and time," and I don't know that we understand space-time yet. Right?
David 00:28:31 Right. We do. We know
Paul 00:28:33 We actually have a better explanation.
David 00:28:35 Yes, that's absolutely true. I think all of these theories are approximations, and they get better and better. But I just want to get to that point you make about — I guess we'd have to call it something like satisfiability — which is what you want. And I'm just curious, because I do too; I'm the person who's interested in mappings. But it is interesting, and I want to just ask you: is this a general feeling you have, or is it special for mind and brain? Is it important to you that there is a statistical mechanical theory that explains the bulk average properties of molecules and their energy, and that allows you to use concepts like temperature and pressure? And why does it matter to you?
Paul 00:29:23 Sorry — does it matter that I use the coarse-grained explanation of what heat is to perform work? Is that the question?
David 00:29:31 I think it's, again, just sort of mediating between you and John's position on this. We now know, of course, that there is such a connection, and it's very important — it justifies, in some sense, the higher-level theory. But for most people doing work, they're quite happy to deploy the high-level theory, and telling them there's one errant molecule in a room doesn't do it for them; it doesn't make much difference. And I guess that's what I'm asking: do we feel, once the renormalization has been done right, that we can dispense with the micro?
John 00:30:11 Let me give you an example, Paul. You know, I did a lot of reading on the philosophy and the history of the action potential. And it's very interesting that in — what's his name, he wrote that history of neurology in the 1950s; I'm blanking, I shouldn't have had that glass of red wine — but anyway, at one point he says that the action potential was a huge advance that would help us understand how the brain works.
Paul 00:30:43 Your production. No,
John 00:30:44 No, no, no. But I find those kinds of statements fascinating, because what they do is take a very useful horizontal piece of work that locally describes transmission, and then make this huge vertical claim for it. Okay? And what I'm saying is, I can't decide whether you're saying you'd like the mapping just because the whole field always wants to have a vertical claim for horizontal work. And you know the history of the action potential: in 1952, when that paper was published, they didn't know about ion channels. They just knew that there were voltage-sensitive changes in the permeability of the membrane, and they wrote out an equation for the propagation of the action potential. Now, if you were to explain to someone today how an action potential works, you wouldn't start describing the details of the ion channel subunits.
Paul 00:31:44 It depends on what level you’re explaining.
John 00:31:46 If you just want to explain action potential propagation, I can assure you, you will not write a sentence where you include the ion channel composition.
David 00:31:56 Here’s another one
John 00:31:58 Let me finish the thought. In other words, it's nice to know that the reason you have permeability changes is the existence of ion channels. It's nice to know that there's something there doing it. But to actually explain how you get the action potential propagating, you don't need to know that detail. So when you ask your question, you have to ask it in two ways: does that detail simply give you solace that there's a foundation upon which this abstraction is built, or does it actually add substantially to the sentences of understanding? When it comes to the action potential, I'm going to say to you, Paul: the answer is no.
David 00:32:49 Okay, no, that's good. Here's another example. I hate to be in the middle of something like this, but I think I am — I'm sort of somewhere between you and John on this, which really annoys me; I want to be more extreme than both of you. Natural selection is a good example. When Darwin formulated the theory, he had this nutty theory of genetics — his theory of pangenes — and it was based essentially on a fluid metaphor. It was continuous; it's called blending inheritance. And it didn't in any way — his completely erroneous theory of how inheritance works, by the way, completely erroneous — compromise the integrity of his higher-level selection theory. And of course, during the modern synthesis, people like Wright and Haldane came along and said, you know, it doesn't work, man.
David 00:33:40 Blending inheritance will not work — it would just produce this kind of average quantity in the world. And they then reconciled the theory with Mendel's contributions, which were particulate, and so on. Now, two points to make. At the level of organisms, it made absolutely no difference; it didn't compromise the theory. Darwin's theory was not changed by Wright and Haldane — what they did was reconcile genetics with the theory. And the theory of population genetics, which tries to explain the distribution of genes using natural selection, does have to have both; that's critical. So if the object under analysis is the gene, of course. But at the phenotypic level — it's sometimes called the phenotypic gambit — you can kind of get away with ignoring it. And game theory, evolutionary game theory, doesn't have any genetics in it. So it's worth bearing in mind: it's very level-dependent, in terms of what you should and should not include.
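To spell out why "blending inheritance will not work": the standard objection, raised by Fleeming Jenkin against Darwin, is that blending destroys the very variation selection needs, while Mendel's particulate scheme preserves it. A one-line sketch:

```latex
% Under blending, an offspring is the midparent average of two
% (uncorrelated) parents, so heritable variance halves each generation:
\[
  V_{t+1} = \tfrac{1}{2} V_t \quad\Longrightarrow\quad V_t = V_0 / 2^{t},
\]
% and selection soon has nothing left to act on. Under particulate
% (Mendelian) inheritance, allele frequencies -- and hence variance --
% are conserved under random mating (Hardy-Weinberg: p^2 : 2pq : q^2).
```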
John 00:34:36 And I think the mistake that is made all the time is that confirmatory, reconciling facts do not figure in the explanation, and those two things get collapsed, right? The existence of ion channels is a nice confirmation and verification — it may help you poison someone — but it doesn't change the qualitative nature of the way you think about the propagation of an action potential. You just need to know about varying voltages and capacitance. Do you see? And so, in other words, when you ask your question —
David 00:35:12 Wait — what's the question? I forgot the question now.
John 00:35:17 The real question is that you want there to be some mapping between, presumably, structures and circuits and mind phenomena. And I'm just saying that I don't always have an intuition for why that mapping between level N minus 1 and level N is going to qualitatively change the intuitive nature of the explanation you construct at level N.
Paul 00:35:43 My bet is that there is an in-between level, between these two, that is satisfactory. I mean, we can go back to David's question about understanding heat versus the collection of molecules. And actually, I'm fine with that: I can use heat, and I don't need to understand the molecules to use heat. But I also wonder — because I don't understand heat at, you know, a core level, but I use it a lot, and so I have a sense that I understand it. I don't need the explanation of the molecules to use it, because I can always use it the same way. I can take that mapping and think: okay, I'm satisfied with that, without being an expert.
John 00:36:40 Do you think that if I said to you, "the dog chased the cat," and I asked, do you understand what I said, and you went, yes, I know what that means — and then I said, but do you understand the particular syntactic structure of English, which tends to be subject-verb-object, and did you know that there's this universal feature, these syntactical rules of English — and I said, so you don't really understand "the dog chased the cat" as well as I do, because I'm a linguist who can talk about syntax and objects and subjects and verbs? It would be a very odd thing for me to say, right? It's not that I don't know extra facts about language that I can use. But to say that you would understand "the dog chased the cat" better if you were a linguist would be a very odd thing. And that's what you seem to be forced to adhere to.
Paul 00:37:38 So actually, we can have a running bet, I believe — and I kind of doubt that we'll get there in my lifetime. But I believe that there is not some sort of one-to-one correspondence where I can look at a circuit and know that these 1,250,000 neurons firing in this particular pattern corresponds to the feeling of love or something. I don't think that there's going to be that mapping. That's not what I'm looking for, and I think you're misconstruing my desire as a mapping onto the physical substrate. What I'm betting on, and what I believe will be described one day, is an in-between way.
John 00:38:19 I gave you that — I told you about trajectories through state spaces, dynamical objects.
Paul 00:38:23 It goes exactly back to that. But that's a usage case, and that hasn't happened for mind yet. I mean, that's happened looking at state trajectories —
John 00:38:35 People are coming up with similar kinds of dynamical systems views of prefrontal cortex and beginning to talk in that way. So far, you're right, it's been sort of convolutional neural networks for vision and recurrent neural networks for —
David 00:38:52 Motor cortex. But I have a feeling that, you know, there are people like Xiao-Jing Wang and others who are beginning to worry about prefrontal cortex. And so it may well be that you'll have an object that is a mixture of psychological language and neurodynamics, and it would satisfy you. I want to add something else to this conversation now, which is functionalism and degeneracy, because I think in complex systems it's right —
Paul 00:39:19 Sorry, wait — just on... in complex systems?
David 00:39:23 In complex systems, it's right to have this debate. Because I feel that even if you adhere to — and I'm just going to caricature this as Paul versus John here, right? I don't think that's fair, but nevertheless — there's another axis which is completely orthogonal to this. So if you think about telescopes: there are radio telescopes and there are optical telescopes. If you think about cars: there are electric cars and there are cars that use the combustion engine. They are not at all the same, not at all — they use completely different principles. They achieve one —
John 00:39:56 Second,
David 00:39:58 One second per second, yup. They achieve the same objective. So: functionalism. Now, if we're talking about mind phenomena, I think there's an argument that deep neural networks — which have absolutely nothing to do with brains, I mean really nothing, certainly not at the material level, not at the level of mechanism; the geometric, topological correspondence is spurious, maybe in some cases, maybe it isn't, we can probably agree on that, although there might be some cases — are probably going to give us much deeper insights into mind than neuroscience. And we haven't talked about that. So that's not mind emerging from brain matter; that's mind emerging from something completely different.
John 00:40:46 But it follows. I mean, the thing is, David, if you believe in psychological, algorithmic descriptions of mind phenomena, it kind of follows that you could get them in some other way. Now, there are some who say no. Again, I actually used to be, and I think Paul knows this, very much a default functionalist. But I'm willing to believe now that you can have what David Barack and I call neurofunctional explanations — functional explanatory objects with neural flavor — that you can have both. Okay? So in other words, I think the question is whether, to have that neurofunctional object, you have to have glia, for example. What if it turns out that, even though the explanatory object is quite abstract — it's a dynamical system plus words — the tissue itself has properties that you need, that are beyond neural populations and abstractions of connections: vessels, glia, local field potentials, ephaptic transmission, and the rest? It may be that the dynamical object you end up coming up with can only be built out of biological tissue.
Paul 00:42:11 David, does this accord with your view of the environment playing an interactive part in this, or is it a separate issue?
David 00:42:18 A separate issue. I don't agree with John, and because of universality, I can't really think of anything like it. The idea, as I understand it, is that there's something super special about molecules which means that functions that are very divorced from them — that operate at very aggregate, coarse-grained levels — are actually dependent on them. So it is what you want, Paul: it's a having-your-cake-and-eating-it theory. But I don't quite know how that could work. I'm not aware of any such physical —
John 00:42:51 I mean, just to understand it: you're saying that you think that if I come up with some neurofunctional object, by definition you should be able to swap out the constituents.
David 00:43:02 Exactly, yeah. In strong functionalist language, yes. I think so.
Paul 00:43:08 Just to mediate between you two, then: John, I tend toward this now as well — that there may be, you know, something — and this goes back to the critical point of operation, and what it takes to be in that area of operation. And it could take something as — I don't want to say complex, because we're talking about complexity — but as massively intertwined, and evolved over such a long period of time to sit at that right state. It might take the metabolism and the structure of —
David 00:43:40 It doesn't, though, Paul. So, I'm someone who's worked on critical points.
Paul 00:43:45 Well, no, not just critical points. I mean, something like mind, right? There's lots of things that operate at critical points that aren't mind.
David 00:43:53 Well, that's a critical insight — excuse the pun — which is precisely the point. People got very excited about things like heavy tails, and then they realized that, well, actually, we have a central limit theorem for that too, and so that's not a surprise. Critical points got people excited, rightly, and people like John Beggs and others have been arguing for the brain being at a critical point. But now we know, of course, that local area networks are at a critical point, and social systems are at a critical point. In fact, everything that's evolved is at a critical point.
Paul 00:44:27 Small-world networks are at a critical point.
David 00:44:28 Right. And so actually, I don't think these features — they are fascinating, by the way — I don't think they're the tool that allows us to distinguish between mind-brain-like phenomena and other complex phenomena. They're just too ubiquitous. So I think criticality is a bit of a red herring. Moreover, it's now been shown that deep neural networks — which have many of the characteristics that people interested in mind are interested in — are nowhere near critical points, right? You can actually contrive statistical models where they are, but none of the trained ones are.
John 00:45:10 But as I was saying, it's actually not true so far — there's been no really successful cognitive, general AI achievement. And as I was saying, Geoff Hinton says that's the last thing we're going to get.
Paul 00:45:25 And all I’m saying,
John 00:45:27 Now, the question is: what is the impediment? Is it architecture? Is it not knowing the right algorithms? Or is there something that you can currently only make with biological tissue? Obviously, by definition, I'm not trying to say that you couldn't abstract away an object that behaves like the objects that currently only neural tissue can make — once we work out what that object looks like, we may be able to make it in another way. I think you seem to be saying that, by definition, if you can abstract to an algorithmic level, if you can come up with a coarse-grained description, then it should be duplicable with a different substrate.
David 00:46:18 Yeah, I do believe that. And the first part of your argument I think I share, which is that we're just not sufficiently clever engineers, right, to know how to do that. And —
John 00:46:31 We're still missing something.
David 00:46:33 Missing a lot.
Paul 00:46:34 Can I ask you guys a kind of ridiculous question — a break from the seriousness, maybe? I just had, the other day, this daydream where I imagined a functionalist future, where we all accept functionalism, we build powerful AI, and, because of their predictive ability, we accept that they have better purchase on our own interests, and we allow ourselves to be governed by their —
David 00:47:02 Organizations. But we already are. Well, okay.
Paul 00:47:04 But let's say it's more concrete — I mean, that's a whole different conversation — but okay. The dystopian vision I had was where we accept a functionalist account: everything that they are doing makes it seem as if — now I just realize this is like the terrible zombie analogy — but, you know, we interact with them, they're robots, whatever, pick your favorite television show, and we allow for the fact that we assume they have consciousness and mind on some level, whatever that means. And so we could be in a place where we're actually giving ourselves up to the organizational principles of these things that we functionally define as having minds. But in reality, it's vacant — there's nothing there.
John 00:47:56 I just think that's completely impossible. That's an example that just doesn't make any sense on its face.
Paul 00:48:03 I realized that — the zombie —
John 00:48:06 Mind you — just like when Lake and Gershman and Tenenbaum wrote the BBS paper on what would be needed to have general AI, they basically came up with a set of behavioral criteria. And it's very similar to arguments — and I wouldn't dare go there with David here — about what life is: is it a defined property, or is it a cluster of properties, or whatever. But I think that if you had your tick-box, your checklist, as they had in their BBS article, of what would be necessary — intuitive sociality, intuitive physics, modeling the world rather than classifying it, one-shot learning, extrapolation, planning, whatever that list entails —
Paul 00:48:58 Four out of five or so
John 00:49:00 — and you had these robots that did all that? They have minds, as far as I'm concerned.
David 00:49:08 Right, so we're agreeing then. There's this interesting question — I'll give another example from computing, and I'm not sure whether it's a good one or not. It might be that one day there's a certain class of computational problem that can only be solved by a quantum computer. So there are problems that we might call NP-complete — I don't know if this is true or not — or at least extremely difficult to compute in any reasonable amount of time, that a quantum computer could compute in our lifetime. And that would be a good example, I think, of John's position, where this class of function, which you could describe hardware-independently, simply couldn't be realized in anything that didn't have this property of entanglement and spooky action at a distance and massive parallelism that comes out of the quantum domain. So that is perhaps an example of where the physicality imposes constraints on what's realizable in the logical space. But unless you believe, as Penrose does, that this applies to mind-brain — which it may, I don't know — everything we're talking about is classical. So I'm not aware of any fundamental physical limitation that's analogous in mind-brain. That's why I think I'm a functionalist.
Paul 00:50:27 Does timing matter — does speed of information processing matter? Because you could have the exact same structure and do it very slowly. Is there a role — because I know you're very interested in time, and that's part of the complexity story as well, David. You know, John mentioned, I had Uri Hasson on, and he talks about how different hierarchical levels in the brain operate on different timescales. And I thought, well, there could be something to that: the more recurrent something is, the slower the dynamical timescale it could operate on, and somehow that maps onto cognitive processing. But I wonder if you think of time that way as well, in information processing and computation.
David 00:51:15 I have had a much more modest approach to timescales and computation. So I've worked a lot on molecular computation, molecular information processing, where time is exploited. The half-life of a molecule is actually part of your box of tricks, right? You can use that to solve problems — you can actually make a frequency decoder by exploiting relative decay times. And so I've kind of admired that ingenuity of messing with timescales. And clearly life is all about that, right? It's just been tinkering from the beginning with these properties of molecules and all of these timescales. But that's not the same thing as saying it has to be done that way. You know, when human beings play chess, we have a brain with all these timescales in it — the timescale of synaptic chemistry, and so on. But AlphaGo works at equilibrium, right? Once it's been trained, there basically is no timescale — time's gone. So we do know that you can solve complicated problems with no interesting temporal dynamics. So I don't know; but at the scales that I care about, it often matters a lot.
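To make the decay-time trick concrete, here is a toy sketch — not David's actual model; the rates, pulse shapes, and numbers are invented for illustration — of how a molecule's half-life converts input pulse frequency into concentration, which downstream chemistry could then read out by thresholding:

```python
# Toy sketch: a molecule produced in brief pulses and degraded at
# first-order rate k settles at a mean level ~ pulse_frequency / k,
# so its decay time converts input frequency into concentration.
# (Illustrative only; all rates and pulse shapes are made up.)

def mean_level(pulse_freq_hz, decay_rate_hz, t_end=200.0, dt=0.001):
    """Euler-integrate dc/dt = -k*c + pulse train; return time-average of c."""
    c, total = 0.0, 0.0
    period = 1.0 / pulse_freq_hz
    next_pulse = 0.0
    for i in range(int(t_end / dt)):
        t = i * dt
        if t >= next_pulse:           # each pulse injects one unit of molecule
            c += 1.0
            next_pulse += period
        c -= decay_rate_hz * c * dt   # first-order degradation
        total += c * dt
    return total / t_end

k = 0.5  # decay rate in 1/s (half-life = ln(2)/k, about 1.4 s)
for f in (0.2, 1.0, 5.0):
    print(f"pulse frequency {f:4.1f} Hz -> mean level {mean_level(f, k):6.2f}")
# Mean level grows as ~f/k, so thresholding the concentration of a
# slowly decaying species is enough to decode input frequency.
```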
Paul 00:52:34 Does it matter for mind, though?
John 00:52:36 I mean, again, in hierarchical systems, as you go up the hierarchy, they operate more slowly — that's the whole point of a hierarchical system: the time horizons you operate on go up as you go up the hierarchy. So you could argue that when you are worrying about going to college in a few years' time, versus your stretch reflex, those are just years versus seconds.
Paul 00:53:09 Does that map onto our experience, though — our mental experience, you know, our sense of time and the rate at which we are thinking? So this is the mapping from brains to mind that I want.
David 00:53:24 I think it does matter. It's interesting you say this — there's a paper that Geoffrey West and I have been thinking about writing for years, which we never will, I don't know, maybe — which is sort of interesting. So, Geoffrey has this very nice result — well, it's not his result — which is that smaller organisms have higher heart rates, but if you rescale things according to the allometric theory, the total number of heartbeats in a lifespan is more or less invariant. It's a bit like a photon being massless — this thing that pops out, which is quite surprising, right? A very tiny organism's heart beats much faster than ours and it lives a shorter time; ours beats slowly and we live a longer time; and it turns out it all sums up to the same number of heartbeats. It's this shocking result that just falls out of the theory. And the question we've been discussing is whether there are similar invariances with respect to thought — that a mayfly, or something that seems from our perspective to live only a few days, actually thinks it lives a hundred years, right? From the mayfly's perspective, it feels the same.
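The "very simple calculation" for heartbeats, in standard allometric-scaling terms (these are the textbook quarter-power exponents, not anything specific to this conversation):

```latex
% Quarter-power allometric scaling with body mass M:
%   heart rate  f \propto M^{-1/4},   lifespan  T \propto M^{+1/4}
% Total heartbeats in a lifetime:
\[
  N = f \times T \;\propto\; M^{-1/4}\, M^{+1/4} = M^{0},
\]
% i.e. roughly invariant across mammals (on the order of 10^9 beats).
% The speculation about thought asks whether an analogous invariant
% exists for subjective time once neural timescales are rescaled.
```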
John 00:54:33 There's some evidence, you know, from Parkinson's disease, right? Oliver Sacks talks a lot about these patients in Awakenings. He tells a funny story where he sees somebody like this in the waiting area and asks him what he's doing, and the man says something like, "I think I was going to scratch, or pick, my nose." And basically he'd caught him in this extremely drawn-out movement. But the point has been made that they don't feel that they're taking forever to pick their nose. So maybe your subjective experience of time does relate in some way to the speed of your physiology.
David 00:55:22 Right, so that's exactly the question. In the case of heartbeats, it's a very simple calculation, by the way. But if we were to do this properly for thought, the way we'd have to do it is to calculate, you know, distances between neurons, how quickly an impulse is propagated, et cetera, to see whether or not, effectively, as John just pointed out, the subjective sensation of time is an invariant that falls out of allometry. Which would be kind of cool, and useless.
John 00:55:52 You know, cooling — cooling nuclei, cooling the brain. Maybe it's looking at the speed of computations with cooling.
Paul 00:56:02 But you might not get it — you guys are talking about synaptic transmission rates, and it might be more of a recurrence, architectural feature, you know, at a circuit level.
John 00:56:11 Right. But I think what's great about this conversation, if I may say, is your consistent requirement for something — and I don't know whether it's the wrong question. In other words, maybe mind and brain will always have their separate vocabularies and their separate conceptual frameworks, and we simply have to feel reassured. Like David's saying: talk about temperature, talk about volume, talk about pressure — it's better to do weather prediction in those terms — and just feel reassured that it's consistent with statistical mechanics. And I wonder whether the only way we're going to get some neural information into our functionalist explanations is that they'll look a little bit like a dynamical system — that's my guess. I mean, I'm beginning to be willing to believe that we might have a Feynman-diagram way of thinking about mind that is very heavily derived from neural data.
John 00:57:20 And David Barack, who as I said I'm working with, has convinced me that maybe we would at least be happy with neurofunctional objects, not purely functional ones. In other words, you don't have to be a pure functionalist. Functionalism has two meanings. One is that you think just in terms of processes rather than processors. The strong version of functionalism is David's one, which is that there should be many ways to implement it — it isn't wedded to one physical implementational instantiation. But I don't think you're necessarily wanting that. I think you'd be happy if the explanation you gave to people had something neurally derived in it, alongside a few psychological words. And I'm being very serious: maybe we've reached a point where we'll have not just psychological, functional words — we'll have neurally derived objects in the sentence, just like we have the motor neuron in the stretch reflex sentence. I'm really not sure whether that would count as what we call in the paper a first-level explainer.
David 00:58:28 Hmm, it's interesting. Obviously, here at SFI, there were phases where people became very enamored of dynamical systems, and I think we're over that a little bit. I'm not sure this is exactly the same thing, but I'll give an example. It's an argument I had at a meeting at Harvard with George Whitesides — we were talking about the merits of information theory versus dynamics. He works in nanobiology, an extraordinary engineer and cell biologist, and he hated information theory. And I think I had a similar argument at the Salk Institute — I don't know if it was with Sutton — about this. It might have been. People like dynamical systems because it feels closer to the matter, you know; it's got that quality about it. And George felt: why do you need information theory?
David 00:59:24 It's just a secondary imposition, it's observer-dependent — just describe it all in terms of the dynamics; these functional concepts that you are imposing are useless; the system isn't doing anything, it's just colliding, it's just obeying Newton's laws. And so dynamical systems is the right way to describe it. So it's a kind of weird reductionism that's not microscopic reductionism — it's the kind John is describing. But the point about dynamical systems of the kind that we study is that you can't always write a Hamiltonian, you can't always write an energy function down, so you don't always have an action principle. So you can't always say: this is what the system is minimizing, this is the point to which it is tending. And what Shannon gave us was a framework where we could actually write down variational principles on top of dynamical systems. So you can say, as Bialek will, that a dynamical system is maximizing mutual information, or, as Tishby will, that it's implementing the information bottleneck, or what have you. And so you need this language to give you the variational architecture, the optimization language, that you otherwise don't have.
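For reference, the variational principle invoked here — Tishby's information bottleneck — can be written down explicitly: one asks for a compressed representation T of an input X that stays predictive of an output Y:

```latex
% The information bottleneck objective (Tishby, Pereira & Bialek):
\[
  \min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y),
\]
% where \beta trades compression of X against preserved predictive
% information about Y -- exactly the "what is being optimized"
% statement that a bare dynamical description does not supply.
```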
John 01:00:34 But why do you need it? In other words, again, when we use narrative, or words the way William James did, there's no formalism attached, but you're still doing understanding. I mean, look at Marder's incredible work on the stomatogastric ganglion, right, where she shows unbelievable redundancy in what the constituent neurons do, but there's an invariance at the level of the pattern. And, you know, Eric Smith, your very own Eric Smith, has made a beautiful case in ecosystems and in physics that you should treat the pattern as the entity of explanation rather than the component processes. So when you look at Eve Marder's work, it's the pattern that's invariant, generated by a lot of swapping out that you can do at the level of the components. So the invariance isn't at the level of the components. In other words, all I was saying is: why can't we have a neural pattern language?
David 01:01:35 No, I know. I know what you’re saying.
John 01:01:37 Isn't that okay?
David 01:01:39 Yes, no, that's great. I just want to make a point here: there are on the table three positions, at least. One, let's call it, unfairly, the sort of microscopic reductionist, who says it has to be as low as you can go, which is what John thinks I am. But that person in the end just has total physics envy and wants to do quantum mechanics. And they should, but they can't, so they do neuroscience or whatever they do. Okay. So then you have the aggregated middle ground, which is the dynamical systems one, which says: what we can do, like Shadlen and others, which is very interesting, is project onto this manifold, which captures the information; it's dynamically sufficient. In other words, the observable, my eye goes left or right, I get just from tracking this manifold.
John 01:02:33 Yeah, but just be careful, because there's a difference. I mean, I don't know what Mike has done most recently, but before, what he did was quite traditional: he would record from single units, see what they coded for, and then derive a psychophysical model, diffusion to bound, in this case with two parameters that were confirmed by the neural data. There was no theory of the kind you're describing. Now there is, right? So now, if Mike comes up with, you're right, now that I think about it, a dynamical system, then I think it's closer to a neural pattern language. I think it begins to get to being a first-level explainer.
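For readers unfamiliar with the model John is referring to: the diffusion-to-bound picture says evidence accumulates noisily until it hits a bound, and roughly two parameters, a drift rate and a bound height, capture both choices and reaction times. A minimal simulation, with purely illustrative parameter values, not ones fit to any data:

```python
# Bare-bones diffusion-to-bound simulation: drift sets how fast evidence
# favors one choice; the bound trades speed against accuracy.
import numpy as np

rng = np.random.default_rng(0)

def trial(drift=0.15, bound=1.0, dt=0.01, noise=1.0):
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t, x > 0  # (reaction time, chose "rightward")

rts, choices = zip(*(trial() for _ in range(2000)))
print(f"mean RT: {np.mean(rts):.2f} s, P(rightward): {np.mean(choices):.2f}")
```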
David 01:03:17 Yeah, exactly. So I just want to introduce this third position. You're right, that's his point: this predictive, low-dimensional manifold that you move around on. And it's useful, it's great, I love it. But the problem is it doesn't tell you where you should move. It doesn't tell you what the system is designed to do.
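Here is a cartoon, on entirely synthetic data, of what "dynamically sufficient" means in this middle-ground position: a high-dimensional population whose activity really lives near a low-dimensional subspace, projected down so that the projection alone predicts the behavioral observable (left versus right):

```python
# Synthetic demo: 100 "neurons" driven by 2 latent dimensions; a 2-D PCA
# projection of the population suffices to decode the behavioral choice.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_neurons = 400, 100

latent = rng.standard_normal((n_trials, 2))            # hidden manifold coordinates
mixing = rng.standard_normal((2, n_neurons))           # embed them in 100 neurons
activity = latent @ mixing + 0.1 * rng.standard_normal((n_trials, n_neurons))
choice = latent[:, 0] > 0                              # behavior depends on one dimension

centered = activity - activity.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
projected = centered @ vt[:2].T                        # PCA projection onto 2 dims

# simple linear readout of choice from the 2-D projection
w, *_ = np.linalg.lstsq(projected, choice.astype(float) - 0.5, rcond=None)
accuracy = np.mean((projected @ w > 0) == choice)
print(f"choice decoded from the 2-D projection: {accuracy:.2f}")
```

The projection predicts the observable, which is David's point, and also his complaint: nothing in it says why the system should occupy this subspace rather than another.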
John 01:03:38 Can't you just be teleological?
David 01:03:40 Wait, but what's so beautiful about the Hamiltonian, right? What's so beautiful about using information theory here is that it tells you something is being maximized under constraints, and that's a different language again. So, to be a pluralist here, I think there are multiple different pattern languages. There are the lowest-level Lego building blocks. There's, as you say, John, the dynamical systems level. But there's a higher level yet, which tells you what the system is moving towards: an action principle.
John 01:04:13 But I would say that once you get to that, you may not need to talk about neurons at all.
David 01:04:21 You can just do without them.
Paul 01:04:23 This is exactly the mapping that I'm seeking, right? These sorts of levels. And John, you're enamored with dynamical systems?
John 01:04:28 No, no, I'm not, it's not that I'm enamored. I'm just saying that as a functionalist I was much more interested in behaviorally inspired cost functions and psychological variables like errors and rewards and motivation; I was very much in that world, where you build cost functions out of behaviorally derived measures. But because of my work with David and thinking about this, I've been willing to see it differently, especially after Mark told me that when he began to work with these trajectories it felt very Feynman-esque. You know, I think we all should be willing to change our minds, and I thought to myself, hmm, this does seem like at least beginning to think with a neurally derived object, which is different from the behaviorally derived objects that I work with. So I began to think that maybe we're going to enter an era where we have two types of explanatory object on the same plane, a behaviorally derived one and a neurally derived one, actually on a level playing field. That's not something I was really entertaining until I began working with David Barack and talking to the people who are doing this kind of work. And I think maybe, at the moment, the closest thing to your wish is a hybrid functional object made out of behavioral variables and neural ones, but dynamical ones.
Paul 01:05:53 Hmm. So it doesn't have to be dynamics, but this is exactly the sort of thing I'm talking about, something that, to my level of satisfaction, would be some sort of bridging.
John 01:06:03 But it's not bridging, because it's a flat evidential landscape. In other words, both kinds of object are being used to explain, one derived from behavior and one derived from neurons. You could say one came up vertically and the other horizontally, but the spaces they occupy are not vertical with respect to each other. Is that okay?
Paul 01:06:30 Yeah, I'm okay with that. But it's interesting; maybe I haven't been explaining myself well, partly because it's unknown territory, and it's impossible to explain what you don't yet know how it's going to look, right? So I don't think a dynamical systems state-space trajectory is going to make me feel like that's the bridge, and I'm going to say bridging again, but some sort of mapping, some different level of understanding. And David was just saying there are going to be multiple levels. How many levels are there going to be? How many do we need?
David 01:07:05 Many? Infinite? Actually, I don't think so, because of this feature. It's an interesting question, right? We do know that there are an infinite number of models if you're allowed an infinite number of parameters: you can always fit a phenomenon that is fit with n parameters using n plus one and up. So I think it has to do with what satisfies our desire for understanding. And this gets to pedagogy, which is kind of a weird digression, but I've always thought that great teachers can explain the same idea in multiple different ways. I've just been reading a book called 99 Variations on a Proof, an homage to the French novelist Raymond Queneau's book Exercises in Style, and it shows that you can prove the same statement about a cubic equation 99 different ways. Who knows if that's the upper bound, but they all illuminate what a cubic is and what a solution means, and different human beings on this planet will like those proofs to different degrees. I love that. And I feel there's no reason to assume there's just one or two or three or four; there'll be multiple different levels.
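David's "n plus one and up" point is easy to see concretely. Any five data points are fit exactly by every polynomial of degree four or higher, so goodness of fit alone cannot pick out a unique model; the data below are synthetic:

```python
# Five points, fit by polynomials with 5, 6, and 7 free parameters:
# all interpolate essentially exactly. NumPy may warn that the higher-degree
# fits are rank-deficient, which is the point: the extra parameters are
# unconstrained by the data.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 5)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(5)

for degree in (4, 5, 6):
    coeffs = np.polyfit(x, y, degree)
    residual = np.max(np.abs(np.polyval(coeffs, x) - y))
    print(f"degree {degree}: max residual {residual:.1e}")
```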
John 01:08:16 Different level. Although I think it’s, I mean, that’s, I like ultra pragmatism. I would say that, that there will be a few favored levels for the best effective theories that you can do pragmatic work with. You can transmit understanding you can lead to new experiments, test new hypothesis. I mean, the best effective theory is the one that leads to the most fruitful number of conjecture hypothesis. Right? So in other words, it seems to me that it would be very odd to not all converge on some cluster of effective theory levels that would all work.
David 01:08:56 I don’t think that’s true. I mean, I gave the example earlier of Newton. You know, the way you did this is you just take conic sections, you get circles, you get ellipses kind of orbits, and then you can do it algebraically and you do it with calculus. And it’s just turns out to be much more efficient than doing a geometrically. But I, I, I’m not sure. I think John, I think by virtue of the preferential attachment nature of culture, that right, that there is a kind of a winner takes all dynamic. There will be a few preferred formalisms, but I’m not sure there’ll be preferred because they’re the best in some objective.
Paul 01:09:32 So in the case of just to bring it back to heat again, where we all feel comfortable with this idea of, you know, what heat is relative to the collection of molecules, is that it, I mean, we all agree. That’s fine. We’re all comfortable with it. Are there more levels that need to be had that could be had, we’ll be a better explanation.
David 01:09:53 There might be, there might be more parsimonious means of describing it. I mean, it’s true. Perhaps there’s something about the simplicity of the phenomenon that doesn’t permit.
Paul 01:10:04 That’s why that analogy might not be right between brain and mind
David 01:10:10 You take something, but you take the example that I gave of a cubic right. Pretty simple thing. Right. And you can just multiply proofs. Um, so I don’t know, I don’t know what the best analogy is. Okay.
John 01:10:21 I think also, I mean, I it’s been, I read it a while back, but you know, Rosa Cowan Dan tenons wrote, um, about, you know, the ventral pathway. And does it count as an understanding rather than what we’ve been saying, which is just, you know, uh, an opaque fit, right. And I actually think they make a good case, but at one point Rosa Calla and Dan Thomas talk about the contrarian principle, that the more, the more complex phenomenon becomes. And I’m, I’m sure I’m mangling this, the number of ways to actually get it done goes down, right? That, that, that, that simple things can be done in a lot of ways, complicated things, complex things reduce the number of degrees of freedom you have available to get it built. And so one of the reasons they argue that there’s genuine insight given from their work on the ventral stream and they make, you know, is that the best predictor of the neuro responses in the ventral stream is now given from a deep neural network that was trained on images.
John 01:11:32 In other words, it is kind of fascinating that if you want to predict, when you go into an area of the ventral stream, what the neurons will look like, you’re going to get a better prediction from your deep neural network, right then from what you think. So, so they argue that first of all, that isn’t a level of abstraction because there are no neurons with biophysics in that system, but they then say that the reason why that may be happening is that things like object recognition in a layered system, there aren’t that many ways to actually do it.
David 01:12:04 I mean, I w I don’t quite understand what they’re talking about, because
Paul 01:12:07 Let me jump in real quick here, because I’m out of time just about, so let’s end on this, but so, so David, let me give you the last word there. And, uh, I’ll just throw in. What if it’s the case that object recognition is just easy. And so there are many different ways to do it. And then David, this is a complexity question. So I’ll, I’ll let you address the, the many ways versus few ways to do complex things.
David 01:12:35 Well, I don’t, I mean, I don’t have a definitive answer. I simply just say that if you have some Boolean function, it can be realized in an infinite number of ways. I don’t understand this idea, that complex things have few ways of being
John 01:12:47 Well. I mean, they convert an evolution that, you know, wings, right. That they ended up having a similar shape. It’s not like you can have,
David 01:12:55 But they’re realized
John 01:12:56 Totally different, but their shape is
David 01:13:00 Well, that’s the function. I don’t know. I haven’t read the paper, but I bet it’s wrong.
Paul 01:13:05 That’s a great, great question, guys. So
John 01:13:09 Thank you very much for putting up with us. Thank you.
Paul 01:13:11 That’s fine. Oh, no, thanks. I appreciate you guys piling on me there for a long time. That was great. Yeah.
David 01:13:16 That is new Mexican photons that you’ve seen me moving like a sundial