Episode Transcript
[00:00:04] Speaker A: Neurons are not components, they are living units, and this is how they should be treated. If that is true, and it is true, then what a brain is, or living tissue is, is a collection of interacting living systems.
You have this tendency in neuroscience to say, well, there is information in a signal or there is information in a neuron, right? But if you look inside a neuron, you will never find some stuff called information.
That's impossible. It's impossible because, first, information is something relational: it's about something else. So it's not in the neurons; it's about something else.
The problem in predictive accounts in neuroscience is that the predictions are made by the observer and not by the organism. Well, that's one of the problems. The second problem is that the sense of prediction that is used is extremely narrow, because the kind of predictions that we can make, like "it might rain," for example, is much broader than the kind of predictions of predictive coding theory, which is predicting the next image.
It's a sign here that people got completely lost, because apparently people like Turing or von Neumann, all they had to do was to look at a rock in the right way. And yet they were struggling to build a computer when it was all trivial and just a matter of perception.
[00:01:31] Speaker B: It's all right there.
[00:01:34] Speaker A: It doesn't make sense. I mean, at this point one should step back and say, well, maybe I missed one important thing, you know.
And so.
[00:01:49] Speaker B: This is Brain Inspired, powered by The Transmitter. Brains encode information in representations that perform computations to make predictions, right?
No, no, no, no and no. That's what Romain Brette says. That's his response to those ill-conceived notions that neuroscience relies on to try to explain how cognition works.
He uses more words than that to do so in his new book, The Brain in Theory, which we discuss today. Romain is Research Director in Computational and Theoretical Neuroscience at the Institute of Intelligent Systems and Robotics in Paris, France.
In the book, Romain breaks down how many of the common metaphors that we use don't withstand scrutiny. And he offers alternative approaches more in line with what we know about how biological entities work.
Along those lines, we discuss his ongoing work understanding the cognition of a single-celled organism, the paramecium, and what his views might mean for artificial intelligence. This is a long episode, but there's a lot more to be explored in the book, so I definitely recommend that you get it and read it. If you're a Patreon supporter, I coaxed Romain back on for another 40 to 45 minutes or so to go deeper on his thoughts about how anticipation is the core of cognition, how predictive processing accounts like active inference, predictive coding, and the like actually miss the mark, and how we should be focusing on the notion of anticipation instead. And we also discuss a few other topics in that extra time. So if you're a Patreon supporter, I hope you enjoy that. To hear that and all full Brain Inspired episodes, go to braininspired.co and sign up for Patreon. Thank you, Patreon supporters. I hope you enjoy the extra discussion. And thanks to The Transmitter, as always, for their continued support. Here's Romain.
Romain, we were just talking. It's been over five years. Welcome back. What do you. Do you. Have you been avoiding me? Do you just not like me? What's the deal?
[00:04:11] Speaker A: You just didn't invite me. That's the problem.
[00:04:13] Speaker B: Oh, I see.
Okay. The Brain in Theory. Let me know if I have this.
That's the title. Let me know if I have the subtitle correct. The subtitle that I have is: Why Engineering and Computational Analogies Are Poorly Suited to the Study of Biological Cognition. Is that still the subtitle?
[00:04:35] Speaker A: Actually, no. There is no subtitle.
[00:04:37] Speaker B: There's no subtitle.
[00:04:38] Speaker A: Okay, so what you said is what's written on the back cover, like, here. Right.
[00:04:43] Speaker B: Oh, okay. Maybe that's what.
[00:04:45] Speaker A: There is no subtitle.
[00:04:47] Speaker B: That's what I thought. Yeah. Okay. Yeah.
[00:04:49] Speaker A: In fact, we had been discussing the subtitle, and after several weeks, we couldn't find an agreement on it.
[00:04:59] Speaker B: I like subtitles.
[00:05:00] Speaker A: Yeah. So we just dropped it. I don't know, the publishers, they like subtitles for, I don't know, marketing reasons, I suppose.
[00:05:08] Speaker B: Yeah. Okay.
[00:05:09] Speaker A: So what you said just.
I noticed it on the website after the fact. I wouldn't have put that, because that's not really what I wrote, basically. So.
[00:05:20] Speaker B: Yeah. Right.
So that's some, like, copy that some copy editor probably came up with or something.
[00:05:26] Speaker A: Well, I mean, in a way, it's my words, but just edited from a bigger sentence, you know?
[00:05:34] Speaker B: Let me see the book again. Hold the book up again so that people can see it. You just got the proofs or whatever. What did you call it?
[00:05:41] Speaker A: It's the advanced copy.
[00:05:43] Speaker B: Advanced copy, yeah. Yeah, yeah.
[00:05:44] Speaker A: Cool. Yeah.
[00:05:46] Speaker B: Yeah. I have the digital version, which has the same cover, so I like that. The Brain in Theory. It's strong.
[00:05:52] Speaker A: Thank you.
[00:05:53] Speaker B: Yeah. No subtitle needed, so.
[00:05:55] Speaker A: Yeah.
[00:05:55] Speaker B: Okay. So. Yeah, go ahead.
[00:05:57] Speaker A: Yeah, just to comment on what you just read. That sentence, why engineering analogies are poorly suited. Well, I didn't write that exactly. In fact, in the very beginning of the book, I think I'm a bit more nuanced than that, in acknowledging that engineering analogies have been useful in the study of brains and cognition, but just that they have limitations, precisely because the brain is not engineered. So there must be some limitations to that. But some concepts in particular are less suited than others, I think, to the study of cognition, like computation. I'm not sure it's a very good concept, for example.
[00:06:40] Speaker B: Okay. Yeah, we'll talk more about that.
So, I mean, maybe, first of all, what I wanted to say is congratulations on the book and that I think that this is. If I were going to write a book, this is like, the closest thing that I would want to write. I mean, I wouldn't write it as well as you have written it, but you're sort of preaching to the choir.
To me.
To me, if I'm the choir, you are definitely preaching to the choir. So I immensely enjoyed the book, and I hope a lot of people check it out. And then I started wondering.
I looked back at our previous discussion, and it's hard to keep track of who has influenced you how over the years. But when I read your book, this is exactly where I am in many respects, in the way that I am thinking about cognition. And then I'm thinking, I didn't come up with any of these ideas on my own, and I can point to people along the way. And, you know, throughout the book, you're citing everyone that are, like, my little kind of heroes. And you're one of those people, you know? And so I just feel like, anyway, it's hard to keep track of all the influences, you know. I don't know how I got to where I am.
[00:07:50] Speaker A: Yeah, well, I can tell you my influences, I guess.
I mean, some of my early influences, I think, have been Francisco Varela, for sure. That was a big, big thing when I discovered that, and just a very different way to think about cognition.
Gibson, too. Gibson was also an epiphany for me, I would say. It's also maybe the. I don't know, the state of mind I was in when I read it, because I was actually working on perception, auditory perception at the time. And then I read his classic book, and I was like, yes, that's it.
[00:08:31] Speaker B: That's great. So many people have that. Yeah. Have that same, like, reaction. Right. Like, it's like an epiphany or something.
[00:08:37] Speaker A: Yeah, especially at the very beginning of the book, the introduction, he talks about the concept of information.
And the concept of information when you come from computational field, is Shannon. It's a very specific concept of information. And very early in the book, he says, well, there is.
I mean, perceptual systems are about getting information, but not information in the sense of Shannon. Because that's not at all the sense in which we usually mean information when we say you get informed by something. And so.
And.
And yeah, I mean, I. I hadn't read that before, although, of course it's. It's an old book, but somehow it's.
[00:09:21] Speaker B: Well, Shannon would have loved that because he wrote that short blurb about, like, warning people, like, look, don't use my concept of information for everything. And so Gibson would have. Was in line with that.
[00:09:33] Speaker A: Exactly. I mean, that's quite amazing. When you read the old.
Yeah, the old works of Shannon is that very early. I mean, very explicitly. Even in the pioneering paper, he writes explicitly, my concept of information is not about semantic information. He says that very explicitly.
And in fact, his paper, the title is not information theory, it's communication.
It's A Mathematical Theory of Communication, I think. Right. If I remember correctly.
But then, I don't know. It's just.
It was just so appealing, I guess. I don't know.
[00:10:16] Speaker B: Yeah, well, I mean, that's how we study, like, neural communi... I was going to say communication. The neural activities, as you would put it. Right. Spiking. Well, they, in a sense, like, talk to each other. So there's got to be information there. Aha, Shannon formalized information. We need something formal.
Ride this metaphor as long as we can. And we still do.
[00:10:39] Speaker A: I guess the fact that, I mean, Shannon provides a formalized concept of information is very appealing, especially to modelers, because the concept of information is very hard to even explain, in fact, so let alone to formalize it. So when you have a very beautiful theory that formalizes something about information, that feels right and it's very tempting.
So, yeah, there's Gibson, and then there are a number of other people, of course. Bickhard is one of them. But Bickhard I discovered quite late, in fact, after I wrote the paper on neural coding, the BBS paper.
[00:11:29] Speaker B: You discovered Bickhard after that?
[00:11:31] Speaker A: Yeah, right after that.
I don't know how I stumbled on it, but that's the thing.
[00:11:38] Speaker B: You can't keep track of all these things unless you keep a diary, which is what people used to do. I had Mark on the podcast, and... one of the difficult things... we're just really getting into everything right now already, so we'll come back to the beginning in a minute. But one of the difficult things about, for example, reading Bickhard and reading you is that the issues are subtle, and it takes a while to unpack everything from our sort of everyday Kuhnian paradigm sense of thinking about these things. And so I have trouble reading Bickhard, and even reading your book.
You know, you use a word, like, for example, prediction, and we all have this common-sense notion of what prediction is. And then you use it in a certain sense, and you try to reorient people to some of those terms, you know. And that's what Bickhard does also. But it's really, really rich once you sort of get it, you know. So. Yeah, I like the interactivism. Is that what he calls it?
[00:12:39] Speaker A: Yes, it is. Yeah. Well, what I liked very much about Bickhard... and Bickhard is a difficult author too. I mean, he's very precise and technical in his writing. So it takes a lot of attention, basically. And that's.
[00:12:57] Speaker B: But, sorry to interrupt, but I think when the issues are subtle and complicated, there isn't a pithy little punchy one-sentence thing that captures it all. So it takes time.
[00:13:09] Speaker A: No, that's true, for sure.
What I mean, the insight I got with Bickhard, basically... Well, there's all this stuff about coding, but I had already thought about it and come to similar conclusions. So I was like, oh yes, a friend. But what I really got from his writing is the metaphysics stuff. I don't know if you see what I refer to, but it's his idea.
[00:13:38] Speaker B: Metaphysics.
[00:13:39] Speaker A: Exactly, exactly. It is the idea that the way we tend to think about mental stuff is exactly that: in terms of stuff, basically. That you have mental states in the brain that you manipulate, as if you had actual objects. It's basically thinking about mental processes as if you were manipulating objects, physical objects in the world. And if you think about it, in the early days of AI, Minsky and those people, the model they were working on is a world of blocks. And it's exactly that. It's the idea, yeah, to think about phenomena in terms of moving blocks, changing the properties of blocks and so on. And that's substance metaphysics, basically. That's what Descartes describes.
But when you look at living phenomena, it's exactly the opposite. You have phenomena which are intrinsically fluctuating.
You might think of a river, for example, or a flame is a great example, also a candle flame. Those are intrinsically fluctuating events. And in those processes, you might look for attractors, or stable things emerging from the processes. So what you are trying to explain is essentially the reverse. You start from something fluctuating and you try to see if there are stable aspects of it, as opposed to the intuitive way of thinking of things, which is to assume from the start that you have stable objects with properties that are there from the start.
And one thing that he shows, that he points out also in his papers and his books, is that if you look at the history of science, and particularly in physics, the general move has been to go from substance metaphysics to process metaphysics. And it's. It is logical because you start by assuming that phenomena have certain properties, but then you want to explain why they do. And so you can't assume them anymore. And so as you make some progress, you have to kind of question the assumptions, try to explain them in different terms.
And so you have quantum physics, electromagnetism and so on.
But in cognitive science, it's kind of not mainstream yet, I would say.
[00:16:29] Speaker B: No. Yeah, you even write in the book that we've kind of skipped over it. Same with molecular biology, which I agree with, but maybe we're coming back to it.
But there is this tension of, in a sense, that abstraction, or coming from the idea, from a substance metaphysics, that there are things to be manipulated, things can be thought of from a process metaphysics perspective. Things are just really slowly flowing processes. Right. And so something stable is just stable with respect to the things that are flowing around it. And actually everything is flowing or whatever.
But what I was originally going to say there is that in some sense the crowning achievement of our intelligence is the ability to abstract and to keep these sorts of things in stable form, and so to create symbols, which are like some abstract, stable thing. So it's almost natural for us to approach cognition from that perspective rather than the messy, everything-is-flowing-and-moving perspective. I don't know how. You know, it's almost like because we're so quote, unquote intelligent, we can trick ourselves into thinking that we have these stable abstract ideas which we manipulate as symbols, et cetera.
[00:17:54] Speaker A: Yeah, I guess what you're saying is that we, especially in science, are trying to build categories and laws and so on which are. Yeah, stable things. You hope that the scientific laws are stable when you try to explain the world, but then you have to be careful not to confuse the model and the reality.
So the scientific laws are things that you can write on paper and they don't change, but they are about things that themselves change. It's just some properties that you can define where you can find some regularities and so on, but it's secondary to the phenomena themselves, which are not, in general, stable.
[00:18:39] Speaker B: Let me see if I can summarize your approach here, and then you can tell me how I'm wrong. Before we get into some of the specific kinds of topics in the book, it seems to me that your kind of recurring framework or way of thinking about cognition is that, you know, most of our conceptual tools, at least, that are popular these days or have been kind of dominant things that we, you know, ascribe to a system, like a biological system, are kind of imported to the system from our external cognitive apparatus, and we're not taking the system as itself seriously enough. And so it's sort of a shortcut that is the source of many of our conceptual errors in terms of thinking about cognition. Is that a fair summary of your general approach?
[00:19:34] Speaker A: Yeah, I think the.
Yeah, I think that's good. Yeah.
We tend to think about what's going on inside our head in the same way as we feel, how to say, our unbounded experience, if you wish.
We were talking about the world of blocks. And if you look around you, you see objects, and they're here and they don't move, or you can move them, you can make actions on them. And we think that in the brain, somehow the neurons, they do things like that. They move things around and so on. Right. So it's a projection of our macroscopic experience, sort of, onto the microscopic world of neurons.
[00:20:27] Speaker B: But this runs across the gamut of concepts like coding, information, representation. These are all sort of things that you say. Well, that's actually the experimenter or the observer that is experiencing those things. It's not the system itself. It's not inherently contained in the system. We're importing those concepts as if they're inherently contained in the system.
[00:20:50] Speaker A: Yeah. So these are different concepts. So I don't know if I can discuss them all at once.
[00:20:57] Speaker B: No, no, I mean, you don't need to discuss them all. I just was kind of thinking about your general approach, because that is a recurring theme that you cite throughout the book. That's where the conceptual slip is: oh, actually, it's the observer that is decoding the.
[00:21:12] Speaker A: Yeah, yeah.
[00:21:13] Speaker B: There's no decoding happening between neurons, for example. So this is just a common kind of theme that you return to.
[00:21:20] Speaker A: That's correct.
Especially for the term information. For example, you have this tendency in neuroscience to say, well, there is information in a signal or there is information in a neuron. Right. But if you look inside a neuron, you will never find some stuff called information.
That's impossible. It's impossible because, first, information is something relational. It's about something else. So it's not in the neurons; it's about something else, elsewhere. So you will never find it in your neuron.
[00:21:51] Speaker B: And also, Are we talking, Shannon, information right now?
Specifically? Because we have to specify. Right.
[00:21:56] Speaker A: Well, in general, whatever it is, if you think of information as something that is about something else, then it's not in that thing. Nothing can contain information.
So when you say, for example, that a book contains information, of course, in a sense it does, but it's kind of an ellipsis. It means that when you read the book, you are informed.
And you are informed because you know that the letters refer to some concepts which you know already in advance. Of course.
So, yeah, you don't really have information inside the letters. If you look with a microscope, you will not see that there.
So it's just a way of speaking. There's information in the book in the same way there's no information in the neuron or in the signal. Not something that they could send somewhere else, for example. Right.
So rather, it is the system, and never a neuron, which informs itself. That's what it means.
[00:22:56] Speaker B: Yeah. Okay. Just sticking with the sort of higher-level themes: for someone who's going to approach this book, one way to read it is, every chapter you can think, all right, which of my hopes and dreams is Romain going to dash this chapter? You know, it's like a process of deconstructing and criticizing a lot of the main metaphors and paradigms that are used to pat ourselves on the back and say that we actually accomplished something in neuroscience. Right. And in a sense, I've often said this on the podcast and elsewhere, it's really easy to criticize things. What's really difficult is to build up, to synthesize, to create new things. And you're not just criticizing in the book, although that's a large part of it. You take a lot of pains to flesh out why these concepts are not particularly the answer. Yes, they're useful, but there's something fundamentally missing from them.
But then you go on to actually try to build toward something. And so I want to make sure that we come back to that and not just spend the whole time talking about how information is wrong and coding is wrong and how everything is wrong. Because in a sense, that's the easy part. Although it's not necessarily easy.
[00:24:16] Speaker A: Yeah, yes. No, you're right. But I wouldn't completely agree that criticism is easy.
Yeah, maybe. Maybe that's true. Probably that's true. Although it's a necessary first part. I mean, why would you build something new if you thought that the existing thing was fine? Right.
[00:24:40] Speaker B: That's right. But what I'm saying is that that's not what you're. A lot of people spend their careers just criticizing and then never building anything new, which is harder. Okay, we'll say that. And to criticize. Well, is difficult, but maybe in a sense, it's easier once you hone in on what is wrong with the problem, then everything kind of falls out of that.
[00:25:02] Speaker A: Right, exactly. I think. I mean, what is difficult in the criticism is to pinpoint precisely what is wrong. Because many people, I think, have a feeling that certain things do not quite work.
[00:25:21] Speaker B: Right.
[00:25:22] Speaker A: But to understand really what is wrong with that, I think this takes some time and thinking, and then from there, you can try to think about alternatives, and that's more difficult. I agree with that.
[00:25:42] Speaker B: Well, I know that this book... So one of the things I wanted to ask you is, how do I know when is the right time for me to write my first book, or a book? Right. And for you, this is something that was born out of a project of blogging, but the blogging was born out of a project of curiosity about some of the assumptions from which we work in these fields, and you set aside time every week to think about these hard problems and then try to articulate your thoughts about them through the blog. And I know you had a conversation with György Buzsáki, and he said, just do it. And so finally you did it or something. But how do you know when the right time is to write a book?
[00:26:23] Speaker A: Yeah, I don't know. There's no right time.
That's.
Yeah, I think that was the advice that he gave me, that you should just write it.
Don't wait until it's published. Basically, that's the advice he gave me.
[00:26:40] Speaker B: Some people write books in order to learn about a subject; the best way to learn is to teach, for example. And so part of that was the goal?
[00:26:48] Speaker A: I quite agree with that. That was in fact, exactly my experience when I started blogging.
That was more than 10 years ago now.
And yeah, it made me think about these different concepts.
And yeah, as you say, then after the blog, I wanted to write a book which would be more synthetic and organized around these subjects.
But then when I did start to write the book, well, there are many new things basically that I think I understood by just working on the book, which you will not find in the blog.
Everything on anticipation, for example, is not, I think, something I discussed there.
And there are other things that just crystallized basically by working on the book.
It took me, yeah, quite a bit of time to write. I mean, I don't know, for some people... I think for Mark Bickhard, it took him 20 years, but.
[00:28:02] Speaker B: But he's. He's still writing. He's not. He's not done yet.
[00:28:04] Speaker A: I mean, his last book, yeah. But for me, it took two years, and I was working on it every morning, basically, for two years. So it's still quite a lot of work.
But yeah, I think it helped me a lot.
[00:28:23] Speaker B: Well, part of writing a book or a blog or just writing in general, is that you find out what you don't know.
Right. And so I don't know, maybe one opening question for you. Opening.
We've already been talking a while. Is there something in particular that was challenging, in terms of realizing, oh, these things aren't connected like I thought they were connected? Or was there some friction that you really struggled to get past that was sort of the key to other things? Was there something in particular that you learned from writing the book?
[00:28:58] Speaker A: Yes, I mentioned anticipation. This must be the most difficult concept, I think, in biology, which you call
[00:29:05] Speaker B: the core concept of cognition.
[00:29:07] Speaker A: And it is the core, I think, indeed, of.
Yeah, of cognition, indeed. That's the thing, I think, that people are trying to explain with computation or with other theories, and are struggling with. So people have had to coin new words, like teleonomy and whatever, because we don't know how to explain that. How can something be about the future? What can.
What does it mean?
I've been struggling with that. I can't say that I've solved it. I feel that I have understood a bit of it, at least clarified a few things. I don't think it's a closed chapter. Also, on the notion of computation, I think I have a clearer mind now on what it means.
[00:30:05] Speaker B: Okay.
[00:30:06] Speaker A: What a computation is and what it is not and why in particular the brain should not be described as a computer, for example. Well, let's.
[00:30:16] Speaker B: All right, let's go down the computation road for a minute then.
So, I mean, all of these things are also like, things that I'm struggling myself to think about and articulate. It's one of those things that's like, obviously it's not just computation, but then to articulate why it's not computation and what it is is also a hard thing to do. Right. What takes the place of computation?
I was just talking with Luis Favela, and we were talking about computation and dynamics and the relation between them. And so he asked me to define computation. And I was like, ah, shit, you know, Turing, blah, blah, blah. But one of the features is that it's atemporal.
And yes, you can do it. You have to sequentially do things to carry out the computation. But a computation itself has no temporality, no dynamics itself. And I don't think you say that in particular in the book, but you talk a lot about things that are completely in line with that.
[00:31:15] Speaker A: I don't think I said that in particular in the book.
[00:31:19] Speaker B: But you do say it, that it's atemporal, though. You probably do.
[00:31:23] Speaker A: I do. Actually, there is one example, which is the clock.
And the clock, if you take even a digital clock, for example, a digital clock implements a computation, right? It's digital. But what computation is it?
Well, I'll tell you what it is. It's a counter. It counts.
Right? But I mean, you can have a counter that doesn't tell you the time.
In order to tell you the time, its iterations basically need to be synchronized to a physical system that ticks. In other words, a clock. Right? And the clock is by definition not the computation; it's not the counter. I mean, you can have counters that count various things. You can count cells, for example, and it's not a clock, but it's the same computation.
So I'd just say a computation is not about time. It's independent of time. It's even in the definition of computation that it's supposed to be independent of implementation. That's the whole idea of software versus hardware.
And so, yeah, I think time is a really important part that is absent in computation, and that's what distinguishes it from coupled dynamical systems, for example.
That's very important.
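The counter-versus-clock distinction above can be sketched in a few lines of Python. This is a toy illustration with invented names, not code from the book or the episode: the counter is the same computation whether it counts cells or ticks, and it only becomes a clock when each step is coupled to a physical process that ticks.

```python
import time

def counter():
    """A pure computation: it counts steps, with no notion of time.
    Counting cells or counting seconds is the same computation."""
    n = 0
    def step():
        nonlocal n
        n += 1
        return n
    return step

# The same counter, counting things that have nothing to do with time:
count_cells = counter()
for cell in ["cell_a", "cell_b", "cell_c"]:
    count_cells()

# It only becomes a clock when each iteration is coupled to a physical
# process that ticks at a regular interval. The coupling (sleep) is not
# part of the computation; it is what makes the counter tell the time.
def run_as_clock(step, ticks, interval_s=0.01):
    t = 0
    for _ in range(ticks):
        time.sleep(interval_s)  # stand-in for the physical tick
        t = step()
    return t
```

Nothing inside `counter` refers to time; only the coupling in `run_as_clock` does, which is the point being made about computation being time-independent.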
There is another aspect. Or should we go on with this subject?
[00:32:53] Speaker B: No, yeah, sure, sure.
[00:32:55] Speaker A: Which I realized while writing the book. I mean, I had been thinking about this question already, but as you said just earlier, if one asks you what it is, what is computation? How do you distinguish it from dynamics, for example?
Well, and you gave a great characteristic.
But it's interesting that we find the question difficult.
I think it's interesting because it has apparently been not easy at all to make a computer. Right. It's a rather recent invention in the history of humanity. Right. And yet we are here wondering, but isn't everything just a computer? Right. And in philosophy of cognitive science, for example, the big challenge is to define computation, physical computation, in a way that avoids triviality, like everything is a computer, the solar system and the climate and so on. To the point that some people, like Shagrir, for example, who's the leading philosopher on this question.
Well, his point of view is that computation is more a perspective on a system.
It's a way to see how a system works, but it's not like an interesting property. I think it's a sign here that people got completely lost, because, I mean, apparently people like Turing or von Neumann, all they had to do was to look at a rock in the right way, and yet they were struggling to build a computer when it was all trivial and just a matter of perception. It's all right there.
It doesn't make sense. I mean, at this point, one should step back and say, well, maybe I missed one important thing.
And so then in the book, I started to ask this question, well, what is it that they were trying to do? What challenge were they trying to solve, basically?
And what were they trying to implement, like, concretely?
And if you look at what they were trying to implement, it's pretty simple.
I mean, it's not pretty simple, but it's more straightforward. So a computer, before there were machines, a computer was a person, a person who did calculations with a pen and paper, typically. Anyway, look at the classical definition of computation, effective methods; that's the term that Turing and Church used, and that's what they were trying to formalize.
So an effective method, or computation, is something that you can do, a procedure that you can follow through elementary instructions, without any ingenuity, by just using a pen and a paper, okay?
[00:35:49] Speaker B: I.e., an algorithm.
[00:35:51] Speaker A: Yeah, exactly. That is an algorithm. That's what Turing and Church, for example, we're trying to formalize.
And so the Church-Turing thesis says that these two models, the Turing machine and the lambda calculus, are equivalent, and that they formalize this loose definition. And so obviously it's not a theorem. It's just a way of saying, well, this is the formalism that corresponds to the intuition that we have of an algorithm.
So in this definition, usually when you study computer science, the key part of the definition is that it's a finite set of elementary instructions.
And this is indeed important, because if you allow the machine to use any kind of instruction, then everything is computable. You just say the instruction is to do whatever you want to do. Right? So it has to be a finite set of instructions. So that's one thing.
So for that they used automata, for example, and so on.
But in fact, there's another part of the definition which is usually unnoticed. But I think it's the key.
It's just using pen and paper.
And why is it the key?
Because in order to do a computation, to follow the procedure, you have to write down intermediate results, the computational variables.
You do that on a paper, and when you write something on a paper, it doesn't change.
You can come back to it, you can read it several times.
And what the algorithm is about is changes in what you write on the paper.
So the very core of a computation, coming back to our previous discussion, is based on substance metaphysics. It's the idea that the basic objects are stable objects, symbols on the paper, and the algorithm describes how you change those objects from time to time. And very importantly, these objects don't change by themselves, autonomously. Right.
Now, if you want to implement a computer, what you need to do is basically something that implements the pen and the paper.
And so what Turing did is, for the paper, he had the tape and. I'm sorry.
[00:38:28] Speaker B: That's okay.
[00:38:29] Speaker A: No, no, it's just spam.
So in Turing's model, the paper is the tape and the pen is the automaton that changes the tape.
And if you look at the old computers, mechanical computers, what they used were mechanical components, typically gears, because a gear is something that is stable but that you can change to a different stable state. Right.
But then if you want to scale that, it's very complicated.
So people started to look at electrical phenomena. The problem with that is that electrical phenomena are not stable at all. And so what did people do? Well, they had to use some ingenuity. It wasn't actually just about looking at a rock in the right way. It was about building stable circuits out of fluctuating phenomena. And so you have the flip flop circuit, for example. The flip flop circuit takes two transistors and couples them in such a way that you have two stable attractors, and so you can use that to hold a bit, and you can transition from one bit to another.
And then those flip flop circuits communicate with each other only at the time of equilibrium. And so you only have communication between stable states.
And so, I mean, you have to engineer that. It's not just about putting electrical components together and, there you go, that's a computer. No, you have to make it so that you have stable elements that can be modified.
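A minimal sketch of the bistability being described, assuming cross-coupled NOR gates (the textbook SR latch) rather than any particular transistor circuit: two gates feed each other, giving two stable attractors, so the circuit can hold a bit between writes.

```python
# Illustrative sketch: an SR latch built from two cross-coupled NOR gates.
# Two stable attractors (Q=0 and Q=1) let the circuit hold a bit between writes.

def nor(a, b):
    return 0 if (a or b) else 1

def sr_latch(set_, reset, q, q_bar):
    """Iterate the cross-coupled NOR gates until the state settles."""
    for _ in range(10):  # a few passes are enough to reach equilibrium
        q_new = nor(reset, q_bar)
        q_bar_new = nor(set_, q_new)
        if (q_new, q_bar_new) == (q, q_bar):
            break
        q, q_bar = q_new, q_bar_new
    return q, q_bar

q, q_bar = 0, 1                      # start out holding a 0
q, q_bar = sr_latch(1, 0, q, q_bar)  # pulse Set -> latch a 1
assert q == 1
q, q_bar = sr_latch(0, 0, q, q_bar)  # inputs released: the bit is held
assert q == 1
q, q_bar = sr_latch(0, 1, q, q_bar)  # pulse Reset -> back to 0
assert q == 0
```

The point of the sketch is the middle step: with both inputs released, the feedback loop keeps the state at whichever attractor it was last driven to.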
So that was the challenge in making a computer. And it's true even of more modern computers: if you take the quantum computer, what's the challenge with building a quantum computer that makes it perhaps impossible to do?
How could it be impossible if everything is a computer? Right?
Well, maybe it's impossible to do. Why? Because it's very hard to hold a qubit, a quantum bit, stable for any reasonable amount of time without it interacting, on its own, with other things. So that's the technical challenge: to make stable elements.
So the core of computation is the paper.
It's the paper. You need to have stable symbols on the paper that you can edit with your pen.
I think it's useful to take an actual, concrete algorithm, and in the book I think I take the factorial algorithm.
[00:41:11] Speaker B: I think you could take the Fibonacci sequence, maybe.
[00:41:13] Speaker A: Oh, do I?
At some point, yes, but I don't think here. But it doesn't matter. Although it's actually a good example that you're giving here.
[00:41:25] Speaker B: You gave it, not me.
[00:41:27] Speaker A: Yeah, but I didn't give it in this context, I gave it in the context of prediction. But anyway. Right. Anyway, in most algorithms you have variables, and those variables are used several times, right? And so in order to implement the algorithm, you must be able to read a variable, and the variable should not change until the next time that you want to change it.
So if you read it several times, it will be the same as the last time it was modified. If it changes on its own, then you can't do the job. Basically, that's why the paper is important.
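The factorial algorithm mentioned above makes the point concrete: the variable below is read and rewritten many times, and the procedure only works because it holds its last written value in between (an illustrative sketch, not code from the book).

```python
def factorial(n):
    # 'result' plays the role of the paper: between steps it must hold
    # exactly the last value written, or the algorithm falls apart.
    result = 1
    for i in range(2, n + 1):
        result = result * i   # read the stored value, write a new one
    return result

assert factorial(5) == 120
```

If `result` could drift between iterations, the way a neuron's membrane potential drifts, the loop would multiply garbage.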
Right? And so if, for example, you built flip flop circuits so that communication happens before the transistors reach equilibrium, then it will not work.
You need a synchronizing clock that is adjusted in such a way with the components so that you actually act at equilibrium, otherwise it will not work.
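A toy illustration of that timing requirement, assuming a made-up exponential settling model: a written value relaxes toward its target, and a read before equilibrium returns the wrong bit, while a read at a well-tuned clock edge returns the right one.

```python
import math

# Illustrative: a "wire" whose voltage relaxes exponentially toward its
# target level after a write. Reading before it settles gives garbage;
# a clock tuned to the settling time reads only equilibrium values.
def voltage(target, t, tau=1.0, start=0.0):
    return target + (start - target) * math.exp(-t / tau)

def read_bit(v, threshold=0.5):
    return 1 if v >= threshold else 0

target = 1.0                      # we wrote a logical 1
early = voltage(target, t=0.2)    # read long before equilibrium
late = voltage(target, t=10.0)    # read at the (well-tuned) clock edge

assert read_bit(early) == 0       # too early: still reads as a 0!
assert read_bit(late) == 1        # at equilibrium: the written 1
```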
That's also, I think, what Turnable described in one of his papers that I liked very much, about levels.
And what he says is that in the computer there is this.
I'm not sure what word he used exactly, but this kind of shield between the microscopic dynamics and the macroscopic operation.
Basically, the macroscopic operation of the computer is independent of whatever the electrons are doing in the transistors, because the transitions happen at very specific times, at equilibrium.
And you don't have that in a biological system.
The cells, the neurons, are not coupled just when they are at equilibrium. In fact, they're basically never at equilibrium. It's an out-of-equilibrium system. Out of this you might see, at a higher level, maybe some transient attractors. I mean, that's possible, but it's not the core of the organization of a living system.
[00:43:52] Speaker B: So how did we get here? We started by talking about computation, and now you've ended up on: this is not the way that biological systems actually work. And I'm tempted to just say, yes, but this is the approximation that has been useful, to look at the different levels and, in some sense, isolate them by brute force. Right, isolate spikes, for example. That's the whole point of taking an information-theoretic approach to the levels, to isolate, to shield that level from the rest. I'm not sure if that's a direct analogy to what you're getting at.
[00:44:36] Speaker A: The problem is that the system itself is not congruent with what you're trying to do here.
I mean, the computer is like that for the reason I said: it is built in such a way that everything that occurs out of equilibrium does not cross the boundary of the logical units. Basically, that's not the case for neurons.
[00:45:00] Speaker B: How so?
[00:45:01] Speaker A: Yeah, if you just look at a voltage trace of a neuron, well, it's fluctuating. It is not stable. I mean, people then talk about firing rates because they want to have stable quantities. But it's a cheat.
Of course, if you measure anything, you get a number. Right, but whatever it refers to is actually fluctuating. And the next second you get a different number.
Intrinsically, it's a fluctuating system.
It's not like a bit that you can write and then later on read several times. No, the spikes are events and the cells are coupled.
[00:45:46] Speaker B: Well, this is why some people, like Randy Gallistel, argue that actually, if you want to do computation, you need something stable, like DNA, though that's probably not it.
So it must be some stable molecule, because proteins turn over at a timescale that's not useful. If you leave a protein and come back, it's not going to be written on the paper the same way.
And so this is why he argues, yes, we do computation, and therefore there must be stability in something that we're not seeing as of yet.
[00:46:17] Speaker A: Yeah, so that's a very good remark. Indeed, although I don't agree with him, this specific argument I think is right, in the sense that he realizes that the physical states of neurons are not computational states. He doesn't say it in these terms, but that's what it means.
They are not computational states.
But you do find in cells
[00:46:49] Speaker B: structures
[00:46:50] Speaker A: that are stable and that could be therefore the basis of a computation, which are nucleotides, for example.
But I think the logic is wrong.
Well, okay, for several reasons. That's his story about nucleotides. But, how to say it, in order to build a computer, yes, you do need the system to be built out of stable units
that communicate together at equilibrium and so on.
But in order for a system to do computation, among other things, that is not necessary.
You could have a physical system, a dynamical system, which can, for example, hold a pen and write on paper and execute computations, and you don't need to have symbols in the brain for that. Or you can very much have a dynamical system which does have stable states.
Right. For example, take a classic example: the Great Red Spot of Jupiter.
So the Great Red Spot of Jupiter is the classic example of a self-sustained system, a bit like a candle flame, which is in fact always materially fluctuating. It's an atmospheric process, basically, which maybe in a few centuries will be different or not there. But you can have some attractors inside a big, complicated dynamical system. That doesn't mean the full system is just made of bits and flip flop circuits. Right?
So I see no problem with a dynamical system executing computation, in particular when coupled to a pen and paper, even if it's not made of flip flop circuits. Right? So I don't think you need to have nucleotide-based computation or things like that. I mean, in theoretical neuroscience, for example, there are lots of people working on the dynamical side to show that you can have stable states at the network level and so on.
So this is possible, but it's possible as something emerging from a dynamical system, not pre-existing as in a computational system, basically.
[00:49:26] Speaker B: I think his point, he uses examples like dead reckoning, where you actually need that stability, that memory. You need like a memory trace that's stable enough to perform the dead reckoning. So he doesn't see ways to perform this from the dynamical-systems point of view. But I don't want to get off on a tangent about Gallistel's approach.
[00:49:46] Speaker A: That's exactly what he says.
But he seems to completely bypass the whole idea that there can be stable attractors in living systems. And in fact there are: a cell itself is a stable attractor. Because if you look at the turnover rate of proteins in a cell, for example in the hippocampus of mice, it's around one week. Okay, that being the half-life.
[00:50:17] Speaker B: Right. That's the half life of all the.
[00:50:19] Speaker A: The half-life. Exactly. The half-life is about one week. So let's say after one month, your brain is completely materially renewed.
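A quick back-of-the-envelope check of that figure: with a one-week half-life, the fraction of original material remaining after t weeks is (1/2)^t, so after four weeks only about 6% is left.

```python
# Exponential decay with a one-week half-life: the fraction of original
# protein remaining after t weeks is (1/2)**t.
half_life_weeks = 1
for weeks in [1, 2, 4]:
    remaining = 0.5 ** (weeks / half_life_weeks)
    print(f"after {weeks} week(s): {remaining:.1%} of original material left")
# After 4 weeks ~6% remains, i.e. the cell is ~94% materially renewed.
```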
[00:50:28] Speaker B: This is the ship of Theseus kind of approach, you know, like where you replace one board at a time.
[00:50:33] Speaker A: Yeah, it's exactly that.
Yeah, it's, it's that.
But still the neurons themselves, they can live a hundred years.
They look pretty much the same, they seem fine, but they're materially different. So what this means is that what you see, the shape and all that, is not a stable object. It's an attractor of the dynamics of synthesis and degradation. Basically you have a stable state there, and that's your cell. So we know that exists, these long-lived attractors. You don't need.
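The "stable shape from constant turnover" idea can be sketched as the simplest synthesis-degradation dynamics, dx/dt = s - d*x, which has a stable fixed point at x* = s/d that the system returns to even though its material is continually replaced (an illustrative model with made-up rate constants).

```python
# Illustrative: protein amount under constant synthesis (s) and
# first-order degradation (d). The state is an attractor at x* = s/d,
# even though the material itself is continuously turned over.
s, d, dt = 10.0, 0.1, 0.01   # made-up rates, arbitrary units

def relax(x, steps=20000):
    for _ in range(steps):
        x += (s - d * x) * dt   # Euler step of dx/dt = s - d*x
    return x

fixed_point = s / d              # 100.0
# Start far above and far below: both trajectories converge to x*.
assert abs(relax(500.0) - fixed_point) < 1.0
assert abs(relax(5.0) - fixed_point) < 1.0
```

The stability lives in the balance of the two fluxes, not in any particular molecule, which is the contrast with the nucleotide argument.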
[00:51:07] Speaker B: I just had the thought that I wonder how much of each of us has been replaced since the last time we spoke to each other.
A pretty good portion.
[00:51:15] Speaker A: Oh yeah, you're right, it's 12.5%. Yeah, something like that. You're gonna do the math 60 times
[00:51:20] Speaker B: you've used the pen and paper.
[00:51:23] Speaker A: Yeah, I think it's.
[00:51:24] Speaker B: Sorry, I interrupted you.
[00:51:27] Speaker A: Yeah, right. So I mean, this is precisely the thing about biology: biological things are out-of-equilibrium systems. And that implies that everything that appears stable is in fact an attractor of the living processes.
So these attractors exist; you can have memories even though materially everything has changed, the shape of a cell, for example.
So I don't think his argument works, for this reason. I don't think you necessarily need nucleotides and so on. You can look for stability at the level of the organization rather than at the material level, basically.
[00:52:14] Speaker B: So you said the magic word organization there.
[00:52:17] Speaker A: Exactly. Yeah.
[00:52:18] Speaker B: Let me. Okay, so I don't want to spend the whole time talking about computation because you write about so many other things that I also want to get to. But one of the other sort of ways into thinking about computation, and a lot of what you write about is that we are embodied and it's all about anticipation and behavior, et cetera.
This is kind of pithy, right? You unpack this notion for me that computation is a behavior, but behavior is not a computation.
So given what we've been talking about, can you sort of unpack that and maybe this will lead to some other topics in the book?
[00:52:56] Speaker A: Yeah. So computation is a behavior. I think, given what we've been talking about, it's pretty obvious, because computation initially was a particular activity that some people did with a pen and paper. So it is a behavior.
And this particular kind of behavior is the one that people like Turing or von Neumann tried to replicate with a machine.
So computation is a kind of behavior, but then there are other kinds of behavior that do not involve a pen and a paper or calculation. Right.
For example, if you take an example I've taken in the book is riding the bicycle.
Riding a bicycle is an interesting example, because you can kind of describe it algorithmically. You can say: if you want to ride a bicycle, you sit on your bicycle, then you push on the left foot, and then you push on the right foot, and then go back to line one. Okay.
And you do that all the time. It looks very algorithmic, but it's not an algorithm, because an algorithm is something that, if you follow it, leads to the result.
But in the case of the bicycle, in order to do it, yes, you need to follow the recipe, but also you need to know how to ride a bike.
If you just put your son, who has never done it, on a bicycle and tell him, well, it's easy, push on the left, push on the right, he's just going to fall. Because riding a bicycle is not just about that. Yes, it's pushing on the left and pushing on the right, but while trying to keep the bicycle upright, basically not to fall, and so on.
So it is not an algorithm. It's just kind of a recipe or it's a guide. It's a guide.
It has a recursive structure. It has these elements that you find in algorithms, but it's not an algorithm.
[00:55:06] Speaker B: So a neuroscientist would say, well, sure, okay, your algorithm is push right, push left, but the behavior is enacted by computational processes in your brain. Right? Yes, you have to learn how, you have to know how to ride a bike, which involves balance and braking and all those sorts of things. But those are all performed by computations that are communicating in the right way, such that riding a bike emerges.
[00:55:36] Speaker A: Right, exactly. So you have just explained what the main thesis is in cognitivism. Cognitivism says, well, not only is computation a kind of behavior, but every kind of behavior is, at some level, even if it's not obvious at first sight, a sort of computation.
It's a form of reductionism, cognitive reductionism, which says that cognition is made of the combination of little atoms of cognition, which are computations, elementary operations.
But it's a theory, it's even a speculation, because it's absolutely not obvious: if you just look at the behavior, you don't see that. Right.
[00:56:19] Speaker B: It's almost an assumption as well.
[00:56:21] Speaker A: It is completely an assumption. Right. It is the assumption of cognitivism, basically.
And so it's a form of reductionism.
And so you find this most explicitly in classical computationalism.
So classical AI, for example, where it was thought that.
Well, take the McCulloch and Pitts model, for example.
They thought that the binary state of a neuron encoded a logical proposition. That was the motivation to make their model.
And so, because they thought in terms of classical computationalism, not connectionism, by the way. Even though their model is at the root of connectionist models, they were not connectionists at all.
They were classical symbolic computationalists.
[00:57:10] Speaker B: Right, Logic.
[00:57:11] Speaker A: Logic, exactly.
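For concreteness, a McCulloch-Pitts unit is just a threshold on a weighted sum of binary inputs, and with suitable weights it implements logical AND, OR and NOT; this is the standard textbook construction, not code from the episode or the book.

```python
# A McCulloch-Pitts unit: binary inputs, a threshold, binary output.
# Its state was read as the truth value of a logical proposition.

def mp_unit(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

AND = lambda a, b: mp_unit([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_unit([a, b], [1, 1], threshold=1)
NOT = lambda a:    mp_unit([a],    [-1],   threshold=0)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
assert NOT(0) == 1 and NOT(1) == 0
```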
They didn't like the Perceptron, for example. They thought it was nonsense, but they didn't like it.
[00:57:19] Speaker B: They didn't like it? I didn't know they didn't like it.
[00:57:20] Speaker A: No, yeah, they got ripped off by Rosenblatt, basically. Anyway, yeah, Rosenblatt took their model but interpreted it in a completely different way, in terms of probabilities.
[00:57:37] Speaker B: Right. So they wanted to map like the logical propositions onto symbols that are the units of reasoning.
[00:57:46] Speaker A: Reasoning, really. The symbolic stuff that you saw in expert systems and so on, that was really this line of thought. Right. And so the idea here is that deep down in the brain you have little logical units that correspond to symbols, symbols related to actual mental stuff: properties, properties of objects, for example, and so on.
[00:58:12] Speaker B: So the information is in there, it's represented, it's coded in there.
[00:58:16] Speaker A: Absolutely, yeah. Logical propositions are coded in the activity of the neuron.
And yeah, that was the assumption: that deep down, it's computation with small logical operations, and every kind of behavior is something like that.
It's a theory, Right. It just didn't turn out to be correct. But it is basically a theory that cognition is made of computation.
Now, cognitivists have the tendency to say that cognition is defined as the part of human behavior that is computation. But that's a problem.
[00:59:00] Speaker B: It's a problem, right. But is it a problem because of the way that we use terms like computation and cognition? The definitions change over time. Right.
And then they can become meaningless. Right. Like the word computation now, and like the word mechanism now just kind of means how it works.
[00:59:20] Speaker A: Exactly.
[00:59:20] Speaker B: Instead of like a very formal operational thing.
[00:59:24] Speaker A: And why is that? I mean, if you look at the history of the field, initially computation was meant as computation, right? I mean, McCulloch and Pitts, that's exactly what they meant.
Logical propositions, operations, a discrete number of states.
And the classical cognitivists, that's also what they meant. The models there were finite automata and so on. Right.
But it went out of fashion. So what happened is that instead of saying, well, actually it's not computation, it's something else, people have just changed the meaning of computation so as to kind of fit what they were seeing, which did not match the computationalist theory.
So in neuro-computationalism, for example, there has already been a shift in what computation is supposed to mean. I think it's still computational, though. In connectionism, for example, you still have the elementary operations, which are what neurons are supposed to do, but you also still have symbols at some points. The main distinction is that you have symbols that are intermediate computational variables, but you also have symbols which refer to things. Because if you say this neural network computes the identity of a face, that must mean that there is an output, the activity of a neuron, which represents the identity of the face.
So it is still symbolic at some point, even though in between we might agree that the neurons, they don't represent specific things, specific properties and so on, but at some point they do in fact.
So it's kind of a mashup of different ideas that you find in the neuroscience literature. You have a mashup of classical computationalism, when you say that the brain computes mental representations and so on, and of connectionism, where in fact not everything is a symbol. So are there representations? Yes, sometimes, but sometimes they are just intermediate variables which don't necessarily represent anything in particular.
And then you also have dynamical talk in neuroscience, but that is referred to in terms of computation. So this is completely confusing.
[01:02:03] Speaker B: It's in the service of like you only have dynamics so that you can compute. It's like this fundamental assumption that doesn't go away. It's all about computation, even if it's done through dynamics. Right?
[01:02:13] Speaker A: Yeah, but so you can have computation from dynamics. That's exactly what the flip flop circuits do.
So you have two stable states from this dynamical electrical circuit. But you can also have no stable states: out-of-equilibrium systems, chaotic systems and whatever. And that is not computational, then.
And so there is often confusion in the literature between computation and dynamics, for this reason: sometimes you have dynamics which is not obviously computation, but we are so used to the terminology of computation that we say "computing" for whatever it is.
And I think it's not a good habit, because if you want to explain what a system does, you have to be, at some point, a bit precise about what it is that it's supposed to do. That's why people need theories.
[01:03:15] Speaker B: Right, right. I mean, what do you say to the charge that, oh, you're just being a curmudgeon? Sure, if you're precise about this definition, then almost nothing counts as this. Right. But we want to make progress, and we don't want to quibble about definitions of words. Why is this important? Why are you hung up on these things, you know?
[01:03:37] Speaker A: Yeah, it's a semantic question.
Yeah, well it is semantic in a sense because I think that meaning actually matters when you talk about things.
Especially I think when scientists talk about things, it would be important, I think that they make clear what they mean.
Now I'm half joking here, because I know that what people mean is that they just use computation in a different way. And that's fine, but it's fine only if it's used in a coherent way. The problem is that it's almost never used in a coherent way. If you look at a typical neuroscience paper, you will have some introductory or concluding remarks about the scope of what they are trying to show, for example in terms of information. People will say, for example, you have that in David Marr's book, that since by looking around we get some knowledge about the world, this information must then be stored or represented in the brain somehow. Okay, well, here you have two very different meanings being used.
You have knowledge, knowing about stuff, and you have information or representation, which is just a trace. But a trace somewhere is not knowledge; to become knowledge, there needs to be some cognition and so on. They are two completely different meanings. So the problem is that people use information in the general sense, you get informed, you actually know something, to motivate their work. But then they use exactly the same word to mean just a correspondence between something the scientist observes and something in the brain. That's all there is; it's a technical definition. Well, it's a problem, because you use the same word to mean something completely different, with a much bigger scope. I mean, if people used the same word in the same way throughout their paper, that would be fine. But the problem is it's not coherent.
It is almost never coherent. I mean, the reason why people get confused about information and representation is because.
[01:06:01] Speaker B: Because
[01:06:03] Speaker A: implicitly, perhaps even without noticing it, they put much more meaning into those words than the technical definitions carry. Because the technical definition of information is really quite boring. It's just a correlation between something that you observe, not even something the organism observes, and some stuff in the world.
How is that going to tell you much about how an animal knows?
I mean, it's just different things, right, but somehow, you know, it has to be related, but you never explain how. So I think it's a problem. I mean, I'm fine with people using words in whatever way they want, especially in technical papers, as long.
[01:06:51] Speaker B: Oh, okay.
[01:06:52] Speaker A: Well, as long as they define it properly and. And they stick to it.
But that is not the case.
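The "boring" technical sense can be made explicit. Shannon mutual information just quantifies statistical dependence between an observed variable and a world variable; in this sketch a perfectly correlated binary pair carries one bit, with no organism knowing anything.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """I(X;Y) in bits, estimated from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(c / n * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# A "stimulus" and a perfectly correlated "response": 1 bit of information,
# yet nothing here knows anything about anything.
samples = [(0, 0), (1, 1)] * 50
print(round(mutual_information(samples), 3))  # -> 1.0
```

The number measures correlation available to an outside observer; it says nothing by itself about what the organism knows, which is exactly the gap being pointed at.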
[01:07:00] Speaker B: There is, like, this little substitution that's, like, slipped in. In your example there. The substitution was, like, replacing information, like using the term information to mean cognition as a whole.
[01:07:15] Speaker A: No knowledge. No, in general knowledge.
[01:07:17] Speaker B: Yeah, yeah, yeah, sure, knowledge.
Okay, so. All right. But let's switch gears just a little bit.
You have been studying paramecia, the paramecium.
And I sort of had this in the back of my head while reading your work. And I'm sure I could find out just by going back and reading your work. But is it like a chicken-and-egg thing? Did you start studying the paramecium because you had these ideas, that maybe there's something more to this, or something missing in the computationalist perspective?
Or did studying the paramecium generate these ideas? Or is it just a virtuous circle where they feed on each other?
[01:08:00] Speaker A: It's more the former, even though now that I'm working on it, it also makes me think. Before I worked on the paramecium, I was working on systems neuroscience of hearing, spatial hearing, in fact.
And that's about the time when I wrote the blog too. And I was quite frustrated by the way the field worked, and still works, in that people were interested mainly in asking questions like: how is the location of a sound represented in the brain? These kinds of things, which, if you ask as a technical question, could be a first step, I could grant that, but for some reason in that field it's also the end step. And I think it's the same in many other subfields of neuroscience: you have competing theories about coding, about the representation of different things in the brain, which are contradictory. But everyone has the same data, right?
So how is it not already resolved?
Well, the reason is that the very question itself cannot be resolved because it is asking how do you interpret the activity of neurons in terms of that something else that you're interested in?
And I mean, there are different ways to interpret many things.
So if it's just a question of interpretation, it's quite hard to make progress.
So, for example, in the sound localization field, you have two big theories. To caricature:
one neuron encodes one particular position, or it's the average activity of a hemisphere that encodes the position of the sound.
And I found that much of the literature was rhetorical, in the sense that, yes, there were arguments, but it goes in this direction, oh, there's this cue, and so on. But it's rhetorical, like lawyers, basically. Because if you look at the data, the same data, you see two things. Yes, the average activity of neurons does correlate with the thing you're interested in, with sound location. And yes, they also have very heterogeneous properties and responses. So maybe, in fact, it's not the average but the detailed pattern of activity.
And then. So if you want to go further than that, you need to go beyond interpretation and ask it instead.
How does the activity of these neurons contribute to sound localization behavior?
For example, what happens next?
Why do those neurons fire?
Something happens next when they fire: they act on other neurons, and maybe ultimately that makes the head turn, and so on.
And if you want to remove yourself as interpreter from the picture, what you need to do is look at a system that behaves. And then you can say, well, you have a system which does that, and those neurons are organized in this way, and so on. But you need to look at a whole system, which means that if you don't have the full organism, you at least need a sensorimotor loop: you need an environment, you need the sensory part, and you need the acting part, which replaces your interpretation as an observer. And so, coming back to the paramecium: that's why I was interested in moving to different kinds of questions, where you ask about the whole organism in its environment, and not just.
[01:12:24] Speaker B: But in this case, sorry to interrupt, in this case the whole organism is a single cell. What is a paramecium, exactly?
[01:12:33] Speaker A: Yeah.
At the time, I discussed a bit with my collaborators, and they were almost laughing at me because it's just extremely difficult to ask this kind of systemic questions in, say, a cat.
It's so complicated structurally, experimentally, and so on. So people have been looking, for example, at C. elegans.
If you look at C. elegans, it's something like 11,000 synapses already. It's.
[01:12:59] Speaker B: We know, 302 neurons in the.
[01:13:02] Speaker A: Yeah, we know the connectome. Okay, but it's still pretty complicated.
[01:13:08] Speaker B: Oh, C. Elegans is too complicated for Romain, everybody. He has to go to just a single damn cell, like, to simplify it as much as possible.
[01:13:15] Speaker A: Yeah. So before I stumbled on the paramecium, I was thinking, well, maybe what I should do is some more theoretical work: imagine you have an animal that is just made of one neuron. How do you start? How could it work, and how could it learn? Because I don't come from a biology background; I come from a mathematics and computer science background. And I had never heard of the paramecium before.
And so I think people who have done biology, maybe in class with a microscope, they must have seen some microscopic pond life.
I hadn't, but I stumbled on it by accident.
Yeah.
[01:13:54] Speaker B: So a paramecium is like a single cell that has these little cilia, and so it can move around in an environment. And the way it behaves is, it randomly turns and moves, and if it's going toward a food source, it keeps going; if it's going away from the food source, it randomly turns again. So there are these simple behaviors. So it's a behaving, fully functional, autonomous agent of a cell. And is that why you were interested in studying it? Because it's behaving, so you strip everything else away, strip as much of the observer away as possible, and take the system as it is and study that?
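The behavior described here is a biased random walk, sometimes called run-and-tumble; a toy simulation with made-up parameters shows how "keep going when things improve, turn randomly when they get worse" climbs a gradient.

```python
import math
import random

random.seed(0)

def concentration(x, y):
    # Toy food gradient peaking at the origin (made-up environment).
    return -math.hypot(x, y)

x, y, heading = 10.0, 10.0, 0.0
prev = concentration(x, y)
for _ in range(2000):
    x += 0.1 * math.cos(heading)   # "run" straight ahead
    y += 0.1 * math.sin(heading)
    now = concentration(x, y)
    if now < prev:                 # things got worse: "tumble"
        heading = random.uniform(0, 2 * math.pi)
    prev = now

# After many steps the walker has drifted up the gradient, toward the food.
assert math.hypot(x, y) < 10.0
```

No step ever computes where the food is; the gradient climbing falls out of the coupling between movement and sensing.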
[01:14:37] Speaker A: It's basically the simplest kind of organism, well, bacteria aside, that you can study from the perspective of linking behavior and physiology at the organism scale, right? As you say. So paramecium, but ciliates in general. If you just take a drop of water from a pond and look at it under a microscope, I don't know if you have done that, but it's amazing. What you see there is amazing. If you didn't know the scale, the spatial scale of these things, you would think they are animals.
But actually the ciliates, they are single cells.
But first of all, there's a huge variety of them. And even though it's just one cell, you can see that they can do many different kinds of behaviors. Which already is a direct contradiction to connectionism, by the way.
So paramecium is one of them, and it's one of the most studied ciliates. Ciliates have many animal-like features: they need to feed, they mate, they have sexual reproduction as well as division.
They are sensitive to light, to various chemicals, they are mechanically sensitive, sensitive to heat and so on.
They also have collective behavior.
They are surprisingly rich and they are just one cell.
And the reason I got interested in paramecium specifically, and this I think I got from Hille's book on ionic channels. In Hille's book, one of the first figures is the recording of an action potential in paramecium that was done in the 1970s.
So at the time people were not shy to stick electrodes in everything they could find.
So mollusks, Aplysia, of course, and frogs and so on.
Because, well, the basis of excitability was discovered in invertebrates, right, in the squid. And then people started to wonder, well, does it work in the same way everywhere? And yes, sort of the same way, with some variations, but basically with ionic channels: in plants, in the heart, in muscles, in mollusks and in protists. And so in paramecium, if you inject a current, and the current is strong enough, you will have an action potential. It's a calcium-based action potential.
And that action potential will trigger what's called an avoiding reaction, which is a reversal of swimming. It swims backwards, and then at the end it kind of turns around and starts swimming again in the forward direction. And so the action potential triggers this change in direction.
And then you have stimuli, which can be mechanical, for example. So if paramecium hits an obstacle, you have mechanoreceptors, and that will depolarize the cell. And if it's strong enough, there's an action potential and it draws back and changes direction.
If you touch it on the rear, it will accelerate instead. And this is mediated by a hyperpolarization, which accelerates ciliary beating and so on. It's extremely rich. There's also adaptation, learning and so on in this single cell.
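The stimulus-response coupling Romain describes (front touch depolarizes and can trigger a reversal, rear touch hyperpolarizes and accelerates forward swimming) can be caricatured in a few lines. All voltage values here are invented for illustration; the real dynamics are calcium-based and far richer.

```python
REST = -30.0       # resting potential, mV (illustrative value, not measured)
THRESHOLD = -20.0  # spike threshold (illustrative)

def touch(location):
    """Mechanical stimulus: front touch depolarizes, rear touch hyperpolarizes."""
    return REST + (15.0 if location == "front" else -10.0)

def respond(v_membrane):
    """Map membrane potential to motor behavior, as in the avoiding reaction."""
    if v_membrane >= THRESHOLD:
        return "reverse"      # action potential -> ciliary reversal, swim backward
    if v_membrane < REST:
        return "accelerate"   # hyperpolarization -> faster forward beating
    return "forward"

print(respond(touch("front")))  # "reverse": the avoiding reaction
print(respond(touch("rear")))   # "accelerate": escape forward
```

The point of the toy rule is that the action potential needs no interpreter: it is directly coupled to motility, which is what makes the organism-scale analysis possible.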
And of course, from a modeler's perspective, this is very appealing because you can start modeling both at the physiological level and the organismic level in the environment, and you can pull yourself out as an interpreter of the activity.
I don't need to know what kind of symbols the action potentials of paramecium are supposed to mean, right?
All I need to know is how it is coupled to the environment and what happens when there's an action potential with its motility.
[01:18:46] Speaker B: Is there room, though, to insert that observer into the chain of activities going on in the paramecium? If you wanted to, could you insert representations, could you insert information? And could you do all the things that we do for brains?
[01:19:01] Speaker A: I mean, technically you could, but you would just realize that it's completely meaningless.
I mean, of course you can, because technically information is just a definition that applies to sets of numbers. So of course you can apply it there as you can to anything else.
[01:19:19] Speaker B: So why is it so easy to avoid doing that with a single cell versus being difficult to avoid with the brain?
[01:19:26] Speaker A: Because it is not useful.
The reason why people do that in neuroscience is precisely because they are using reductionist approaches. That is, you just look at a piece of a brain, but no piece of brain behaves, so you don't have any other way than to correlate it with the behavior of the organism, which you are not observing, or which you are not explaining at least. So you don't have any other way. Or, for example, you have a whole subfield of neuroscience that deals with cellular neuroscience. For example, to interpret the role of different plasticity processes, intrinsic plasticity or synaptic plasticity in the neuron. But the difficulty is that it's very difficult to connect it to what the organism does if you're just looking at one neuron. So, for example, if you have an intrinsic plasticity mechanism which raises the threshold or lowers the threshold.
What is the significance of that out of context? That is very difficult. So the kind of thing that you can say is, well, in terms of information, it increases the information in the output and so on. But you do this because you don't have any other choice. What you would like to do instead is ask how that contributes to the organism adapting to something or, you know, interacting.
[01:21:08] Speaker B: Well, that doesn't stop people talking about the mechanism of working memory by recording from a single cell. Right.
People make the leap all the time. But of course.
[01:21:19] Speaker A: Yeah. Well, they also are encouraged to make this leap because you're supposed to, you know, explain the significance of your finding. But of course, if you look at the single cell out of context, it's quite difficult. Right. But you don't have this issue if you're dealing directly with a unicellular organism, because if you have adaptation at the physiological level, you can directly look at what it means in terms of changes in behavior. Is it adaptation of the organism to the environment, for example?
And so this is what I like very much in this.
Yeah. In paramecia is there's less bullshit, basically.
[01:22:02] Speaker B: Yeah, but so doesn't that make you. I mean, this is one of the things that in my little trajectory of thinking about these things, you run into these issues that you bring up throughout your book. Oh, no, I can't interpret it in this way. There's not information in the brain.
[01:22:18] Speaker A: Right.
[01:22:19] Speaker B: Also, I can't.
It's almost like it paralyzes one from doing any research at that point, unless they go to a single cell or something. Do I need to shift my whole. I can't be a neuroscientist anymore because I can't interpret anything the way the whole field interprets it. I see that it's all a masquerade and that we're inserting all of these false notions.
And so I can't. What hairs I have left, I'll be pulling out every time I read any paper. Mechanism, computation, you know, all of these really big claims in terms of the interpretation. And I see it for what I think it is. And then I think, oh, how can I even proceed in this field? Right. So I don't know. There's this paralysis that I often feel. And the whole question of, like, how do I proceed with these sorts of notions like biological organization, which we haven't talked about yet. I know you like the Moreno and Mossio organizational approach and autonomy. And I'm all on board with that stuff. But, boy, it's really hard to get started, to restart after that without retooling your whole career. And in some sense, you've been able to not retool your whole career, but you have made this shift and it's a comfortable place to be in. What should the rest of us do? Sorry to sort of switch topics there on you, but.
[01:23:43] Speaker A: Well, I mean, I have struggled with this exact question myself, Right. And that's why I kind of shifted.
[01:23:51] Speaker B: I think there are. One could say that's the safe thing to do, right? You're avoiding the problem by studying a simpler system. But I still want to study the, you know, the cool brain. What should I do?
[01:24:01] Speaker A: You know, I understand that.
Well, I don't know if I'm avoiding the problem though, because.
Okay. One conclusion that I came to by writing this book and trying to develop an alternative, as you mentioned at the beginning, is the idea that neurons are not components, they are living units, and this is how they should be treated. And if that is true, and it is true, then what a brain is, or living tissue is, is a collection of interacting living systems.
Right? And so with this perspective, then I think it makes sense to start by understanding what the element of interaction is. And in this case, it's the single cell, but the single cell seen not as a component, but as an autonomous life form.
You could take an evolutionary perspective, for example. What we know, of course, is that we all come from unicellular ancestors, right?
So a billion, 2 billion years ago, our ancestors were marine protists, right?
And that, I think.
Well, one important insight is that these marine protists were faced with pretty much the same kinds of problems as any animal is faced with. I mean, not all, they don't talk, for example, but they had to survive, to adapt to changing environments, to evade predators, to look for mates, et cetera, et cetera. Complicated problems that they had to solve just as well, even though they were just one cell.
But now, what an animal is, is basically a clone of cells, developing from just one cell whose ancestor is one of those protists. What this means is that each neuron is a descendant of a protist, right? And those protists were autonomous cells who had all these, you could call them cognitive abilities, even though, I mean, this could be discussed. But you see what I mean, they had to adapt and so on.
[01:26:40] Speaker B: They had to Behave.
[01:26:41] Speaker A: Yeah, they had to behave. And that's the ancestor of our neurons, these protists. Right. So I think if you take this perspective, then you see that instead of trying to understand neurons as doing some summing and thresholding, for example, you instead try to understand them as autonomous units.
I think that already changes the perspective.
[01:27:06] Speaker B: Does that change the perspective into one of: okay, they do sum and threshold, but that is in the service of being an autonomous unit. Is that the flip that we need to make?
[01:27:20] Speaker A: Answer this question.
[01:27:22] Speaker B: Maybe it's a poorly formed question. I mean, that's, you know. The concept of, like. So from that autonomous agency perspective, the biological organization perspective, right, that people like Mossio and many people write about these days in theoretical biology, you can think of DNA as being not the blueprint for the life, but as something that the cell uses, right? As opposed to: DNA, I create the cell.
It's: hey, I'm a cell, I have this DNA, I can use it. It's a sort of a flip of perspective, right? So I'm thinking, in my own thinking, like, how do I flip my own perspective and think, okay, well, the spiking of a neuron, that's done in service of the continuation of the cell's life, which is what it all boils down to in the end, right? Maintenance of the organization of the cell to continue and reproduce, et cetera.
I may have taken us off track here, so I'm sorry if I. Yeah,
[01:28:23] Speaker A: yeah, but it's an interesting.
Several interesting things. So.
Right. First, I think the focus should be on the processes rather than the operations or the symbols that spikes are supposed to represent and so on. And some of those processes are actually studied, of course, in neuroscience, like intrinsic plasticity and things like that, but not very much, I would say.
So first, you must have processes at the cellular level that maintain, in a way, the organization of the cell.
And then there's the idea that cognition basically is a kind of collective behavior of those cells rather than a sequence of operations, because a sequence of operations puts the normativity outside the system.
Because then you have someone organizing a series of operations, whereas the cells have an autonomy. If you just look at two cells of the same type, for example, the same neuron type. So of course, all the cells in the brain have the same genome, but they express different subsets. Okay, but even when they have the same subsets of genes that are expressed, they look different, right? Because they grow in different ways. There are constraints, in particular mechanical constraints, that make them the way they are, and so on.
If you take, for example, the blind spot, it's an example I give in the book. So what is the blind spot? It's a place where there are no photoreceptors in the retina. And why is that? Is the blind spot encoded in the genome?
Well, you see what I mean? The blind spot is there, but it's not encoded in the genome in any meaningful way. It's just that the nerve has to pass somewhere, and so in that place there will be no cells. That's all. It's not that it's encoded in the genome, right? It's just that the constraints of development make it so that neurons develop in certain ways, and it's in a way independent of the genome.
And that's why cells of the same type can be quite different. In fact, I think I got off track a little bit.
[01:30:52] Speaker B: I made that happen. I'm sorry, because I sort of am rambling on. I mean, part of it is just that I want to make sure that we discuss some because even though we've talked about a lot already, we haven't touched on so many of the topics in the book, and of course readers will have to find that out for themselves.
But my own interest in biological organization is one of the parts of the book that really was screaming out to me like, yes, yes, yes, because I've been interested in this, and in fact I'm putting on a workshop, I don't know if I said this earlier, next fall, where we aim to talk about how these ideas of constraints as causes and biological organization as a principle of cognition, as opposed to the computationalist one, can be used in a productive manner. Like, basically, where to start? How do we incorporate these ideas into neuroscience? So that's one of the things I was excited to talk with you about.
[01:31:47] Speaker A: Okay, so yeah, constraints.
So specifically about DNA. So you mentioned how to think about DNA. Basically.
I actually just submitted a paper on this question, on the genetic code and so on, a theoretical biology paper, which is largely inspired by my work on paramecium. Well, in fact by others' work on paramecium, but that I got to find out about thanks to that.
In paramecium research, for example, it was observed already in the 50s and 60s that you can have monster paramecia that have exactly the same genome but a very different form.
For example, one case is where two paramecia come together for sexual reproduction.
So what happens normally is that they stick together and there's an exchange of genetic material and then they detach and then they divide.
But sometimes it doesn't work, they can't detach. In that case, what happens is they fuse.
And so you have a big paramecium with two mouths, two anuses, two digestive systems and so on. The nuclei, you have just one, the big nucleus.
And so you have this kind of double paramecium and then it can divide and it divides faithfully.
What do you mean, faithfully? As a double paramecium.
[01:33:29] Speaker B: Oh, okay, okay, right.
[01:33:31] Speaker A: So it's a stable line. It's a stable line with exactly the same genome.
[01:33:37] Speaker B: Right. And that was never encoded in the genome in the first place.
[01:33:41] Speaker A: I know, because they have the same genome anyway, the single paramecium and the double paramecium. Exactly right. But it's just that you have two forms that are stable. Because something that we as animals tend to forget, I think, is that it seems that the new person, the kid, grows from nothing, right? But of course it is never from nothing, or from DNA. What happens all the time is that a cell divides. So there's always continuity, always.
And so that's of course more obvious at the unicellular level.
So when a cell divides, well, the parent disappears. Basically you just have two kids, and they are both pieces of the parent. So what is transmitted is the genome and pieces of the parent, always. So you have pieces of the membrane in particular.
And in paramecium, what has been shown is that the pattern of the cilia, so it's covered in cilia, and the cilia are arranged in a kind of mosaic way so that the cilia beat in the right direction. Okay.
And the way it works, it's basically not encoded in the genome.
It works by templating from the initial pattern.
When cilia are added, they are added in between existing cilia, and they orient themselves depending on the existing orientation of the cilia. So that is used as a template, independently of the genome.
And so that is why when you have the double paramecium, it divides and it divides faithfully to the parent.
And even though something different would happen in a single paramecium, there are even experiments where people managed to turn part of the cilia around, and then that propagated along the row of cilia and stayed stable. And you had cilia beating in reverse, as a stable line.
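The templating logic Romain describes, new cilia copying the orientation of their neighbors rather than consulting the genome, can be sketched as a toy rule. Everything below is a caricature of the biology, just to show how a surgically reversed patch would persist across divisions with no genomic change.

```python
def divide(row):
    """Grow a daughter row of cilia by inserting a new cilium between each
    neighboring pair; the new cilium copies a neighbor's orientation
    (structural templating, no genomic lookup)."""
    daughter = []
    for left, right in zip(row, row[1:]):
        daughter.append(left)
        daughter.append(left if left == right else right)  # copy the local pattern
    daughter.append(row[-1])
    return daughter

normal = ["fwd"] * 6
mutant = ["fwd", "fwd", "rev", "rev", "fwd", "fwd"]  # surgically reversed patch

# The reversed patch is inherited even though nothing genomic changed.
print(divide(mutant))
print(set(divide(normal)))  # {'fwd'}
```

Run the mutant line through `divide` as many times as you like: the reversed orientations never disappear, because the only "memory" is the existing structure itself.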
Because what the genome specifies primarily, I mean, for the coding genes, are the proteins. But how the proteins are arranged, that is not explicitly specified. I mean, it is partially specified as a constraint, because proteins can interact in certain ways and not others. But it's a constraint, right? Very often there can be different ways, and in paramecium there are different ways, as we can see.
So the genome must be understood as a constraint, not as a code. Apparently in paramecium it doesn't encode the form, but it, of course, constrains. It particularly constrains the kind of proteins that are made.
But none of this is new, actually. Right. I'm talking about research from the 60s. In fact, this is not new at all. Right.
[01:36:47] Speaker B: All the cool stuff is old. So, I mean, from a process viewpoint, constraint is just something that changes more slowly than the stuff that it constrains. So is that in line with thinking of DNA as a constraint? I mean, one of the things that you write about in the book that I wanted to ask you about along the same line, is that you think of scientific laws as formal expressions of constraints, since we're on the topic of constraints.
[01:37:16] Speaker A: Yes, absolutely. Okay. Two different things. On the first thing, the idea of a constraint, which is well explained by Moreno and Mossio, and Kauffman also, and a few others. Juarrero also wrote about it.
Right. And simple example in biology is the enzyme.
So the enzyme catalyzes certain reactions.
But the enzyme doesn't change through the reaction.
So it is there. It directs the reaction, in a way, but it's not actually involved either in the substrate or in the product. And it doesn't bring energy either.
Right. So it just makes it so that certain processes can happen. It also doesn't specify the processes because a given enzyme can catalyze different kinds of reactions.
What happens depends on what substrates are present at the given moment and so on. So it constrains what can happen.
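The enzyme-as-constraint idea can be illustrated with a toy sketch. The names and rates below are entirely made up; the point is only that the catalyst enables a reaction it happens to match, comes out unchanged, and supplies no substrate, product, or energy.

```python
def react(substrates, enzyme=None):
    """Crude sketch of catalysis for a reaction A + B -> AB. The reaction
    proceeds fast only when a matching enzyme is present, and the enzyme
    is returned unchanged: it constrains what happens without being consumed."""
    if set(substrates) == {"A", "B"}:
        # Same reaction either way; the enzyme only changes the rate.
        rate = 100.0 if enzyme == "AB-ase" else 0.001
        return {"product": "AB", "rate": rate, "enzyme": enzyme}
    return {"product": None, "rate": 0.0, "enzyme": enzyme}

slow = react(["A", "B"])
fast = react(["A", "B"], enzyme="AB-ase")
print(fast["rate"] / slow["rate"])  # enormous speed-up (about 1e5)
print(fast["enzyme"])               # 'AB-ase', unchanged by the reaction
```

Note that what actually happens still depends on which substrates are around, exactly as in the transcript: the enzyme constrains the possibilities rather than specifying the outcome.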
And then to the second part of your question. Something I realized while writing the book is that there's this recurring claim in the philosophical literature that in physics, the idea of cause, which with Aristotle included final causes and so on, has been replaced entirely by efficient cause.
All physics is described in terms of efficient cause.
Efficient cause is basically what you have with billiard balls.
Okay? You have a ball hitting another ball, and that triggers the other ball to move. That's the efficient cause. And a lot of molecular biology is described in these terms.
You have these two proteins that come together and then this happens. Linear causality, a chain of links. Yeah. And for some reason it is said, it is written very often, that physics, like modern physics, is all about efficient causality. And this is in contrast with kind of pre-scientific descriptions in terms of final causes, what the processes are aiming to do and so on.
So it is true that modern science has tried to expel teleological explanations, like for example the idea that planetary trajectories must be circles because the circle is the perfect trajectory.
[01:40:07] Speaker B: My dad's example of a terrible teleological statement was that we have eyelashes to keep the rain out.
[01:40:15] Speaker A: Right, exactly this kind of explanations.
But the idea that physics describes things in terms of efficient causes is completely false.
For someone like me who has actually had an education in math and physics, in retrospect, I mean, I've done the exercises and so on, it is not at all about efficient causes. Take, for example, the law of ideal gases.
The law of ideal gases is an equation that expresses an exact relationship between pressure, volume and temperature of a gas.
This is a proportionality relationship, right? And what is this? Well, it expresses a constraint between different things that you can measure. That is, these things always vary in this specific way. Or if you change the pressure and keep the temperature fixed, then the volume will change in this way, because they are constrained in a certain way. So what an equation does is formalize a constraint that you have observed.
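The constraint reading of the gas law can be made concrete: fix any two of pressure, volume and temperature for n moles and PV = nRT pins down the third, with no story about molecular collisions. A quick check in SI units (R, the molar gas constant, is about 8.314 J/(mol·K)):

```python
R = 8.314  # molar gas constant, J/(mol.K)

def ideal_gas_solve(n, P=None, V=None, T=None):
    """Given two of (P, V, T) for n moles, the constraint P*V = n*R*T
    determines the third. Nothing here is an efficient cause: the equation
    only says how the measurable quantities must covary."""
    if P is None:
        return n * R * T / V
    if V is None:
        return n * R * T / P
    return P * V / (n * R)  # solve for T

# One mole at 1 atm and 273.15 K occupies about 0.0224 m^3 (22.4 liters).
V = ideal_gas_solve(n=1.0, P=101325.0, T=273.15)
print(round(V, 4))  # 0.0224
```

Note that the same function answers "what volume?", "what pressure?" or "what temperature?" indifferently, which is exactly the symmetry of a constraint rather than a cause-effect arrow.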
And all of physics is about this. What you do in classical thermodynamics, for example, is use these kinds of equations. You would never describe what happens as: well, those molecules, they go there and there and then they hit each other. No, never. That is not the case. Even Newtonian dynamics is not described like that.
Newton's laws are written as equations, equations that turn into dynamics because they are differential equations. But if you take, for example, the static versions of the equations, you have equations that relate forces. For example, if you want to know whether a rigid object is going to fall or not, you look at the forces, like gravity and so on, and you see whether they balance. And never would you describe things in terms of how the molecules of the object change. Never.
Or if you work on electricity, for example.
I mean, I have done some exercises in electricity in my undergraduate studies. And what you do is: well, you have this current, this resistance and this voltage, and Ohm's law says that V equals R times I. And so you can solve this and find the voltage and so on. Never would you describe, well, the electrons that actually pass through the wire and then they hit. Never. Modern physics is a lot about equations, right? And what are equations? Precisely not efficient causes.
They're formal by definition. And so they express constraints between different things that you can measure in real life. And it's the same in thermodynamics: the second law is a constraint on entropy.
Entropy goes only in one direction and so on. And it's not in terms of efficient cause either.
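Ohm's law, which Romain uses in his electricity example, works the same way as the gas law: V = R·I relates three measurable quantities, and knowing any two fixes the third, with no account of individual electrons. A trivial sketch:

```python
def ohm(V=None, R=None, I=None):
    """The constraint V = R * I: supply any two quantities, recover the third."""
    if V is None:
        return R * I   # voltage across the resistor
    if I is None:
        return V / R   # current through it
    return V / I       # resistance

print(ohm(R=220.0, I=0.02))  # about 4.4 volts across 220 ohms at 20 mA
print(ohm(V=5.0, R=1000.0))  # about 0.005 amps (5 mA)
```

As with the gas law, there is no privileged direction: the "cause" and "effect" roles are assigned by the experimenter, not by the equation.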
Efficient cause is more something that you use as part of folk causality.
You know, I mean, when you describe what events that happen around you, well, I push this object and this is what happened, or I talk to that person and that's what he said.
But in physics, that's never what you do. Never.
So I think there's a big confusion here about the nature of modern science, which would be about efficient causality.
Even Rosen talks about the Newtonian paradigm, but I think it was a bit confused.
Modern science is not at all Newtonian in that sense. Not in the sense of efficient causality.
[01:44:37] Speaker B: But tied in to the notion of efficient causality is mechanism. And that's all we talk about in neuroscience, right?
[01:44:44] Speaker A: Yeah. Right. So if you talk about mechanism as efficient cause, then you mean mechanism in a very narrow sense, which is the sense of mechanics, basically, and not even the mechanics that you would learn in school, but the idea that you have mechanical objects that interact, well, like balls, basically.
[01:45:11] Speaker B: Well, I guess the reason why I brought that up is because, in my own thinking, like Juarrero, whose book is titled Context Changes Everything. And you could just say constraint changes everything, substitute it in. And that's the way I think about it. And when you start thinking about it in that way, it's all constraints, and mechanism kind of disappears, and you're left with, like, okay, well, at some point, what is the thing that's flowing through the constraints?
There's like some life force or something like, how else can I explain it?
Which I'm not saying that I do. But when you start to look at everything as constraints, I don't know, mechanism kind of disappears and there's no room for it anymore. But that doesn't feel great. But then you start reading the term mechanism and everything and just thinking, oh, this is meaningless.
[01:46:04] Speaker A: Well, one issue with the term mechanism, as with many other terms, is that they are used in different ways.
[01:46:12] Speaker B: Again, it just means how it works now.
[01:46:15] Speaker A: Right. If it means how it works, then all right, fine.
But it's not always what it means. For example, in molecular biology. And I talk, for example, about the book of Jacques Monod.
I think he really describes how many molecular biologists think about phenomena, you know, the key and lock principle. And yeah, so when he says, for example, that the cell is a molecular machine, it means that it has mechanisms really in the sense of gears, in the sense of actual mechanical objects that interact when they touch each other and so on. Right.
So not in the general sense of mechanism, where you could put, for example, electrical phenomena. Right.
So it's a different sense. No, he really meant it in terms of rigid objects that interact based on their shape and so on. That's what he meant.
And in this sense, that is close to the idea of efficient cause.
But you have phenomena in the cell, or just in the physical world, that just don't match this kind of phenomenon at all. They exist too. Of course you have gears in the world, they exist. But you also have phase transitions in water, for example. Water becomes ice and vapor. And this just doesn't match the model of the key and lock phenomena.
[01:47:52] Speaker B: The hydrogen molecules start grinding on the oxygen molecules and then they relax. Yeah, yeah.
[01:47:57] Speaker A: I mean, you can describe how the molecules are ordered and so on, but the transition itself, that doesn't work.
And this is the same for all electrical phenomena. In particular, try to make intuitive sense of how just a simple electrical circuit works.
It is really not intuitive, honestly, not in terms of mechanisms. So it's just that in the cell you have these phenomena.
And that, by the way, has been very well described by Dan Nicholson, whom you've had on your podcast.
I love his papers, in particular the one which shows that the cell is not a machine. And he meant machine in the sense of molecular biologists, key and lock and so on. And what he shows there is that there are lots of phenomena in the cell which just don't fit that.
Especially because at the molecular scale, most molecules are actually just wiggling around.
They don't look like what we expect from our usual experience.
And one thing he doesn't mention in that paper, it's more about chemical aspects, I think, in that paper, is the electrical aspect. I mean, coming from electrophysiology and so on, for me, it's just huge.
One of the big things, especially in neurons, is that the membrane is polarized. And electricity is quite amazing in that respect.
If you want to explain to a student or to a biologist what happens when some ions pass through the membrane, honestly, it is very difficult.
It is very difficult because what happens is that some ions enter through a channel, so they are charged, and then suddenly, by kind of magic, that changes the polarization of the whole membrane, like tens of micrometers away.
That's the magic of electricity: it acts at a distance, and it acts not at all as we imagine from our general experience. In fact, I found out from my previous work on electrophysiology, neural excitability and so on, that most biologists don't really understand electricity, I think. But I think most people don't understand electricity. I'm not blaming them specifically for that, but I think really it's not intuitive, because it's not a local phenomenon.
It doesn't follow the mechanistic framework. Basically something hits another and so on. It doesn't work like that.
But those are extremely important phenomena in a cell in particular, if you're interested in the question of levels.
Right.
So there's the level of the cell and the level of the brain, but there's also the level of the molecule and the level of the cell.
And what bridges the level of the molecule and the level of the cell, especially in neurons, is electricity. Well, there's also mechanics, but electricity is one of the big things, because you can have a local chemical event, like one ion or a few ions passing at one point of the membrane, that immediately changes the polarization of the whole membrane. And that feeds back on little molecules, like proteins, which are on the other side of the cell, which might open there locally. And so you have top-down causation through electrical phenomena.
Yeah.
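Romain's point about electricity bridging scales can be caricatured with a two-compartment membrane model. All parameters below are invented for illustration (this is not a fitted neuron model): a current injected at one patch, standing in for local ion influx, shifts the potential of the far patch almost immediately, because the patches are tightly coupled electrically.

```python
# Two membrane patches coupled by an axial conductance (Euler integration).
E_REST = -70.0   # resting potential, mV (illustrative)
G_LEAK = 0.1     # leak conductance per patch (arbitrary units)
G_AXIAL = 5.0    # strong electrical coupling between the patches
C = 1.0          # membrane capacitance per patch
DT = 0.01        # integration time step

def simulate(steps=1000, i_pulse=5.0):
    """Inject current ('ion influx') into patch 0 only; watch patch 1 follow."""
    v0 = v1 = E_REST
    trace = []
    for _ in range(steps):
        dv0 = (-G_LEAK * (v0 - E_REST) + G_AXIAL * (v1 - v0) + i_pulse) / C
        dv1 = (-G_LEAK * (v1 - E_REST) + G_AXIAL * (v0 - v1)) / C
        v0, v1 = v0 + DT * dv0, v1 + DT * dv1
        trace.append((v0, v1))
    return trace

trace = simulate()
# The far patch, which received no current, is dragged up with the near one:
# both end well above rest, within about half a millivolt of each other.
print(trace[-1])
```

The "action at a distance" here is just charge redistributing across a shared capacitor, which is why a purely local, billiard-ball picture of ions hitting things misses the phenomenon.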
[01:51:50] Speaker B: So Romain, like somehow we only have a few minutes left here.
It's really passed by quickly, and I feel like I've failed to do any justice to so many of the ideas that are in your book. And in my notes here, I have so many other questions, and I wonder if I can ask you maybe a couple just quickly or something.
One of the big things that we didn't talk about is the anticipation. And I wanted to ask you the difference between anticipation and prediction. And maybe we can end on that. But before that there are some questions I had from people who have loved your episode from over five years ago. And a lot of people tell me like that's one of their favorite episodes. So anyway, maybe I. Let me ask you A quick question from one of the listeners here.
Oh, this is good because this is sort of broad. How do you hope that your work will influence the field?
[01:52:42] Speaker A: Wow, that's a difficult question.
I think. I didn't really ask myself this question when I wrote the book. I just.
[01:52:53] Speaker B: Or when I wrote the perfect answer.
[01:52:55] Speaker A: I. I just needed to do it somehow, you know, I don't know if.
Well, honestly, I'm not sure I can convince people who have done their whole careers with the computational mindset. I mean, I can see from the reactions to my piece in BBS, for example, that it's probably not going to be the case.
But I see maybe two ways in which this might change some people's mind.
Well, either for people entering the field, maybe they have a more.
[01:53:31] Speaker B: Oh, see, I was gonna double set
[01:53:34] Speaker A: of, you know, of expectations.
[01:53:37] Speaker B: Yeah, yeah.
[01:53:38] Speaker A: Or also, I think, more concretely, from the reactions, what I saw is that somehow it speaks to people who are already convinced. But that doesn't mean it's useless. It's just that I think it might help some people, at least it helped me, to put words on it, to make more accurate what it is that feels wrong, you know?
[01:54:09] Speaker B: Right, yeah.
[01:54:11] Speaker A: And to try to maybe build something a bit more concrete, a bit more precise, to understand where it doesn't work, and maybe in what direction we should go to develop different frameworks. Right. So yeah, I honestly don't really know how it will be received.
[01:54:34] Speaker B: Yeah, that whole thing about, I mean, you have to follow your intuition as a scientist, but just because something feels wrong, if you're a good scientist, that doesn't mean it is wrong. And you really do need to specify why. That's one thing that gives me troubling pause in my own thoughts: it's intuitively obvious that some of these things are wrong, but articulating why and how is the hard problem that you spend so much great effort on. And I just appreciate that in general. So another question here. Ostensibly, this is a podcast about cognition and AI, so what does this thinking mean for AI and the talk around so-called AGI and future research into making intelligent machines? The podcast has kind of taken a philosophical turn, and people like Yogi Jaeger, who has been on, come from this biological agency and autonomy perspective, this processual perspective. And Kauffman talks about the adjacent possible, how things are open and not like closed formal systems. And there are these fundamental differences that people point to between biological organisms and artificially engineered organisms, which you talk about a lot in the book, where you focus on the actual biology, you know, and what's important about the biology. So I'm curious to know what you think of AI in general.
And now I'm making this a little more long-winded than I should. Basically, I guess it's an open question: what do you think your thinking means for AI?
[01:56:10] Speaker A: Yes, well, I think the discourse on AI and AGI, even the term AI. Right.
Artificial intelligence, is essentially a continuation of classical cognitivism or computationalism as it was thought in the 1950s.
Intellectually, it's the idea that your brain, your mind, is just a computer. And of course, if you believe that and you develop computer programs that somehow mimic some of the capacities that we have, then it's tempting to believe that, yeah, that's it.
But of course, all that relies on the idea that we are computers or machines. If we are not, then many of the assumptions on which this argument rests just collapse.
So, I mean, I don't know. I don't think I have very original things to say here. Many people before have said that.
I've noticed that people in the AI field have repeatedly claimed that they were on the verge of developing a human intelligence or superhuman intelligence. And that's been the case almost since the field has existed, right?
[01:57:51] Speaker B: Yeah. I think it's almost in the last paragraph of your book that you point to Dreyfus and his little.
[01:57:56] Speaker A: Exactly. So Dreyfus.
Well, he's written several books about that, and a paper more recently.
I think it was the fallacy of the first step or something like that.
[01:58:07] Speaker B: Yeah, that's what you, yeah, that's what you refer to.
[01:58:08] Speaker A: I think Melanie Mitchell also more recently wrote and still writes on that and many other people. Really.
I don't usually write specifically on that, but I very much agree with Dreyfus on that.
I got off track a little bit.
[01:58:27] Speaker B: No, that's okay. So for me, one of the things that I struggle with, right, is I'm almost at a point now where, because quote-unquote intelligence is just assumed to be computational, I almost think, okay, well, maybe I'm not interested in intelligence then. Because what I'm interested in seems inseparable from life, life processes, biological cognition, which is what I think of as intelligence. But maybe I'm just not on board with the hip new definition of intelligence, which maybe is all computational. If it is, okay, you can have that, and I'm just not interested in that. Maybe it's a shift of my own interests, or of how I understand what the term intelligence means. So I'm left feeling like I don't know what anything means anymore.
[01:59:21] Speaker A: Yeah, well, first of all, the definition of intelligence is kind of a blurry one.
[01:59:26] Speaker B: Of course,
[01:59:30] Speaker A: there is one particular thing. You know, at some point, the paradigm of intelligent activities was things like chess or doing calculations, right? And pretty quickly in the 20th century, doing calculations, even complicated calculations, could be done by just a simple calculator with gears.
And so that was no longer the paradigm of intelligence and so on. But why was it the paradigm of intelligence?
Because it is something that we humans find difficult, that we need to train on, that we need to be educated on, all this logical stuff and so on. It doesn't just come naturally.
A five-year-old struggles with it, and even a 25-year-old biologist, or even a 50-year-old biologist, or anyone, struggles with logic. It's difficult.
And I find it even quite ironic that it is specifically people like analytic philosophers, who have spent 20 years of their life or more training to articulate logically accurate reasoning, who then pretend that the very basis, the core thing of cognition, must be logical operations. But it's actually the top of it. It's the thing that you need to spend years learning, that most people can't really do properly, right? It can't be the basis.
So really, there is this idea that what seems really intelligent must be computation. And since we are an intelligent species, we must be a computing species. But then there's a weird inversion, because it is specifically what is hard and requires education that is now supposed to be the basis of how the brain works. Yeah, I think it doesn't make much sense.
Right. And more broadly, on the question of whether intelligence or cognition has to be biological: since you mentioned Yogi Jaeger and other people, Rosen also believed that life cannot be computed and so on. Well, one thing that I bring up in the book is the question of goals, because of what a computation is.
Well, we discussed that quite a bit. It's a procedure, but it's a procedure that is oriented to a goal, which is to compute. You compute something, right? You want to have a result, and the computation can be correct or it can be wrong. So you have a goal, so it's a goal-directed behavior. Well, any behavior is goal-directed, right? That's what behavior is. So there's the question of goals. But in a machine, the goal is whatever the programmer sets, right? So the only entities, as far as we know, that have intrinsic goals are living systems.
And this is grounded in the organization of living systems, which must basically seek free energy from the environment in order to maintain their organization out of equilibrium. And so, because of that, the organization must be set up in certain ways so that it keeps running, right? And I think a lot of, maybe not all, but a lot of goal-directed behavior can be traced back to this fundamental feature of the organization of living systems.
And this you don't have in machines, because machines, you build them in order to do something that you find useful.
So there's this distinction, I think, which is important.
Maybe just to finish, there are two questions which are quite different.
Is current AI directed towards AGI, or actual intelligence, or agency and so on? That's one question. And the second question is: is it theoretically, like in principle, possible to engineer a machine that would be living, or that would be an agent, or conscious, or whatever? On the second question, I don't have a clear-cut answer, but some do. But, okay, let's say I'm agnostic on this question. I'm skeptical that it's possible, but I'm agnostic, say, in the sense that I don't think I have a very good argument on that. But is AI directed toward that?
Well, maybe it could go to that, but then by developing completely new sets of principles, because I don't see a set of matrix calculations being conscious by having more steps.
For me, that just misses the point.
I mean, for me, it's essentially wishful thinking to say that just because we have developed stuff that is impressive, it's going to be even more impressive and therefore it's going to be alive and conscious. There's a big gap there. There is nothing in any current AI that looks even remotely like an autonomous agent.
Not in the sense of "agent" that companies use: agents with their own goals, that want things and so on. No, not even the first step.
The first step is towards mimicking operations that are complicated. It's not towards making something alive. So for me, it's more propaganda than scientific claims, which, I mean, is not so surprising given that those claims are made mostly by CEOs of companies who sell the products. So should they be taken seriously? I'm not sure about that.
[02:06:07] Speaker B: Okay.
[02:06:08] Speaker A: Yeah. All right.
[02:06:09] Speaker B: Damn it, I don't know how time flew by. I really have so much more that we could discuss, and I really want to highlight, I think it's chapter seven, on anticipation, where you talk about this difference between anticipation and prediction, because there are so many theories of the brain where prediction is at the heart of it, predictive processing.
You talk about inactive or, sorry, active inference.
[02:06:39] Speaker A: Yes.
[02:06:39] Speaker B: Yeah, active inference and enactive paradigms. But to me, much of what you talk about returns almost always to the properties of the biological system.
You start with the biological system, which is an unpopular place to start these days. But your point, recurrently throughout the book, is that when you do that, when you start to look at how the system is made or not made, how it is organized, how it is active, the behavior of the system, when you appreciate those things for what they are, outside of the computer metaphor, the machine metaphor, outside of the metaphors that are dominant, you start to develop different approaches and different perspectives and different answers about what these things are. So I just want to highlight that one of the main take-homes and themes of the book is treating the system on its own terms, which is not the popular way to do things right now.
And I do have to go, but I want to just make sure: congratulations again on the book, and I hope that people, especially those starting out in the field, will take books like yours from this perspective and really take them to heart. Don't just assume the computational metaphor that we all sort of assume because it's the easy thing to do. You're doing the hard thing, which is a beautiful thing. So thank you for being here, and I wish you success, and I hope it's not another five years until we speak again.
[02:08:10] Speaker A: Yeah, sure.
[02:08:11] Speaker B: Okay.
[02:08:12] Speaker A: Well, thanks. Thank you. It was great.
[02:08:17] Speaker B: And we're back.
Thanks for doing this extra session with me. Mainly I wanted to ensure that we went a little bit deeper on one of the key features of the book, which is anticipation. And by the way, yesterday, look what came in the mail.
[02:08:38] Speaker A: Nice.
[02:08:39] Speaker B: It was impeccable timing between our recordings here. But anyway, I'm excited to have the physical copy. And it looks cool, it looks nice.
Okay, so what I want to do is have you elaborate a little bit more. I know that we talked a little bit about anticipation and its relation to prediction the last time we chatted, but I'd love for you to elaborate a little bit more on the difference between anticipation and prediction. And part of the reason I think you do this in the book is because there's a range of theories of brain function that revolve around prediction, like predictive coding, predictive processing, and active inference, which is one that I would love for you to highlight a little bit. So talk a little bit about the difference between anticipation and prediction the way that you see it, and then we'll get to how you see where these predictive theories fall short.
[02:09:43] Speaker A: Sure.
Well, it's pretty simple. To predict is to say in advance, and to anticipate is to do in advance something like that.
Now, since most.
[02:10:04] Speaker B: Brain Inspired is powered by The Transmitter, an online publication that aims to deliver useful information, insights, and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. If you value Brain Inspired, support it through Patreon to access full-length episodes, join our Discord community, and even influence who I invite to the podcast. Go to braininspired.co to learn more. The music you hear is a little slow jazzy blues performed by my friend Kyle Doniphan. Thank you for your support. See you next time.