BI 175 Kevin Mitchell: Free Agents

Brain Inspired

Oct 03, 2023 | 01:46:32

Show Notes

Support the show to get full episodes and join the Discord community.

Check out my free video series about what's missing in AI and Neuroscience

Kevin Mitchell is professor of genetics at Trinity College Dublin. He's been on the podcast before, and we talked a little about his previous book, Innate – How the Wiring of Our Brains Shapes Who We Are. He's back today to discuss his new book, Free Agents: How Evolution Gave Us Free Will. The book is very well written and guides the reader through a wide range of scientific knowledge and reasoning that undergirds Kevin's main take-home: our free will comes from the fact that we are biological organisms, biological organisms have agency, and as that agency evolved to become more complex and layered, so did our ability to exert free will. We touch on a handful of topics in the book, like the idea of agency, how it came about at the origin of life, and how the complexity of kinds of agency, the richness of our agency, evolved as organisms became more complex.

We also discuss Kevin's reliance on the indeterminacy of the universe to tell his story, the underlying randomness at fundamental levels of physics. Although indeterminacy isn't necessary for ongoing free will, it is responsible for the capacity for free will to exist in the first place. We discuss the brain's ability to harness its own randomness when needed, creativity, whether and how it's possible to create something new, artificial free will, and lots more.

4:27 - From Innate to Free Agents
9:14 - Thinking of the whole organism
15:11 - Who the book is for
19:49 - What bothers Kevin
27:00 - Indeterminacy
30:08 - How it all began
33:08 - How indeterminacy helps
43:58 - Libet's free will experiments
50:36 - Creativity
59:16 - Selves, subjective experience, agency, and free will
1:10:04 - Levels of agency and free will
1:20:38 - How much free will can we have?
1:28:03 - Hierarchy of mind constraints
1:36:39 - Artificial agents and free will
1:42:57 - Next book?

View Full Transcript

Episode Transcript

[00:00:00] Speaker A: You, then what does that mean for free will? Am I really making decisions? Am I really in charge? Or am I just a pre-configured, pre-programmed kind of meat puppet? The self disappears. In that view, any sort of system that you would think has ultimate freedom and is not constrained in any way by prior causes is just a random behavior generator. That's not a self. If you get all of that and it's doing it in a way that we think is adaptive and appropriate, then I think you might have artificial general intelligence. But in order to do that, you may have to have made an agent. [00:00:46] Speaker B: This is Brain Inspired. Hey, everyone. I am Paul. You'll have to bear with me here in this little introduction. I am a little bit loopy. I just came back from Europe and developed a cold, have some jet lag. So I apologize, but I guess that's what happens when you travel the world, if the world is one place in Europe for three or four days. My guest today is from that part of the world. Kevin Mitchell is back. Kevin is a professor of genetics at Trinity College, Dublin. I said he's back. He's been on the podcast before, back on episode 111 with Erik Hoel. And during that episode, we talked a little bit about his previous book, which was called Innate: How the Wiring of Our Brains Shapes Who We Are. Today he's back to discuss his new book called Free Agents: How Evolution Gave Us Free Will. So I won't say much here about what the book is about, because we discuss that right in the beginning of the episode, and we talk about what led Kevin to write this latest book off the heels of his previous book. But I will say in the beginning here that the book is very well written, just like Innate was, and it is at the level where a lay audience could attack it and get just as much out of it as an expert in developmental biology or neuroscience or likewise other sciences. So if our conversation leads you into confusing corners of your mind today, just know that he spells everything out in the book very well. And of course, we don't touch on everything in the book today. But the basic premise of the book is that we have free will, and that is due to our agency, which has evolved and complexified through the course of evolution ever since the beginning of agency, which was the beginning of life. And part of what Kevin writes about is the idea of the indeterminate universe, that the universe is fundamentally indeterminate at the quantum level or whatever lowest level you choose to think about. And it's that indeterminacy that gave rise to the capacity for free will, in essence, because it gave rise to processes that can constrain themselves and lead to life, lead to agency, complexify over evolution, and lead to free will. Okay, so I'll leave it at that in this, what's becoming a rambling introduction. So I apologize for that, but I had a lot of fun speaking with Kevin. I hope you enjoy the conversation. I link to the book and Kevin's website in the show notes at braininspired.co/podcast/175. A special thank you to my Patreon supporters. Guys, I know that I've slowed down recently in releasing episodes, and that's because I got a real job and I'm still figuring out the new schedule. But I'm picking up steam again and should be back on track for a more regular schedule. So thank you for sticking with me and thank you, as always, for your support.
And by the way, this episode comes on the heels of my previous episode with Alicia Juarrero, where we talked a lot about constraints and biological organization and how the modern notion of causation is not sufficient to explain and understand complex systems. And so Kevin and I touch on a lot of those points as well, and he visits those ideas in his book as well. So if you enjoyed the last conversation with Alicia, I'm sure that you'll enjoy this conversation with Kevin. Thanks for being here. Here's Kevin. So we're recording now? I guess so. I was happy to get a physical copy, which is so nice to read. Is that backwards for you? [00:04:35] Speaker A: Oh, no, I can see it. Yeah, that looks good. [00:04:36] Speaker B: Yeah. Okay. It's so nice to thumb through. Is this what it's going to look like? I guess there are different versions and different new cover. [00:04:42] Speaker A: Actually, that's not the final version. [00:04:44] Speaker B: Okay, maybe I shouldn't. I'll blur that out then for folks. [00:04:47] Speaker A: It looks very similar to that. [00:04:49] Speaker B: So, Free Agents. Welcome back. Nice to see you, Kevin. [00:04:52] Speaker A: Thank you. Thank you for having me. And yeah, great to be back. Nice to see you too. [00:04:55] Speaker B: So you've written another greatest hit here, your last book. When did Innate come out? 2012? [00:05:02] Speaker A: No, it was 2018. [00:05:04] Speaker B: Wait, it was that? Oh, yeah, okay. I was thinking it was 2017, actually, and then I just did a little poor math in my head. [00:05:11] Speaker A: Wow. [00:05:11] Speaker B: So that's pretty fast turnaround for a book. But in some ways, Innate was a precursor to Free Agents. [00:05:19] Speaker A: Yeah, in a funny way. So Innate is about how our brains get wired and the idea that we're not blank slates, that we have some innate predispositions, behavioral tendencies and capacities. So our own sort of individual nature, not just human nature generally, and that arises due to differences in our genetics and differences in the way our brain actually develops, just the way the actual run of development turns out. So, yeah, in giving talks about that topic, I would often get this question at the end, which is like, hang on a second. Now if you're saying my brain is sort of wired a certain way and that affects my behavior and I didn't have any choice over my genes or the way my brain developed, then what does that mean for free will? Am I really making decisions? Am I really in charge? Or am I just a pre-configured, pre-programmed kind of meat puppet? And that's a very valid concern. And that's what got me thinking much more deeply about the problem of free will and whether there's a framework that we can think of in which we can think of free will in a more naturalized way. Because, of course, there's that problem which I call sort of biological fatalism, that we're just configured a certain way and we can't do anything about it. And then there's a bunch of more deep sort of metaphysical problems of physical determinism or reductionism. And those are all things that I started thinking about and consider in this book. [00:06:58] Speaker B: So was it like the seventh or eighth time that you were asked about free will? Like, and you're rolling your eyes and thinking, oh, this is pointing me in the direction I have to go. I know you had already been thinking about it. Everyone's thinking about free will all the time. [00:07:12] Speaker A: Well, they are.
There's the sort of impetus from considering behavioral genetics literature, right? But then there's also an impetus just from neuroscience these days, and I think many neuroscientists, because, paradoxically, we're making so much progress where you can do these amazing experiments, especially in animals. You can go in with optogenetics, you can tweak this circuit or activate those neurons, or you can make the animal move left, move right, roll over, go to sleep, wake up, hunt, mate, all kinds of things. And you can get into the cognitive circuits. You can make it perceive something or believe something. You can implant a memory. You can make it more or less confident in its beliefs. You can really get in there. And I think it feels like the more we do that, the more it looks just like neural mechanism. Like, that's all there is. And there may be a tendency, I think, that rather than explaining how cognition works, we're explaining it away. It's like it's just circuits firing. The whole beliefs, desires, intentions stuff is kind of epiphenomenal. And many neuroscientists, I think, come to the conclusion that we really are just complicated machines. And certainly there's various people out there making that case. That means that because we can see the neural circuits at work when we're making a decision, they would say, your neural circuits are making the decision, right? I would say you're using your neural circuits. You're making the decision, and that's just the machinery you use to do it. But there's two different ways to see that. And so that current of neural reductionism is pretty strong at the moment. And that was another sort of impetus for me to write this book, because I don't think that view is right. And I wanted to push back on that a little bit because it ultimately has some dark connotations, frankly. [00:09:14] Speaker B: Yeah. Do you think that again, so I've mentioned this before. You write about things like process philosophy in the book and treating organisms as a whole instead of, like their parts. And so this kind of trajectory of thinking seems like it's kind of coming to the fore again. Do you think that we're on the cusp of some sort of shift in how we think about these things? Or am I just in that tiny niche that you're also in? [00:09:48] Speaker A: Well, I don't know. I hope so. And it's funny, I think that maybe these philosophies will come back just because we need a way of thinking about systems level holistic level stuff. Because we can now do systems level holistic level experiments and analyses, right, when we couldn't when all we could do is record some do LTP on a few neurons in a hippocampal slice or record a couple of neurons in an animal or something like that, which is. [00:10:20] Speaker B: Still valuable, we should say. It's like when people are going to criticize modern deep learning, they're like, it's very impressive. [00:10:30] Speaker A: But it takes an approach of isolating one component of a system at a time and it gives a kind of an impression of a mechanism driving a behavior where it's like this little part of the machine. When this happens, that happens, and so on. So it's a little bit, I think, of an illusory perspective. But now we can do recordings from tens of thousands of animals in awake behaving animals sorry, tens of thousands of neurons. And we therefore have to have some way to deal with that, right? 
And so there's all this sort of interesting computational stuff thinking about like manifolds and dimensionality reduction and trying to see what is it within all that information that's actually causally important and meaningful. And generally it's the big sort of patterns, right? It's not the low level details and it's the flux of those patterns through time and it's the way those patterns are interpreted in other parts of the brain and so on. So you get this very holistic kind of approach that you can't just reduce it to a kind of a driving motive force of like a reflex. And that, I think, lends itself much better to process philosophy thinking to more holistic systems thinking. And like, holistic has a bad name. It sounds really woo. It sounds like mysticism, right? And even process philosophy is like, oh, everything's in flux, man. It's all change. Nothing is fixed. It sounds very vague and nebulous. And I think a lot of scientists don't think it's scientific. But I think, like yourself, I think we're moving in that direction. And I feel like the process philosophy and the sort of relational philosophy, it's going to come to the fore just oddly driven by the need to think about the experimental data that we will now be able to get. [00:12:28] Speaker B: Right? It's like that reductionist approach gave rebirth to these sorts of ideas in some way. [00:12:35] Speaker A: Yeah, and they're old ideas, right? [00:12:38] Speaker B: Everything's an old idea. Well, we'll talk about whether anything can actually be new later, perhaps. But I mean, yeah, it's impressive how old ideas continue to kind of recirculate. [00:12:49] Speaker A: Yeah, and there were people like Whitehead and Bergson at the start of the 20th century, and there were big movements there towards this. And later on, just in thinking about system stuff, not so much philosophy, but thinking about cybernetics, say, as a movement, it was huge for a while, and then it kind of faded away. But I think that's going to come back as well because I think principles of control theory as a broad sort of way of thinking about things, especially thinking about the brain, is going to be much, much more useful and more relevant. [00:13:25] Speaker B: So you started your thoughts off talking about these philosophies. And in fact, I just had a conversation the other day with someone, and I was telling them, well, now I have like a process philosophy kind of approach to things. And he was like, okay, well, what does that mean? What does that do? And I was like, oh, shit, I don't know, man. Where was I going with this? Oh, yeah. But the question for me is, well, how do I marry these perspectives with a scientific approach? And do you think that things like control theory and dynamical systems are steps in that direction? [00:14:03] Speaker A: Absolutely. I mean, they are underpinned, whether it's obvious or not, they're underpinned by some more holistic philosophies. I mean, they're really a science of organization, so it's anti-reductive to its core. If you're thinking about dynamical systems, thinking about control theory, information theory, hierarchy theory, all of those sorts of what we call systems science, systems thinking, are non-reductive. They tend to be focused on dynamical principles, how things change or don't change through time, and also how you get sort of causal effectiveness within a complex system.
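To make the dimensionality-reduction idea concrete, here is a minimal toy sketch in Python (my own illustration, not code from the episode or the book): a few hundred simulated "neurons" are driven by just two shared latent patterns plus private noise, and PCA shows that most of the variance lives in that low-dimensional space rather than in any individual unit.

```python
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_neurons, n_latents = 500, 200, 2

# two shared "population patterns" drive all neurons, plus private noise per neuron
latents = rng.standard_normal((n_timepoints, n_latents))
mixing = rng.standard_normal((n_latents, n_neurons))
activity = latents @ mixing + 0.5 * rng.standard_normal((n_timepoints, n_neurons))

# PCA via SVD: how much variance does each component explain?
centered = activity - activity.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
variance_explained = (s ** 2) / (s ** 2).sum()
print(np.round(variance_explained[:5], 3))  # the first two components dominate
```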
And so, yeah, there are scientific theories that we can avail of that are actually already many of them existing and again, also sort of being rediscovered. You don't have to go all the way down the philosophical rabbit hole if you don't want to. These things cash out in very real practical terms. [00:15:11] Speaker B: Yeah, okay, we've already gone down like a bunch of rabbit hole here. [00:15:16] Speaker A: Rabbit hole. It's not going to be the last. [00:15:18] Speaker B: No, but let me take us back just because this is a lot for someone kind of just coming into this introductorily, I suppose. And I was going to ask you, so you write so well, and you're very thorough and clear and both in innate and in free agents, you really go step by step and kind of build up the story. And I found, well, this is a very enjoyable read, and I'm not having to strive too hard to understand the concepts and stuff and connect things and I'm just curious, like, when you're writing, who you're writing for? [00:15:52] Speaker A: Yeah, that's a really great question. One of the things that's tricky in writing that kind of book, like, innate and Free Agents is they're sort of pitched. They're not like real academics. [00:16:06] Speaker B: They're in between, sort of in between. [00:16:07] Speaker A: They're not fully trade books where it's supposed to be kind of just entertainment, almost. Right. They're difficult concepts. Right. And so I'm trying to explain those to basically the educated public generally. So the way I think about it, and actually, the people I have in mind, if I'm ever sort of wondering about it, is my parents. So my parents are well educated, but they're not specialists in these areas. And I feel like if I can explain the gist of these things to them, then I've done my job, and they might not get all the details, like any other reader might not get all the details, and that's fine. So, yeah, that's my target audience, really. Now, as it happens, these things, they're pretty interdisciplinary. They span a lot of areas. And I already had found, for example, even in blogging, that, say, writing about the intersection of genetics and neuroscience, having to explain genetics to neuroscientists is in some ways, like having to explain genetics to the educated general public. [00:17:15] Speaker B: It's crazy. [00:17:16] Speaker A: You can't assume the basic, detailed knowledge and the same vice versa. Right, yeah. So I think we're also specialized that for many, many topics, we are just even scientists are just the educated general public when it comes to some science that they don't know about. [00:17:35] Speaker B: Yeah, I know. I tell everyone, like, watching some 60 Minutes thing about neuroscience, I'm like, it's all wrong, but it's so easy to you want to believe it. And then if I watch a 60 Minutes thing about nuclear physics or something, I'm like, oh, that seems right. I'm sure the physicists are like, no, it's not right. I've settled long ago now on becoming at ease with the level of this podcast, right. Because I can't explicate all these, like, low level things, so I have to assume a certain level of education because I have smart friends and they're like, I have no idea what you're talking about, you know, in this podcast. So I have to be comfortable now with like, well, okay, this is for a very particular audience or educated people who are really engaged and really want to learn these things, essentially. [00:18:24] Speaker A: Exactly. 
I think that's the thing is that people will find the podcast or they'll find these books because they're interested in these topics. [00:18:34] Speaker B: But your book does bring it down, and I think it opens it up for a lot more people who are genuinely interested in these things. So congratulations, I suppose, is what I'm saying. [00:18:43] Speaker A: Thanks. I appreciate that a lot. Yeah. I mean, there's a fine line between oversimplifying and abstracting important principles, and it's the latter that I'm striving to do. And so yeah, if it comes across that way, then that's great. Thanks. [00:19:01] Speaker B: Well, so like I said, what the book does essentially is builds a careful and thorough case going back down the evolutionary tree and essentially beginning at life and making the case that you need life for agential characteristics. And then you go from single organisms on up through, well, lots of different things to build eventually the case for free will and how our behavior... well, there's just too much to talk about or to list, I suppose. And we'll go through many of these things. But just maybe as a jumping off point, is there something, I mean, obviously the topic of free will, the concept of free will itself might just be the answer to this, but you touch on so many different topics along the way. Is there one in particular or one or two in particular that you find yourself revisiting and that bothers you? And do I have this right? And how do I explain this, how do I even think about this, et cetera. [00:20:02] Speaker A: Yeah, so the theme of the book really is the approach is to it starts out thinking about the problem of free will, which we kind of articulated a little bit already, these sort of challenges to it. But then it backs off that and says, look, if we want to understand this elaboration of this capacity in humans, which is the most complicated example of it that we know of, it's going to get all muddied up and tied up with these other issues of things like moral responsibility and consciousness and so on. And things we can talk about maybe later on. But what I wanted to do was think about just agency, really. If we want to say, well, how does a human being control their behavior? Well, how does anything control its behavior? What does that even mean? How does a living thing do something, right? I mean, rocks don't do anything. Planets, electrons, they don't act. Things happen to them or near them or in them, but they're not doing anything. The doing there seems to involve choice in a sense that the possibility of things being one way or another and then control, the possibility that the organism as a system can influence which way things go, right, that some things can be up to the organism and it can be the decider. And so the challenges are twofold. First of all, that there's some external thing just pushing the system around. So it's just part of the universe, and the universe has causes and it's just part of that big causal web. And you can't really isolate an entity as being a causal entity in itself. It's all just at equilibrium. Now, it isn't all just at equilibrium because living things keep themselves far from equilibrium. So I think that problem is sort of generally solved because they have some causal insulation from the rest of the world. But there's a deeper problem, which is the idea that, okay, but maybe just within the organism, it's being pushed around by what some of its parts are doing, or it's being pushed around by all of its parts.
You just go down levels and say, look, it's just neurons firing, or it's just molecules being pushed around, or it's just atoms or quantum fields or wherever you think it bottoms out. [00:22:16] Speaker B: Yeah. [00:22:17] Speaker A: So those are the real challenges. And what I've done in the book is sketch what I think is just really a framework for thinking about those things in a way where you could see how it could be resolved, that an organism could have causal power in and of itself and could could have control and could exercise choice. But even for me, even after having written this book, this question of choice is the one that still just niggles at me and just still makes me kind of worried, because it's just really hard to get away from the idea that this all depends on physical machinery. It's not an idea. It does depend on physical machinery. And, yeah, once you dig down deeper and deeper, it's hard to say well, it's just hard to escape from the idea that it's just the physical machinery working. And so what I want to come to is the idea where, actually what's controlling the way that physical machinery works is what these patterns mean to the organism. They basically embody its reasons for doing things. And it doing things for reasons is essentially agency. But the little question then, of choice. [00:23:34] Speaker B: Still well, what do you mean by choice? Do you mean when you dig down and think about the neural underpinnings of choice and how that all plays out, or our modern science of how that plays out, or what level are we talking about, even in terms of choice? [00:23:48] Speaker A: Well, yes, so all of that. So what you might say is that you've got this complicated physical system. So there's this typical thought experiment in the philosophy of free will, which is, if you rewound the tape to some point in time, could you have done otherwise going forward again? So if you have another chance, would you always do the same thing, in which case it's just all determined? And are you even really in control of that, or are there some options? Is the future open? And if it is, are you the one choosing what the future looks like now? Usually that thought experiment is to do with the idea that if we're in a deterministic universe, physically speaking, where there's only one possible future, then what does it mean for an organism to have control over its actions if it never had any options? And that's where compatibilism goes. And there's various people, like Dan Dennett and many others, who've presented and defended an argument that even if the universe is deterministic, if the organism is configured in a certain way, that it does things for its own reasons. And in situation A it does X because it wants to do X. And if the situation had been different, if it had been B, it might have done Y. Right? But when you rewind the tape, the situation won't be different, it'll still be A and you'll still do X, but that's still because you wanted to yeah, now I have problems with that view and we can talk about those. But I have a deeper problem with the framing of that experiment, or in fact the framing of the whole philosophical debate around free will because it generally assumes determinism is true, which like our best physics, as far as I can gather from talking to lots of physicists, suggests, well, it isn't true. We've got indeterminism in the world, indeterminacy at low levels, but also at classical levels. If that's the case, then the problem is flipped around. 
If you think the universe is deterministic and you're talking about free will, then the problem is really, wait, where does the freedom come from? If the universe is indeterministic, then the freedom comes for free. Right? You've got the system could go this way, that way, this way, that way, whatever, right? It's all these sort of jiggly, random bits. And what's more important there is the control, right? The will of the organism. What it's doing is constraining all the jiggles to make the system do what it wants to make happen what it wants to happen. And that's a very different way of thinking about it, actually. And it flips the script a little bit of a lot of the debates, because the debates in philosophy tend to end up with, okay, either determinism is true, but free will is compatible with that, or determinism is true and free will is incompatible with that, and we don't have free will, and I don't think either of those positions is actually satisfactory. [00:27:01] Speaker B: In the book, you expound on this argument and maybe this was like an AHA moment for you where you said well, the universe is indeterminate and how can I build up the case from there? And you can react to that, but how much of your argument does hinge on indeterminacy? Because it's so hard to think. So from what you're saying here's, what I take from what you're saying is because things happen randomly sometimes we can take advantage of that randomness, but that indeterminacy, at least on the quantum level or whatever level that you want to go to, is outside the realm of our agency. So things just happen. And what you're saying is we can capture that and take advantage of it as a whole agent, but then go ahead. [00:27:52] Speaker A: Yeah, in some ways, yes. There's a common sort of rejoinder to the idea that you need some indeterminacy in the universe in order to have possibility and choice and the idea that agency could exist at all. And that rejoinder is to say, well, look, either things are determined at a physical level, in which case I didn't make a decision, or things are indeterminate at a physical level and they're just doing random stuff. And if that's controlling my behavior, I still didn't make a decision. Where am I in the loop? Right. So the main point is not that the indeterminate things determine our behavior, right? That's just another kind of reductionism, is to say, actually all the causes are physical at the lowest level. That's where all the causation is happening. Now, some of it may be random bits, but when that random jiggle of an electron or an atom or particle makes it bounce into something else, that's the causation. It's all physical forces and so on. Now, the other way of thinking about that, which is what I talk about here and this goes back to Greek philosophers like Epicurus, for example, who already saw that if all the causal work is being done at the lowest level, then you won't have anything for the macroscopic organization to do. It's sort of causally comprehensive down here. It doesn't matter how the organization flows from that. There's no top down causation. If, however, there's some indeterminacy down here or some under determination of what's going to happen by the low level details, that gives some scope for the organization to constrain things. So it's not that the top down causation reaches down and determines which way the random things go. It determines which way the whole system goes. 
So it's a top down imposition of constraint that allows macroscopic level things to matter and to have some causal power in the system. And that's where agency can emerge. It's where life can emerge, and it's where I think ultimately our freedom to control our own behavior lies. [00:30:08] Speaker B: So reading your book and hearing you talk just now brought me back to the same point. Good old reductionist me to the big bang, right? And I was going to open with asking you why is there something rather than nothing? But maybe we can save that. So there's the indeterminacy and then there's the constraints and the organization of the system. And then I think, well, how did it all begin? Was it all uniform? It couldn't have been uniform. So I guess in the beginning there had to have been some structure or the indeterminacy led to that structure or like there was like a polarity and stuff and things had to move to one pole and that changed the other pole. So I don't know, how did it all begin? [00:30:48] Speaker A: Yeah, so I think that's exactly right. George Ellis, who's a physicist and philosopher, has written about this and many others. So the idea, as far as I can understand it, is that this sort of pre Big Bang, there was this cosmic inflation, but it was very homogeneous and quantum fluctuations at that level led to some inhomogeneities. And without them you would never have. Gotten any sort of gravitational pull that was in any way directional because everything was uniform, right? But once you get these inhomogeneities, then they can act in a way that starts accreting things like galaxies and planets and so on. And so you need some symmetry breaking in the early whatever they call it, the surface of the cosmic inflation or something like that. And I think that's right. And that principle, I think, extends all the way up. Right. If things are just homogeneous, there's never any sort of little random thing or something to make things not like that. You don't get any forces. We think about causes in physics, that there's a force acting on something here and so on. But those forces only arise because you've got an uneven distribution of matter and energy. It's the organization of the system that controls those forces. And if we focus, we just kind of take the initial conditions that's given and we focus on what's happening within it, then it looks like, oh, well, all the causation is exhausted by knowing where the things are and what the forces are between them. But that's only an instantaneous kind of view of the system. You have to ask, well, wait a minute. Why are they in those positions? How they get that way? And in lots of systems, it doesn't matter. But in living systems, it does matter because living systems are historical, right? I mean, that's what life is. It's a historical process. So if you don't think about causation extended through time, you're sort of missing the big point of what life is. It continues, right? It persists. That's its whole shtick. And it does that by accreting causal power through these historical interactions. [00:33:09] Speaker B: It's all a process, man. [00:33:10] Speaker A: All a process, man. [00:33:13] Speaker B: Where I get stuck is whether indeterminacy is necessary or if you started with some sort of organization and some sort of principle, some sort of asymmetry, right? Would you actually need indeterminacy to make the case that free will exists? 
Or could you build the case simply from that top down self organizational principle that leads to life and then you can have top down through time whether we actually do need indeterminacy in that case. [00:33:47] Speaker A: Right. Well, so Dan Dennett probably makes the strongest, most compelling case for that view, which is that we don't need indeterminacy. [00:33:54] Speaker B: That there isn't any, but there could be still indeterminacy. It's just whether we need it to seal the deal for free will, for example. [00:34:02] Speaker A: Yes, exactly. He thinks it's not relevant. [00:34:07] Speaker B: Okay. [00:34:07] Speaker A: So he thinks the question is not relevant there. And that's a defensible position, but it's one that I disagree with. So partly he is interested in free will from the standpoint of its relevance for moral responsibility. [00:34:23] Speaker B: Yeah. [00:34:24] Speaker A: So in a sense, those two things are tied up together in his reasoning. Right. What he wants to do is find what he calls a kind of free will worth wanting that is the free will that would allow us to say some things are praiseworthy or blame worthy or deserving of reward or punishment and so on. So it protects our sense of moral responsibility on which basically all human society is founded. So he would say there's all sorts of thought experiments that he comes up with where you've got some agent do James and he wants to kill his uncle but then he's going to on his way to do it but he decides not to. But unbeknownst to him an evil neurosurgeon has implanted a thing in his brain and it makes him do it. And so he never really had a choice but is he still responsible? And it's sort of an idea of dissociating the idea of choice from responsibility. Now, I think those thought experiments take a lot for granted in for example, the existence of James and his uncle and any agents and life. Right? Why would there be life in a universe that's completely deterministic? Because the whole point of life is it's doing things to keep itself organized in a way that sort of depends on there being some options of doing other things could have occurred. And if that's never the case then it's not obvious to me how you would get life at all. It's not obvious to me certainly how you would get complicated agents that seem to have devote so much machinery and energy to the processes of decision making and action selection if in fact there are no decisions to make and there are no actions to select. So for me a lot of the thought experiments incompatibilism take way too much for granted. And I would say if you're going to assume that the universe is deterministic and then work from there that your first job is to say why would we have agents at all? And explain that and then you can have a conversation about moral responsibility in my view. [00:36:35] Speaker B: Well, you do talk about indeterminacy in terms of stochasticity in the brain which is just rampant. So if we boil it down to like we're deciding on what to watch on TV or something no one watches TV anymore, but terrible example but what car to buy? No one drives anymore anyway. So in that case do we need the indeterminacy to influence our choice or at that level is indeterminacy unnecessary but it's necessary as a precursor, so to speak? [00:37:14] Speaker A: Yeah. So two things I think both can be true. 
The main point is that indeterminacy in the system allows macroscopic organization to emerge and for meaningful patterns to inhere at macroscopic levels and have some causal power in the system. So it gives some causal slack in the system, to use George Ellis's term, and means that we can get complexity and hierarchy where the patterns at a high level are important. Whereas the particular instantiations in neural firing, for example, they're a bit arbitrary and contingent and they don't matter that much, it'll be some instantiation. But those high level patterns are multiply realizable and what drives the system is what the pattern means, not the particular details of its instantiation in any given moment. Again, they'll vary from moment to moment anyway. So that's the broad kind of idea. But actually, in most of our behavior... first of all, most of our behavior is not of the kind that philosophers tend to be concerned with, or that neuroscientists tend to study in their controlled experiments in the lab, where it's like a binary decision right now. It's an instantaneous thing and you've got two choices and let's figure out what drives your choice at one moment. But I mean, actually most of our behavior is managed, right? It's managing our behavior through time. So we have plans. To carry out those plans, we have goals. To achieve those goals, we have to constrain our behavior on a moment by moment basis in a top down kind of a way. So my plan for this hour or two is to talk to you and that's constraining my behavior, right? I'm not getting up and going out of the room. So that just happens all the time, right? And it has to. Otherwise we would have no plans. We'd never be able to do anything that's future looking. So in real terms, organisms manage their behavior through time and a lot of that actually becomes habitual. We don't have to think about it in the moment. We're not actually making a choice. We have made a choice before and that guides our behavior right now in a way that we don't have to think about. So there's all this kind of long-term causation through time, nested kind of constraint, that is informing our action at any given moment. Now, sometimes there are some decisions that we make where we don't know what to do, right? Either it's not habitual, it's a new situation, or it's a situation where we don't have enough information to discriminate amongst the range of alternatives as to which one is optimal, or we just don't care that much. So it's like I don't really care. I'll have a tuna sandwich, a chicken sandwich. I don't care. I'll pick one or I'll let my brain pick one. It's a weirdly sort of dualist way to frame it. But the idea is that actually that randomness that's in the system can be a resource that organisms can use in some circumstances. But it's important to say I'm not saying that all of our decisions are driven by that randomness. It's that the organism can decide to use that randomness sometimes. And that becomes a really interesting kind of sort of case. And there's lots of examples where organisms do that, from very simple things like escaping from a predator where actually being predictable is terrible, right? Predictable things are lunch for predators, right?
It's really important to do something, move left or right. It's more important that that happens than the decision of moving left or right. And in fact, it's not good if you always move left. Predators would take advantage of that. Same as if you're, I don't know if you play poker at all, but if you're predictable at the poker table, people will pick up on those patterns. It's why some of the poker pros actually use randomizers literally to say whether they'll bet or raise or whatever, because. [00:41:39] Speaker B: They don't trust their own ability to randomize. [00:41:43] Speaker A: Yeah. [00:41:45] Speaker B: The example that comes to mind is if someone's trying to shoot you, you're supposed to, I forget what the pattern is called, but you're supposed to zigzag randomly as you run away instead of running straight away. [00:41:55] Speaker A: Exactly. [00:41:56] Speaker B: So you have to have like a randomizer in your head. When should I zig? When should I zag? [00:42:01] Speaker A: Exactly. Different organisms use those. And there's actually a resource, in a sense, in mammalian brains, for example, there's a system that seems to control the randomness of ideas of what to do. So when we're in some situation, some ideas may occur to us of what to do, and some of those will be habits of thought. So when we get up in the morning, it occurs to us to take a shower, or it may not even consciously occur to us, we just do it. Right. It doesn't occur to us to maybe do a disco dance or play Twister or something like that. Whatever, to each their own, however... yeah, maybe. Okay, sorry, I don't want to knock your host. However, there are circumstances where when we're doing something, we're engaged in some activity, maybe it's not turning out so well. Right. So we've got systems to monitor. We've set some goals. Are we achieving those goals or are they being frustrated when we're not achieving them? A good thing to do is go back to the drawing board and say, I need to try something else. And sometimes the best way to do that is to expand this kind of search space of options that you then evaluate. And so there are systems with the locus coeruleus in the brain stem that sends norepinephrine to parts of the cortex that effectively kind of raises the temperature of the system and shakes it out of the ruts that it's in, the obvious things to do, and helps it think outside the box. There is some randomness in the system that organisms can use as a resource to either break a deadlock where they don't really care what they do, what option they take. But it's important to do something so they can't keep thinking about what it is. They just have to do something, and they've just kind of let the system resolve that, resolve these sort of competing options, which I think, by the way, is what happens in these Libet experiments of raising your hand whenever the hell the urge takes you. I think that's what's happening. And in fact, you can track that readiness potential as a noise accumulator. [00:44:11] Speaker B: Why don't you just expound on that a little bit? Just some people won't know Libet. I remember even as a graduate student in neuroscience, one of the postdocs came up to me and asked me what I thought of the Libet experiments. And I was like, what are those? And he explained them to me, and I was like, oh, I don't know what I think about those. But then when you dig down into it...
[00:44:28] Speaker A: The very famous experiments that are supposed to tell us about free will, or even are interpreted as showing that in fact, we don't have free will, that you're not making a decision. Your brain is making a decision, and it only tells you after the fact. So the setup is very simple in the task that the person has to do. So they're seated at a table and they have a little thing on their wrist that measures the electrical conduction in their muscles. And what they have to do is just flick their hand like that whenever the urge takes them. So that's the explicit instruction: whenever you want to, just flick your hand like that. And they're wearing an EEG, which is recording their brain waves, and especially over the supplementary motor area, which is involved in motor planning. You can see prior to when someone does an action that there's some electrical potential that you can see that kind of ramps up until the point where they make an action. So that's fine, that makes sense. If you're going to do something, some part of your brain plans it and then it does it. Right? The bit that was kind of surprising to people was the other bit of the experiment, which is that he asked people to watch a clock that was going around like this and keep track of the moment when they felt consciously the urge that they were about to do something. And what was really surprising is that this readiness potential in the brain started to ramp up way before they supposedly consciously felt this urge to do it. Like about 200 milliseconds, 300 milliseconds before. [00:46:11] Speaker B: And thus we have no free will. [00:46:13] Speaker A: And thus we have no free will. Right? That was the extrapolation from that. That was the interpretation. Now, there's a whole bunch of reasons why that interpretation doesn't hold, in my view. First of all, you're just acting on a whim, right? Literally, if all you were doing was letting some slight randomness within your brain resolve a competition between doing it and not doing it at any given moment, fine. That's actually a really good way to do it. You've got no reason to care, right? Nothing is at stake. What you did was decide to sit down in the chair and do the experiment, right? That was your deliberative decision. And it's one of those decisions that constrained your behavior then through time. So there's that. There's the fact that actually, if you look at the way the data were captured, in order to see that little signal with all the background noise that's going on, you have to record it over many, many trials. And when you do that, you have to have some way to compare them to each other, so they time-lock to the point at which an action is taken, right? So an action is taken and then they look backwards in time and what they see is this readiness potential ramping up and they say, well, look, it starts to ramp up here. That's when the decision was taken, because it inevitably goes up like this, and then you act. Now, it only inevitably goes up to an action if you start with an action, right? If you start with always there was an action, then it looks like this inevitable ramping up. If you start with some time lock to some random thing like a sound cue, then what you see is actually activity goes up and down and up and down. It's just kind of noisy. And there's a sort of an accumulator, a noise accumulator in there, a short-term plasticity, that allows the noise to ramp up, but sometimes it goes down again and they don't do an action.
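As a rough illustration of the noise-accumulator reading of the readiness potential, here is a small simulation sketch in Python (my own, loosely in the spirit of stochastic accumulator models such as Schurger's; the parameter values are made up): a leaky variable integrates random noise, and the "act now" moment is simply whenever that accumulated noise happens to cross a threshold, which varies from trial to trial.

```python
import random

def noise_accumulator(drift=0.1, leak=0.2, noise_sd=0.8, threshold=1.5,
                      dt=0.01, max_time=30.0, seed=None):
    """Leaky accumulation of noise; return the time (seconds) at which it crosses threshold."""
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while t < max_time:
        # small constant drift, leak back toward baseline, plus Gaussian noise
        x += (drift - leak * x) * dt + noise_sd * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
        if x >= threshold:
            return round(t, 2)  # the "urge" to move: the noise happened to win
    return None  # no threshold crossing in this window, no action on this trial

print([noise_accumulator(seed=s) for s in range(5)])  # crossing times vary trial to trial
```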
So it very much looks like this is an instance where there's noise as a resource, it breaks this deadlock, allows you to act on a whim when you've already decided to. And in fact, there's really nice experiments from Liad Mudrik and Uri Maoz where they did this very same kind of physical setup. But the choice that the person had to make wasn't arbitrary. It was a choice between right and left hand that was picking one charity versus another that was going to get some real, actual money in the real world. And they cared. It was set up that they cared about some of the charities more than others. And when they do that, then the readiness potential, you don't see it. It's not linked to the action, right? So when you're making a deliberative decision, it's just a whole other sort of scenario. Yeah. So there are these cases where you can make a random decision and you can use the randomness explicitly in real time to do that. More generally, the randomness across time allows that macroscopic organization to emerge. So just to get back to the locus coeruleus thing, one of the cases in which using a bit of randomness is good is if your current options for what to do aren't turning out very well. There are systems that say, am I achieving my goals? If you're not, there can be a signal that's sent back, it's norepinephrine, sent back from the locus coeruleus to parts of the cortex, and the effect of that is to raise the temperature of the system, to shake it out of its ruts, the local minima that it's in, and allow it to explore more of option space and suggest some things that are really outside the box. So you can think about the origins of creative thinking that way. What's really creative is that I'm not just doing the same thing. I'm not just doing the same thing everybody else does. I'm going to think of something really novel. And most of those really novel ideas aren't very good. So what you want is to subject those ideas of what to do to your evaluative system that predicts, okay, well, okay, yeah, this idea occurred to me, but what's going to happen if I carry that out? Is that going to be a good outcome or not? And so that's where the willing part comes back into the equation. Even if you're using some randomness to generate ideas.
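One way to picture the "raise the temperature" idea is as a softmax over candidate actions; the toy sketch below (my own analogy, not from the book, with made-up option names and values) shows that at low temperature the habitual, highest-valued option almost always wins, while a higher temperature, standing in for that noradrenergic signal, flattens the distribution so that outside-the-box options actually get sampled and can then be evaluated.

```python
import math
import random

def softmax_sample(values, temperature, rng):
    """Sample an index with probability proportional to exp(value / temperature)."""
    weights = [math.exp(v / temperature) for v in values]
    total = sum(weights)
    r = rng.random() * total
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(values) - 1

# hypothetical action values: habits score high, odd options score low
options = {"take a shower": 2.0, "make coffee": 1.5, "disco dance": -1.0, "play Twister": -1.5}
names, values = list(options), list(options.values())
rng = random.Random(0)

for temp in (0.2, 2.0):  # low temperature = habitual mode; high = exploratory mode
    picks = [names[softmax_sample(values, temp, rng)] for _ in range(1000)]
    counts = {name: picks.count(name) for name in names}
    print(f"temperature={temp}: {counts}")
```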
[00:51:52] Speaker A: Yeah, all great questions. So I think there's a balance, right, between most of life is this learning experience, right? So you've learned from past experience what are good things to do and what are not. That's why you have all these habits and policies and heuristics and so on that guide behavior in really, really useful ways that we don't have to think about too much because we've already thought about it. We've done the reinforcement learning. That all been done. And of course, evolution does the same thing over millions of years. We do it over our individual lifetimes. So all of that happens because it's good, right? It's usually adaptive to guide behavior based on what has happened in the past. And that's true up to the point when it's no longer adaptive to do that. And actually, a really good example of that is what Thomas Kuhn called the paradigm shifts in science. And Carl Popper has written about this as well, the idea that everybody's going along sort of with these shared assumptions that they're not even thinking about in a field, and that shapes and constrains the paradigm of both experiment and theory and interpretation. And that can seem productive until you kind of hit a wall when people finally realize, wait a second, something's not working. And some people may keep hammering away at that same paradigm, but some people may say, well, wait a minute, maybe there are some assumptions here that we need to examine, we need to make them explicit again, and we need to see. We need to back off and go into some other sort of parallel space. So I think a lot of creative thinking in individuals is like that where you've got your current thinking is not working out for some reason. It encourages you to expand this search space, but not, like, into anything. Right? You can't have an infinite search space because we have finite time. But also, even if you might want to ignore one little bit of what your history has told you, you don't necessarily want to ignore everything that all your history has told you, right? So you don't want to back off completely to a blank slate that doesn't know anything. You just want to do that with respect to some particular problem space. You can see constraints as a totally negative thing. It's like stopping me from thinking these other ways. But you can see it also as a really positive thing in that it's focusing your search in ways that have been adaptive and productive in the past. And you have some control over that to say, well, actually, it's no longer productive now I need to pull back a little bit, think of something else. [00:54:35] Speaker B: Yeah. I like in the book how you return multiple times to this idea that if we didn't have constraints, if we didn't have possibility spaces, free will is kind of useless because what are you deciding upon? There's going to be no utility for you. If you're truly free, there's nothing to be free of, essentially. [00:54:56] Speaker A: Yeah, it's a weird sort of view. So there's a very absolutist conception of free will that's used common in the philosophical literature and in some of the neuroscientists talking about it as well, which is to say you're only free you're only really free if your behavior has no prior causes affecting it whatsoever. If you can trace anything you're doing to some prior cause, then you're not free. Now, if you follow that to its logical conclusion, it becomes really incoherent. 
So one of the ideas, one of the framings of this is to say, okay, look, yeah, I can do what I want. I can act for reasons, but where did those reasons come from? I didn't want to want something. I mean, this goes back to Schopenhauer. Sam Harris has expressed that a lot. Robert Sapolsky does as well: if I can't choose what I want, then I'm just being constrained by these prior causes. And that might be my genes, it might be all my prior experiences, it might be the way my brain developed, and so on. [00:56:02] Speaker B: That's constraint taken to the limit. [00:56:04] Speaker A: To the limit, exactly. And so that is seeing those things as determinants of your behavior as opposed to influences on your behavior that still leave you some scope for things to be up to you as a whole. [00:56:17] Speaker B: Organism as a whole. Yeah. [00:56:20] Speaker A: When you think about that view, it's kind of literally head scratchy. It's like, well, what do you want? What kind of freedom would that be? If you could do whatever... if you could want whatever you want, well, fine. Why would you want anything? If the answer is, well, I'm going to want to want what I want, you're in this infinite regress where ultimately the self disappears. In that view, any sort of system that you would think has ultimate freedom and is not constrained in any way by prior causes is just a random behavior generator. That's not a self, right? The self only persists through time. It only exists as a thing that has continuity through time. Anything else is just an instantaneous physical system. The property of selfhood is one that is diachronic, right? It extends through time. It's not synchronic or instantaneous. And so it becomes a really incoherent notion. The self is made of constraints. Your self is just you continuing to constrain all the stuff that you're made of into being you, right? You're just stopping it from taking on a form that isn't you. That's what selfhood is, right? And you can think about that in psychological terms, but you can also think of that in very simple terms. That's what life is doing. The simplest single-celled organism is continually working to constrain its organization in that pattern. And it's a dynamical pattern. It's not a static thing, but that pattern is what makes a bacterium a bacterium, for example. And it's doing work to keep that going. And if it weren't doing that, it wouldn't be alive. There isn't any life in an instant. Again, life is a historical extension through time. [00:58:14] Speaker B: I just realized I must have that kind of completely unbound free will because I've had multiple listeners tell me I seem to have a random generator with my questions. So maybe I do have that kind of free will. [00:58:27] Speaker A: It's funny. I had a friend, he didn't have random thoughts, but when you're having a conversation with him, he would say things that to me, would be just out of the blue. And to him there was a train of thought. But it went so fast that he skipped over, like, four or five connections and then just came out with something. And I'd be just bewildered and would occasionally ask, where the hell did that come from? And sometimes he was able to track it back. Actually, no, it's hard to track back. [00:58:55] Speaker B: I do that with analogies a lot where it makes complete sense to me but I can't remember how I got there. And then I realized on paper it. [00:59:02] Speaker A: Looks terrible. Anyway, sometimes those things work and sometimes they don't.
That's where having some ability to inspect them and evaluate them is useful. [00:59:17] Speaker B: You make this distinction between our whole selves. So what I want to ask you is about what we think of as ourselves, which is our subjective awareness. How does that relate to what you describe as our whole selves and free will? How does free will then come into the picture? [00:59:37] Speaker A: Yeah, no, I mean, those are really tough questions. Generally, when we think about people's kind of common conception of free will, I think what they take it to mean is that there's some me, right? There's some me, some self, that is able to deliberate and decide at any given moment what to do, where I'm in charge and I can make decisions and I have some responsibility for them. And when you dig into what that conception of the self is, it's really tricky. And I think there's a naive idea that there's this unitary self that can be isolated, that's sitting inside your brain watching things go on, and it's the sole sort of decider, and there might be some particular part of your brain where you could find that bit. And I don't think that's right. I think the self is more made up of all these sort of perceptions that you're having, but also all of the memories that you have, so historically, all of the ideas and attitudes and policies that you have, the commitments that you have and the plans that you've made. So very much informed by the past and directed towards the future. And what we are right now as physical things is just, I think, the momentary avatar of that self that extends through time. And we're informed by our past self about what's a good thing to do on behalf of our future self, generally. And how far in the future you think is interesting; how far you're capable of thinking varies between people. It varies across species, it varies across age and under various conditions and so on. The self for me is not just that momentary thing, certainly not just the momentary physical configuration in an instant. It's that through line through time that for me is the self. And thinking about how that constrains and informs our behavior, and how it guides our behavior and manages it through time, I think, is the interesting thing. Focusing on these instantaneous binary decisions, I think, kind of just puts us into a wrong frame of thinking about it. That's just not how organisms work. [01:01:51] Speaker B: That's a very process-philosophy-oriented statement there. [01:01:55] Speaker A: It is, I think. Yeah. [01:01:56] Speaker B: So then how does free will come into the picture in terms of the feeling that we have of ourselves versus ourselves as more than just our consciousness? And then is free will always under the harness of our consciousness? Or I think what you're going to say is free will is under the harness of ourselves as agents, as whole selves? Yeah. [01:02:19] Speaker A: So free will, I mean, free will usually is discussed as a term in relation to humans. And I think that's fine. I take the same approach in the book. And what I talk about otherwise is agency. So agency you can think about as this doing things for reasons that inhere at the level of the whole organism. Right. They're the whole organism's reasons for doing things, where there's some holistic kind of processing going on, where the organism is not being pushed around by individual bits of its parts, but it's doing a big integrative decision-making process that says, what's the scenario that I'm in now?
What are all these things in the world that I'm encountering right now? What's my internal state? What are my short-term goals? What are my long-term plans? We have this nested, kind of contextual, top-down constraint. All of that, I think, is right. That happens. And that justifies saying it's the agent doing something. It's not just a physical system being pushed around by its parts. I think we have all of that, and then we have this consciousness. So the question is, what is consciousness doing in this picture? And I think if you think about the nervous system as a control system, the reason that we have a nervous system is because controlling our behavior in the world is a really good way to do homeostasis. So any living system wants to keep itself organized. And in order to do that, it has to keep various parameters within livable limits. So you can't let your temperature go too high, you can't let your fuel get too low, all these other sorts of things, right. You have to keep everything organized just so, and you have to do some work to do that. And one way to do that is to reconfigure, say, your metabolism. When something in the world changes, say the oxygen runs out, you switch, if you're a yeast, from aerobic to anaerobic respiration. Right? [01:04:13] Speaker B: Right. [01:04:13] Speaker A: But another way to do that is to move: okay, there's no more oxygen here. I should move to where there is oxygen. And then you can think, okay, well, you need a control system to say, how do I move and where do I move? How do I know where there's more oxygen? And so on. So from that simplest kind of conception of behavior, you can think about much, much more complicated behavior, like what we do or what other animals do, as just kind of an elaboration on that theme, right? We're trying to maintain ourselves. We have goals that now extend much further into the future. We're not just reactive to every stimulus that comes along. We're proactive and we make plans and so on. But the point of behavior is still, generally, at base level, to stay alive. So if that's true, then we can ask, well, what does consciousness get us as a control system function? What does it give us that we didn't have before? And first of all, if we back off the idea of conscious feeling, what it feels like, we come to some cognitive capacities that we have that other organisms may not. And those are things like metacognition and introspection. So we don't just act for reasons. We can reason about our reasons, and we don't just think about objects and things in the world. We can think about our own thoughts. So we have this kind of recursive extra level or levels that allows our cognition itself to be something we're thinking about. And that becomes really valuable as a control parameter or control faculty because, for example, it lets us think about the certainty that we have. So we're not just having a belief. We can think about, well, how certain am I about that belief? And where did that belief come from? Was I right to make that judgment? Maybe I made a wrong inference. Maybe I had bad data, or maybe it was just something random in the world that I shouldn't pay attention to. So all of those things, that ability to introspect and to do metacognition, you can see the value of them in a control system kind of language. [01:06:16] Speaker B: Sure. [01:06:18] Speaker A: Now, the consciousness question is really tricky though, right?
Why should it be valuable for those things to feel like something? Maybe thinking about your own thoughts just sort of necessarily produces a conscious mind. Maybe that architecture just makes that happen. I don't know. [01:06:38] Speaker B: And under that, it could be epiphenomenal, right? [01:06:41] Speaker A: Well, it could be epiphenomenal, or it could still be causally effective, right? It could be emergent in that sense. But when it emerges, it could be causally effective. [01:06:51] Speaker B: Sure. Okay. [01:06:52] Speaker A: And so one way that it is causally effective, I think unarguably, is that we can tell each other what our thoughts are. And because we're a hypersocial species, you can see the value of that even in something simple like coordinating action. So say we're going out, we're hunting a mammoth, and I say go left, and you say go right. [01:07:15] Speaker B: We're always hunting mammoths these days. [01:07:19] Speaker A: As you do on a Saturday morning off. We are hunting mammoths and we need to coordinate who's doing what. And you need to know what I'm thinking of and I need to know what you're thinking of. And of course, as we get the emergence of language and culture, however that happened, then consciousness pays off, I think, in that arena, because you have to be conscious of your thoughts in order to say them to somebody else. At least, I think you do. [01:07:50] Speaker B: At least some of the time. [01:07:52] Speaker A: Some of the time, at least. [01:07:53] Speaker B: Right. [01:07:56] Speaker A: But coming back to the sort of phenomenal aspects of conscious experience, why it feels like something, what that gets you, it's really hard to know. I mean, there are some arguments for why different qualities of experience are important. Mark Solms, for example, has really interesting work on this, where he's saying that actually the affective quality of different emotions that you have, the way they feel, becomes important in order that you can distinguish different types of things that you have to care about all at the same time. [01:08:30] Speaker B: Now, a constraint, or a set of constraints, perhaps. [01:08:33] Speaker A: Yeah. Well, you might be hungry or thirsty or horny, or afraid that there may be threats, different things. All of those things carry some sort of homeostatic signals. So if you're afraid, that's a signal that says, this situation is bad, do something to reduce that feeling of fear, and we'll be back in happy times. Right. If you're thirsty, it's, again, this situation is bad, do something to bring that parameter back into a healthy range. However, if all of those things were feeding into the same decision space, because they have to be adjudicated over at the same time since you can only do one thing at a time, and they didn't have some qualitatively distinct label to them, you wouldn't know what you were feeling was about. Right. You wouldn't know what the signals were about. That's his argument. I think that makes some sense. I don't know why they have to feel the way they do. [01:09:26] Speaker B: Right, yeah. That's always the question. [01:09:30] Speaker A: That's the tricky question. So in terms of free will in humans, I think what we get in the evolution of humans, in the expansion of our prefrontal cortex, these neural facilities or faculties for executive function, metacognition, introspection and so on, and ultimately consciousness, they give us an extra level of agency.
They give us a kind of meta-agency where we can think about what we're doing and why we're doing it and whether those were good reasons to do it. We can reason about our reasons and tell each other about them. [01:10:05] Speaker B: In some sense, the story of free will kind of, you could say, tracks the story of consciousness, or the story that you could make in terms of how agency gave us consciousness. Right. In some sense. So you've referred multiple times to free will in humans. And one of my questions is, well, what are the minimal criteria? Where is the line in agency? Does that yeast cell you referred to, it has agency? Does it have some semblance of free will? Does it have free will? Is there a gradient that we should be thinking about of free will, how much there is for different species, et cetera? [01:10:47] Speaker A: Yeah, I think so. Yeah, I think you can think about different levels of agency, and you can start to operationalize that in various ways. So one would be how many things can you think about at once? What's the complexity of your action repertoire? So a C. elegans can move forwards or backwards, and it doesn't do much else besides that. If that's the repertoire that it has to choose over, it doesn't require much cognitive depth to do that, right? It responds to some local signals, touch and mechanosensation, but they're things that are right here and now. So if it's thinking about anything, it's thinking about stuff right here and now, and it's deciding forwards or backwards and a few other things, right? So that's pretty simple just in quantitative terms. And I think you can say, well, as you get more complex animals, and I follow the track that goes along the lineage that leads to us, but of course, you could follow the track that goes to insects or octopuses or birds or anything else, and you might see different kinds of agency emerging. [01:11:56] Speaker B: How important are interneurons? I mean, you talk in the book about the development of sensory apparatus and motor apparatus, and how in earlier organisms they're yoked together, and then you get this evolution where you have clusters of interneurons, and then you can start having recursivity. Sorry, recurrence. How many interneurons do we need? [01:12:18] Speaker A: For free will, how many levels do you need? Okay, so you can start in simple organisms where sensation and action are pretty tightly coupled together, but, I mean, not completely, right? So not linearly. So even in a bacterium that is following a chemotactic gradient, if you control everything else in your experiment, say, and you just look at some one biochemical pathway in isolation, it looks like a very linear kind of thing. There's a receptor, some proteins get phosphorylated, they affect the flagellum, which rotates this way or that way, and that determines which way the bacterium goes. But even in that system, it's actually much more integrated than that. There's all kinds of signals coming in in the real world from other things in the environment, from cell crowding, from temperature, from osmolarity, and really importantly, the recent history of the organism itself, right? That's how it knows if it's going up a gradient or down a gradient. Okay, so you've got some agency, this holistic kind of processing that happens like that, but still pretty tight coupling. And the holistic integration is all in one level. Now, there's a limit to how many cogs you can join up together in one level before they get jammed up.
And so what you get in organisms with neurons is you've got sensory neurons and motor neurons, and sometimes there's a pretty clear sort of reflex arc between them. But you can also get these intervening levels of interneurons. And they're useful because they can integrate information from multiple different sensory things. They may compare the level of a touch at one end versus the other end of the animal, for example. [01:13:58] Speaker B: Or compare to the past. [01:14:00] Speaker A: Yeah. Or in the past. Absolutely. And so the more levels that you get, the more abstract the things that you can think about. And this is really obvious in visual perception. So vision and hearing aren't just local senses. We're not immediately touching the things that we care about in the world, the objects or the odorants. What we're getting is just disturbances of the electromagnetic spectrum or vibrations in the air. I don't care about photons hitting my retina. What I care about is what they're bouncing off of in the world. Now, to figure that out, that's not trivial; my retina doesn't know that, right. Some work has to be done, some processing work has to be done. And the best way to do that, or at least the way that it happens in nature, is through a hierarchy, where neurons at the first level integrate across some photoreceptors. Neurons at the second level integrate across some of these, and they perform contrast enhancement or divisive normalization, or they amplify signals, or they integrate, or they act as a temporal filter, or whatever it is. Right. But what we end up with is that these neurons at this level, they're not just responding to dots of light; they're responding to edges, they're responding to objects, they're responding to types of objects. Ultimately, we get to the point where we can think about types of objects. We can think about the relations between them. We can link our perception, our sensory data; we can make inferences about what's out in the world and tie that back to our accumulated knowledge about those things in the world. So when we see a face, well, it probably belongs to a human being, right? It's probably attached. Maybe it can talk, and maybe I should talk to it. Right. So we know about things in the world, and we bring that to bear in guiding our behavior. So those levels become really, really important. And what's interesting is that as you go up, you're coarse-graining. Right. In each transmission of information, actually, a lot of the details are lost, even from one neuron to another. This neuron here is getting inputs from this guy. He may fire only if there's a certain rate of spikes coming in. But the pattern of spikes doesn't matter. Right. That's lost. [01:16:14] Speaker B: Gets lost. Yeah. [01:16:15] Speaker A: And there's a digital readout here. Either the rate was high enough to reach threshold or it wasn't. So loads of information is lost from level to level. And that's true at population levels as well. So what you get is that these patterns at higher levels mean something to the organism. That meaning has been, I want to say extracted, but actually created, I think, is a better term for it. And it's those patterns, ultimately, the ones that are at the highest level, that are the things that the organism cares about. Those are the actionable patterns. I don't care, as I said, which photons are hitting my retina, which photoreceptors are being activated.
What I care about, what's going to guide my behavior, is what objects I infer are out in the world, and what can I do about them and what do I know about them? And so on. So those intervening levels give you this cognitive depth, this ability to abstract, and they also, I think, illustrate how it's the meaning of those patterns that's important. It's not the low-level details of neural firing. And for me, that's really key, because it gets us away from this idea that the brain is an electrical machine. You often see these pictures where neuron A fires, and then it makes neuron B fire, and that makes neuron C fire, and we start teaching students about simple reflexes and so on, and that kind of is what happens, right? But I think you can flip that perspective. For most of what's happening in the brain, it's not like that. It's an informational machine. And what you have is, rather than neuron B being driven by neuron A, you've got neuron B surveying or monitoring its inputs, and it's got some criteria for what kind of incoming patterns are going to make it send a signal. And that's a different way of thinking about what's going on in the brain. It focuses the view on the meaning, the interpretation of patterns through time, and the criteria that set those interpretations. And that's where the historicity comes in; it configures the system such that the meaning of a pattern then becomes actionable information for the organism, and it's decoupled from obligate action. It doesn't have to drive it. It's just a part of the information that the organism then has in its cognitive arena, amongst many other bits of information that it's going to figure out what it should do relative to. [01:18:42] Speaker B: So can we draw a line on the phylogenetic tree and say free will began here, with this amount of complexity, this amount of hierarchy? [01:18:50] Speaker A: I like to say that actually, even the simplest life forms that we know about, the simplest sort of free-living life forms, which are bacteria, have a minimal kind of agency. They do a minimal kind of cognition. I don't see any reason why we shouldn't call what they're doing, the kind of problem solving they're doing, cognition. It seems to satisfy the abstract sort of definition of cognition for me. And then, yeah, then we can go from there. And I don't like to draw these bright lines, although I will say that whatever happened in human evolution is probably a singularity of some kind. People talk about the singularity, when is the singularity going to happen in AI? It happened, right? We're it. We're the singularity. And how that happened is really interesting. And it may be that it's not just some biological change down our lineage, in the difference between a human brain and a chimp brain, for example, because there were proto-humans, hominins and Homo species, for hundreds and hundreds of thousands of years, millions of years for some of those hominins, that didn't take off in the way that our species did. We only did in very, very recent time, biologically speaking. And that probably was some sort of cultural spark that maybe literally lit the fire that let us do that. And that's a whole other sort of area that's really interesting. So I see a continuum of all of these things, and I don't like to draw those distinctions of where there's a definition that applies here or doesn't apply there. But at the same time, I do see a difference in humans qualitatively, and it's interesting to think about how that emerged.
[01:20:39] Speaker B: Okay, let's go back to sort of gradients of free will. What I want to ask is, we've talked about different situations where, when we need to run from a tiger, we're relying on the fact that we have to move, and maybe then we create this random generator for our zigzag pattern or jump in a tree, et cetera. And then when we have to deliberate more, we're engaging our free will more. We're talking about free will as if it's a binary thing, either kind of on or off. Right. But what I want to get a sense of is how much free will we have. And I know this is different across even the course of a day because of will depletion, is it called? [01:21:19] Speaker A: Yeah, I'm not sure what I think about that, but yeah, just fatigue. General fatigue. [01:21:24] Speaker B: Sure. [01:21:25] Speaker A: Okay, so when we're talking about free will here, what I like to do, because that has so much metaphysical baggage, is just replace that with actually what we're really talking about, which is conscious cognitive control of our behavior. So let's just sort of try to naturalize it in neuroscientific terms or cognitive science terms. And, you know, Danny Kahneman has this system one, system two idea, which is simplistic, but it's useful in thinking, okay, a lot of my behavior is controlled by system one, which just means it's habitual. I don't have to think consciously about it in order to go about most of the stuff that I do. But sometimes I do engage this other system, that's effortful, and it takes longer. It's inefficient, but it's what I need when I'm not sure what I should do, when past experience is not a good enough guide and I don't have one obvious option. So rather than thinking about who has more of these gradations of free will, you can think about instances, first of all, in which you have a greater capacity to engage system two. So, for example, if you have more time on your hands. If you're not under an immediate threat, if you don't have to spend your time just looking for food at every moment of the day, then you've just got some more capacity, in the worldly sense, to allow you to think about things, and you can deliberate a bit more. Also, you may be able to plan over a longer time horizon. So we can do that, obviously, in terms of things way beyond our own lifetime. A C. elegans probably isn't planning very far in advance, right? It's reacting to things. It's like, what's my internal state? Okay, what should I do about it? It's just very, very reactive. Whereas as you get more and more complex, you can be more and more proactive. So you can guide and manage your behavior with more autonomy, in a way that is even, again, sort of mathematically formalizable. So David Krakauer and Jessica Flack, for example, have interesting work where they're asking, how much autonomy does an organism have? And their formulation is to ask how much mutual information there is between the current state of the organism and its state at time t in the future, or t plus one, or t plus two, and so on. And the idea is that the more information the current configuration has about the future state, the more autonomy it has, the less it's being pushed around by the exigencies of the environment. And the further into the future that is, the more agency it has. Because it's not just controlling how things go right now. It's not just making the immediate future happen one way versus another. It's making the far future happen one way versus another.
That requires sustained effort through time, which humans are sort of obviously very good at. Other organisms are good at it to lesser or greater extents. And even within humans, some humans are better at it than others. Right? [01:24:26] Speaker B: Do you have a higher capacity for free will than I do? Is that what you're getting at? [01:24:31] Speaker A: Much higher, clearly. So I have a higher capacity for free will than I used to have when I was a baby. When I was a baby, I didn't have much capacity for agential control at all. Right? I was really reactive. I wasn't planning things. I didn't know much. I didn't have a good model of the world. I only had homeostatic signals, pretty much, to go off of, and then weird stimuli from the environment that I didn't know what they meant or what they were about. So I had to develop through time that knowledge and the control that allowed me to plan over a longer time frame, which usually means inhibiting immediate impulses. So impulse control is something that varies throughout the lifetime. It can vary throughout a day, as you said, depending on how tired you are; your impulse control is sort of effortful. It can vary depending on whether you've had coffee or alcohol; you become disinhibited if you've had alcohol, for example. But it also varies between people. So there are personality traits or measures of executive function that vary as traits between people. I mean, in the personality literature, it's basically conscientiousness that is a personality trait. It is one of these predispositions we talked about at the start when we mentioned the book Innate, which was really about how those things come to be. So you can think about that in two ways. One is that some people have more free will than others, just in an operational sense: on average, better impulse control. Yeah, on average, right, as a trait. Over time, they have better impulse control. They can carry through plans over a longer time frame. They may be able to sort of integrate more information all at once, and so on. And in one way, you can say, well, that just is reducing everything to biological machinery again. But on the other hand, it makes me think, well, look, free will is not some mystical thing that's sort of out there, that's magic. It's an evolved biological capacity. Free will is just the name that we give to those sets of capacities, all the sorts of agency that we inherited from other organisms, plus these elements of metacognition and executive function and impulse control and planning and rationality that are expressed most in humans. And the fact that we can see that it varies is evidence of that. It's evidence that that's an evolved biological capacity that has some real neural underpinning, so we can naturalize it. It doesn't have to be mysterious. We can investigate it. [01:27:17] Speaker B: Of course, when you were a teenager, before your prefrontal cortex was pruned, you had less free will as well than after your prefrontal cortex matured. [01:27:24] Speaker A: Teenagers clearly do. They engage in riskier behavior. They're more impulsive. They're more emotionally labile and volatile. And there's very good reasons why that is. Right? They're not just faulty adults. There's very good reasons why adolescents should be characterized by those kinds of traits. And Sarah-Jayne Blakemore, for example, has done really beautiful work explicating that and saying, look, these are adaptive at that point in the life history of, actually, mammals, because it's true not just in humans.
Teenage mammals of all sorts share some of those traits. [01:28:05] Speaker B: Yeah, teenage squirrels. [01:28:07] Speaker A: I like that. [01:28:09] Speaker B: One of the things that I appreciated, just getting back to what you were just talking about, is how there are different levels, essentially personality traits, et cetera. And you really kind of go through this in the book and helped me formulate how to think about these things in terms of nested constraints on each other, within the setting of thinking about a mind and how to think about different processes of the mind. So at the base kind of level, we could start with action, which is like behavior, right? And you talk about this, and you harken back also to your book Innate, where you touch on personality traits as well. But you could think of, like, action, and then, kind of in the mind, there are these personality traits that are at a slightly higher level, and then there are, like, habits that you form via the personality traits. And then on top of that, there's character, which is kind of a longer-term, slower process related to habits and personality traits. The way that I thought about these when you're writing about them is that each is a slower constraint on kind of the lower level. And I'm not sure if a pure hierarchy is the way to think about it. I'm sure it's not, because it never is. But can you think about those sorts of mind aspects in terms of hierarchy, or a hierarchy of constraints? [01:29:32] Speaker A: Yeah, absolutely. I think that thinking about different parts of the brain being concerned with things at different timescales, in the first instance, is really useful. We can think about, for example, kind of a gradient of your motor cortex. What it's concerned with is really controlling particular motor actions that you're doing, but then controlling the behavior that you're doing. So you just took a drink from your mug there. [01:30:01] Speaker B: Totally up to my free will to do that, right? [01:30:02] Speaker A: Well, you decided to take a drink from your mug. You didn't decide to move your arm like this and so on, right. You didn't have to tell your motor cortex how to do that. Your motor cortex knew how to do that, and it didn't matter, right? The details at that level don't matter. You were still going to take a drink. Now, over a longer time frame, you decided to do this interview, and you decided to bring that cup in with you because you knew you were going to need a drink, right? So you were planning that; that was constraining what you did, and it was informing what you just did. So I think that that's true. And if you look at the sort of neural correlates, the receptive fields, if you will, of what neurons in different parts of the brain are interested in, what they carry information about, then as you go up in the prefrontal cortex, it's the longer and longer term planning that they carry information about, and goals and behaviors. It's not the short-term motor action. So I think we have that hierarchy of action. And in terms of the trajectory through a lifetime, I think what we're seeing is the way that those different levels of the hierarchy are getting configured and what they're coming to care about. So at the lowest kind of levels, we've got systems that are controlling things that we see in even simple organisms, like risk aversion and threat sensitivity and reward sensitivity, and things like delay discounting. How long are you willing to wait for a reward?
How quickly do you discount the reward value, depending on how far away it is in the future? And all of those things are kind of parameters, you could say tunings, of control circuits that will all inform the way that an organism tends to behave. And those, I think, are good kind of things to think about for where very basic personality traits come from. Those could be things like extroversion, which is sort of a level of, I mean, in animals or infants it's called surgency, it's really arousal, a level of energy, a level of reward sensitivity; versus neuroticism, which might be kind of a level of punishment sensitivity or threat sensitivity, and so on. And those things vary, and they inform the behavior of individuals. But what I want to really emphasize is that they don't determine it on a moment-to-moment basis. What we're doing right now is not determined by our score on extroversion or agreeableness or neuroticism. So what they do, though, is they inform the way an organism adapts to the world. So each of us, and this is true of other organisms too, forms these sort of characteristic adaptations to the things that we've learned about in the world. So we don't act just because we're extroverted. We act because we're in a certain situation and we know the people there or we don't. And we kind of know the social norms, and we have some desires, and maybe we're out on the pull, as they say in Ireland. And all of these things are going to determine or influence, I want to say not determine but influence, how we behave in that moment. And also whether we're shy or not is going to have some influence as well. That distinction between personality and these characteristic adaptations that emerge is really important, because that's a trajectory. Again, it's a historical process. You can't make sense of the behavior of an organism right now like it's a robot that just has some tunings. It has a history that has affected its habits of behavior and habits of thought. And ultimately, those habits of thought become so kind of ingrained that they become habits of character. They're meta-habits. It's not just, in this kind of circumstance I know that I should tend to behave this way; it's that, generally speaking, I am the kind of person who behaves this way across many different kinds of circumstances. So those character traits emerge. And this is a kind of a rejoinder to people who would say, well, your brain is just configured a certain way and you had no control over that. And therefore, A, you have no free will right now, and B, we can't really assign responsibility to you, because you had no control over the way your brain is configured, which entails everything that I was just talking about. And I would say, well, first of all, that's a circular argument. It assumes its conclusion: it's only true if you never had control in any moment. If you do have control in any moment, in guiding your behavior through time, then the way that your character emerges is something that you had a hand in. And this goes back to very ancient sort of writings on moral philosophy and so on. Cicero, for example, had really sort of interesting things to say about this. He wrote these letters to his son, who was off basically studying at university in Greece, and a lot of it was kind of advice on how he should behave, what were good ways to behave, in this work, which is called On Duties. And he talks about this.
He talks about the way that, first of all, our behavior is constrained by being human beings, which we generally don't seem too worried about, but it's the biggest constraint on us, the biggest one. [01:35:20] Speaker B: I know, it's ridiculous. [01:35:21] Speaker A: Human beings. Then it's constrained by our individual natures. That's the thing that people get hung up on, which is a tiny fraction. Right? It's a little, little variation on this big theme of human nature. And then it gets constrained or informed by our experiences. But then he also says, but then a lot of it is just up to us. Right? There's an up-to-usness about it. And it was because of that that we have some responsibility for thinking about our behavior through time, for thinking about our own character, and for cultivating virtues and trying to live in a way that we think is a good way to live. However that is informed, by societal norms, parental response or example, or indoctrination, religion, whatever moral, societal kind of touchstones you may have, that idea of character being an emergent thing that you have a hand in is, I think, a powerful one. [01:36:26] Speaker B: Man, I don't have free will in that I have to go wake up my children fairly soon for school. I know that after this, you're going to go hunt a woolly mammoth or something. And everything we've talked about, I have so many more questions. I know I just repeat this ad nauseam when I talk to people, but it just opens up more and more questions, everything that we're talking about. However, I want to make sure that we get to the question of artificial agency, artificial intelligence, and whether they can... I mean, you actually end the book with a short prologue on artificial agency. I think I have that, yeah: artificial agents. The epilogue, sorry, not the prologue. And so what do artificial agents, what does artificial intelligence, need to have free will, or do we want it to have free will, et cetera? [01:37:16] Speaker A: Yeah, it's a great question. I think you could ask, in the first instance, what does artificial intelligence need to have intelligence? And in many cases, we're talking about artificial general intelligence, right? If that's what people are after, if they would like to do that, and we can come back to reasons why they should or shouldn't, but if they want to do that, if you wanted to create an artificial system that you would think is actually intelligent in a general way, well, it's the generalization that's the key thing, right? And so what you might need is a system that is able to not just respond to loads and loads of data and pick out patterns in tons and tons of data, but to abstract. It's able to abstract principles, categories, causal relations, types of relations between things, such that it knows it can sort of make an analogy in a new situation. Oh, this is like that thing that I know about. It has these dynamics or principles that I think I know about from this other instance, and therefore that can guide my behavior here. And so when we use the term intelligence in natural systems, the payoff is always some kind of behavior, right? We see intelligence in some adaptive, appropriate behavior that an organism comes to, usually under novel circumstances. That's how it cashes out. So intelligent is as intelligent does, I think, is the motto there. Right? [01:38:43] Speaker B: I like that.
[01:38:44] Speaker A: So what's interesting, if you think about an artificial agent, is, okay, well, what would it take just to get to that level of intelligence? Ultimately, if it's cashing out in that way, then the system has to be able to do things, and it has to be able to do things in the world. So it may have to be an embodied agent. It could be a simulated world. But it has to be able to act. It has to be able to intervene on the world in a way that allows it to make causal inferences. Because if it's just getting data and it's pulling out correlations, maybe those data have causal relations in them, but it can't know what's a real causal relation and what isn't. And if it's acting, then it may need some reason to act one way versus another, right? It may need some kind of utility function, a master function like what we have, to stay alive, that anchors purpose and meaning and value. Things have to be meaningful. They're not just about things. They have to be for something. At least that's how it works in natural systems. So you may need an embodied system that can act on the world, that can learn as an individual, that can draw these abstract relations about things, and that can then generalize to other sorts of scenarios. If you get all of that, and it's doing it in a way that we think is adaptive and appropriate, then I think you might have artificial general intelligence. But in order to do that, you may have to have made an agent, I think, not a passive system that's just being fed data and operating on it and giving some kind of an output. It may have to be something embedded in the world in a way that grounds its drive to abstract things. It has to have a reason to do that. And that reason has to be relative to some goal, because reasons are always just relative to something, right? They're normative. And without that normativity in the system, you may not get agency. So to build intelligence, you may have to build an intelligence. [01:40:51] Speaker B: You could set its homeostatic set point. We could externally set it. Does it need to auto-regenerate? Does it need to maintain itself? I mean, is there something special about life, is a question I keep coming back to. [01:41:04] Speaker A: Yeah, that's a really good question, and I have an intuition about it that there is, but it's hard to articulate. [01:41:12] Speaker B: That's all I have, too. I keep saying I can't articulate it. [01:41:17] Speaker A: That becomes tricky. As I said, we have this master utility function, which is staying alive. And what that means is regenerating the sets of constraints that keep all the parts together. Right. So collectively, all the parts embody constraints that keep all the other parts in the same organization. That's what being alive is. [01:41:42] Speaker B: And it's through those constraints that you've made the argument and built up that we have things like free will. [01:41:47] Speaker A: Absolutely. I think that's where the normativity is anchored. It's where meaning and purpose and value get some reference point. But it's also the kind of functional architecture where, when you start building levels, it can have some sort of referential aspects to things out in the world, can have some internal representations that have meaning. And it's the meaning of those things that can drive, I don't want to say drive, that the system uses to inform its behavior. So that's one way to do it.
And I think you could probably build an artificial system that has that kind of architecture to it. Whether that's the only way to do it, I don't know. Maybe you can have a master utility function that isn't keeping my architecture persisting in this pattern. Maybe it could be some other function that it can then scaffold new goals and sub-goals off of. I think that's a really good but very much open question, whether this idea of an artificial intelligence as an entity would have to be actually an artificial life one. [01:42:58] Speaker B: If you stay on average, you're going to take another six years. But in your conversations with folks these days, this happened with Innate, where you had conversations and it made you think of the ideas that came up, that you develop in Free Agents. Do you have that yet, or is that still brewing, the germ? [01:43:17] Speaker A: I have the germ of an idea that is interesting to me because it's coming from both genetics and neuroscience. And that idea is that organisms in some way, in order to do all the stuff that we've been talking about, to have all this kind of control, basically come to embody a model of the world. And they do that. I mean, we talk about predictive models and generative models and so on in neuroscience all the time. A model of ourself, a model of the world; we can sort of run simulations, we can make predictions of what would be a good outcome, and so on. And in genetics, that idea is there as well: the idea that the genome in some way has a model of the organism, in a way that's difficult to understand, how that emerges, how it gets decoded or decompressed. And that model is also, in a sense, a model of the environment, because it's a history of all the adaptations that have been successful in the ancestors that led to that being, and that necessarily reflects the fittedness of the organism to the environment. So they're jointly about the organism and the environment in that sense. And so it feels to me like there's some underlying principles there. There's a sort of a through line where evolution may make this sort of generative model of the organism that includes a model of the environment, that allows development to build a brain that then can do the same thing, but on an individual timescale as opposed to over millions of years. And so, yeah, it feels like there's an underlying principle there that's pretty important, this idea of embodying a model of the world, but that has these different instantiations that might be interesting to explore. [01:45:06] Speaker B: That's a cool idea. All right, we'll end on a cliffhanger. My kids are totally going to be late for school. And that's okay, because I've enjoyed talking with you. Thanks for hanging with me, and I really appreciate the book. And keep writing; they're really enjoyable. So I appreciate it. [01:45:20] Speaker A: Cheers. Thanks a lot, Paul. Pleasure as always. [01:45:39] Speaker B: If you value this podcast, consider supporting it through Patreon to access full versions of all the episodes and to join our Discord community. Or if you want to learn more about the intersection of neuroscience and AI, consider signing up for my online course, NeuroAI: The Quest to Explain Intelligence. Go to braininspired.co to learn more. To get in touch with me, email paul@braininspired.co. You're hearing music by The New Year. Find them at thenewyear.net. Thank you. Thank you for your support. See you next time.
