BI 073 Megan Peters: Consciousness and Metacognition

June 10, 2020 01:25:10
Brain Inspired

Show Notes

Megan and I discuss her work using metacognition as a way to study subjective awareness, or confidence. We talk about using computational and neural network models to probe how decisions are related to our confidence, the current state of the science of consciousness, and her newest project using fMRI decoded neurofeedback to induce particular brain states in subjects so we can learn about conscious and unconscious brain processing.


Episode Transcript

[00:00:01] Speaker A: Like that's the really exciting thing about doing consciousness science right now, when computational neuroscience is exploding, when AI is exploding. Because now suddenly all this stuff, all these questions of how do we assess consciousness in other agents are suddenly like extremely topical and extremely relevant in a way that they weren't even 15 years ago. And so I had this series of very terrifying moments towards the end of my PhD and also then in the middle of my postdoc when it became very real what I was up against and how, whether it was imposter syndrome or the truth, I don't know. But how woefully inadequate I felt to be competitive in what I really wanted to be doing. This is Brain Inspired. [00:01:12] Speaker B: Consciousness, metacognition, confidence, functional magnetic resonance imaging, decoded neurofeedback. Do I really need to say anything more? Hey everyone, it's Paul. On this episode I spoke with Megan Peters, a neuroscientist at the University of California, Irvine, working on the intellectually challenging, sometimes thorny topic of the neural basis of our subjective awareness and related phenomena that I just mentioned. I use the word challenging because what I observe and what I experienced studying neuroscience is a general trend for many of us to enter neuroscience as ambitious, naive, well intentioned scientists wanting to figure out how brains give rise to minds. Then we learn just a little bit and realize that may not even be a coherent question, and our conception of consciousness isn't as clear as we thought. And we realize there are a billion other open questions that are more tractable. And our careers are then spent gaining traction on these billion other questions until we've lost sight of our original naive, yet lofty ambition. Some people like Megan Peters manage to keep that original ambition alive despite sometimes daunting obstacles.
I find this something to behold and to admire, and I enjoy learning about the various projects she's working on. Go to the Show Notes at BrainInspired.co, podcast 73, where I link to some of the work that we talk about. If you value this podcast and you want to support it and hear the full versions of all the episodes and occasional separate bonus episodes, you can do that for next to nothing through Patreon. Go to BrainInspired.co and click the red Patreon button there. Alright, I hope you enjoy this little stroll Megan and I take down the consciousness road. I just learned that you sang a cappella in a previous life and now I just want to talk about that the whole time. How would that be? [00:03:29] Speaker A: Let's talk about science a little bit too, but maybe we can touch upon that. [00:03:31] Speaker B: All right, we'll see if we get to it. Hey, thank you for being on the show. Welcome to the podcast. [00:03:37] Speaker A: Well, thanks so much for having me. This is really an honor and a privilege. I'm really excited to be here. [00:03:42] Speaker B: Oh, geez. The honor is mine, because this is, like, coming full circle for me, back to my graduate school days and what I studied and what I have always been interested in, and what I don't understand why everyone else is not also interested in these big questions. So it's understandable, but also so odd to me that so many neuroscientists are really hesitant to talk about consciousness. They don't want to answer questions about it. [00:04:12] Speaker A: And, you know, it's kind of a dirty word. [00:04:15] Speaker B: Yeah, it kind of is a dirty. It can be a dirty word. Yeah. But you. You have a lot of moxie. You have a lot of mettle or chutzpah. I don't know, courageousness. Or maybe it's just a lingering naivete. So I guess we'll find out today which of those it is. So it's been a long time on this podcast since we have really focused on consciousness, and I would love to start off talking about consciousness.
And this is the topic that drew you into science or into neuroscience originally, correct? [00:04:47] Speaker A: Yeah, it totally is. It was something that I was really starting to get interested in all the way back in high school, this idea of, you know, how do our brains create this phenomenological experience, the subjective aspects of our reality that we carry around in our heads. So it's the phenomenal consciousness that's kind of always driven me since high school, when I first started thinking about this stuff, and that's kind of carried me through all the way through college and through graduate school and my postdoc, and now in my own independent science career. It's always been that qualitative aspect of perception that I'm really interested in. [00:05:27] Speaker B: Yeah, you've managed against all odds to stay focused on some of these bigger picture questions. [00:05:34] Speaker A: Well, that's the retroactive storytelling. Right? Like, there have definitely been forays into other. Other things. You know, my PhD was in multisensory perception, which has a flavor of consciousness, in that we're knitting together these experiences that we have from multiple different sensory modalities into, like, a coherent percept. But it wasn't focused on consciousness per se. It was really more focused on perception. And then there were also some other experiences, wherein I did infant language development for a couple years, and I went and lived abroad and didn't do science for a year. But I always came back to consciousness, though, really, at the end. [00:06:13] Speaker B: Abroad. Was that with the a cappella group or was that, okay, all right, I'll stay away from the a cappella. [00:06:20] Speaker A: No, but I went and lived in Japan for a year after my undergrad just to go teach English and try to figure out what I wanted to do with the rest of my life. [00:06:30] Speaker B: So this is just a tangent immediately, but do you recommend that, something to that effect?
[00:06:36] Speaker A: Oh, like going and doing something completely different and living somewhere else? [00:06:40] Speaker B: Yeah. I mean, were you soul searching or were you figuring out what was the. Because a lot of people go and then don't come back, essentially. Or don't get back on track, right? [00:06:50] Speaker A: Yeah. I always knew I wanted to come back and go to grad school. I hadn't gotten the chance to do like a study abroad during college. And just because I transferred during my undergrad and so it kind of put me behind, I had to stay really on track for my studies. So I wanted to have that experience, to go and live in a completely different culture and learn to really more fluently speak another language. And so this was my chance. And so I do recommend it. I recommend going and living somewhere completely different and doing something completely different, even if you know where you want to come back to, because it changes how you think about things. It changes your perspective in a really powerful way. [00:07:29] Speaker B: Oh, for sure. That's cool. Well, okay. I put the word out on Patreon to my supporters that I was going to be speaking with you, and you wouldn't believe the flurry of questions that came in. Actually, you probably would, but one of them was the standard. I would love to hear a definition of consciousness, a clear definition of consciousness. So. So I guess let's get this out of the way. When you study consciousness and you already said phenomenal, I don't know if you said subjective experience yet, but when you study consciousness, what is it that you're studying? What do you mean when you say consciousness? [00:08:04] Speaker A: Yeah, it's definitely the subjective aspects of our experiences. 
And I'm really glad that we're doing this, this definition or taxonomy kind of thing at the beginning, because I think it's really important and there's all these different ways to define what we think of as consciousness or even self-consciousness, which is kind of a different thing. So the type of consciousness that I study is not so much the distinction between sleep and wakefulness or coma and wakefulness. So state consciousness like that is totally fascinating and has all sorts of really important clinical implications. But the part that I'm really interested in is the hard problem of consciousness. The phenomenal experience that we have, the personal, ineffable, intrinsic, private qualia that we have about our world. [00:08:55] Speaker B: There you go. Qualia right up front there. [00:08:57] Speaker A: Yeah, sure. Why not? [00:08:59] Speaker B: Really? Like, does anyone mean wakefulness anymore when. [00:09:02] Speaker A: Oh, sure. [00:09:03] Speaker B: When they say consciousness. Well, anyone who doesn't study sleep and wakefulness and coma and states. [00:09:11] Speaker A: I think so. [00:09:12] Speaker B: You think so? [00:09:13] Speaker A: All right, I think so. [00:09:15] Speaker B: Because to me, it's just. There's the standard. Okay, that's what this person is really interested in when they say consciousness. But maybe that's just what I'm interested in. [00:09:24] Speaker A: Well, I think that it's important to keep track of the fact that this entire field of study of coma or sleep versus wakefulness is an aspect of studying consciousness that's distinct from studying qualia and the subjective aspects of our experiences. There's a whole substrate there. You have to be awake to have qualia. And you can imagine robots that are not awake, that have no consciousness, that are zombies and that kind of therefore are in a coma. Like, you know, their bodies are reacting, but they don't have that experience. So I think that. [00:10:04] Speaker B: Right.
[00:10:05] Speaker A: You know, and also it has, like, all these clinical implications. Right. Like, it's really important for us to understand how to scientifically assess the level of consciousness whether you're in a coma or minimally conscious or locked in, all of those things. So, yeah, I think you're right that when colloquially we talk about consciousness, we talk about the qualitative experiences, the fact that there's something that it's like to be us, but scientifically, there are also these really other important aspects to it. So I'm glad we got to talk about it a little bit and just put them all on the table. [00:10:37] Speaker B: Yeah, no, I'm not denying that those are important topics, for sure. I just wonder if that distinction needs to be made all the time, and I guess it still does. So. But you don't just study consciousness. I mean, let's, you know, be clear. You've. I mean, it's all related to consciousness, what you study, the vast majority of it, but you study metacognition and confidence. And I could, you know, we can go on and talk about the different levels of. Well, you know what. And we will talk about what these different terms actually mean and how they're related. But you have used a, you know, a bunch of different methods to study these topics and different tasks, and you've studied different phenomena all related to these topics. So do you want to just give a really broad picture of how you approach the study of consciousness and metacognition and confidence? [00:11:29] Speaker A: Yeah, sure. So I guess we're going to get into a little bit later the kind of distinctions between confidence and metacognition versus consciousness. But broadly speaking, I think that they're also tightly linked, even though they're not the same thing. So we tend to use, in my lab and my research, just kind of a combination of all the useful techniques that we've collected from all of these different disciplines to try to all point them at metacognition and subjective experience. So we use lots of tools from cognitive science, from computational neuroscience, from machine learning, psychology, and then we also borrow stuff from bioengineering and neuroengineering. But then we've got also this healthy dose of philosophy that we want to make sure gets into the mix. Because as much as you can focus on the behavioral experiments, the computational models, the machine learning and the neuroimaging and the theory, you have to make sure that ultimately this package of data and of theoretical interpretations that you're building is getting at the question that you really want to answer. And that's why I think that having a strong influence from philosophy in this type of work is really critical, because we could really get bogged down in the trees and miss the forest, so to speak. So there's kind of an overview of, generally speaking, what we're doing. [00:12:57] Speaker B: Great. We'll get a lot dirtier here in just a little bit. But for a long time, it wasn't cool to study consciousness. And then I don't know if it was Christof Koch and Francis Crick's Neural Correlates of Consciousness work that made it cool again. But then it became cool again. And I kind of feel like the pendulum has swung back a little bit to not as cool anymore. And I wonder if you feel the same way, because I've heard you say that it's one of the most exciting times and one of the most frustrating times right now to be studying consciousness. So why is that? [00:13:38] Speaker A: Oh, lots of big questions. Okay, so I agree with you that it didn't use to be cool to study consciousness, and then it started to be cool.
And so there were some of these papers back in the 90s where it was like, we want to go from a stream to a flood of trying to actually open the floodgates to studying consciousness with this scientific rigor, using all these tools that are usually pointed at other disciplines and point them all at consciousness. So, okay, I don't think that it's uncool to be studying it now. I think that it's cooler than it used to be. And we're not quite to the uncool spot yet because there's this nice interface now between consciousness science and what consciousness science is maturing into, and also a lot of other hot topics right now, like machine learning and AI and stuff that really, in the past only seven or so years has become exciting and viable. Right? We got ourselves out of the AI winter, we solved backpropagation, so to speak, and now all this stuff is becoming hot again. And so I don't actually agree that it's kind of not cool to be studying consciousness. I think it's maybe cool in a different way, but I don't think it's not cool so much. [00:14:54] Speaker B: I agree with you. I don't think it's not cool. I can't believe I'm saying cool so much. But I agree that it's not uncool now because it still feels like there's a lot of momentum behind it. But I feel like the backlash is becoming a little bit louder recently. But I really don't know. So you're in the thick of it. And that's why I was asking you if you feel any of that or what the. What the feeling is right now in that world. [00:15:22] Speaker A: Well, there's certainly a lot of controversy, and I think that part of that controversy is that consciousness is one of those fields that's really easy to fall into pseudoscience and traps and kind of we're all fascinated with how do we know that, we know that we exist. And it sounds a little bit like all of us just sitting around eating cookies and smoking or something like that, but it's. But like there's a real science behind consciousness science, right? 
There is a real neuroscience, a real computational, theoretical focus on it. And so I think that some of the most exciting controversies that are happening right now that make it feel like the field is disjointed and maybe susceptible to some of these not so rigorous influences are because we're now really starting to get it right. We're starting to make these connections among theories to try to specify which theories are falsifiable, which ones are not falsifiable, and how to actually go about pushing on them a little bit, breaking these theories instead of just having them be nice kind of, you know, theoretically motivated pieces that you read in a journal that publishes opinions. That may be why it seems like we're infighting a little bit, because we're actually starting to dig our heels in and to try to sort this out in a little bit more rigorous fashion, which is, I think, a really exciting time to be part of this field, even though it's hard to get it right. [00:16:57] Speaker B: So what is the frustrating part? Just that there are. So that's the exciting part. It's an exciting time to study consciousness. And is the slight infighting frustrating? Of course, I'm really not in this world anymore, so I don't see it. [00:17:10] Speaker A: I mean, yeah, but I don't know that that's unique to consciousness science. Right. I think that every field's got it. Every field's got its competing theories, its incompatible hypotheses, but we are now, as a field, starting to do this in a formal way.
And so that's, as I said, the exciting part. The frustrating part is that as the field of the scientific study of consciousness, especially from a neuroscience perspective, becomes more mature and starts to rely more and more on these extremely sophisticated neuroimaging techniques and computational analyses and so on, we then start to experience, in a very real way, the frustration that comes with all of neuroscience research, which is that as beautiful as our neuroimaging techniques are, and as sophisticated as our computational analyses are, they're really all just kind of terrible at trying to get at the real questions that we want to get at. Right. And so especially in human neuroscience, where we want to study, like, qualitative aspects of experience beyond, you know, monkeys saccading to different targets on the screen, or rats sitting and waiting for their reward to report their confidence. We're really limited in human neuroscience. Right. We're just not allowed to go sticking electrodes in people's brains without a good reason. And so that's frustrating, but I don't think it's frustrating or it's not unique to consciousness science in that way. [00:18:41] Speaker B: Yeah. Well, that's interesting. So I know that you're a completely unbiased and objective perfect scientist, but totally. Do you have background beliefs, you know, as you study these things about, you know, like, panpsychism is very popular these days? I don't know. I'm shaking my head and kind of rolling my eyes and wearing my emotions on my sleeve. But do you subscribe to any worldview, you know, regarding, you know, what level of organism it takes to have consciousness, or are rocks imbued with certain amounts of consciousness, et cetera? Do you have that worked out in your head? [00:19:17] Speaker A: Well, I don't think rocks are conscious. So I don't think that I'm a panpsychist.
I don't think that we have good evidence that would in any way suggest that rocks or any other kind of inanimate objects are conscious. And so let's start at the other end of the hierarchy first. And we say, okay, probably you are conscious. Probably I am conscious. Even though we aren't able to actually prove that. Fine, probably it's possible. [00:19:45] Speaker B: That's the frustrating part. Even though we can't prove it. Fine, let's move on. [00:19:49] Speaker A: Yes, fine, let's move on. Let's just move on with our interruption. No, that's fine. So probably we're conscious. And probably if we created a sufficiently complex artificial being, I don't see a problem with the possibility that it could be conscious, that it could have phenomenological experience. I don't think we're there yet at all. I don't think your iPhone is conscious. But I do think that in theory, there's nothing maybe specific to the wetware that we carry around in our heads that forces that kind of implementation. But I don't think we're there yet at all. I think we're quite far from that, in fact. And then in between, you know, rocks and us, there's a whole lot of other stuff. And, you know, for example, like, I think my dog probably has phenomenological experience. Like, I think that, you know, a mouse in a lab probably has some level of phenomenological experience. And there may be differences. They're probably not reflectively self-aware, but like when. So my dog is, she's a husky and we live in LA and so she has this massive amount of fur and she's just like constantly hot and she's constantly, constantly panting, but I think she feels hot. I don't think that it's just like this, you know, thermostat in her head that's like, engage the panting mechanism, just like the thermostat in your house. So I don't think my thermostat is conscious. I think my dog is conscious. At least she has phenomenological experience. And mice? Yeah, probably fish, pill bugs, I don't know.
So it kind of goes down the hierarchy from there. And probably not so much rocks and trees. [00:21:30] Speaker B: Okay, so it's not a fundamental property of matter. [00:21:33] Speaker A: I don't think so, no. [00:21:34] Speaker B: Yeah, I just, I don't, I can't. Yeah, I can't even begin to be convinced that that would be the case. Anyway, this is just my latest rant and also, you've not met my dog. And I'm sure your dog has a different level of consciousness than mine. Mine is close to zero, if it's at all above zero. I don't wish you to meet my dog for that reason. Perhaps. So you've been doing this a long time now? Well, relatively. You're still a very young person, but a relatively long time. And been thinking about it. How has your conception of consciousness changed since you were interested in it? You could say since high school, I guess, because that's when you really started getting interested in it. Huh. Has anything surprised you? You know, and how's it changed? [00:22:21] Speaker A: How has my perspective on consciousness changed? Well, okay, so like I said, I really started to get interested in this stuff in high school. And I was always interested in the phenomenology of consciousness, that, you know, something that it's like. And then when I got to college and I got to read, you know, what is it like to be a bat? And all of these kind of philosophy of mind arguments, I thought that was just incredibly fascinating. And one thing, though, that I was really interested in during college was the reflective aspect of consciousness. So the fact that we know that we know that we exist. So almost like an epistemological argument more than the phenomenology kind of side of philosophy of mind. And even though that is still really interesting to me, I think that it is very possible to separate that from the phenomenology itself. So you could potentially, you know, create some sort of algorithmic program that imbues an artificial agent, say, with self knowledge. 
It knows that it is an agent. It knows that it exists. You know, your Roomba knows where it is in space kind of thing, and it knows that it's a Roomba, you know, on some level, but it doesn't experience that knowledge. So I think the way that my understanding has changed is that I used to think that these two things were really very tightly linked. And then as I learned a lot more about both aspects of our experiences, I feel like they can really be teased apart and that I really want to focus on the phenomenology aspect. [00:23:56] Speaker B: Oh, that's interesting. So do you feel like you have moved closer to understanding consciousness? And by that, I. Because I struggle to formulate what it is that I actually want when I say that I would like to understand something like consciousness. And the latest thing that I have sort of settled on in my head is that I think I would like to be able to articulate a question that makes sense to me, and I don't feel that close to being able to do that with consciousness. Sometimes I feel close and then I realize I'm not. And it always goes back and forth. But do you feel like you are closer to understanding the phenomenology or any of it? [00:24:39] Speaker A: Well, so there's kind of the cheating answer to this question, which is that when I was 16, I probably didn't know what I was talking about. But the more serious answer to the question, do I feel like I'm closer. [00:24:55] Speaker B: Or that we are. Sorry to interrupt as a science and. Or you. [00:24:59] Speaker A: So I think in one way, I think, yeah, we've come closer even in the past 10, 15 years. And this isn't just the fact that I'm not, you know, 16 anymore. But I think that the.
As I was saying before, I think that the theories of consciousness are starting to crystallize and that we're really starting to home in on the kinds of experiments that we would need to do in order to really like, not just test those hypotheses, but like potentially falsify them. Which is really kind of the whole point. You don't want to go seeking confirmatory evidence; you want to go try to break your theory. And I think that as a field, we're starting to get there. So I know that we're probably going to talk about theories that link consciousness and metacognition and stuff like that in a little bit. But one of the reasons that I think that we are starting to get there is that one of these theories, I think, holds particular promise not just from a theoretical perspective, but from like an experimental, practical, kind of experimental hygiene perspective, so to speak. And that it also relies on some seriously mature science in other areas that we can now draw upon and we can redirect towards pointing it at consciousness. So I do think we're getting closer. I don't think this is a five year project. So closer is all relative here. Without some sort of massive paradigm shift, one we don't even know what it looks like yet, I don't think we're going to solve the hard problem of consciousness anytime soon. [00:26:36] Speaker B: So you think it will take some sort of a breakthrough. We're not going to brute force our way, little step by little step, and then all of a sudden we just have enough data and theory and some falsifiable things and then we're going to be satisfied. Do you think it's going to take a breakthrough? [00:26:56] Speaker A: Well, it depends on what you mean by breakthrough, I guess. I think that the idea of iteratively accumulating knowledge towards a breakthrough understanding is maybe appropriate so that it's not that we need to suddenly, accidentally create Skynet in order to understand consciousness.
That level of accidental explosion of understanding. I think. No, I wouldn't say necessarily we need that, although that would be nice. But I think that the way that we interpret the incremental knowledge that we are gathering, that could fundamentally shift even as we continue to gather that incremental knowledge. So maybe we need something like that. [00:27:41] Speaker B: Okay. All right. Well, speaking of being young, although I guess so. When I was in graduate school, I was already probably 57 years old or something, but it's still. I did not know the word metacognition before graduate school, I believe. But when I was in graduate school and I was figuring out what it was, you know, what my PhD project was going to be, I wanted to figure out what was the closest thing that I could study to consciousness using an animal model. And that thing turned out to be, or at least my answer at that point was, metacognition. [00:28:19] Speaker A: So it's a good answer. [00:28:21] Speaker B: Yeah, well, I thought you might agree with that. I know we're kindred spirits in this respect. So definition time again, if you'll entertain it. What is metacognition? [00:28:33] Speaker A: What is metacognition? Okay, my favorite way of defining metacognition is to use some cute little toy examples, because I think it really drives home the message. So, in particular, the type of metacognition that I think is closely related to consciousness would be perceptual metacognition, although that's not the only kind. So, to use a cute example: imagine that it's dark and it's raining, and you're driving down this foggy road at night, and you think you see something up ahead in the world, and you have to decide what that thing is. So in this little toy example, this could be a deer or a car or a tree or like a hippopotamus or something like that.
And so your brain is tasked with this, with this decision that it has to make about the most likely identity of the thing that's out there in the world. But then what comes along for free with all of this decision making is also your sense of confidence. So, like, you decided that it's a car. How sure are you that it's a car and not a tree and not a hippopotamus? That sense of confidence is the perceptual metacognition system at work. It's evaluating not only the relative amount of evidence out there for multiple different alternatives, multiple different interpretations, but it's also evaluating the quality and fidelity of the decision-making process. So is the outcome of this decision likely to be true? So this is a perceptual metacognition example, but you can see then how it extends to all sorts of other domains. So like if you're a doctor trying to diagnose patients and you have like the X-rays and the biometric data and the, you know, this, that, and the other thing. You have to decide: is this, you know, lung cancer, or is it the flu? And your decision is one part of this, but your confidence is another part. So your confidence is going to tell you, am I sure enough to make a diagnosis and prescribe this medication, or am I not sure enough, even though I think it's the same disease, but I'm still going to now have to go run more tests. So same thing with eyewitness testimony. Right? We don't care whether you saw the guy and you say you saw the guy. We care how sure you are that you saw the guy. And so this is the metacognitive system in memory and perception and cognitive decision making.
[00:31:14] Speaker B: Yeah, you started off talking about perceptual metacognition and then tied confidence to that. Right. And started giving examples on it. And then I started wondering, well, are there types of metacognitive processes (again, it's been a while for me) that confidence doesn't come along for the ride with? [00:31:32] Speaker A: So like just any type of thinking about thinking. Sure. [00:31:37] Speaker B: Okay. Yeah. So I just didn't know if you had examples because the way that it is written in the literature, it's hard to distinguish whether something different is meant by confidence and metacognition or if they're just synonyms, which is totally fine. I just want to make sure I understand what we're talking about. [00:31:55] Speaker A: Yeah, I think that confidence is the output of the metacognitive system at work. [00:32:00] Speaker B: Ah, okay. [00:32:01] Speaker A: So you can kind of think about it that way. That the thing that really sets the metacognitive system apart from like the first order system, which is like, is this a deer or car or tree, or like which disease is it? Or that kind of thing. In my mind, the metacognitive system, the defining feature is that it is about the decision. It is not about the external world or the content of the memory or whatever. It's about the fidelity of the memory, the fidelity of the decisional process. It's about an internal process, it's not about the exterior world. That would be the defining feature of a metacognitive process in my mind, is that it's introspective in nature. [00:32:40] Speaker B: So potentially there could be some non-confidence-oriented process. Well, if confidence is the output, especially so the metacognitive process is just the self-referential aboutness of the system and confidence is the output of that system. All right, good. I mean, these are hard things, so it's worth digging in a little bit and taking our time somewhat.
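The distinction drawn in the exchange above, a first-order decision about the world and a second-order confidence readout about that decision, can be sketched as a toy signal-detection-style model. Everything in this sketch (the function names, the logistic readout, the noise parameter) is an illustrative assumption, not a model from Peters's papers:

```python
import math

def first_order_decision(evidence_a, evidence_b):
    """First-order system: pick whichever alternative has more evidence.
    This judgment is about the world (deer vs. car vs. tree)."""
    return "A" if evidence_a >= evidence_b else "B"

def second_order_confidence(evidence_a, evidence_b, noise_sd=1.0):
    """Toy metacognitive readout: how likely is the first-order choice to
    be correct, given the same noisy evidence? Sketched (illustratively)
    as a logistic function of the evidence gap, scaled by assumed noise.
    This judgment is about the decision, not about the world."""
    gap = abs(evidence_a - evidence_b)
    return 1.0 / (1.0 + math.exp(-gap / noise_sd))

# Foggy-road trial: strong evidence for "car" (A), weak for "tree" (B).
choice = first_order_decision(2.4, 0.3)      # -> "A"
conf = second_order_confidence(2.4, 0.3)     # high (about 0.89)
```

A bigger evidence gap, or less assumed noise, yields higher confidence for the same choice; with no gap at all the readout bottoms out at 0.5, a pure guess.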
But you believe, if I'm not mistaken, you believe that metacognition is the thing that currently separates us from whatever potential, like, deep learning or AI systems currently have. Is that right? [00:33:22] Speaker A: I think that it's definitely one of the things. I think that there's probably a lot of other things that separate us as well, sure. But the metacognitive system, in the sense that it has outputs like confidence, and that those confidence outputs or other outputs of the metacognitive system can drive all sorts of other aspects of the system, I do think this is something that is fundamentally different between us and especially our current iterations of AI. Because, I mean, you're a metacognition researcher, so you know that the output of the metacognitive system isn't just, you know, an acute confidence judgment. It's that that confidence judgment means something. It changes the way that you continue to sample the environment. It changes the way you update your own internal statistical model of the environment. So if you want to build a system that flexibly adapts to different contexts, that can learn to generalize outside its training set, that doesn't engage in catastrophic forgetting when it learns a new task, and that doesn't need 10 bajillion examples in order to learn something, I do think that the metacognitive system will help us make strides towards solving those problems that we currently have in AI. [00:34:38] Speaker B: Is meta-learning under that domain? Because there's a lot of work on meta-learning. [00:34:43] Speaker A: I'd say so, yeah. I think that meta-learning, where you are learning how to update yourself basically under different contexts, and there are a lot of. It's not exactly my field of expertise. I'm not an AI researcher.
But the idea that you build in a system that points at itself, that evaluates itself, in addition to evaluating the external world, I think that that is a metacognitive system, and I think that it will help us make strides in that domain. [00:35:13] Speaker B: Yeah, that makes sense. I'm now realizing that I'm experiencing the four letters, M, E, T, A, meta. And we've said it a bunch already, and it's like I'm reliving my graduate school days, because there's metacognition, there's meta-learning, there's metamemory. And I said meta, meta, meta so much. It's so strange. And everything seemed meta all of a sudden. And, you know, how many levels of meta are there? Well, I won't take us down that road, but let's talk a little bit about your actual research, because your account of confidence differs from the story that's sort of dominated the literature for a while now. The punchline of your most recent account, I would say, and you can correct me if I'm wrong, or one punchline, is that confidence may be an evolutionary byproduct, I'm not sure if byproduct is the right word, of simple detection. And this is backed up by some modeling work and some neural recordings in monkey superior colliculus that we'll talk about, I think. But you've come to this point by way of testing blindsight in people, stimulating people's brains with magnetic pulses, measuring people's brains using electrocorticography, which is like EEG under the skull, so sitting an EEG net on the brain, essentially, and many other things. Do you want to just give us the sort of greatest hits version of this line of work that's brought you to where you are now? [00:36:41] Speaker A: Sure, I can try. Yeah. So I think that you're right, that there is this intrinsic connection between metacognitive computations and the computations just underlying pure detection.
And so I think that this is one of the reasons why there does seem to be a very tight coupling between consciousness and metacognition, and that by studying metacognition, and by understanding the neural and computational correlates of metacognition, we might actually take a tiny step towards understanding phenomenology. But, okay, the background. You mentioned the blindsight paper, so I guess we can start there. I'm sure that a lot of our listeners here are familiar with blindsight, but just in case they aren't: blindsight is this very rare neurological condition where, in some very specific cases of damage to primary visual cortex due to disease or injury, you damage the primary visual cortex, V1, and instead of the patient completely going blind, the patient experiences a very particular type of blindness where they feel like they can't see. They claim to have no visual experience, no visual qualia, right? So no kind of subjective sense of the visual world, and yet they can still make decisions about visually extracted information. So one of the most famous examples is the patient where you set him at the end of the hallway and you put obstacles in the hallway and you say, navigate down this hallway, and this patient, there's a very striking video, actually, that you can go find online, where you see the patient navigating just fine around the obstacles. But then you essentially ask him how he does that, and he says, I don't even know that there's a hallway there. [00:38:34] Speaker B: Got lucky. [00:38:35] Speaker A: Yeah, exactly. It just, you know, felt like it, or those kinds of things. And so this is a very fundamental disconnect between the efficient and effective processing of visual information and the subjective experience of seeing. [00:38:50] Speaker B: Right.
[00:38:50] Speaker A: So very cool from a scientific point of view to be able to try to separate out the neural processes that go with the zombie-brain part of vision and the you-experiencing-it part of vision. Right. So that's cool, except that there are, like, what, three patients in the world who have this very particular, rare kind of blindsight. And of course people have created it in monkeys as well by lesioning the visual cortex and so on. But this really isn't a good way to get statistical power. And damaging the brain also comes with all sorts of other potential consequences. [00:39:25] Speaker B: Yeah. [00:39:26] Speaker A: So, you know, ideally what we would do is we would find some noninvasive, non-permanent way of disconnecting the visual processing bit from the subjective experiencing bit in, like, college students, so we can do, like, you know, 50 or 100 people at a time. And so the standard in the field has been to use visual masking, which is: I show you a thing and then I flash, like, a second thing right after it, and the second thing makes the first thing hard to see. And so then I can ask you questions about that first thing, like, what was the identity of that stimulus, what was the orientation of it, that kind of thing. And so what's been reported is that if I fiddle with the masking stimulus and the timing of the stimulus in a very particular way, I can create conditions where you can discriminate the target. You can tell what it is, but you claim you can't see it. And so this looks an awful lot like blindsight. Great. Like, now I can go do this in, like, hundreds of college students and, like, publish my paper in Nature or something. So this is great.
But as a scientific tool for studying consciousness, like, what I want to be able to do is then put someone in, like, an MRI machine or do EEG and be like, oh, this is the part that does consciousness while visual processing is held constant. Right? So I create conditions where you can see it and you can do the task, and where you can't see it and you can still do the task. And then the visual processing is matched and only the subjective bit changes. [00:41:03] Speaker B: So then whatever difference you see in the brain is the consciousness bit. [00:41:08] Speaker A: Exactly. That's the idea. There's a kind of slight nagging problem with this way of doing things, which was not something that I discovered, or that my postdoctoral mentor, Hakwan Lau, and I discovered together. It was brought up back in the 60s, which is that just because someone says they don't see the thing doesn't mean they actually don't see the thing. It's kind of an annoying argument, but from an experimental hygiene point of view, it's really important. [00:41:39] Speaker B: That applies to monkeys as well, right? [00:41:43] Speaker A: It applies to everything. And even if you have really good experiments and good participants, this is a logical argument. It's not that you don't necessarily believe the person. It's that if they report no experience, what that really just means is that whatever experience they had fell below some internal threshold for reporting that they saw the thing, or that they have some sort of confidence. So again, we didn't make this up. We didn't identify it. This has been around for half a century. But what we did do, Hakwan and I, was come up with this kind of cute way of maybe trying to test this without relying on that internal threshold criterion, where we show you a thing and we ask you to discriminate it, and we mask it so it's hard to see.
And then we show you another thing and we ask you to discriminate it, and we mask it so it's hard to see. And then we ask you which of the two you saw better, or which one you felt more confident in. So now you're not allowed to say, I didn't see it, right? You have to pick one. And the sneaky thing that we do is we actually make one of them completely physically invisible. It's just actually not there. [00:42:47] Speaker B: Right? [00:42:48] Speaker A: So I show you a thing, make you discriminate it, show you nothing, make you discriminate it. And then you have to pick which one you felt more confident in. And so the idea is that if the masking truly, completely turns off the conscious experience, but it doesn't completely destroy your ability to do the visual task to begin with, then we should be able to create a condition where you can do the task and you can get it, like, I don't know, 60% correct or something like that. But when we ask you which one you saw better, you are basically 50/50 in choosing the one where it's there and the one where it's not there, because subjectively they are exactly the same. [00:43:26] Speaker B: Right, because you're forced to choose between the two. [00:43:28] Speaker A: Yeah, exactly. Because you're forced to choose. So that's what we tried to do. And we were like, okay, everybody knows that blindsight exists in normal observers. We're totally going to find this beautiful little effect. And it didn't work. And no matter what we did, it didn't work. We changed the order of the questions, we changed the type of masking that we did. We later tried it with TMS. Nothing worked. [00:43:51] Speaker B: What was your, when immediately nothing worked, when the first things weren't working, was it just huge disappointment, or was it the thing you're supposed to say, where it's a science experiment and you think, ah, it's an interesting result either way?
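The two-interval check described above can be simulated with a simple ideal-observer sketch (a hedged illustration under textbook signal-detection assumptions, not the paper's actual analysis). It makes the same point the experiments kept running into: for such an observer, any sensitivity that supports above-chance discrimination also leaks into the "which did you see better" judgment:

```python
import numpy as np

# Two intervals per trial: one masked target, one physically blank.
# The observer must (1) discriminate the target's tilt and (2) pick
# which interval they "saw better" (larger absolute evidence).
rng = np.random.default_rng(1)
d = 0.8                                  # weak but nonzero sensitivity
n = 50_000
tilt = rng.choice([-1, 1], n)            # target orientation per trial

x_target = tilt * d + rng.normal(0, 1, n)   # evidence from target interval
x_blank = rng.normal(0, 1, n)               # evidence from blank interval

discrim_correct = np.sign(x_target) == tilt
picked_target = np.abs(x_target) > np.abs(x_blank)  # forced interval choice

print(f"discrimination accuracy: {discrim_correct.mean():.3f}")
print(f"P(chose real-target interval): {picked_target.mean():.3f}")
```

As soon as `d` is above zero, the observer both discriminates above chance and prefers the real-target interval above 50/50, so the clean "can discriminate but truly sees nothing" dissociation never appears in this kind of model.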
[00:44:05] Speaker A: Yeah, well, the first thing we thought was, well, we just don't have enough statistical power. [00:44:11] Speaker B: Okay, we're not doing it right. [00:44:12] Speaker A: What we're doing is trying to prove a negative, right? Proving that this doesn't exist. And so we need to have our people do not one hour per participant of data collection; we need, like, six hours of data per participant. So we brought people back and showed them stripes in darkened rooms for, like, six hours at a time. And then we started to be convinced that, no, this really does look like it's real. That no matter what we do, as soon as you can discriminate the identity of that target, you have some ability to, like, identify which one had the target in it. As soon as you can do the task with visual information, you have some subjective experience. There's no dissociation there. And so this was disappointing from the perspective that there's a lot of really beautiful neuroimaging work and modeling work and sophisticated science built on this kind of matched-performance-but-one-case-is-conscious-and-one-case-is-unconscious kind of stuff. This now meant that I was tasked with going and telling all these senior researchers that they might be incorrect, which was difficult. I think that it's not the case that blindsight is impossible to get in normal intact human observers. It's that it's a heck of a lot more difficult to get than we thought. You can't just go masking something and assume that it's unconscious. So that's kind of the story that we've settled on: this is a lot harder to get at than we thought. And so we need to be a lot more careful in making conclusions about, oh, well, we masked this and we didn't mask that, and one of them is conscious and the other one is not, and so therefore we go look at what the brain is doing, and aha, there's the consciousness bit. It's not that simple.
And so that's the punchline of that line of studies. Yeah, exactly. So, yeah, that was kind of the paper that kicked it all off in terms of consciousness. [00:46:17] Speaker B: Immediately you found out that it's so much messier than you'd ever wanted it to be. [00:46:21] Speaker A: Yeah. [00:46:22] Speaker B: But so one of the things that happened, for instance, is when you stimulated using TMS, it actually just increased the confidence in a way that did not necessarily match. The confidence increased with the detection, with the discrimination, like you were just saying, and it did not match the actual accurate level of confidence that one would hope for in the task. Is that correct? [00:46:48] Speaker A: Yeah, that's exactly right. The crazy thing about the TMS study was that TMS to primary visual cortex is assumed to kind of inject noise into the visual system. Right. And it's been shown many, many times that when you do TMS to primary visual cortex, you reduce performance on the task; you really mess up people's ability to use visual information, which makes sense. And so you would expect that confidence would also go down. Like, if I zap you and I make you terrible at the task, then your confidence should also be reduced, if your confidence is optimal in the sense that your system is able to evaluate its own decisional process in a way that optimally relies on all the information available to it, an ideal observer. And that's not what we found. What we found was consistent with some other studies that had also done TMS to V1 in the past. But the basic punchline is that we can screw up your ability to do the task and make you feel more confident. And that's just nuts, right?
So it shows this really important separation between the metacognitive confidence system and the kind of Type 1 zombie-brain decision making system, in that they probably aren't relying on the same computations, the same types of information, or if they are, they're doing different stuff with it. And so using the computational models to reveal what that stuff was, was really kind of powerful. [00:48:18] Speaker B: This most recent work that you're alluding. [00:48:20] Speaker A: To, or. So the TMS paper had a little bit of modeling in it too, just like the eLife paper. But then, yeah, you're right, there have been a couple papers since then that start to kind of say, okay, well, here are, like, four or five candidate models for how there is this separation between the Type 1 system, which is like, is it a car or a tree or a hippopotamus, and the Type 2 system, which is like, how confident do you feel? [00:48:47] Speaker B: Yeah, well, you know, when I started out, this account where confidence and the decision accuracy are not congruent. I guess the most recent accounts of confidence are that it's kind of a readout of the decision, of the probability of correctness, so they are inherently congruent in most accounts of confidence relative to decisions. And I don't know if we're skipping over too much to talk about the most recent work, but. So you guys developed a. Well, it's a neural network model. It's not a deep learning model, but it's a stochastic accumulator type neural network model, which is in my wheelhouse, I guess, to account for how confidence and the decisions might not be congruent. And maybe you can just walk us through the idea there. [00:49:38] Speaker A: Yeah, sure. So this model, I was really drawing upon some insights that had started back in 2012 with a paper by Ariel Zylberberg and colleagues.
And then there were a number of papers that I was also on, but that also were published by some other members of my postdoctoral group and my kind of wider academic family, so to speak, where it seemed like what was happening is that once you make a decision, once your kind of zombie brain, your Type 1 system, makes a decision, then instead of just reading out the stuff that went into that decision, there's a transformation of the information such that any information that is inconsistent with your choice gets kind of downweighted or ignored a little bit. So it's like this confirmation bias. And, like, we all do this in cognitive decisions, right? Like, you know, you pick your favorite political candidate and then you're suddenly like, la, la, la, I can't hear you, when anybody tells you you're wrong. But it seems like it's happening all the way down in perception too, which is kind of crazy. [00:50:48] Speaker B: I feel like I can verify that just in my own mother, though, but that's a different story. Sorry. [00:50:57] Speaker A: No, it's okay. Okay. So the perceptual system, though, the confirmation bias at the perceptual level, it suggests that once you've made a decision that this thing is a car and it's not a tree and it's not a deer, the only thing you care about is how much it looks like a car, how much carness there is. And so this is where the detection aspect kind of comes into play. So what matters is not, like, the relative evidence for car or tree or deer or hippo. What matters is just the overall detectability of the car aspects of the stimulus that's out there in the world. Those seem to be a primary driver of your confidence judgment. So with the neural network model, we wanted to figure out, well, this is very nice, we can write down some equations for this, but how in the world could this be implemented in the squishy wetware that we have in our heads?
So we came up with this accumulator model where the basic kind of cartoon-level idea is that some of the units in the network are very highly normalized, which means that they are inhibited by neurons that have opposing tuning preferences, and other neurons in this network are not very normalized, which means they just kind of accumulate evidence for whatever it is they like, and they don't listen to the surrounding network. The central hypothesis was that the level of normalization tuning, which is what this is called, of a particular neuron dictates whether it does Type 1 or Type 2 decisions. So the more normalized neurons, which are very good at kind of averaging out the noise, so to speak, those would kind of drive the Type 1 decisions, the actual just signal processing. But then the units that care about how much evidence there is in general, like, do you have enough evidence to make a decision to begin with, or how much can you detect what's out there in the world, those are the ones that drive the confidence judgment. [00:52:55] Speaker B: It's a strange thing. A few questions, just clarification-wise, because I can't remember. So you have a bunch of different units, and they all have different levels of normalization. And what you're saying is, when there's a high level of normalization, there's this balance, because they're inhibited by other units that have different preferences. And then there's a set with lower levels of normalization, which are the detection units, which can just kind of run away with the signal and wildly say, ah, it's there. In the model, and just the way that you think about it, is there a gradient across all the different levels of normalization, or are these really, like, two separate populations? [00:53:34] Speaker A: Yeah, we're thinking of it as a gradient in the simplest form. To just see whether it worked,
we built it with just kind of two levels, but we've also scaled it up and done some other simulations and thought of it as more of a gradient. So the level of normalization dictates the level of contribution to the decisions or the confidence judgments; it's not just, like, this binary thing. [00:53:55] Speaker B: And then, so during the task, right, you have this discrimination to make, and then there's the confidence report afterward. And it's unclear to me, not just in your work, but just in general, let's say. So let's say you're spot on and confidence rides on the back of detection, the detection system. Well, detection seems to happen fast, right? You would want to detect that there's something coming at you, whether it's a tiger or a loved one or something. You know, there's a big difference there. But so you detect it first so that your system can react, and then you'd have the confidence signal on top of that. And that's almost preceding the discrimination that you actually have to then make, of whether it is a tiger or a loved one coming at you. And I'm wondering if there's an order of operations, you know. And sorry, I probably could have dug deeper into your models to just answer this, but, you know, does the confidence turn on with the detection, because it has to? In the model, doesn't it come after the discrimination? And if this is too in the weeds, that's totally fine and we can move on. [00:55:02] Speaker A: No, that's good. I think this is actually a really important question, and I don't know that we have the full answer to it yet. There are some models, including some earlier versions of this model from us, that suggest that they happen kind of at the same time, that you can read out the decisions and the confidence judgments at the same time, just from different populations of neurons, or from the same population of neurons in some other models.
Then there's also the two-stage dynamic signal detection theory version, like Pleskac and Busemeyer back in the 2000s, where, you know, there's this second stage of accumulation. Yes, right. And so we're playing with all of these variants right now to see which one actually fits the behavioral data the best. Because the challenge is that you can use, like, TMS, and you can also do these other stimulus manipulations, where you really mess up confidence and accuracy; you push them in opposite directions. And a lot of these models of, like, confidence is a readout of the probability of being correct, they don't do that. Like, if you make someone less accurate, how are you going to make them more confident, according to those kinds of models? And so trying to get our models to break in the same way that the humans break, essentially, that's our priority. And so we're actually exploring these variants, where it's like, is it simultaneous readout or is it sequential readout? And we're trying to figure out which one fits these kinds of wacky situations the best. [00:56:38] Speaker B: I mean, these are the kinds of questions that I went into just completely naive. And, well, of course confidence comes after a decision. Oh wait, no, they're part of the same system and they emerge at the same time. Wait, no, confidence comes before. And it just becomes very messy and confusing. And that's why people like you are doing such great work to sort it all out for us. So thanks. So that's kind of. [00:57:01] Speaker A: You should come do it. [00:57:03] Speaker B: Oh, I'm retired. [00:57:04] Speaker A: Come do some projects with us. [00:57:06] Speaker B: I'll come be a subject. Yeah. So I don't know if you want to talk about the superior colliculus recording data as well that accompanies it, before we leave off and start talking about your most recent exciting big project. [00:57:21] Speaker A: Cool. Sure, yeah.
I'll just mention it very briefly, just because it's such a relatively small portion of this particular manuscript. The whole point of the model is that we have this diversity of normalization: some units are really strongly inhibited by their neighbors, and others really don't care about their neighbors. And this property of sensory cortex has been identified in other primary and kind of secondary sensory regions. So John Maunsell has some really nice work on this, Marlene Cohen has some really nice work on this, where they show that V4 and MT have some of these more and less normalized units. But so far, to my knowledge, it has not been shown in areas that have accumulator units. And so that's what we were hoping to find with the superior colliculus recordings. So we basically just borrowed these recordings from Michele Basso, because they were part of another project and they happened to be appropriate for answering this question. And it turns out that, yes, we saw some evidence, some preliminary evidence, that some of these accumulator neurons in the superior colliculus also showed this diversity of normalization tuning. And so that suggests that maybe we're onto something, but we're going to need a lot more work before we can make any bigger conclusions than that. [00:58:46] Speaker B: Yeah. All right, cool. There's so much neural data out there in non-human animal models that needs to be repurposed, and it's just sitting around. But I guess it has to be the right question. But it's a shame that so much of it is just sitting in databases. My postdoc advisor made this point all the time to us, because one of us would have a new idea to tweak the experiment and then collect for another few weeks. And his point was, okay, well, what's going to happen to the other data? It's just going to sit there then, and it needs to be used for what it was collected for. And who's going to use the old data? And it was a very good point.
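The normalization-tuned accumulator discussed above can be caricatured in a few lines (a loose sketch under simplified assumptions; the published model differs in its details): strongly normalized units track relative evidence, which is what a Type 1 choice needs, while weakly normalized units track total evidence, which behaves like a detection-driven confidence signal.

```python
import numpy as np

rng = np.random.default_rng(2)

def accumulate(ev_a, ev_b, normalization, steps=200, noise=0.5):
    """Accumulate noisy evidence for two alternatives; `normalization`
    sets how strongly each unit is divided by the pooled activity
    (0 = no divisive normalization, higher = stronger)."""
    acc = np.zeros(2)
    for _ in range(steps):
        drive = np.maximum(np.array([ev_a, ev_b]) + rng.normal(0, noise, 2), 0)
        acc += drive / (1.0 + normalization * drive.sum())
    return acc

# Strongly normalized units: relative evidence -> Type 1 choice.
choice_units = accumulate(1.2, 1.0, normalization=0.9)
# Weakly normalized units: total evidence -> detection-like confidence.
conf_lo = accumulate(0.3, 0.25, normalization=0.0)   # faint stimulus
conf_hi = accumulate(1.2, 1.0, normalization=0.0)    # strong stimulus

print("choice:", "A" if choice_units[0] > choice_units[1] else "B")
print("confidence signal, faint vs strong stimulus:",
      round(conf_lo.sum(), 1), round(conf_hi.sum(), 1))
```

The un-normalized total grows with overall stimulus energy regardless of which alternative wins, which is one way a noise injection could, in principle, hurt the choice while inflating the confidence signal.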
So you're continuing on, and you're testing different models, and that's cool; you've got the neural data to support your hypotheses. But I know that you have a recent new big project. I don't know if it's still gearing up or if it's geared up, but you're going to be doing closed-loop neural stimulation work with real-time fMRI decoded neurofeedback. What is that? Can you tell us about that project? Because it's really cool. [00:59:53] Speaker A: Yeah, sure. Thanks for asking. This is still ramping up. And actually, even when I was a postdoc, I was collecting some data on an early beta version of this project, which it turns out didn't work, for a number of reasons, as science goes, right. Just to orient everybody, the idea of decoded neurofeedback is logically similar to any kind of neurofeedback, where, like, I stick an EEG on your head or I stick you in an MRI scanner and I say, change your brain activity, and if you get it right, I'll give you money, or you get points, or you get to move this cursor around on the screen or something. But the thing that makes it very different from the kind of standard biofeedback approaches, like EEG neurofeedback, where you're changing, like, alpha power, or standard univariate functional MRI neurofeedback, where it's like, turn up the activity in the amygdala or something, is that decoded neurofeedback is based on specific spatial patterns of neural activity. So it kind of proceeds in two phases, where first I have to learn the patterns that go with what I want to induce you to think about. So I have to show you stuff while your brain is being scanned. So I show you, like, dogs and cats and chairs and red things and green things and so on, so that I can learn the patterns of activity that go with whatever thought I want to implant in your head. And so after I learn those patterns. [01:01:28] Speaker B: Of activity, how long does that take? How many? [01:01:33] Speaker A: It depends, usually.
So we've found that there are lots of. Well, we don't need to get into the details so much, but there are some ways of doing this so that you only need to have, like, an hour and a half of data. [01:01:45] Speaker B: Oh, wow. [01:01:46] Speaker A: So that you can learn some specific spatial patterns. Because there are ways to combine, like, my brain with your brain, with a whole bunch of other people's brains, and you could then train the classifiers that way. So it's nice in that sense that I don't have to have, like, 10 hours of data to learn the patterns of activity in your head. Yeah. So once we've learned those patterns, however we do it, then I bring you back and I put you in the scanner and I say, here, look at this, like, noise or TV snow on the screen, and change your brain activity, and if you get it right, I'll give you money. And under the hood, what's happening is I'm reading your brain's fMRI responses, your BOLD response, essentially, in as real time as the MRI allows. So I do real-time motion correction and real-time realignment, and then real-time comparison of the brain activity pattern being produced by you, like, four seconds ago to the pattern that I know goes with the thing that I want you to think about. So then we get a metric that says, how well do the patterns match? And then that metric is fed back to the subject, and all of this happens in about six seconds. And four to five of those seconds are the hemodynamic response function delay, so we really have a very small amount of time to do this. But it's been shown that over, like, three or five or ten days of training, what will happen is you, the subject, will learn how to produce a particular target pattern of brain activity that goes with spiders and not snakes or something, and that you can get better at doing this over time.
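The feedback step just described, comparing the current activity pattern to a learned target and returning a match score, might look schematically like this (toy random data; a real pipeline uses trained multivoxel classifiers plus online motion correction and realignment):

```python
import numpy as np

# Schematic of the decoded-neurofeedback scoring step.
rng = np.random.default_rng(3)
n_voxels = 500

# Phase 1 stand-in: a "learned" target pattern for the content to be
# induced (in practice, weights or a template from a trained classifier).
target_pattern = rng.normal(0, 1, n_voxels)

def feedback_score(volume, target):
    """Match between the current (already preprocessed) volume and the
    target pattern, mapped from correlation [-1, 1] to a [0, 1] score
    that could drive the size of the feedback display."""
    r = np.corrcoef(volume, target)[0, 1]
    return (r + 1) / 2

# Phase 2: each new volume (read several seconds after the underlying
# neural event, because of the hemodynamic lag) is scored on the fly.
good_volume = target_pattern + rng.normal(0, 1, n_voxels)  # pattern present
bad_volume = rng.normal(0, 1, n_voxels)                    # unrelated activity

print(f"score, pattern present: {feedback_score(good_volume, target_pattern):.2f}")
print(f"score, pattern absent:  {feedback_score(bad_volume, target_pattern):.2f}")
```

The subject only ever sees the scalar score, which is what makes it possible for training to proceed without the subject knowing what content the target pattern encodes.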
And the consequences of doing this mean that you will not only learn how to produce this pattern of brain activity, but it will also impact your cognition in other ways. So there are two particular ways that I think this is very fascinating. One is that if I ask you, were you thinking about spiders or snakes, and the target was spiders, you have no idea that it was spiders. You're like, I don't know, I was thinking about sandwiches; I have no idea what you were asking me to think about. So it's totally unconscious. But it also then has these lasting consequences. So there was this study a couple years ago by my colleague Vincent Taschereau-Dumouchel in PNAS, where they induced spiders and not snakes and showed that the amygdala was less reactive to viewing spiders after this kind of unconscious exposure therapy than before. And so even though the people had no idea that they were thinking about spiders for three days, their brains knew. And so then there were these kind of lasting consequences, lasting effects, which were really fascinating. [01:04:27] Speaker B: That is just terrifying to think of how much we're being influenced by processes that don't rise to the level of our own consciousness, let's say. I mean, it's just all the time, and we have no control over any of it. [01:04:40] Speaker A: It's terrifying. Luckily, though, with this technique, we're not implanting super complicated ideas. It's like, you know, a spider or a chair. We're not really quite to Total Recall yet. [01:04:51] Speaker B: Oh, I don't mean what you're doing is terrifying. I just mean in your daily life, as you move through the world, there are all these processes that happen that you're not aware of that are affecting whether you're going to be afraid of spiders, for instance. [01:05:02] Speaker A: That is a whole other conversation. [01:05:05] Speaker B: Yes. Yeah, yeah. You're not evil yet. I don't believe yet that you're evil. So what do you guys. You can use this for so many things.
What are you planning to use it for? [01:05:14] Speaker A: So this is what I think is the most exciting, and this is what we're now gearing up to do in my lab. So far this technique has been used to target patterns of activity that largely go with. They're the patterns that we learned from just showing you stuff and kind of using machine learning or classification techniques to say, well, what are the patterns that go with red stuff or with spiders or with cats or chairs or something. But I think that there's a whole other level of what we could be doing with this technique, because it is so spatially precise. It's so much more spatially precise than anything else that we have access to in humans in terms of non-invasive neurostimulation. So TMS, or transcranial magnetic stimulation, and transcranial electrical stimulation are maybe spatially precise in terms of being able to zap you with TMS in a particular spot, but what it does right there is really not. It's like on or off. Right. [01:06:09] Speaker B: In a harrowing way. [01:06:12] Speaker A: Yeah. It's pretty crazy that we're allowed to do that. Right. But this. If I were to find a pattern of brain activity that goes not with red stuff or spiders or cats, but instead a pattern of activity that goes with a particular parameter manipulation in my computational model. If I fit a computational model and I understand that this pattern of brain activity goes with the likelihood of being more conscious or less conscious, or of having a faster evidence accumulation or a slower evidence accumulation in a drift diffusion kind of framework, or other aspects of these computational approaches that we can fit to your behavior, and we go find the neural correlates of those, then we can push on those neural correlates.
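The drift diffusion framework mentioned here models a decision as noisy evidence accumulating toward one of two bounds: the drift rate sets how quickly evidence piles up, and the bound sets the speed/accuracy tradeoff. A toy simulation of a single trial (parameter names and values are illustrative, not fit to any data from this work):

```python
import numpy as np

def drift_diffusion_trial(drift, threshold, noise=1.0, dt=0.001, rng=None):
    """Accumulate noisy evidence until it crosses +threshold (choice 1)
    or -threshold (choice 0); return (choice, reaction time in seconds)."""
    rng = np.random.default_rng() if rng is None else rng
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold:
        # each step adds deterministic drift plus Gaussian diffusion noise
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if evidence > 0 else 0), t
```

Raising `drift` makes the correct bound both more likely to be hit and hit sooner; raising `threshold` slows responses but makes them less noise-driven, which is the kind of parameter manipulation one could try to localize in brain activity.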
Now we have a causal way of not only testing which brain patterns go with different stimuli, but which computational models are actually responsible for the generation of these metacognitive computations and the resulting confidence judgments. So that's my goal. [01:07:15] Speaker B: So you can really do some model testing directly against brain responses that are induced through feedback. Beautiful. [01:07:25] Speaker A: That's the idea. [01:07:26] Speaker B: Yeah, man. It's exciting stuff. [01:07:28] Speaker A: We'll see if it works. [01:07:29] Speaker B: Yeah, I'm sure it's going to work perfectly, just like everything else in science. [01:07:33] Speaker A: Sure. [01:07:35] Speaker B: Great. Well, good luck. Thank you. Okay, so I want to sort of draw back now again into the bigger questions. So we dove down into your research, which was exciting and fun. I always have this fear of missing out every interview I do. I think, oh, it sounds so fun. Why am I not in science? You invited me back, though. That was very kind of you. You view metacognition as really underlying our consciousness, am I right? [01:08:04] Speaker A: Yeah. It's part of this link that you mentioned before between metacognition and detection. Right. There seems to be this kind of fundamental connection, that what the metacognitive system is doing is asking, you know, what is the relative evidence for this being a car or a tree or a deer, or also nothing. It might be nothing. Right. And so that's a possibility that the metacognitive system needs to evaluate. And so it's, you know, tightly linked to detection. Like, where did the information come from? Is it outside my head? Did it come from inside my head? Did I hallucinate it? So I think that there are these tight links. Absolutely. I also think that there's a way that we can really get at studying consciousness by studying metacognition. [01:08:47] Speaker B: Well, I don't know.
I mean, I was thinking this from the top as well, because when you say consciousness, it's this grandiose term and somewhat amorphous, and people have different definitions and conceptions. But when you say metacognition, it somehow seems a little more tractable. And I wonder if that's a reason why studying metacognition, in order to study consciousness at some later point, seems like maybe a more tractable way to proceed. [01:09:16] Speaker A: Well, yeah, I think it's definitely a more tractable way to proceed than some of the other options out there. But also there's this whole theory of consciousness that really very tightly links it to metacognition, which is the collection of higher order theories of consciousness. This is the idea that there's some system that kind of points internally as opposed to externally, and it either evaluates introspectively the content of the first order representation, or it re-represents that first order representation, depending on which version of the higher order theories you subscribe to. So I think that there's not just this experimental convenience, and some of the computational work that we've shown may connect consciousness and detection and metacognition, but there's also this really nice theoretical component to it, which is that there may be a system like the metacognitive system, which has the entire job of evaluating the quality, the fidelity, the strength of the signal or something in that first order representation, to decide: does this make it into consciousness or not? And if it does make it into consciousness, what does it look like? What is the dimensionality? What are the bits of it that make it into consciousness, and others that kind of fall by the wayside? What are the qualities of the stuff that makes it into consciousness? So the set of higher order theories of consciousness really kind of formalizes this idea.
[01:10:47] Speaker B: So in that sense, the metacognitive process would give rise to consciousness by dint of its self-referential, internal-looking processing? [01:11:02] Speaker A: Sure, yeah. It could be part of that generative process. Right. I think that according to these theories, it seems like a metacognitive or self-referential system may be necessary to the generation of consciousness. I don't think I would ever argue that it's sufficient, though, especially because we don't know what the rest of that generative process looks like. So it seems like this is a critical component, that without it, at least in us, consciousness would not arise. Perhaps, but that's not the whole story. Like, we've got a long way to go. There's a big explanatory gap there. [01:11:36] Speaker B: So how many components do you think is it going to take? This is the speculative part of the show, so you can just say a number. [01:11:45] Speaker A: Oh boy. Too many to count, or just one. [01:11:51] Speaker B: It doesn't even have to be a whole number. But can you have metacognition without consciousness, then? [01:11:58] Speaker A: Yeah, I think you probably could. You can imagine, like, something that looks like metacognition, which would be like that self-referential component. Like, I could probably build some sort of metacognitive processing into, like, my Roomba. And, you know, it would have some sort of state of evaluating whether it has enough information to decide whether that's a staircase or not, to, like, go over the edge. So I think that you could build something that looks like metacognition into a non-conscious entity. [01:12:27] Speaker B: I think we ended up defining it operationally as keeping track of your decisions. And when you do that, you can just completely remove all conscious subjective experience out of it and be totally fine. Which was disappointing, but the scientifically responsible way to operationalize it at the time for us. So.
And that kind of sounds like what you're talking about with the Roomba, just keeping track of where the stairs are, et cetera. It's funny that you use that example. We literally just yesterday got a Roomba, which has been good. I don't know if it's metacognitive. [01:13:00] Speaker A: How's it working out for you? [01:13:01] Speaker B: Yeah, my children definitely think it is. So earlier you mentioned that there's in principle no reason why AI or non-biological systems could not be coded into consciousness, could not experience some subjective sense of the world in principle. So you're not one of these people that thinks that there's really something fundamental to the life or metabolic sort of processes, or being embodied in. What about just embodiment in general? Sorry, you're shaking your head, so I'm just. You didn't say anything, so I'm answering your question. You say, no, I'm not one of those people. What about embodiment in general? Will a robot, you know, will an AI need to have a body, like a robot, for it to be conscious? Or can we really just create consciousness computationally, completely separate from biological processes? [01:13:53] Speaker A: I think so, yeah. When I was shaking my head, that really was me saying that no, I don't think that there's something necessarily fundamentally special about biology in creating consciousness. Embodiment, though, is kind of an interesting idea. If you take out embodiment at the strictest level and you say not only is this thing not embodied, but it has, like, no sensory inputs, it has no way of connecting with anything besides itself, then I don't know what that would really look like in terms of a conscious agent, how it would be able. Because if we think about what the metacognitive system is doing in the generation of consciousness, if we think that that's part of the story, which, you know, you take it or leave it.
But if you think that that's part of the story, then part of what that system is doing is deciding, is this signal or is this noise, in terms of the content of this representation? And if it's signal, did it come from out there, or did it come from in here, inside my head? And if you don't have those distinctions to make, your metacognitive system is not going to have, like, the decisions to make that it needs to make in order to be the metacognitive system, so to speak. So I think that embodiment, sensory experience or sensory inputs. Sensory inputs for sure, I think, would be kind of a critical component. Embodiment too, because then you can kind of have a sense of yourself in space. And then we get back to the self-reflective, you know, self-awareness aspect. Yeah, the model of yourself in the world, and how interaction with the world drives the prediction engine that you carry around inside your head, and how that interacts with your ability to tell whether this is signal or noise, and all those things. [01:15:36] Speaker B: You even mentioned that we can't really know whether each other is conscious. Right. We presume that. I know that's sort of said with a hint of something. What's the term? Oh, I can't think of it. I'll move on. Yeah. Tongue in cheek. Thank you. Yeah. But there really is not a litmus test for it; we can't measure the degree of subjective experience someone is having. And how are we going to assess it in an AI system if we can't really assess it amongst each other? She's rolling her eyes, folks. It's becoming extremely. [01:16:19] Speaker A: The thing is that I was really being kind of cheeky when I said that before, right? Because, like, I didn't come up with this as being kind of a silly philosophical argument, right? Yes, yes. You know, how do we know that you're not a philosophical zombie? That I'm not a philosophical zombie? You know, it's.
It's one of the oldest questions that we have, and annoying. But, like, suddenly it's very relevant, right? Like, that's the really exciting thing about doing consciousness science right now, when computational neuroscience is exploding, when AI is exploding. Because now suddenly all this stuff, all these questions of how do we assess consciousness in other agents, are suddenly, like, extremely topical and extremely relevant in a way that they weren't even 15 years ago. So I don't know. I don't know what the answer is. I do know that we could, you know, try to connect maybe metacognition to consciousness, and try to measure metacognition and the computations that go along with it. Because in us, at least, feeling sure of something feels like something, just like, you know, being hot or seeing red or whatever. Like, there are qualitative aspects to first order perception. There are also qualitative aspects to metacognition for us. So maybe that's one way that we could measure it. But we still have that logical leap to say, well, just because I can measure it in you or me doesn't mean that you or I are actually conscious. [01:17:44] Speaker B: So it's time to start building metacognition into AI and see what happens. [01:17:49] Speaker A: Sure, let's do it. [01:17:50] Speaker B: Yeah. Okay. Maybe I will come join your lab. Who knows? Okay, Megan, so just a few more broad questions, if you're up for it here. So, sure. I opened up with how rare it is that people will even entertain the notion of speaking about consciousness, or even speculating. You know, a lot of people say that we're not ready yet, that we don't understand the topic well enough to ask, for instance, falsifiable questions. And I get a lot of. Maybe not a lot, but some people have responded that, look, there's a lot of very important stuff to do besides consciousness, or before we can tackle consciousness.
What do you think of those sorts of responses, that hesitancy to even want to study or talk about it? [01:18:42] Speaker A: Well, I mean, the cheating answer is, you could say this about a lot of stuff, right? That, you know, if not now, then when? Why is now not the right time? But I think also that the non-cheating answer is related to some of the other stuff that we've talked about, about the confluence of psychology and cognitive science with AI and philosophy right now. This really feels like it's actually a very pressing topic in a way that it wasn't even 15 years ago. So now we're starting to have very real, very concrete ways in which we need to think about, you know, as our iPhones get smarter, do they start to have feelings? When is it going to stop being allowable to kick your Roomba because it did something stupid? Right. Because now it has consciousness, or now it has phenomenology. And this isn't just science fiction anymore. And so I think that it's really important that we understand what necessary substrates go into consciousness, what that generative structure looks like, as much as we can. We've got a long way to go in that regard. And how to measure it appropriately, not just in each other in kind of a tongue-in-cheek way, but with really, like, quantifiable metrics of levels of awareness, levels of phenomenology. I think that if not now, then I don't know when would be a better time to really start asking these questions in earnest. [01:20:10] Speaker B: What is something that you wish you knew going into a neuroscience. I'll say career. I mean, we can start grad school days or whatever. Something that you wish someone had told you or wish that you had known. [01:20:25] Speaker A: Yeah. I think that I went into my PhD program very naive about career prospects in this field. I did not understand what the statistics of the job market looked like. I didn't understand what the timeline looked like.
[01:20:42] Speaker B: Did you not know the statistics, or is it that you heard them but it didn't register? Okay, I didn't either. Yeah, sorry to interrupt. [01:20:47] Speaker A: I mean, I don't even remember, like, hearing them. Maybe I did and I just, like, ignored it. I don't know. Didn't matter. Yeah, but I always knew that I wanted a career in academia, and so obviously this was the way to do it. And so I was gonna, you know, go do the lab manager gig, and I was going to do the PhD, and then I was going to do the postdoc, and then obviously I was going to be a professor. And I didn't really think about what that meant practically and what that meant statistically. And so I had this series of very terrifying moments towards the end of my PhD and also then in the middle of my postdoc, when it became very real what I was up against and how, whether it was imposter syndrome or the truth, I don't know, but how woefully inadequate I felt to be competitive in what I really wanted to be doing. And that was very scary. And I had, you know, this series of moments where it was like, oh, I'm going to have to move to some random little place, or I'm not going to get to do this at all. Even worse. And I think also a lot of this could have been mitigated by educating myself earlier, which would have been a good idea, but maybe you would never. [01:22:05] Speaker B: Have started if you really knew the truth. [01:22:07] Speaker A: Maybe. I would have saved myself a lot of stress, though, but I also would have been better at looking into the alternatives. So for me, it was always academia. That was the goal. That was the one goalpost that I was always aiming at, and there were no deviations allowed.
And I think if I had allowed myself the flexibility to recognize that there are plenty of worthy and exciting and really interesting and rewarding things to do that are not in the ivory tower, I would have saved myself a lot of headache and maybe found myself just as happy in industry, too. But I wouldn't even let myself think about that, which made it super terrifying when you look at the job market statistics and you say, oh, there are 300 applicants for every tenure-track position. Great. Like, this is like wanting to become an astronaut. I definitely did not know that when I went in. [01:23:05] Speaker B: Yeah, that's still on my list. I still plan on becoming an astronaut. So we'll see. [01:23:10] Speaker A: Good. [01:23:11] Speaker B: Megan, thank you so much for taking so much time with me here. Thank you for saying yes to me; I appreciate that you can't say no to anything. It has contributed much to my audience here and to me. So thanks for the time. [01:23:23] Speaker A: It's really been my pleasure, honestly. I really enjoy your podcast, and I enjoyed this conversation very much. So thanks for bringing me in and for having this conversation, for spending so much time chatting. It's really been a pleasure. [01:23:37] Speaker B: Oh, and I wanted to thank you also. I learned that I can probably go inside and not kick my Roomba, but maybe kick my dog. We'll see. We'll see. [01:23:47] Speaker A: Don't do that. Your dog is more conscious than you think. [01:23:51] Speaker B: Okay. Okay. That's a great note to end on. Thanks, Megan. Brain Inspired is a production of me and you. I don't do advertisements. You can support the show through Patreon for a trifling amount and get access to the full versions of all the episodes, plus bonus episodes that focus more on the cultural side but still have science. Go to braininspired.co and find the red Patreon button there. To get in touch with me,
email paul@braininspired.co. The music you hear is by The New Year. Find [email protected]. Thank you for your support. See you next time.
