[00:00:03] Speaker A: For me, I feel the lack of clear definition is exactly what attracts me to the problem. Because in some ways, then that means that someone needs to clarify those concepts.
[00:00:14] Speaker B: People coming into the field are going to be key for this. I think Hakwan, rightly so, reminds us as a community that we shouldn't be complacent, that we do need to cultivate and mentor people who are interested in this area, and for them to gain the right kind of skill set to come into it.
[00:00:35] Speaker A: Steve and I agree on a lot of things, and maybe that's one thing I like about working with Steve: his more positive outlook often complements my darker, more pessimistic outlook on things.
[00:00:49] Speaker C: Screw it. I've got tenure, I've got awards in various fields, and now I can do this consciousness. And maybe we. Maybe the field needed that, you know, outsider effect.
[00:01:00] Speaker B: Yeah, I mean, maybe it needed some of them and not others.
That's the diplomatic way.
[00:01:06] Speaker C: Let's go down the list here.
This is Brain Inspired.
[00:01:27] Speaker D: Welcome to the Consciousness Extravaganza Podcast. I'm Paul and I believe there should be a podcast called the Consciousness Extravaganza Podcast. I'm too busy to make that, but I am happy to have Hakwan Lau and Steve Fleming on for this episode, about one of my favorite topics: consciousness. Hakwan runs the Consciousness and Metacognition Lab at the University of California, Los Angeles, and Steve runs the Meta Lab at University College London. They're both interested in pursuing cognitive computational accounts of consciousness and the related phenomenon of metacognition, and in striving to inch the science of consciousness forward on solid empirical footing. And indeed, they collaborate frequently on these topics. As you probably know, there are a plethora of consciousness theories out there, and more are being added all the time. The main one that Hakwan, and more recently Steve, have been associated with is the family of higher order theories. Roughly, higher order theories posit that some part or circuitry in the brain has the capacity and mechanism to represent what's going on in other parts of the brain. So frontal cortex may be able to represent the contents of visual cortex and some of the associated statistics of that visual cortex processing.
So the second order metacognitive representation in frontal cortex in this example is like perceiving our perceptions. Like I said, there are many other genres of consciousness theories. Some are well known, including the global workspace theory, integrated information theory, and so on, and we touch on a few of these as we go along. Since both Hakwan and Steve are developing computational models to explain some aspects of consciousness, we talk about one of each of their recent models. In Hakwan's case, he is working on a reality monitoring model, which addresses how a higher order system of awareness might develop or might have evolved as a sort of reality monitoring system, asking if your perceptions are from external reality or are generated by your internal processing, your imagination. He likens this to a generative adversarial network, or GAN. In AI, these are the kinds of networks that are responsible for producing some of the AI-made artwork, like portraits of people who don't exist and so on. The way a GAN works, it has two subnetworks that interact. One subnetwork is called the generator, which produces data. In the AI art example, this would be a potential work of art. The other subnetwork, the discriminator, judges or discriminates whether the generated data or art is good enough to accept as art. So in Hakwan's reality monitoring model, your brain's generator provides the perceptual data and your brain's discriminator judges whether that data is from the outside world or from your imagination. Steve, on the other hand, is working on what he calls a higher order state space model. So whereas Hakwan's GAN-like reality monitoring model distinguishes between reality and imagination, Steve's model signals when awareness of some object is present and then can probe the content of that awareness. So, for example: I'm aware I'm seeing something, and that thing is an apple.
[00:04:58] Speaker D: But the model also actively signals when there's an absence of awareness, and in that case there's no content to determine. There's no difference between being unaware of an apple and unaware of an orange in the model. So that's obviously a very cursory introduction there. They describe the models a little bit more, we discuss them, and we also compare their models a little bit. We discuss consciousness in AI, functions of consciousness, and lots of other topics. And there's a big chunk in the beginning here where we discuss the nature of studying consciousness: the social and career aspects of studying something as unknown and often controversial as consciousness is, how it differs from other research areas, and how it also doesn't differ so much. So we talk a little bit about a lot of topics. I hope that this whets your appetite to dig deeper in the show notes to learn more. So go to BrainInspired Co podcast 99. Support the podcast on Patreon if you find that it's expanding your mind and advancing and improving your mental world or some of your own research, which I hope it is. You get to hear full versions of all the episodes, and there's some other stuff you get too. So go to BrainInspired Co and click on the Patreon button there. I find it unbelievably satisfying and gratifying to talk about so many fascinating topics. I hope that you take a moment from time to time yourself to just appreciate what a privilege it is to spend some time thinking about these things and listening to so many good people like Hakwan and like Steve.
[00:06:31] Speaker C: Enjoy.
So I have been reflecting lately about alternative ways that my vocational path could have turned out. And back in my graduate school days, I was, you know, if I'd continued that line of research, I was on my way to a research career, much like you guys have, perhaps.
But once I got my PhD and went on to a postdoc, that changed. So I had Megan Peters on, and, Hakwan, Megan was in your lab for a spell there and now has her own lab. But I thought it'd be fun for episode 99, before I hit 100 here, to have on some people that resembled what could have been my counterfactual path.
So thanks for coming on, guys.
[00:07:16] Speaker B: It's a real pleasure to be here.
[00:07:17] Speaker A: Yeah, thank you. Likewise.
[00:07:19] Speaker C: So we're going to get into some gritty, detailed research topics about consciousness, and Steve, before I even begin here, I just saw on Twitter the other day, which I try to spend as little time as possible on, that you're actually coming out with a book about all this, which you didn't tell me about before we set this up. What's the book?
[00:07:38] Speaker B: So it's not specifically on consciousness, but it's more on metacognition and self awareness. It's entitled Know Thyself, and it's coming out, being published by Basic Books in the US at the end of April, and also by John Murray in the UK around the same time.
And it covers the whole range of work on metacognition, ranging from animals all the way up to humans. It covers the development and evolution of metacognition. So lots of interesting things to talk about, but perhaps a little different to consciousness itself.
[00:08:11] Speaker C: It's probably got your political leanings work in it as well. I would imagine it has, yes. Yeah. Okay. So there's a lot of stuff that we're not going to get to that is in that book that we won't talk about on the show. Maybe I can convince you to come back on.
[00:08:24] Speaker B: I'd be delighted to do that.
[00:08:26] Speaker C: Well, we'll see. Hang on. I'll ask that question at the end of this interview and see if you have the same reaction. Okay. But before we get into the real, you know, research topic stuff, just for my own personal benefit, I want to ask about the social aspects and just your personal feelings and reflections about studying consciousness. And I know you've both studied a lot of things other than consciousness, things like metacognition, and that's all interrelated, et cetera. So I'll start with this. A lot of people on my podcast have either on or off air suggested that we shouldn't even be studying consciousness, and there are various reasons. So I want to throw a few reasons out to you here, and you guys can rebut and respond to why those are not good reasons not to study consciousness. So, one, the first thing, and I think Jeff Hawkins said this even on an episode, is that there are many more important things to be studying which are actually tractable, and so consciousness shouldn't be one of those things that we should be focusing on. What say you?
[00:09:28] Speaker A: I think maybe the first reaction I would have is like, to each their own. I think science works because different people do different things. If we all rushed to, okay, now Covid is the biggest problem and everyone just go and work on it, science wouldn't really work. I mean, different people should pursue different things, and that's one of the beauties of being a scientist, the perks: we can just do whatever we want, to the extent that it is actually somewhat tractable. And so the other part is really the tractability part, and I actually don't find it so intractable.
Ultimately, we are measuring something that is not so elusive or nebulous. It's a little bit trickier: we are really measuring something via self report. But so are many other people in psychology. I mean, if you study episodic memory, or memories in general, or psychiatric diseases, quite often self report is a key component. And we are just studying that, focusing on that, and via that, inferring people's subjective experience. So it's a bit like the stars: you can't touch them, but you can watch them through your telescope. Likewise, I can't directly measure your experience, but I have a pretty good indirect inference through your subjective reports or self reports. So I don't quite see why it's by nature intractable.
[00:10:54] Speaker C: Steve, I know that you've always been interested in consciousness, but you've kind of wended your way there through metacognition. So there's a case to be made that metacognition is more tractable, and maybe the study of confidence and how that relates to our social being and stuff, maybe that's more important than phenomenal experience. And by the way, when we say consciousness throughout the entire episode here, we're just going to assume we mean phenomenal subjective experience, not waking versus sleeping states. So what do you think when someone says, why are you doing that? Aren't there more important things to do?
[00:11:30] Speaker B: No, I would echo a lot of what Hakwan said there. I think that it is a privilege for each of us in science to, in some sense, decide what we find most interesting. We want to be excited to get up in the morning and go and work on that problem, and everyone's going to have a different sense of what the most interesting thing is. With the idea of tractability, though, I get somewhat frustrated, because I think the transitive notion of consciousness, being conscious of something, is completely tractable. As Hakwan was saying, in the lab, you know, we can start to measure that in terms of subjective reports. There are some things that we're aware of at certain times and other things we're not aware of. We can ask subjects, we can model their reports in various ways, we can start to write down models of the mental or neural representations that would, you know, enable that kind of report. The frustrating thing sometimes is that people assume that if we're doing that kind of research, then we must have to also solve the mind body problem at the same time. Right. You must somehow have to, like, grapple with that at the same time. And I think that's just not the case. We don't have to grapple with it any more than someone researching emotion or episodic memory has to grapple with it.
[00:12:48] Speaker A: Right.
[00:12:48] Speaker B: So we're looking at something quite constrained, and there's a type of computation that allows the human mind to become aware of certain mental states and not others. And that's what we're interested in.
[00:13:03] Speaker C: I mean, I suppose tractability has a lot to do with operationalizing terms and phenomena that you're studying. So I was going to ask about. People argue that there's just no scientific way to study subjective awareness. So you guys kind of addressed that with the tractability question. But what about the question of consciousness not being a well defined phenomenon? And don't worry, I'm not going to ask you to both define consciousness. I'll ask you a related question in a few minutes.
But yeah, what about the aspect that it's not well defined enough, or it's not characterized enough, let's say, to really study it?
[00:13:44] Speaker B: I mean, I think that that lack of precise, constrained definition, that would also affect other fields of cognitive neuroscience. So, you know, everything we're studying here in terms of psychology is a somewhat fuzzy concept, but as long as it's grounded in empirical criteria.
[00:14:03] Speaker C: Don't tell psychologists that though.
They don't like that.
[00:14:09] Speaker B: I think the reason it seems different is that it goes back to my previous point that like people interpret it as meaning grappling with the hard problem, the intrinsic subjectivity aspect. And I think that, I don't know, I used to be bothered by this a lot and I'm less bothered by it now.
[00:14:26] Speaker C: That's interesting. Yeah.
[00:14:27] Speaker A: For me, I feel the lack of clear definition is exactly what attracts me to the problem. Because in some ways, then that means that someone needs to clarify those concepts, like what Tulving has done for memory. Someone should be doing this work for consciousness. And again, it kind of relates to trying to find ways to make myself useful. Having a bit of philosophy background, and my work being quite interdisciplinary, I feel this is exactly the kind of groundwork that I like to do. Whereas if you put me in a field that is very well defined, where everything is already written in equations, I would just be doing the kind of geeky work that may not be so suitable for me, because I'm not really a geek. I'm more like a soft, interdisciplinary jack of all trades, so this actually fits my skill set. And it might fit some of your audience's skill sets too: if they like to think about concepts and try to pin them down, sharpen them, make them tractable, then that's a good field to be in.
[00:15:23] Speaker C: Is it fair to say then that you might prefer or enjoy working in a Kuhnian pre-paradigmatic sort of state, if that rings a bell to you?
[00:15:33] Speaker A: Yes, I think roughly, yeah, more or less so, yeah.
[00:15:36] Speaker C: Because once the paradigm is set, then we're all just doing, oh, what does Kuhn call it, then we're all just kind of worker bees or whatever doing the normal science. Is that what's called normal science? Yeah, that's right. Hakwan, you've written recently about gurus and back scratchers and how research in consciousness progresses, with a little model. Tell me about gurus and back scratchers.
[00:16:01] Speaker A: Yes, that's a little fun and maybe slightly provocative paper, that's putting it mildly. But the terms gurus and back scratchers were not us calling other people names. They are actually technical terms in the literature. So Dan Sperber created the term gurus. I mean, both terms come from discussions in the philosophy of science.
So in a sense they are technical terms. So we're just quote unquote using them and not trying to come up with insults for other people. And we didn't refer to any current contemporary colleagues.
[00:16:34] Speaker C: Those terms have an insulting ring to them, though, don't they?
[00:16:37] Speaker A: I think so. I think that might be part of the intended effect, to raise awareness of these problems. But we don't particularly point fingers and say you are a guru, you're a back scratcher. I think in some ways we all are. That's the point. To me, it's just the hardest problem.
If the field has a problem that is really intractable, to me it is not even the mind body problem itself. Even that, I think we may be able to inch towards. The really difficult, challenging problem for me is the socio-historical aspect. We have so many people who think of this as not a normal scientific problem, and then they just come along and say whatever they want. Usually they are already very well established in their careers through some other means, and then they come in and say things that clearly they would not have accepted as up to their own standard, if you judge it by their previous research. But they feel, now I come in to study consciousness, this is my last retirement party or something, and they're going to go big or go out and just say something rather radical and unhinged. And then the rest of the field, yeah, often those are the gurus. Those would be what Dan Sperber called the gurus. And because of their status, it creates. I mean, usually the reaction is too kind. Some people like myself, you can probably already guess I'm not a big fan of this kind of stuff. But then at the same time there's a combination of appeasement and other motivations, so that mostly people are not as negative about this kind of high-sounding, highbrow speculation as I am. And we somehow accept that. And some people even benefit from that by associating themselves with these gurus, and that's what we call the back scratching. I think both phenomena are very rampant. And I'm not saying I'm completely free from any of this. I'm part of this field, and probably at some point some other people would say, there you are being 2% guru and 5% back scratcher, or maybe more. But I think we are all guilty to some extent. The point is not to just say individuals have done anything wrong, but that it really creates a kind of vicious cycle to some extent.
People often don't take the literature seriously and they just come in and say something. There's so many new ideas, like really too many new ideas relative to new experimental paradigms and new findings that are robustly replicated.
[00:19:08] Speaker C: Yeah. So the overall idea, though is that you have an expert in some field, let's say, oh, let's say physics, just to be random, who then decides, well, I need to.
Consciousness is not solved. So here I'm a physicist, so people respect my opinion, or they don't even have to think that, but they offer an opinion. And because they are a renowned physicist, then a bunch of back scratchers say, well, we must trust that person in all realms of science because they're an expert in their own domain.
And the point is that that potentially hinders progress in consciousness science. Is that right?
[00:19:46] Speaker A: Yeah. I think in that case I can mention that Dan Sperber actually used the case of Nobel laureate Roger Penrose as an example of a guru. I mean, we didn't say that he was one, but Dan Sperber, who coined the terms, said that his case, exactly the case of a theoretical physicist coming to speculate on the nature of consciousness, might be considered an example. The gurus I can actually understand better. The back scratching is in some ways more complicated. So those people, some of them, I think, are genuinely impressed by the stature of those figures. And some of them actually probably have something to gain by being up in the popularizing domain and by mentioning these great names. They kind of associate themselves at that level. But I don't think anyone is so evil as to just wake up in the morning and look in a mirror and say, okay, today I'm going to be a sycophant.
I don't think people do that. I think it's somewhat implicit in the way that the game is set up, or the competition is set up, because there isn't a whole lot of public funding in this field. So it's not like you can actually just write a normal NIH grant and say, okay, I'm going to attack Roger Penrose's ideas; that probably wouldn't really work. And so because of the paucity of public funding, a lot of it becomes private funding. And I think private donors are in some ways more easily impressed by this kind of association. If you drop names and say, oh, I had tea with Sir Penrose last month at Oxford and we discussed this idea, that probably wouldn't fly so much in an NIH review panel, but it might fly better in a private donation situation. Of course, I'm just making this entirely hypothetical; I have never had tea with Sir Penrose. But you can see why this kind of back scratching might be somewhat unique to our field.
[00:21:41] Speaker C: Yeah, it's complicated.
[00:21:43] Speaker B: But I wonder whether one of the reasons for that situation, which I also recognize in consciousness science, is that it has been taboo for so long for junior researchers to be getting in and doing this from the word go. Traditionally, it's been the case that people only turn to consciousness when they are feeling secure and senior enough to kind of risk doing so. And I wonder whether that's the point where this intersection of freedom and security and concern for legacy kicks in: thinking, okay, this is a big problem, I'm going to go for it late in life. That's kind of been a feature of consciousness science. But I'm optimistic that that will hopefully shift now that it's becoming more accepted and okay for early career researchers to be working on these problems. They're now getting steeped in the methods, you know, doing good psychophysics, good modeling and so on.
I don't know, maybe I'm just being naive and too optimistic, but I hope that that position that Hakwan sketched there will eventually dissolve, and it will become a field more like other fields where that's not so much of a problem.
[00:23:00] Speaker C: I mean, it's kind of interesting because you could actually then in that model, think of the gurus.
We have to have some gratitude toward them for bringing the regular sort of more inside, like the neuroscience type and psychology type of research on consciousness, into the more normal domain of science, you know, because they were established and were able to, like, say, screw it, I've got tenure, I've got awards in various fields, and now I can do this consciousness. And maybe the field needed that outsider effect.
[00:23:36] Speaker B: Yeah, I mean, maybe it needed some of them and not others.
That's the diplomatic way.
[00:23:42] Speaker C: Let's go down the list here.
Okay, well, let's talk about the great unknown a little bit. We're going to start kind of broad and then we're going to get down into some of the recent models that you guys have established here, by way of talking about consciousness in AI. There are a bunch of different theories, and as has already been mentioned, there are more and more and more. And it seems like there should be fewer and fewer, but, you know, the number of theories keeps growing. Let's say eventually we'll have a satisfying explanation for consciousness, right? How much of, you know, all of the current and historical debates among the different theories do you think will just disappear, like, into oblivion, versus will all or many of the theories still carve out some space within the eventual acceptable explanation? Is it going to be like the élan vital that people always refer to, that used to be considered important for an explanation of life and then just disappeared, even though we still don't know what life is, but we accept some surrounding characteristics and such?
[00:24:56] Speaker B: Yeah, I would think that the analogy to vitalism and explanations of life is a useful one. And I would be of the opinion that when we have an explanation of the functional aspects of consciousness, then this notion of a kind of independent, fundamental, magical property that's irreducible, the force of that problem, will start to dissolve. So just as we now understand things like DNA and homeostasis and so on, we don't go around saying, okay, we understand all that, but what's the magic of life?
Why is this thing living?
I think that that will dissolve. Now, like I said before, the, you know, mind body relationship is going to remain a feature of psychology, of just doing science of the mind. I don't think that's going to disappear, but I think its force as a problem in science will diminish in a similar way. I don't know what Hakwan thinks about that. He's more steeped in the philosophy than I am.
[00:26:05] Speaker A: Well, I dabble a little bit more, but it doesn't mean I have anything more informed to say than you already did.
[00:26:11] Speaker C: That's that slightly slouched spine of his coming through there.
[00:26:14] Speaker A: He dabbles, yes. Now he's trying to dodge the bullet or something. But no, I think we think very similarly on this issue, Steve. And I feel that historically the kind of cognitive approach has always been there, I mean not always, but has been there for half a century or more, and I think it's been making steady progress. My money will be on that. I will bet on this eventually dominating and accounting for the more tractable aspects. So there will be some intractable aspects, and some people might be panpsychists, and they believe in metaphysical views, or they think metaphysical views are interesting or valuable. I don't deny that, but I just feel that those might be what we think are the less scientifically tractable problems. But if you talk about the kind of consciousness that we study, we basically try to distinguish why some brain processes are introspectable and some are intrinsically not introspectable, even when you try. Some are accessible, not just accessed at the moment, but in principle possibly accessed by you, and some are just closed to that access. So figuring out these kinds of issues, I think the cognitive approach will ultimately dominate, and we will start to make good predictions and maybe applications. Essentially, then, it will be like the case of life: people would think, okay, these guys have pretty much figured out the important aspects that are worth figuring out, and there might be some philosophical leftover problems, and that's that.
[00:27:52] Speaker C: Do you think an ultimate explanation will feel intuitive or do you think we'll just have to get used to it? Like relativity or even the idea of gravity, Newton's gravity, we all grew up with it, so it doesn't feel foreign, I suppose. But when you really think about it, it doesn't make any sense.
And we just have to kind of get used to physical laws, right? And then I guess they start to feel somewhat intuitive. Do you think that that's going to happen with an explanation of consciousness?
[00:28:20] Speaker A: Yeah, I think so. I think that's the beauty or the attraction of the cognitive approach, because once you try to write down exactly what the cognitive mechanisms are, essentially you are saying that the brain computes, or does something that is akin to computation, and that it can implement certain kinds of algorithms. And once you write down the algorithm, you can actually write the program yourself. And once you're at that level of understanding, things are going to be intuitive enough. I mean, intuitive as in my computer is intuitive. I don't understand every component of it, but I understand, okay, the memory presumably works in this mechanistic way, the display probably works in this mechanistic way. I get enough of a grasp. But I don't think it's a philosophical, mind-bending situation like quantum physics.
[00:29:06] Speaker C: Do you have something to add?
[00:29:08] Speaker B: No, I would agree with that. I think it's very hard to know what intuitions will be like. I mean, I think the intuition of someone 200 years ago walking into this room, seeing us on a zoom call across three continents would be very, very bizarre.
So I think that, yeah, I would agree that at the moment I can see how people can be taken in by the force of an explanation, saying, well, there's going to be something left over. There's going to be something that's counterintuitive about a mechanistic explanation of consciousness. But I think as that research program progresses, as that framework gets explanatory power, then the intuitions will evolve with it.
[00:29:55] Speaker C: So, you know, all scientists have biases, and in good science the best we can do is try to work around those biases, right? But all science also still proceeds by guesses about what's true, and then you can, you know, test those guesses and the circle continues. So I want to ask you both two questions, and they're related. One is, what do you want to be true about consciousness? So this relates to the biases. And the other is, I don't know if you have a specific label, like when you wake up in the morning, as Hakwan did about 20 minutes ago, whether you have a ready-made label for what you actually believe to be true. And what you want to be true and what you believe to be true don't necessarily need to be the same thing, especially if you're a really good scientist. Right.
[00:30:42] Speaker D: So what do you want to be.
[00:30:43] Speaker C: True about consciousness and what do you believe is true? And they can be the same thing, I suppose.
[00:30:48] Speaker B: What are the options?
[00:30:50] Speaker C: Well, so you both subscribe to like the higher order theories of consciousness. Higher order theory of consciousness. Right. And.
Well, I don't know, I wanted to leave it open ended so you could say that you want and believe higher order theories to be true and that's fine and we can move on. But you know, is it for me, like I just every. Let's say I read about integrated information theory and then I read about higher order theory, and then I read about global workspace and there's something attractive about all of those. Panpsychism is not in that list. We can come back to that. But there's something a little bit sticky about a lot of those ideas. And then I move on to the other one and it's the new bright, shiny thing. And then I go, oh, that makes sense, this aspect makes sense. And then some things don't make sense. And so I'm constantly sort of grasping for the right question and what even is the right question.
[00:31:40] Speaker B: Yeah, no, I mean, like we just talked about in the previous segment, I want the cognitive framework to be a useful one for explaining consciousness, because I think that if we can't make progress within that framework, then I don't think we're going to have something that looks like a satisfactory explanation.
And I think that to test those kind of frameworks, we need to be looking at contrast cases. We need to not just be thinking of consciousness as a fundamental property of systems. We need to be looking at empirical data that contrasts aware versus unaware states and so on. So that's what I would like to be true because otherwise it's hard to make progress on what an explanation would look like.
[00:32:30] Speaker A: Yeah, I have something similar. I would really want the animal models to be sufficient. That is, I really want that if someone studies consciousness in monkeys or even rodents, I would really want that to be a good model. As in they.
I basically go about assuming that they have very similar conscious experiences as we do. Because to my mind that's really the only way to do good neuroscience. I mean, human neuroscience is important, but so far still we rely a lot on those models.
[00:33:00] Speaker C: Oh no, I'm anticipating what's coming about what you believe.
[00:33:05] Speaker A: No, I actually don't firmly believe otherwise, but my collaborator and mentor and friend Joe LeDoux often gives me a slightly hard time about this, because he of course studied rodents and is known for that work very much. But he actually is a skeptic. Sometimes he would say, well, I'm sure they have some experience. I mean, he wouldn't even say sure. He would say, presumably they have some very simple experience. But when it comes to some of the more self referential experience, like emotion. Yeah. Do you think a monkey would ever feel full blown jealousy? Like grown adults would and.
[00:33:40] Speaker C: Like elaborated like we do?
[00:33:42] Speaker A: Yeah, exactly. Even when we think about small children, they might have some sort of simple envy. But as far back as I recall, my emotional life wasn't as rich as it is now.
So there are differences that might ultimately limit the usefulness of the animal models. So I try to make peace by studying simple Gabor patches and hope that the monkey would see them the same way I do. But that's a bit of wishful thinking. We would never fully know for sure. Right?
[00:34:16] Speaker B: Yeah, yeah. That's interesting to hear that you're maybe debating on either side of Joe with that. Because I would probably side more with Joe, in the sense that, and this is shading into our work on metacognition more than maybe consciousness itself. But I mean, we've been looking a lot, and Hakwan has too, at the involvement of these anterior prefrontal regions that are considerably expanded. And there are new subdivisions that can be discovered in frontopolar regions that don't seem to exist even in the macaque. So the involvement of those in kind of creating this computational platform for higher order thoughts, which is one broad brush way of thinking about what those regions are doing, that to me puts me maybe on the side of: we're going to need better techniques for human neuroscience. We can't just rely on animal models.
[00:35:11] Speaker A: That's interesting.
I think Steve and I basically agree on virtually everything, but this is maybe one point where our views might diverge. I'm more of a deflationary higher order person, and sometimes people would quiz me: are you really a higher order theorist? I say, what's in a name? My view is, I like to think of the higher order view as a kind of happy medium between a global workspace view and a local sensory recurrency type of view. So I'm not very.
So there's a joke question we ask ourselves or each other in the higher order theorist community: how high are you? And I think Steve probably is higher than I am. I'm not as high as he is.
[00:35:53] Speaker C: Oh man, I'm super low. If that's the metric.
[00:35:56] Speaker B: Yeah.
[00:35:58] Speaker C: I want to be high, not super low.
[00:36:00] Speaker A: Not as low as the local recurrency folks.
[00:36:03] Speaker B: I mean, not literally. I know it is evening here. Yeah.
I mean, I do think that just a knee-jerk focus on, say, one brain region is not so helpful. But there's the fact of the greatly expanded, for want of a better word, association cortex in humans, the prefrontal parietal system, even with respect to other primates, just in terms of sheer cortical neurons. Obviously we don't understand how that's working at any given level of detail in relation to, say, higher order thought. But the fact that we have more recursive power there and seem to be able to generate these rich metacognitive models, that to me would feel like. I still think we can get a lot of insight into the computations underpinning such models in the monkey, but I wouldn't be so sure about going down to rats to do that.
[00:36:57] Speaker A: Yeah, same here. I think rodents are a little bit different, but for monkeys, to kind of anticipate or put it in a quick brush stroke, I think our difference may be that, unlike Chris Frith, I think that consciousness is not explicit metacognition. So all these mechanisms further up the hierarchy, like those involving the frontal pole, may be important for explicit metacognition, that is, explicit reflection: you think about yourself in a situation, et cetera. But for consciousness, in my view and in some others' view, it's just a kind of very minimal implicit metacognition that your brain does regardless of what your intention is. So as I'm just, like, looking at the screen, basically my brain is already deciding that the sensory activity reflects the state of the world right now rather than my own imagination. And that happens automatically, and that breaks down in dreams, et cetera. So I would assume it's rather implicit, and it's presumably common between us and the macaques. But yeah, there is some wishful thinking there. We haven't been able to fully test it out. I'm not sure we ever can.
[00:38:10] Speaker C: I'm often, when I wake up from a dream, I often have the thought like, you know, the thought that that didn't make any sense or it was just readily acceptable in a dream. And then I wake up and think, well, now I feel like that was not. I was not very confident that that's what I should have been doing, you know, or something like that. So I don't know. We won't get into dreams here, you know, because that's what no one wants to hear about each of our dreams. We could discuss each of our dreams from last night, but let's.
[00:38:35] Speaker B: I think lucid dreaming is an interesting case.
[00:38:37] Speaker C: Oh yeah, I have a friend who's really into that, so. Oh yeah. You're not writing grants about that, are you, Steve?
[00:38:43] Speaker B: No, I just. I've only ever had one lucid dream when I was sleeping on a boat, when I was overtired. But it was. Yeah, it was great.
[00:38:50] Speaker C: What did you get to do?
[00:38:51] Speaker B: We'll just leave it that I just swam around the boat, like I was flying around the boat. But I mean, there is amazing recent work. There's only been two or three studies on this, but showing that the neural correlates of lucidity in dreams seems to be very similar to the neural correlates of waking metacognitive reflection. So that would seem to line up in the sense that this kind of second order reflection on our experience seems to be absent in dreams. But it can come back online when we're lucid.
[00:39:25] Speaker A: Yes.
So essentially, I think lucid dreaming is a great example of preserved explicit metacognition that is coupled with a failure of implicit metacognition. Right. What I mean is, dreams are almost by definition, according to the way we've been talking, a failure of implicit metacognition, as in you're confusing your endogenous, spontaneous sensory firing as if it's reflecting the state of the world right now.
[00:39:55] Speaker C: Yeah.
[00:39:55] Speaker A: So if your implicit metacognition is doing its proper job, it should not be letting you see things as if they are in the outside world. So just having the qualitative sensations in dreams means that your implicit metacognition failed. And mostly when that happens, your explicit metacognition also fails. So in dreams, you don't know you're dreaming; it just happens to you. But lucid dreaming is a case where they come apart. So going back to what I talked about earlier, I would think that probably monkeys would not have lucid dreams, because they may not really have explicit metacognition at that level. But I presume that they dream qualitatively, they have qualitative subjective experiences when they dream. That is, they have implicit metacognition, just like we do.
[00:40:38] Speaker B: So training a monkey to lucid dream, that's one thing you can pitch to the NIH. Let's see how that goes down.
[00:40:43] Speaker C: That's my second graduate career life. It was hard enough training them to report, to wager on their decisions. Jesus.
Anyway, now you've got me thinking about lucid dreaming monkeys. Let's back up here and let's just talk about higher order theory just for a second, because then we'll. And then we'll get into your models that you've both been working on, and then we'll bring it into AI, perhaps. So I don't know where I got this. I think I copied and pasted this from one of your papers here. So here's my little definition of what.
[00:41:15] Speaker D: A higher order theory is.
[00:41:16] Speaker C: A mental state X is conscious if and only if one has a higher order representation to the effect that one is currently representing X. Whose paper is that from? Can you tell?
[00:41:29] Speaker B: Must be Hakwan's. It's got 'if and only if' in it.
[00:41:32] Speaker C: You're both prolific, so you probably don't even remember the.
[00:41:34] Speaker A: No, I think it's Steve's. I think it's because I don't use these Xs and Ys in my.
[00:41:39] Speaker B: But I don't use if and only if.
[00:41:41] Speaker C: Maybe I wrote it. Maybe I wrote it anyway.
[00:41:44] Speaker A: Damn, this is going to be embarrassing.
[00:41:49] Speaker C: So does that sound right? I mean, that roughly is what a higher order theory is, correct?
[00:41:54] Speaker A: Yeah, I would say so. More or less. Yeah.
[00:41:57] Speaker C: So, I mean, this is distinguished from. Okay, so the whole distinguishing factor about a higher order theory is that we have to have some second order sort of process that is representing our first order perceptual processes and ongoing things, like in early sensory cortices. And that's why areas like prefrontal cortex are brought into this, because they're sort of higher in the hierarchical structure of the brain. So that by the time it gets to prefrontal cortex, you're able to then, I don't know, is it right to say, have a model of those representations? What's the difference between having a second order representation and having a model?
[00:42:38] Speaker B: I mean, I would say that a model has parameters. So I think both are going to be involved in a higher order theory.
So you can think of a model as being broader than a representation. A model would, say, represent the signal to noise statistics of perception, whereas a higher order representation would point to particular content, so it'd be more like targeting particular content. But I think that both are needed to effectively form beliefs at this higher level of the network. We do a lot of our modeling, for instance, in Bayesian networks, and there it just becomes very transparent how knowing the statistics of lower levels in the network enables you to create useful representations at the higher order level.
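To make that concrete, here is a minimal sketch of the idea: a higher order node that knows the signal to noise statistics of a first order channel can turn a raw sensory sample into a calibrated belief that a stimulus is present. This is an illustrative toy, not Fleming's actual Bayesian network model; the Gaussian assumptions and the parameter names (mu_signal, sigma, prior_present) are made up for the example.

```python
import numpy as np

# Toy higher-order inference: a second-order node computes the probability that
# first-order activity x was driven by a real stimulus, given assumed
# first-order statistics (Gaussian signal vs. Gaussian noise).
def higher_order_belief(x, mu_signal=1.0, sigma=1.0, prior_present=0.5):
    like_signal = np.exp(-(x - mu_signal) ** 2 / (2 * sigma ** 2))
    like_noise = np.exp(-x ** 2 / (2 * sigma ** 2))
    return (like_signal * prior_present /
            (like_signal * prior_present + like_noise * (1 - prior_present)))

# The same first-order sample supports different higher-order beliefs depending
# on the assumed noise level -- the "second-order statistics" of the channel.
for sigma in (0.5, 1.0, 2.0):
    print(f"sigma={sigma}: belief stimulus present = {higher_order_belief(0.8, sigma=sigma):.3f}")
```

The point of the sketch is only that the higher level needs access to the lower level's statistics, not just its output, in order to form a useful belief about it.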
[00:43:31] Speaker A: Yeah, I would think very similarly. Likewise, I think there's a distinction between an explicit, model based kind of representation and a more implicit one, and I think that might be where our views diverge a little bit. I think the higher order mechanism does not really have to have an explicit model. As in, if you just recorded all the neurons from whatever mechanisms and circuits are responsible for the higher order stuff, you wouldn't necessarily be able to extract the whole model from there. Rather, it might be just a more implicit, procedural mechanism that somehow can refer to the first order sensory activity and, through some sort of downstream gating mechanism, essentially decide whether a first order sensory activity is reflecting the world right now, or whether it's just something else, just noise, et cetera. So to me the higher order stuff is more deflationary. I wouldn't call it an explicit model per se.
[00:44:28] Speaker C: So like V1 isn't necessarily modeling the incoming visual information, but the visual information is captured within V1. Is that the same, then? The second order representation is capturing the incoming visual information that's processed over a few different layers in cortex. We're talking about visual awareness now, of course.
So like V1 doesn't have a model of the world. Is that analogous to the higher order representation not having a model of the earlier first order activity?
[00:45:05] Speaker A: I think so. I would say sometimes I get into trouble saying that, because other people would say, well, then extrastriate areas receive input from V1; is an extrastriate area then a representation of V1? I would say in that case, no. Because extrastriate content, to the extent that you try to understand content, is challenging. But I think more people would be inclined to think it's more appropriate and more useful to think of extrastriate areas as still referring to features of the stimulus in the world. So extrastriate areas are not about V1. They receive input from V1, but are ultimately still about the things in the outside world. Whereas I think the higher order mechanism is ultimately really about the first order sensory activity. It basically tries to say, well, this activity is truthfully representing the world right now, and this is just noise, this is just my imagination, et cetera. So it's not about the stimulus, but about the nature of the first order sensory activity itself. So even with this aboutness, you don't have to build an explicit model, though. You can just function as if it is about the first order sensory activity.
[00:46:11] Speaker C: But there does need to be a pretty clear separation in processing, then. The reason why I ask is because my recent, very amateurish thought, the recent shining light in my head, is, you know, let's say you had whatever brain you have, but one that's not elaborated as much, right? It doesn't have a granular prefrontal cortex or something, or the newer elaborated prefrontal cortical areas, but it ends, let's say, at V2, right? So then would you have subjective awareness of, you know, the contents of V1 if a brain ended at V2, for instance? This is an impossible question, obviously. But what you're saying is no. So my idea, right, is that wherever the brain ends is where it's going to loop back around, and then it's going to be about where that recurrence begins, right? So if you just have V1, then you're about brainstem, right? Just those low emotions, right? And there's some phenomenal experience, blah, blah, blah. But what you're saying is that there needs to be some sort of clear distinction between the sensory first order representations and the higher order representation.
[00:47:18] Speaker A: Yeah, I would think so.
Basically, to answer directly, I think if you just have V2, we would say that you probably won't be conscious per se.
[00:47:26] Speaker D: Damn it.
[00:47:26] Speaker C: This means I have to go on to a new shiny idea.
[00:47:29] Speaker B: Now, I mean, I'm wondering whether one useful way of thinking about this is the difference between just a hierarchy, say the perceptual, the ventral stream, for instance, where there are multiple areas in a quasi hierarchical arrangement, and what we mean by higher order representation, which is that there's an important aspect to those higher order representations that is tracking something second order. That, I think, is the connection here to metacognition. And I know you had Megan Peters on recently, and she's been working on similar ideas: that one key aspect of the computations that support subjective experience is this ability to track confidence in first order representations. So there needs to be some aspect that's tracking the second order statistics of these first order representations. It can't just be the next level in the hierarchy receiving input from lower levels.
[00:48:31] Speaker C: Do you guys think about the minimum necessary conditions to call something a higher order? To have some area that is about some first order representation? Do you think in those terms or is a minimum necessity? Is that still kind of a fuzzy notion?
[00:48:50] Speaker B: Yeah, I would say it's still quite graded, and I don't think there's going to be a sharp dividing line. I think there are going to be more and more elaborate second order representations that can, you know, form.
I think one important aspect in the way that we've been modeling, or thinking about modeling, this is the notion of abstraction: having a higher order representation that's tracking very abstract facts about the system. So not just, say, individual aspects of perceptual content, but something about the signal to noise statistics across the whole system. And that provides the useful background conditions to track what Hakwan was talking about, like am I imagining something or am I perceiving it? So it's those kinds of abstract second order statistics that I think are important.
[00:49:44] Speaker A: No, I think my view is very similar. I try to stay away from these very sharp, hard and fast logical terms like sufficiency and necessity. Sometimes we don't really use them right and they're too rigid. But basically, roughly, my view is similar here with Steve's.
[00:50:00] Speaker C: Well, why don't we talk about both of your recent modeling accounts of what's going on here, and then we'll bring in some AI fun as well afterwards. So Hakwan, since you've already mentioned a few of the ideas that you've presented in your reality monitoring account, maybe I'll just leave it open to you. Can you describe this generative adversarial network reality monitoring account of consciousness that you've proposed?
[00:50:31] Speaker A: Yeah, so I always say that my views are almost never original. It's basically just some variant of David Rosenthal's view, or some other higher order theories that we've kind of stolen or borrowed from the philosophy literature. But my job, I feel, has been to try to express those ideas in more mechanistic terms, by mechanistic meaning more in terms of an actual neurobiological implementation of some sort of algorithm.
So a lot of people sometimes misunderstand the higher order theory as if, oh, so you have this little thought in your head about you being in a certain state, and so that seems to require a lot of cognitive demand for you to be conscious; you have to be capable of having thoughts. But if you really read into the literature, they don't really mean that. What they meant is just that they needed a word for some representation. And I think Rosenthal in particular argued, well, the representation should be more thought like than perception like. And some other people would think actually it's more perception like, so it's like a higher order perception, in a sense, looking at first order sensory activity. I find that more appealing, but I'm just not so sure about the thought and perception distinction anyway. And as a non philosopher, I'm not obligated to resolve all these issues from Immanuel Kant about these distinctions. So I just thought, what do I think about it in terms of neural circuits, what it does.
And presumably it does something. When you have a prefrontal circuit that monitors your first order sensory areas and then signals to oneself that you are actually having reliable and legitimate first order sensory activity, what does it do? And the first thing that came to mind is, well, you need to do that to distinguish between your self generated imagination versus your externally stimulated sensory perception, because they activate pretty much very similar neuronal populations and create not identical, but highly similar activity. So almost like there is a need for your brain to resolve that ambiguity. When you're imagining a cat, you shouldn't hallucinate a cat being out there. And you also see that sometimes it breaks down, when you hallucinate or when you dream: exactly, you mistook your own internally generated activity as if it were triggered by the external world. So that, I think, might be what the higher order mechanism, whether it is thought like or perception like, is doing. And once you think about that, then we can borrow some lingo from current AI. It turns out that Sam Gershman at Harvard also wrote about this kind of stuff. He's also recently been on this podcast, but he's published way too many papers, all good papers; he's so prolific that you probably never got to this point. But he also made a similar point, as in my preprint, that presumably for your brain to be able to do predictive coding you need some engine like that. That's an engineering argument. So in the past decade or more, in the AI literature, that has really exploded. A lot of people are realizing that while the older feed forward only neural network models are great, they are not really sufficient, and we should look into how people do things in the brain. And of course our brains are capable of something akin to predictive coding; we have top down processes. So people then started to build those top down processes into the network models, and then they kind of hit a little bit of a hurdle at some point, because they realized that you can engineer those feedback connections and try to make it like having top down processes, but the problem is training those networks takes a lot of time.
So then the generative adversarial network becomes a trick that Ian Goodfellow came up with, I think in his PhD or something like that. And he suggested, well, actually, one very easy way is to build your top down generative model, but alongside it build another thing called a discriminator. And the discriminator is kind of like a critic of the generative model. So when the generative model generates a top down initiated representation, something like an imagination or imagery, then you have a discriminator that tries to look at it and say whether it's good enough. So essentially the discriminator is like a forgery detector. If it looks at your imagination and says, well, your imagination is no good, it's nothing like a real, externally triggered thing at all, then it would penalize the generator. And if the discriminator fails to catch the forgery, then the generator wins a point over the discriminator. So you pit the two of them against each other and they compete, and as they compete, they both learn from each other and they both grow very fast, kind of like rival siblings.
So in that sense it becomes an engineering trick just to train the networks. And you can borrow from that idea: presumably our brain, in order to have predictive coding, if it's not entirely genetically hardwired, maybe has a similar engine too. And Sam Gershman and I both feel, well, that might be exactly what the prefrontal cortex is doing. I mean, some circuits in the prefrontal cortex might be exactly playing this discriminator role, to look at your own sensory activity and say, well, this looks like imagination, or, oh, it doesn't look like imagination, it looks like an externally triggered representation. And it has the function of stimulating the growth of your predictive coding capacities, and also then allows you to not confuse imagination with reality. And so that's a kind of long, roundabout way of saying, it seems like a lot of modern concepts, but it's really just a way of asking how the higher order thoughts, or higher order perceptions, in the philosophical literature came about. Maybe there's a very congruent neurobiological story to that.
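To make the generator versus discriminator setup concrete, here is a minimal sketch of the adversarial training trick described above, written as a generic toy GAN in PyTorch. This is only an illustration of the engineering idea, not Hakwan's reality monitoring model itself; the network sizes, the stand-in "externally triggered" data distribution, and the training schedule are all arbitrary assumptions for the example.

```python
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 8, 2, 64

# Generator: produces "self-generated" (top-down) samples from random latents.
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: the "forgery detector" that outputs the probability its input
# came from the real (externally triggered) distribution rather than the generator.
discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(batch, data_dim) + 2.0         # stand-in for externally triggered data
    fake = generator(torch.randn(batch, latent_dim))   # self-generated samples

    # Discriminator update: learn to catch the forgeries.
    d_loss = (bce(discriminator(real), torch.ones(batch, 1)) +
              bce(discriminator(fake.detach()), torch.zeros(batch, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: learn to fool the discriminator.
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Each side improves only because the other keeps improving, which is the rival-siblings dynamic; in the reality monitoring analogy, the discriminator's verdict on a pattern of sensory activity plays the role of the implicit judgment that the activity reflects the world rather than imagination.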
[00:56:33] Speaker C: I just had a thought while you were speaking about the discriminator and generator, and I don't know the answer for this in AI. And by the way, I don't know if it was Ian Goodfellow's PhD work, but I know that he and his friends were out for beers and they conceived of the idea, and then he went home that night and wrote it up. And thus it was born. So it was one night.
[00:56:55] Speaker A: That's exactly right, yeah.
[00:56:56] Speaker C: Anyway, that's what you can do when you're in AI. Don't you want to be an AI Instead?
You just go code it up. But anyway, I thought about the sort of granularity, both in a GAN, a generative adversarial network, and in our subjectivity.
[00:57:12] Speaker D: Like so a waking hallucination, right?
[00:57:15] Speaker C: So you're kind of cut off from the incoming stimuli, or I guess you could say your discriminator is rejecting the generator's input, right? Or not, I don't know. How would you say it?
In a waking hallucination, is the discriminator rejecting the incoming input, or is it just cut off from it?
[00:57:39] Speaker A: So yeah, I would think that in that case, I would say the forgery detector, i.e. the discriminator, gave your generative model such an easy pass. So your generator is just generating something endogenously that doesn't look like externally triggered input at all. It should have been easily spotted as a self generated forgery. But your implicit metacognition, i.e. your discriminator, presumably went to sleep over lunch and just gave it an easy pass, and considered a very obvious forgery as real. And that's why you hallucinate.
[00:58:14] Speaker C: And in that case, though, hallucinations last more than 300 milliseconds, right? So then the winner of that battle has to be granted the winning spot for some time for the hallucination to come to completion. So there must be some. Once you cross that threshold, it must be resonant at those levels.
Sorry, I feel like I'm being really unclear because I guess when you're talking about unclear topics and I don't have the right vocabulary, then it just sounds like a mess anyway. But does that make sense to you?
[00:58:48] Speaker A: No, it does. It is quite tricky and to be fair, it hasn't been explained very clearly in the literature.
It's our job. We haven't done our job well. It's quite confusing, but I would think of it this way. Yeah, you're right. So basically we're saying that during your entire dream, or your whole episode of hallucination, your discriminator would be failing at its job almost consistently. Presumably some mechanism there is not working properly. It set such a low threshold for considering what counts as a legit representation of the state of the world right now. And some people would think this is very implausible, but actually it fits a little bit with the known physiology of dreams. People would sometimes say that in dreams your prefrontal cortex seems to be not very active, and sometimes that's used as an argument against higher order theories: well, you higher order thought theorists say that for you to be conscious, you need the prefrontal cortex to be active. And I would say, well, actually, no, if you think about it, exactly in this case we would want the prefrontal cortex to not be working properly for a long period of time, because you essentially have a failure of explicit. Sorry, failure of implicit metacognition.
So the low activity in prefrontal cortex actually should count in our favor, at least I would think so.
[01:00:14] Speaker C: Okay. Okay. All right, well, let's go ahead and, Steve, let's bring in your higher order state space model here. So this is the idea that the higher order representation is basically the output, or the state, of a generative predictive model. You can correct what I just said, and then I'd love for you to describe that work.
[01:00:37] Speaker B: Yeah. So I feel like we're kind of creeping up on the higher order view from a different direction. Our strategy on this was to work backwards from the properties of what we can measure in the lab, which are these subjective reports of awareness. You come into one of these experiments, you might be flashed stimuli, and you can use things like masking or flash suppression to control whether people perceive things or not, and then you might be asked to rate your awareness of seeing things on some scale. And the interesting thing there is that you can create situations where it looks like the stimulus is being processed to some degree, affecting behavior in various ways, but people are still unaware of it sometimes, or aware of it other times. So there seems to be some property of awareness that's dissociable from the general job of perceptual processing. And one way of approaching this is to think about what people are actually reporting in those kinds of experiments. The data we can gather on consciousness in the lab is basically a factorized representation that they can apply to all different types of content. So I can ask you, are you aware of the dog? And you can respond to that. I can equally ask you, are you aware of the Gabor patch? You can respond to that. So what I mean by factorization is there's some property of awareness that you can interrogate, you can compute over and respond accordingly, and you can apply that to tell me about your awareness of all manner of things: perceptual things, your memories, your emotions, and so on. And what is interesting, when you start thinking about awareness as being this factorized state in a generative model, is that the state space becomes very asymmetric. What I mean by that is that, by definition, when there's the absence of awareness, there's also the absence of perceptual content lower down the state space. Another way of saying that: being unaware of a red thing is a similar state to being unaware of a blue thing, or being unaware of a dog, for instance. So this kind of state of being unaware of things is asymmetric to the state of being aware of things. That all sounds quite lofty and philosophical, but when you start writing down a model like this, basically what you can do is treat this state, this aspect of the generative model, as the most abstract level of the system. It's effectively creating some higher order commentary on whether there is content lower down the generative model, and effectively what it's doing is kind of tagging the situation in the perceptual generative model as having signal in it or having nothing in it. And this is where I think there are interesting commonalities with Hakwan's view, because there's this idea of some kind of higher order monitor that's tracking whether there's signal or noise lower down the system. And this then allows us to create various empirical predictions about the existence of this higher order state, which should track commentaries of things being there, of seeing things, of perceiving things, but should also symmetrically track our comments of being unaware of things. And this is where we diverge from, say, the global workspace theory. Global workspace theory would say there's some kind of threshold where, when you become aware of things, the content gets broadcast through the brain.
Whereas in our model you need these higher order states to be tracking not only the broadcast of content, but you also need them to be actively representing the absence of content lower down the system. So it's these active representations of absence that we've been working on, and this is work that's been done by my PhD student Matan Mazor. And there's interesting data suggesting that in these prefrontal regions, both in monkeys and humans, there are neural representations that actively represent the absence of stimulation. And that would be very consistent with this idea that you have higher order states that are tracking the properties of lower order generative models.
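To make the asymmetry described here concrete, this is a toy sketch of a higher order state that tags presence or absence of content in lower order activity, independently of which content it is. The channels, threshold, and decision rule are illustrative assumptions, not the published higher order state space model.

```python
# Toy sketch of a low-dimensional higher order monitor that tags whether there is any
# content lower down, separately from what that content is. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def first_order_activity(stimulus=None, noise=0.3):
    # Two content channels ("red", "blue"); a stimulus adds signal to one channel.
    x = rng.normal(0.0, noise, size=2)
    if stimulus == "red":
        x[0] += 1.0
    elif stimulus == "blue":
        x[1] += 1.0
    return x

def higher_order_state(x, threshold=0.5):
    # Asymmetric state space: "unaware" is one shared state regardless of which content
    # channel is empty, while "aware" also carries which content is present.
    strength = np.max(np.abs(x))
    if strength < threshold:
        return ("unaware", None)
    return ("aware", ["red", "blue"][int(np.argmax(np.abs(x)))])

for stim in ["red", "blue", None]:
    print(stim, higher_order_state(first_order_activity(stim)))
```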
[01:05:06] Speaker C: So it's not tracking the content within the absence, it's not tracking the infinite number of things that could be present; it's just tracking that there is an absence.
[01:05:16] Speaker B: That's right. So one prediction that we're trying to design experiments to test at the moment involves these. You can think of them as kind of low dimensional abstract codes of whether things are present in the lower order generative model.
And one prediction is that those codes should generalize over content. So my representation of the absence of red should be similar to the representation of the absence of blue, and so on. And so we're testing those predictions in imaging experiments at the moment.
[01:05:43] Speaker C: So maybe, Steve, since you're the last one to describe the model that you've been working on, what do you hate about Hakwan's ideas here?
I kind of want you guys to go back and forth, and I'm sure you've done this before because you're friends. What do you see as the important differences, and maybe some of the similarities, between your higher order state space theory and Hakwan's reality monitoring?
[01:06:09] Speaker B: Yeah, sure. So I think actually we're missing a key aspect of Hakwan's model, which is this ability to distinguish reality from imagination, or top down generation from perception. We're working within this hierarchical generative model. It's a Bayesian network, but we can run it top down. We can kind of allow the model to, quote, imagine or hallucinate, and it won't tell the difference from perceiving. So we're actually working on this. A postdoc in my group, Nadine Dijkstra, is working on extending the model to incorporate this idea about the abstract representations. And this connects back to what I was saying earlier about the need to have second order statistics represented at these abstract levels as well. If you can represent the precision of lower levels of the system, or something about the noisiness of that signal, then you might be able to represent not only are you seeing something or not, but also another dimension: is it perceived or imagined? So you get this kind of 2D higher order abstract representation. And that is something we're borrowing heavily from Hakwan's model. And hopefully we'll be able to expand the higher order state space model; 2.0 will hopefully have exactly that kind of aspect to it.
[01:07:38] Speaker C: Hakwan, do you agree with that assessment?
[01:07:42] Speaker A: Yeah, totally. I feel exactly the same way. When we do models of a realistic circuit, we usually focus on one aspect or one task, and then we build a model, and then the model can only do that one task. So in our case, we've been building our model; it hasn't been very successful yet in the actual implementation. Taylor Webb in my lab has been doing this kind of neural network modeling. But I think we get to exactly the same point: once we can actually get the reality versus imagination distinction, then we would start to worry about the other aspect, and we've been building a different model in parallel because of my former work in metacognition. We also want to know when the higher order state decides that there is nothing out there, no meaningful information, everything is just noise, versus when there is meaningful information. So ultimately I think the higher order state has to make at least a three option distinction: whether it's just noise, whether it's internally generated, or whether it's externally triggered. And I think eventually our two models will presumably converge at that point and become something very similar.
[01:08:47] Speaker C: This kind of comes to the broader aspects of modeling. Is it likely that we will have 40 models, a family of 40 models, that together account for, let's say, consciousness? Or do we really need to combine them and build an uber model? Because, like you just said, you have these models that account for very specific things. So why not have 40 different models accounting for 40 different specific things that don't actually need to be joined up to serve as a satisfying explanation for consciousness?
[01:09:21] Speaker B: I mean, I think among the cognitive models in the literature at the moment there are more commonalities than there are differences. This is something we write about in our higher order state space paper: a lot of the work, the initial data that kind of prompted these global workspace frameworks from Stan Dehaene's lab.
We can think of that in a slightly different way, which is, rather than it being broadcast through the system, this is reflecting prediction errors at lots of different levels of the network when it concludes that it's, quote, seeing something. And so in a way, it's just a different mathematical way of formulating the same idea that you need something that's global, something that's abstract, something that's hierarchical.
The kind of labels that we attach to models in different papers I think are less important. I think it's the concepts that underpin them. And there's a lot of commonalities in the literature.
[01:10:20] Speaker C: I mean, let's talk about AI consciousness. Let's go ahead and bring it in. I suppose so.
[01:10:25] Speaker D: Hakwan, a few years ago you wrote a paper with Stan Dehaene and Sid Kouider called What is Consciousness, and Could Machines Have It?
[01:10:34] Speaker C: That includes some of the reality monitoring ideas, but also kind of combines higher order theory and global workspace theory. And Nicholas Shea has written about this as well.
So maybe even before that I should start. Neither of you have an issue with the idea of developing consciousness in machines necessarily. It's possible.
[01:10:55] Speaker D: Correct.
[01:10:56] Speaker C: Are you both pro consciousness in machines?
[01:10:59] Speaker A: Yeah, I think in the way that we are committed to a cognitive approach, ultimately that's the kind of bullet that we have to bite. If you think the cognitive neuroscience of consciousness could be complete, that must mean that some algorithm implemented by biological machines would be sufficient to account for consciousness. And to the extent that it can be implemented in biological machines, it's very likely you can find some other substrate to implement it. So in that sense, some sort of robot could do the same algorithm. And if that's ultimately what matters, then it is a very strange and unsettling implication of the theory. But I think we have to accept that, to the extent we're committed to this kind of approach.
[01:11:41] Speaker C: And Steve, you're just pro.
[01:11:44] Speaker B: I mean, yeah, I would agree with that. I think a consequence of the cognitive approach is that you accept a broadly functionalist view of how it could be implemented. I think, though, that the kind of unsettling aspect comes back to this idea about our intuitions, about whether experience is something magical and holistic that can't be broken down into its component parts.
And I think once we make progress on the kind of computational cognitive theories of consciousness, then the aspects of that functionality that we might think are useful to have in machines will seem less magical.
[01:12:26] Speaker C: Yeah, there's an overarching desire to make things less magical, I think, in the science of consciousness in general, and in the neuroscience of it, which I think is a great thing, because we don't want it to be magical. So Hakwan, I alluded to that paper where you guys talk about potential computations from higher order theory and computations from global workspace theory. Each of those has computations that might be important to implement consciousness in a machine. Do I have that right?
[01:12:57] Speaker A: Yeah, more or less. So the co-authors are Stan Dehaene and Sid Kouider, and we published the paper, I think, in 2017, if I'm right, or 18. I've come to think that the paper is not successful.
In some ways the paper is a way to reconcile my kind of view with Stan's. And I think we agree that global broadcast and the kind of implicit metacognition that I'm going for are two different components, and we agree on that much, but I think we never fully agreed on which one has priority, which one is a later stage, which one is more primary. I like to think that the implicit metacognition is more primary, and then from the implicit metacognition, that mechanism, that reality discriminator or higher order engine, signals what kind of first order information should be broadcast and how it should impact our later stage, high level cognitive reasoning, belief formation, et cetera. So the implicit metacognition is the more primary one, the gating mechanism, if you like.
The mechanism of consciousness, on that view, is then exactly at the interface between perception and cognition.
So I would think global workspace is a great account of higher order cognition, but maybe has not as much to do with the raw subjective experience per se. Subjective experiences are what is being gated at this interface for perceptual signals to enter higher order cognition. I think we tried to dance around that a little bit. Stan has slightly different ideas. Stan probably thinks that global broadcast is more primary, and on top of global broadcast you can have explicit metacognition. I kind of agree, but I don't think explicit metacognition is really what is at the heart of these subjective experience issues. So there we have a bit of disagreement, and we kind of danced around it a little bit. In the paper, I think we did an okay job, but when you have three authors who each have their own views, and we're also writing for Science, we have to be reasonably accessible. It cannot be this kind of nitty gritty argument back and forth.
[01:15:05] Speaker C: I feel like that comes through. That comes through in the writing. Yeah, you can kind of tell.
[01:15:10] Speaker A: Yeah, yeah. I feel a little bit contrived and constrained there sometimes. But I think we did as much as we could, given our different views. But despite that, even if you let me go back and edit that paper, have the final word, take out what I don't agree with Stan about, et cetera, and have my say, I think it would still be unsatisfying. I've come to think that maybe one extra ingredient is needed in that paper. We never really talk about the nature of the first order representation. And that's where my maybe lack of complete loyalty to the higher order camp is coming through. I think the higher order mechanism is ultimately important, in the form of an implicit metacognition, but I think the nature of the first order representation is also important. In particular, we need something I call analog first order representation. So if your first order representations are just digital, just: this signal signals red, this signal signals green, this signal signals blue, et cetera, then you won't really have qualitative experiences. Even if your higher order engine refers to the first order activity and says, okay, this red signal is now correctly representing your world right now, then I think you would be aware of red, but you would not have a qualitative experience.
And I think the qualitative experience probably comes about when you have a kind of analog representation, when you know that red is more like pink and orange and purple and brown than silver and gold and black and white. So you have this kind of graded color space in your head. You know that some colors are more similar to each other, and some colors are more different from others. And that again is borrowing an idea from the philosophy literature, sometimes called the mental quality space theory. So essentially you know what it is like to be seeing red because, when you see red, it is redder than everything else you've seen.
And red looks the way it does because it looks kind of pinkish and orangish and not so bluish. So you have these kinds of automatic similarity relations encoded in the representations in your repertoire. And through this kind of modeling exercise, we have actually thought it through, and probably if you have first order sensory analog signals, you can almost get that for free. That point, which we never emphasized in that paper, I've come to think is maybe the most important when it comes to building conscious AI. Sorry, I just dropped a completely new idea that I probably haven't expressed in print yet.
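A small sketch of the quality space point made here: with an analog, graded code, similarity relations between colors fall out as distances, whereas a purely digital, one-hot labeling would make every pair of colors equally dissimilar. The coordinates below are rough placeholders, not measured perceptual data.

```python
# Toy "quality space": colors as points in an analog space, so similarity comes for free
# as distance. With one-hot (digital) labels, all colors would be equally far apart.
import numpy as np

colors = {
    "red":    np.array([1.0, 0.0, 0.0]),
    "pink":   np.array([1.0, 0.6, 0.7]),
    "orange": np.array([1.0, 0.5, 0.0]),
    "blue":   np.array([0.0, 0.0, 1.0]),
    "silver": np.array([0.75, 0.75, 0.75]),
}

def similarity(a, b):
    # Smaller distance in the analog code means a more similar quality.
    return -np.linalg.norm(colors[a] - colors[b])

ranked = sorted(colors, key=lambda c: similarity("red", c), reverse=True)
print(ranked)  # red sits nearer to pink and orange than to blue or silver in this toy space
```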
[01:17:44] Speaker C: No, no, that's okay. I just don't know where to go.
[01:17:47] Speaker A: It will be in my book, but my book will not be out for a while.
[01:17:51] Speaker C: Are you writing a book?
[01:17:52] Speaker A: Yeah, but it's a monograph. It's not a trade book.
So I will have a monograph contracted with OUP, hopefully out this year or next year.
[01:18:03] Speaker D: Oh, okay, great.
[01:18:04] Speaker C: Well, by the way, congrats to both of you on the book. Steve. I didn't say congrats, but that's awesome.
[01:18:09] Speaker B: Thanks.
[01:18:10] Speaker C: So I recently bit the bullet. Consciousness in AI is becoming a more popular topic, I suppose. I recently bit the bullet and watched a Yoshua Bengio talk about the Consciousness Prior and his idea that basically, if you just add attention, in various forms of it, to deep learning, that is the key to developing consciousness in AI. And he relates this to System 2 of Daniel Kahneman's System 1 and System 2.
I kind of want to. I had been avoiding it because it's just kind of hyped. And I watched the video. I never read the Consciousness Prior paper, but my biases were confirmed. It's hyped. That's basically what I concluded. But I'm wondering if you guys feel the same way, what your take is on the Consciousness Prior of Bengio, and maybe not necessarily even that in.
[01:19:09] Speaker D: Particular, but just in general, this idea.
[01:19:11] Speaker C: Of this recent push about building consciousness in AI and using the deep learning framework to do it and so on.
[01:19:20] Speaker B: So I think inevitably whenever you use the two words consciousness and AI in the same sentence, it's going to get hyped.
I do think that the proposal from Yoshua Bengio is highlighting some important ideas that are shared with general higher order approaches: this idea that we need to create abstract, communicable representations about our mental states.
[01:19:50] Speaker C: That's not a new idea though.
[01:19:52] Speaker B: Sure.
[01:19:56] Speaker C: I don't want to downgrade Yoshua Bengio, but I just had the thought, is this a case of the guru and the back scratchers?
So I don't want to single anyone out, but yeah, so I mean, okay.
[01:20:10] Speaker B: So if, yeah, if it's a sociological point about one particular paper being overhyped, then I don't know, I don't have a strong opinion on that. But the broader point is about thinking hard about the functionality: why we would need to have something that looks like awareness in AI, why we would want it in the first place. Deep learning has been fantastically successful in lots of domains, but it doesn't look conscious in any sense of being able to comment on its processing, explain what it's doing, and so on. So I think we should start thinking about those things: what is it that consciousness enables us to do as humans?
[01:20:51] Speaker C: The function of consciousness, the function.
[01:20:54] Speaker B: And I think one aspect that is often overlooked here is the social function. It's the ability to compactly converse, like we're doing now, to share ideas about particular topics. Now, obviously that's very abstract, but at lower levels, it's the ability of two agents to comment on: have I seen something clearly over there? Do I think it's a predator or not? And so on. You get down to this notion of sharing metacognitive representations, which Chris Frith has written a lot about. And people like Bahador Bahrami have done really beautiful experiments showing that when two people share metacognitive content, like their confidence in simple perceptual decisions, they can reach answers jointly that are better than either of them could have reached alone. So this notion of pooling metacognitive information is really important for the function of social groups. And I think that kind of aspect, that more functional aspect of awareness, is going to be useful to think about: how could we get that into AI?
So we're actually doing some work in collaboration with the Oxford Robotics Institute. We're just getting this started, to try and think about these questions on a practical level: how could we build in, say, abstract notions of, am I confident about that, do I know what I'm doing, in a robot? And can it communicate that to its human companion, its collaborator, and would that be useful? Obviously this is quite a long way from thinking about intrinsic subjectivity, but it's a more functional notion of awareness that I think is important to think about for AI as it becomes more and more integrated into society. We're going to want the kind of abstract interactions that we have with each other to also be present in AI systems.
[01:22:52] Speaker C: So you write about this kind of as a response to another paper that is talking about the need to set out some guidelines for what would actually constitute a good science of consciousness.
And within this paper you talk about why we also need to think about the function of consciousness, and this is where you bring up the idea that it's important for social interactions. And you talking about that actually reminds me of some of the early work in infants and how infants use imitation to learn. So even an infant, and I'm not talking about infant consciousness here, can tell whether someone they are preparing to potentially imitate seems like an expert, whether they seem like they know what they're doing, trying to open a door, for instance, or something like that. They'll discriminate between imitating someone when they do seem to know what they're doing versus when they don't seem to know what they're doing. So socially that seems like it's.
In the same sort of wheelhouse there, because you do need to know whether that person seems to know how to open a door. If they don't, then imitating them knocking their head against the door or something is not gonna help you. So that does seem to jibe with the shared experience and coming to an overall better conclusion functionally, using a social mechanism.
[01:24:14] Speaker B: Yeah, I mean, there's a potentially beautiful symmetry that I think is still more on the hypothesis side than supported by a large body of evidence. But there's a potentially beautiful symmetry between awareness of our own mental states, metacognition of ourselves, and mentalizing about others. And I think infancy is a really fertile period for when you need those kinds of dual self modeling and other modeling processes to interact. Right. So not only do you need to know what other people know, you need to know when you don't know, to ask for help. So the idea that you hit the limits of your own ability and you need to then turn to some adult who is competent to help you, and you need to pick the right person. All of that seems foundational to bootstrapping yourself up towards becoming a functioning member of a human social group.
[01:25:12] Speaker C: I have such an individualistic bent that I'm always resistant to the social account of the functionality of consciousness. And I feel like I'm just giving up more and more and relenting and seeing the value in it. And maybe that is, you know, the theory of mind and the usefulness of being social. It probably is just because I'm so reluctant to accept it. You know, I'm just very individualistic. Do you guys feel that too? Like, I want consciousness to be about me. I want it to be an individual thing.
[01:25:41] Speaker A: You know, I may be more like you. I'm a loner. I'm a socially awkward loner.
[01:25:48] Speaker C: Oh, I'm sorry, you're calling me a socially awkward loner. Okay, but I'm saying I am.
[01:25:53] Speaker A: I didn't say you are.
[01:25:54] Speaker C: Right, you said you're like me.
[01:25:55] Speaker A: No, I'm kidding.
No, I'm like you in the sense that I also think the functions of consciousness actually relate back to this issue, in the sense that I don't think the whole enterprise of AI consciousness is overhyped. If you particularly want to talk about Yoshua Bengio, I think he behaved very well, in the sense that he wrote the paper and he cited the right references. He didn't just jump straight to the New York Times and write a high profile piece, which actually sometimes happens in our field, as you know. So I think the way that it got overhyped is just because his stature is so high that people really look up to what he does in the AI field.
[01:26:33] Speaker C: Not his fault, but I think it's not his fault.
[01:26:35] Speaker A: But on the other hand, where you see it as maybe unsatisfying, probably that comes down to the fact that in our field we kind of treat the deepest issue in consciousness research as explaining subjective experience, explaining the qualitative aspects of subjective experience, or what it is like to have an experience. But all these other aspects that relate to attention and higher cognitive control, explicit metacognition, they are important too, as is helping others, interacting with other social agents in your group, et cetera. And I think all these are important, so they are not overhyped per se. But I think there is a bit of a lack of satisfaction for people who hold this view and feel, but how does it explain the qualitative nature of the experience? I think AI is not completely silent about that either. As I mentioned earlier, I think there are AI models that could focus on that a bit more. I think the problem is that the appeal may not be so clear. Right. So most of us thinking about consciousness, thinking about building AI robots that would be conscious, we are mostly thinking of having them do these higher cognitive functions better: they can play better chess, they can be better childcare takers, that sort of stuff.
But I do think there is some work from the computer science literature that I actually find inspiring. It's a colleague in the UC system at UC San Diego, Sanjoy Dasgupta. He's a computer scientist who has looked into the very low level sensory systems in, let's say, fruit flies, and he's been asking what that algorithm does. And that's where the analog signal I mentioned earlier comes from. As a computer scientist, he found that the fruit fly actually has a coding system that seems to be very analog and, for lack of a better word, very smooth and mixed. That is, the different labeled lines are not independent lines. So that is very different from, let's say, the mantis shrimp color vision system. The mantis shrimp, as I remember, has over a dozen different color channels, but they're kind of independent. So it turns out that the mantis shrimp has more photoreceptor types than we do, but they are not very good at discriminating between colors, because the channels act almost like independent detection lines. Whereas the humble fruit fly actually has a more mixed code, with something like opponency, and in fact it's very randomly projected. So you have a very mixed, almost analog and spatially smooth code for olfaction. And then Sanjoy looked into what it does, and it turns out that this is actually a very good mechanism for a couple of difficult computer science problems. It turns out that if you just mimic the fruit fly system, you can actually outperform some current popular computer algorithms. So there might be some functional difference there. Sorry for taking this a bit in the opposite direction from where Steve was going. I think there are these higher order social and higher cognitive functions that are important, but AI might also contribute to the simpler qualitative subjective experience problems too, and meanwhile discover functions for those things.
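A rough sketch of the fly-inspired similarity search idea alluded to here, loosely in the spirit of Dasgupta and colleagues' published work: a sparse random projection into a much larger space, followed by winner-take-all sparsification, so that similar inputs end up with overlapping "tags." The dimensions and data are illustrative assumptions, not the exact published algorithm.

```python
# Fly-inspired similarity hashing sketch: random expansion plus winner-take-all,
# so similar inputs share many active units. Sizes and data are placeholders.
import numpy as np

rng = np.random.default_rng(1)
d_in, d_out, k = 50, 2000, 100          # input dim, expanded dim, active units kept

proj = (rng.random((d_out, d_in)) < 0.1).astype(float)  # sparse random projection

def fly_hash(x):
    y = proj @ x
    tag = np.zeros(d_out)
    top = np.argsort(y)[-k:]            # winner-take-all: keep the k largest responses
    tag[top] = 1.0
    return tag

a = rng.normal(size=d_in)
b = a + 0.1 * rng.normal(size=d_in)     # a similar "odor"
c = rng.normal(size=d_in)               # an unrelated one

overlap = lambda u, v: (fly_hash(u) * fly_hash(v)).sum()
print(overlap(a, b), overlap(a, c))     # similar inputs share many more active units
```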
[01:29:49] Speaker B: I guess my worry about going. I mean, that's fascinating about the fruit fly. I didn't know about that work. I guess my worry about going down the road of thinking that quality spaces are sufficient is that we end up in this realm of not really being able to know, and not really being able to test, whether, if we built that kind of system, it would be aware. It wouldn't necessarily have the ability to comment on its experience and so on. So I don't know, do you think, Hakwan, that the quality space is the more important thing, or the higher order representations are more important? This question is very vague, but actually.
[01:30:36] Speaker A: Just basically following the same playbook from David Rosenthal, my favorite philosopher to steal ideas from: I think you need both.
So David Rosenthal also has his own version of a mental quality space. The idea basically being: your higher order theory explains your awareness of your sensory signals, or your awareness of your sensory processes. So you have awareness, but the content may not be qualitative unless you also have a mental quality space. Combining the two, we avoid this kind of problem about whether you could just build a little gadget with a mental quality space with analog signals. In that sense, you would say, well, this creature is capable of potentially having qualitative experiences, but without awareness, it's not going to fly. If you have both, though, then you don't have this problem of not knowing, because you can then ask this creature to do tasks. Basically you show it pink and ask the creature to tell you what it is like, in terms of coming up with five other colors that would be similar and five other colors that would not be so similar, or to do novelty detection. So these will ultimately be cognitively testable tasks.
[01:31:51] Speaker C: Guys, as you know, I have about a thousand other questions, so I see that we're coming up on time, so I want to kind of bring it back out. Recently I've talked about the difference between academia and industry and the need for academics to secure their legacy.
[01:32:06] Speaker D: Right.
[01:32:07] Speaker C: And I'm wondering what you guys think: are consciousness researchers more or less concerned with their legacy relative to other, let's say, neuroscience and psychology researchers studying more, quote, unquote, tangible things? My guess is they're less concerned with their legacy. What do you think?
[01:32:27] Speaker A: I actually think it might be the opposite. Yeah.
[01:32:30] Speaker B: Yeah. I think that maybe historically they've been more concerned, but this comes back to this problem about people only feeling like they can turn to it later in life. So I feel like in a way the causality is reversed. People who are concerned with legacy maybe then think, okay, let's try and tackle this problem, and if I solve it, then my position in history is secure.
Whereas I think now that, like I was saying earlier, there are more younger people getting into it and making it their field of study and realizing all the problems that come along with it, they're sufficiently humbled by that to not be any different from any other branch of cognitive neuroscience.
[01:33:12] Speaker A: Yeah, I think the same. I think the concern about legacy is there, it's on some people's minds, but I think it creates an interesting bifurcation. So people are either too shy to get into it, or, if they are brave enough to get into it, they probably think that their legacy is already kind of guaranteed, or they are mostly hoping to shoot for the moon and magnify their already great legacy into something even greater. I think that is the kind of attitude that may not be so healthy for the field.
As I mentioned, Steve and I agree on a lot of things, and maybe that's one thing I like about working with Steve: his more positive outlook often complements my darker, pessimistic outlook on things.
I feel, yes, the field has definitely come a long way.
I think the cognitive approach has been attractive to a lot of people, and we seem to be doing work that is deemed useful even by people outside of our immediate discipline. But I'm sort of an activist. I always think there is an intrinsic vicious cycle there, because of this taboo issue and this legacy issue, and if we are not careful, we will very easily fall back to what we historically were.
And I think now and then there is optimism. So in the 90s, I think something good happened. Like I said, Francis Crick helped to rejuvenate the field and created a lot of media attention. But he's no longer around with us, and I think now some of the problems have kind of come back. I mean, the media attention that was generated in the 90s has in some ways now become a bit of a liability, because of the issues I talked about. People just want to compete for media attention rather than peer respect, because peer respect does not matter as much. When your media attention is so huge, you don't really care what your critics think about your theories anymore. And I think that happens sometimes in the field. So I think. I mean, I don't want to end on such a sour note.
I think things are still going overall pretty okay. Now and then there are problems creeping up. I think if we just collectively keep up the attention and make sure that we are going in the right direction, we should be fine. But I think this is in some ways really community work. It's not like any one person can fix this like Francis did.
[01:35:42] Speaker C: I paused in case you had a comment on that, Steve. So don't worry, Hakwan, we're not going to end on pessimism here.
[01:35:50] Speaker A: That's right.
[01:35:52] Speaker B: I mean, I guess I had one. I think people coming into the field are going to be key for this. I think Hakwan, rightly so, reminds us as a community that we shouldn't be complacent, that we do need to cultivate and mentor people who are interested in this area, for them to gain the right kind of skill set to come into it and be ambassadors for rigorous consciousness science. And one thing I sometimes see people getting a misconception about is the idea that, because it's the science of consciousness, you can afford to kind of go down the rabbit hole of the more speculative end of the papers and wake up every morning and rehash the mystery of subjectivity. I think we're going to need people to come into the field who have the training and the skill set in psychophysics, in maths, in computational modeling, and so on: just the bread and butter of doing good cognitive computational neuroscience. So I think if we have those kinds of people coming in, and we show that we want to do that kind of work, then I think we're going to be okay.
[01:37:14] Speaker C: I mean, you guys have talked about how there's a community among consciousness researchers and that it's kind of like a family, but like any other science, there's a broad diversity of staunch opinions within that family, like any family, I suppose. And so I'm kind of curious how you view it: whether people in the consciousness family are more prone to dig their heels in and cover their ears and say, I don't hear you, when someone's talking about their favorite theory of consciousness, and just stick with their own theory and to hell with the rest of them. Or is there more of an openness within the consciousness family toward making these different theories compatible, or considering the actual merits of other theories? I'm just asking about the consciousness family of science relative to other neuroscientific endeavors, where, again, all opinions are pretty staunch.
[01:38:14] Speaker A: I think my experience of this may be kind of shaped by my subjective experience.
And I like to think that the field has one unique characteristic that is actually at once both similar to what you described and also an antidote to it. I think it's because we always have philosophers among us, and some very good philosophers who actually know their empirical literature, like Ned Block. And I think that creates a culture where, almost like philosophers would, we believe in really arguing hard with your opponents. So in some ways, we dig in our heels. We argue, but we always engage. And to the extent that we engage and we listen to our critics, we try to come up with better arguments.
I remember as a trainee going to the same conference, ASSC, that we all go to every year. When I came back home, I would get this very juvenile feeling of, next year I'll have a better reply to that argument. And then you go back and do the work, and the next year you see each other again, and in fact you present your new evidence and say, last year you asked me this, this year I have a reply. And I think that culture in some ways makes things sometimes look sectarian from the outside, but actually it's extremely healthy, and I think if we keep that, we should be fine. I'm optimistic about that.
[01:39:31] Speaker B: Yeah, I think the debate is generally healthy.
I think that because it's a young field, there is going to be a natural proliferation of theories, ideas, models. And I get the sense that that kind of ship is turning slightly, in the sense that there's more focus now on identifying differences between models and comparing them. So the Templeton Foundation is running these adversarial collaborations, where that's being done at a large scale. And I think that's a great initiative, because it really allows people to hopefully start identifying commonalities and differences in a more rigorous way. So yeah, I think we've got to remember that this is a pretty young field in terms of actual empirical science being done on a relatively broad scale.
[01:40:32] Speaker C: Well, you guys may be in a family. I could have been in your family. Instead I decided to be homeless. So I get to.
[01:40:39] Speaker B: You are in our family, Paul, come on.
[01:40:42] Speaker C: That's what I was looking for.
Oh, you lie. You consciousness researchers lie.
From my homeless vantage point, it looks like a very warm and fuzzy family. So just to end, Steve, let's start with you. What do you guys. I just want to know if you're working on anything in particular that is going to come out pretty soon that I should be looking for.
[01:41:03] Speaker B: Yeah, I mean recently we felt, I think, pretty energized in our group because we've started to identify ways of testing the higher order state space ideas.
Unfortunately, those imaging projects that we had going have been put on hold because of COVID. But I'm hoping that data collection on that is going to be restarting over the summer, and we should have data on this towards the end of the year. And really this is focusing on the idea of the telltale signatures of a low dimensional abstract code, a kind of magnitude code, that tracks awareness in a way that generalizes over different types of content. We hope to see those signals in prefrontal and parietal cortex. And if we can start identifying those signals, I think that will really provide a rich test bed for these models.
[01:42:02] Speaker C: So you'll be doing some contrastive fMRI analyses on it?
[01:42:06] Speaker B: Yeah, well, more really looking at representational similarity. So really asking: does the neural pattern that tracks presence versus absence seem to generalize over different types of content when we manipulate priors? A lot of this we're doing within a predictive coding framework, so we're giving people priors, giving them cues about whether they should expect to see something or not. And then, orthogonally to that, we also give them cues about what type of content they should expect. So one type of cue might tell you, okay, on this trial you're not likely to see anything at all, but if you do, it will be a house, for instance. So we can orthogonally manipulate priors on awareness and priors on content, and then we can start to look at where prediction errors track these different levels of the model.
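A sketch of the cross-generalization logic described here: train a decoder to read out presence versus absence from activity patterns evoked by one content type, then test it on another. The simulated data, voxel counts, and classifier choice are assumptions for illustration only, not the actual experimental pipeline.

```python
# Does a "presence vs. absence" code learned on one content type (houses) transfer to
# another (faces)? Simulated voxel patterns with a shared presence axis; all numbers
# and names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_vox, n_trials = 200, 100
presence_axis = rng.normal(size=n_vox)   # shared "presence" direction
house_axis = rng.normal(size=n_vox)      # content-specific pattern for houses
face_axis = rng.normal(size=n_vox)       # content-specific pattern for faces

def simulate(content_axis):
    present = rng.integers(0, 2, n_trials)
    X = (np.outer(present, presence_axis)      # presence signal, shared across content
         + np.outer(present, content_axis)     # content signal, only when present
         + rng.normal(scale=2.0, size=(n_trials, n_vox)))
    return X, present

X_house, y_house = simulate(house_axis)
X_face, y_face = simulate(face_axis)

clf = LogisticRegression(max_iter=1000).fit(X_house, y_house)
print("within-content:", clf.score(X_house, y_house))
print("cross-content: ", clf.score(X_face, y_face))  # above chance if the presence code generalizes
```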
[01:42:57] Speaker C: Cool. Hakwon, I know you have a book in the works as well. You told me offline, but that's not going to be out for a little while. What can we expect from you?
[01:43:06] Speaker A: Yeah, so for most of my colleagues, I think over the past few years one of my better known pieces of work is the neurofeedback work that we do in fMRI, using machine learning methods combined with online closed loop fMRI. Basically we created a way to non-consciously reduce excessive emotional responses, affective responses. So we've been using it to, for instance, reduce excessive physiological responses for, let's say, spider phobia. Before the pandemic we started this project and collaborated with clinical folks, and we actually started to run clinical trials. That to me was very exciting, because I feel it relates to the issues we talked about too: for the field to gain more legitimacy, ultimately making useful clinical applications is the way to go. And I thought I would just focus on that. We have been somewhat stuck because of COVID, and so during the pandemic I worked on sharpening the methods, and I thought, okay, my career would just become honing in on that and making it work. But during the pandemic I also wrote my book, and as I said, the pandemic has done weird things to a lot of us. And so after writing the book, I feel more ready to think about the theoretical issues again. So the perceptual reality monitoring theory has been extended, and we are now more ready to talk about the more kind of raw, touchy feely subjective experiences, and we have theoretical models. So I'm hoping to start a new line of basic research again to test that. So for people looking for postdoc positions, I hope you will stay tuned and watch for announcements. I should be hiring soon to do this kind of new work as well.
[01:44:57] Speaker D: Very good.
[01:44:58] Speaker C: Well, guys, this has been a real treat for me. The cycle continues for you, and this is episode number 99 here. I'm really glad that I got to spend it with you guys, and maybe I'll have a new 100 episode cycle coming up here. Thanks for spending the time with me, and for going so long as well.
[01:45:15] Speaker A: Yeah, thank you so much.
[01:45:17] Speaker B: Yeah, thanks so much Paul. This was really fun.
[01:45:33] Speaker D: Brain Inspired is a production of me and you. I don't do advertisements. You can support the show through Patreon for a trifling amount and get access to the full versions of all the episodes, plus bonus episodes that focus more on the cultural side but still have science. Go to braininspired.co and find the red Patreon button there. To get in touch with me,
[01:45:55] Speaker C: email paul@braininspired.co.
[01:45:58] Speaker D: The music you hear is by The New Year. Find them online. Thank you for your support. See you next time.