[00:00:01] Speaker A: I think people thought I was crazy when I was doing this because, like, when you think of brain model, you think it has to have like, sort of like a lot of complexity to it. This is really like a simple cognitive model.
We used to sit around in graduate school, like in the late 90s and with professors and students and talk about the big picture. Everybody thought it would be 200 years before we had any models that could like, do object recognition.
[00:00:33] Speaker B: This is Brain inspired.
Hey everyone, I'm Paul Middlebrooks, and today I speak with Bradley Love. Brad's a professor at University College London and a fellow at the Alan Turing Institute. You'll hear he has a lot of interests. One of the things he's worked on for years is a model to understand how we learn and store concepts. You may remember I recently had Rodrigo Quian Quiroga on, who discovered concept cells in the human hippocampus, which respond when someone is presented with a specific concept, no matter how that concept is presented. So if that concept is Bradley Love, it could be a picture of him taken from any angle, it could be his written name, and so on. And after today, after you learn about the SUSTAIN model that captures how concepts are learned and that maps onto particular brain regions encoding these concepts, your concept neuron for Brad will fire when you think of the SUSTAIN model. How meta can you get?
So we talk about concepts and his now decades old model that keeps chugging along. We talk about his cognitive modeling approach to understanding brains and intelligence. And we talk about some of his recent work integrating deep learning models into the study of concepts, plus a few of his other deep learning projects. It's a long episode, but it actually was even longer. This is the first episode for which I saved a segment just for brain Inspired supporters on Patreon, which I'll start doing more often and posting some extra bonus episodes for Patreon supporters as well. So if it sounds like this episode cuts off a bit abruptly at the end, that's because it's missing the last half hour or so where we discuss Brad's outlook on AI and some of his wisdom and advice from his own experiences. Eric, Johan, Bjong, Roslun, Howard and Dennis or Denis. Thank you for your recent support on Patreon and you can expect that bonus episode very soon. Find show
notes at braininspired.co, where you can also learn how to support the show for super cheap.
This was a fun episode for me and I hope that you enjoy Brad Love.
Brad, I have inspired you to actually connect your analog guitar and microphone system board to your digital Mac computer, finally. So you're welcome.
[00:03:10] Speaker A: Thank you. You're the only one that apparently could get me to do this after sitting around a year, so I'm indebted to you before the show even starts.
[00:03:17] Speaker B: What's. What's. What's our. What's our band name going to be?
[00:03:22] Speaker A: Brainiacs? Already taken. So... I don't know. Brainiac Jr.?
[00:03:29] Speaker B: I think Joseph LeDoux has a band, I think, called the Amygdaloids, maybe. I don't know if you've heard them.
[00:03:35] Speaker A: Yeah, I've heard of them for sure. I've probably heard them too, a couple of times. I had the pleasure of making a guest appearance in another cognitive neuroscience band, Pavlov's Dog.
Oh, man, that's awesome. I'm definitely not a musical person, but, yeah, years ago I enjoyed playing in, like, a scrappy lo-fi band as a kind of hobby, but that fell by the wayside. But maybe in lockdown, since you've inspired me to hook this up, I'll dust off the guitar.
[00:04:05] Speaker B: But you are sitting there with a Fender T-shirt on. I can see a Fender guitar, or at least an electric guitar, in the background. So I'm ready, man. I'm ready to go when we're done here.
[00:04:17] Speaker A: All right. Yeah, I better keep my day job.
[00:04:20] Speaker B: Yeah, well, I don't have a day job, so I need something.
So, Brad, anyway, welcome to the show. It's good to have you on. And thanks for being here.
[00:04:29] Speaker A: Oh, thanks so much for having me.
[00:04:31] Speaker B: There's a lot for us to talk about, and I thought we might just start with your kind of philosophical approach to neuroscience and cognitive science. I call everything neuroscience, by the way, so hopefully you can get used to that. Your philosophical approach, and tying it into the whole idea of Marr's levels and approaching cognition and neuroscience through different levels. So I don't know, how would you describe your philosophical approach to studying intelligence? I don't know if I can ask a broader question than that.
[00:05:04] Speaker A: Oh, no, sure, sure. I mean, maybe I'll jump into the neuroscience approach first, but, yeah, I've always found working with computational models very helpful just for organizing my ideas. And, you know, they can also surprise you with what they actually predict, which is really what your theory predicts. So I've found that formalization process helpful. In particular, I've found it helpful to work with relatively simple models, what I refer to as cognitive models, that would be at Marr's algorithmic level. So, you know, really briefly, and I'm sure your listeners know, but the computational level is the top level; that's sort of the what, like what's the input-output function? The algorithmic level is the how: what steps do you go through, what processes and representations do you use? And of course below that is the implementational level, which is like the where. But I've found working with pretty simple cognitive models helpful for distilling the basic principles I'm interested in, and I've found them to be a really useful lens on brain data. With really good collaborators like Tyler Davis, Ali Preston, Mike Mack and others, we've done work using these models to link them to BOLD response in fMRI. That makes things a little bit more theoretical, because you take this more top-down view on the data, and it can be really useful for capturing processes that you think are happening through time, within a trial or across learning trials. So it's sort of a middle-out approach, I guess, would be the philosophy. It's not really starting from the top or the bottom, but somewhere in the middle as a bridge between the higher and lower levels.
[00:06:48] Speaker B: So the idea that there hasn't been enough theory in neuroscience, I don't know, I've been hearing that a lot maybe over the past 10 years. Who knows really when it started becoming more and more vocal. But I feel like, just as an observer, I see a ton of focus on and celebration of the higher of Marr's levels, so either the computational level or the algorithmic level. Especially these days it seems like the algorithmic level is the right way to approach this, or at least it's celebrated as such. I mean, do you think that the algorithmic level has gained too much celebratory acceptance? From my perspective, it's like the recent 10 years have been like, oh, poor brains. They often seem completely left out of this whole endeavor.
[00:07:41] Speaker A: I mean, I'm not sure. If you just look not so many years ago, like in the 2000s, it was quite the opposite, right? In terms of more cognitive science, rational Bayesian theories that are at the computational level were all the rage, and at that level there actually isn't any notion of mental process or representation. And if you look in cognitive neuroscience, there's interesting work, but it's largely, like, fMRI blobology, where you're just doing these contrasts that aren't linked to any cognitive model at the algorithmic level. So I mean, if there's been this rise in interest in the algorithmic level,
it's fairly recent, because definitely back then most of neuroscience wasn't model-based and most of computational modeling was Bayesian-rational. But it makes sense, to bring these things together — a lot of people would say you want to have multilevel theorizing — that people would get interested in the how of cognition to link things together. So I don't think the brain's being left behind.
I'm certainly interested in using constraints from neural measures to inform the cognitive models, but the cognitive models really let you see things in the brain data that you couldn't otherwise reveal. So almost in every study we've published, if we just do a standard contrast or something, we don't really see much that's interesting. It's really, the models are helping to reveal these things that aren't super obvious because most of what we're interested in are theoretical things like even error correction, updating. These are theoretical things you need a model to characterize or recognition strength or familiarity.
They're changed through time, both within a trial and across trials, if you're looking at learning studies. So I think the models are really allied with the neuroscience.
[00:09:43] Speaker B: So you have this SUSTAIN model that we'll eventually talk about here, and you've been working on applying this model to brain data as well, sort of mapping its processes onto hippocampus and prefrontal cortex. And, you know, we won't go through the whole thing right here. But so you have used your algorithmic-level approach to map onto the implementational level as well. I mean, this is something you've been doing for a long time. You mentioned the Bayesian approach, and you've actually in the past been critical of the Bayesian brain idea, like you were just saying. So the Bayesian brain idea is that brain circuits represent and process information as probability distributions, right? And it's often used as a normative approach without enough connection to actual brain processing — that's a rough and dirty summary of your Behavioral and Brain Sciences article. But that was a little while ago. Do you think that the Bayesian approach has come down from the strict computational level, and has it gotten better since you suggested it wasn't connected enough with the algorithmic level?
[00:10:59] Speaker A: You know, I think, yeah, I don't want to pat myself on the back, but I think so, a bit. I think it's gotten more interesting. I think we were a little bit misconstrued, though. We were never anti-Bayesian, and both Matt and I publish papers that take Bayesian approaches. And sort of the SUSTAIN successor that's under perpetual review with Kurt Braunlich has the same basic principles as the original model, but in a more Bayesian formalization.
I guess what we're really critical of is just leaving behind the other levels of analysis. Because really, if you stick to the computational level, you're leaving behind so many constraints, like everything about the how of cognition and the neuroscience — just loads of other endeavors as well. And there's nothing wrong with, or even special about, Bayes' rule. You could have an algorithmic model which is Bayes' rule; you could say that's actually the process people use, that's actually the representation people use. So even those models don't have to stick strictly to the computational level.
Yes, that's really what we're critical of. So there's not really any strong claim that the brain doesn't keep track of uncertainty or probability distributions or anything like that. It was really an appeal to kind of leave this sort of rational perch and get down and dirty a little bit with psychology and neuroscience.
[00:12:37] Speaker B: Down and dirty with psychology and neuroscience. You might have just written the title for this episode. We'll see.
So tied into this idea of levels. There's Marr's famous levels that we've just been talking about, but a maybe complementary approach, I want to say, is the mechanistic account. This has most recently probably been written about by Carl Craver and William Bechtel, and the idea is basically to approach neuroscience and cognitive science using a mechanistic approach.
Just to summarize it, a mechanism consists of parts, those parts' operations, and their organization, together with the constraints that the environment imposes and what kind of environment it's in. But the idea is that when you break a process down into its mechanism, the mechanism can actually bridge Marr's levels hierarchically. You can use a mechanism to look at the level above — the computational level, for instance — and at the level below. I mean, do you see merit in the mechanistic approach? And do you think it's complementary with Marr's approach?
[00:14:01] Speaker A: Yeah, no. Interesting question. It seems like the levels-of-mechanism approach is well suited to neuroscience. If you think of something, like you said, like a car or something: it has components like an engine and brakes and a radio, and all those components, of course, can be decomposed themselves, right? The engine has a lot of components. But I think what's nice about it, maybe for scientists, is that just because you're talking about a car at the level of the car, something higher level, it's not like you say, oh, that's not real, because you're describing the engine just in terms of torque and horsepower, not going into all its components. So I think taking that sort of language might do away with some conceptual confusions in science. And it kind of moves people away from laws and thinking about systems. In terms of whether it's complementary to Marr's levels, I think one thing it kind of leaves out is the computational level, because the computational level isn't really concerned with mechanism. It's concerned more with the problem description: what's the input-output mapping, literally, what's the function you're trying to solve? So if we were doing sorting, it would just be a description of the problem — sort the numbers in order — whereas the algorithmic level would be: what's the algorithm, what series of steps do you go through to sort the numbers? So I don't really know if that's present in the levels of mechanism so much.
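[Editor's note: to make the sorting example concrete, here is a minimal Python sketch (the editor's illustration, not code from the speakers). The computational level only specifies the input-output mapping; the algorithmic level commits to one particular procedure and representation that realizes it.]

```python
# Computational level: only the input-output mapping is specified --
# "given a list of numbers, return them in non-decreasing order."
def satisfies_spec(inputs, outputs):
    return sorted(inputs) == list(outputs)

# Algorithmic level: one particular procedure (insertion sort) that
# realizes that mapping, with explicit steps and representations.
def insertion_sort(xs):
    xs = list(xs)
    for i in range(1, len(xs)):
        key = xs[i]
        j = i - 1
        while j >= 0 and xs[j] > key:
            xs[j + 1] = xs[j]  # shift larger items to the right
            j -= 1
        xs[j + 1] = key
    return xs

# Merge sort, quicksort, etc. satisfy the same computational-level
# description; the implementational level would then ask how the
# chosen algorithm is physically realized (silicon, neurons, ...).
assert satisfies_spec([3, 1, 2], insertion_sort([3, 1, 2]))
```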
[00:15:31] Speaker B: The way I've read it described is that it is actually kept separate from sort of the computational or functional — in the levels-of-mechanism nomenclature, it would be called the function, right? And function and mechanism actually mutually inform each other. So if you want to equate function with the computational level, there is a distinction in the literature of the mechanistic-levels approach between function and mechanism, but they mutually inform each other.
[00:16:02] Speaker A: Oh, yeah, no, I could see that. But it seems like — maybe I'm just being too strict, and of course I should go through this more carefully — but for a computational-level account, you really have to characterize the full range of possible inputs, and I'm not sure people working with mechanisms think about that. This isn't really at all a criticism of it. I mean, there's almost this sort of teleological thing to the computational level, like, what's the purpose? And I kind of like taking that perspective sometimes, stepping back. But this level could be like a creature of the human mind. I think it can be a useful perspective to take from time to time. But yeah, what I like about the levels of mechanism too, that you mentioned, is that you don't really have these sort of monolithic levels.
There's just no end in sight, so you realize there is no bottom to it. So I think it could reduce some chauvinism about levels. And it makes clearer, I think, what people are trying to explain, in terms of how much they unpack the mechanisms into their constituent parts.
[00:17:16] Speaker B: Yeah, one way that it's been put to me — and this was from John Krakauer, I think a long time ago now, and I don't remember if it was actually on the podcast or whether I met with him and had a beer; it might have just been over a beer — but he didn't say specifically this with respect to levels of mechanism. We were talking about Marr's levels, which he's essentially a proponent of, and he was suggesting that a lot of people make the mistake of applying Marr's levels to, like, size in the brain. Right? So implementation is like neurons, whereas the algorithmic level we need to think of as bigger circuits, and then the computational level is like the functional brain. So it maps onto these, like, physically larger things.
And his point to me was that you can apply Mars levels at every single different level of physical size. So in that sense, with the levels of mechanism approach, the one way it could be complementary is at every level of mechanism, you can still apply the three sorts of Mars levels of analysis approaches at each level. So I'm not sure if that makes complete sense.
[00:18:26] Speaker A: Yeah, I mean, Marr's levels, I find that they're still helpful in many ways for understanding one's contribution. But yeah, it makes sense to try to slice them more, or apply them in particular contexts like you're suggesting, because you can't just lump all of neuroscience into one level. And I mean, Marr's levels are really sort of borrowed from abstraction hierarchies in computer science, where the top level would be like the application and the bottom level would be below circuits, down to physics pretty much. And there you have so many levels you could traverse along the way, way more than three, and you could see how they're all related and how there's more richness and detail as you descend. So it makes sense — I think it's just good guidance for thinking about what your intellectual contribution is — and it makes sense that that's not where you would want to stop, like you're suggesting.
[00:19:30] Speaker B: Think both reduction and emergence are misguided as concepts. So what are your views on emergence? And reduction, if you had to sort.
[00:19:41] Speaker A: It's funny you ask. I kind of went on my own little personal intellectual odyssey writing a recent paper — I'm just putting the final touches on it before it hits print — and a lot of what I believed before writing the paper changed. Because I think a lot of times you hear, like, oh, emergence, that's cool.
It's like the anthill. It just happens. And it's really. That's how things work. It just emerges.
Things that don't just emerge aren't really interesting mechanisms or models. Or reduction — oh, that's really bad; you just have to be more open-minded. But if you really think about these things like emergence, as people use the word, it doesn't really make a lot of sense, and a lot of what people describe as emergent just really isn't. So if you take things like cellular automata — like Conway's Game of Life — from the initial conditions it can generate all these fascinating patterns. But there are really just simple local rules being applied that are generating them, and it's obviously being simulated on a computer. And likewise, when you think of flocking behavior with birds, again, each little agent has some simple rules of behavior that apply locally. So when we say it emerges, we're really just saying we're not that smart and we can't simulate the whole thing in our head and appreciate it. I mean, it's sort of like saying if I bump my mug of water and it falls on the floor, all the broken glass emerged from me hitting it. It's like, no, I can appreciate this. And if we took a space alien that was way smarter, it'd be like nothing emerged in Conway's Game of Life or the flocking behavior. Basically, if you can simulate it on a computer, you're breaking it down to the components that are generating whatever the higher-level observed behavior is. So the higher-level behavior is completely reducible to smaller elements interacting. Yeah, we can use that in common language, but I think it's a little slippery and it can lead to conceptual confusions.
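[Editor's note: as a concrete illustration of the point that these "emergent" patterns follow entirely from simple local rules, here is a minimal Conway's Game of Life step in Python (editor's sketch, not the speakers' code). Every pattern it produces, including the moving "glider," is fully generated by the handful of lines below.]

```python
from collections import Counter

def life_step(live_cells):
    """One Game of Life step; live_cells is a set of (row, col) pairs."""
    # Count live neighbours of every cell adjacent to a live cell.
    neighbour_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live_cells
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Local rules: live cells with 2-3 neighbours survive;
    # dead cells with exactly 3 neighbours become live.
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A glider arises from nothing but repeated application of the local rule:
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))  # the same shape, shifted one cell diagonally
```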
I had the pleasure as an undergraduate of briefly studying under Jaegwon Kim, the philosopher of mind who introduced this really useful notion of supervenience. There are many varieties of it and they're all formalized really well, but the basic idea is, if something supervenes on something else, you can't have a change in the higher-level thing without some change in the lower-level things. So you couldn't have a change in mental state without some change in physical state. And that's pretty commonsensical, unless you believe in ghosts or something. But when you start talking about emergence, you're going to have mental states causing mental states and then the physical states aren't causing them; or you have overdetermination, you have two causes; or you get downward causation — all these really weird things. If you actually work through it and think through it, or read literatures I'm not expert in, in philosophy, it just leads to a little bit of incoherence, or at least debate. Whereas if we just stick with: it's practically emergent, not really emergent — it's just a limit of human cognition and our ability to analyze data, or the quality of the data, as opposed to things not being reducible. By the same token, reduction is sort of this strange bias scientists have. I think scientists confuse the fact that things probably can reduce all the way down to particles — that everything is at some level physics — with the idea that that's practically possible to do, or even desirable. Because it's not like right now we're saying, oh, how do we help small businesses in this economic crisis? Let's talk to a particle physicist. You would never do that.
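[Editor's note: for readers who want the formal version, one standard indiscernibility formulation of supervenience (an editor's gloss; Kim distinguishes weak, strong, and global variants) is:

$$\Box\,\forall x\,\forall y\,\Big[\big(\forall G \in B:\ Gx \leftrightarrow Gy\big)\ \rightarrow\ \big(\forall F \in A:\ Fx \leftrightarrow Fy\big)\Big]$$

That is, necessarily, any two things alike in all their B-properties (say, physical properties) are alike in all their A-properties (say, mental properties) — no higher-level difference without some lower-level difference.]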
[00:23:37] Speaker B: Well, I think a particle physicist would beg to differ because they believe they have the solution to everything, right?
[00:23:43] Speaker A: Oh yeah, of course, of course. I guess there's an Ising model for it or something. But I guess it's the wrong example, because maybe they could actually help, but not by modeling it as quarks and stuff. But yeah, I think people confuse that. And I mean, there's a reason neuroscience itself is a higher-level discipline, even the lowest-level parts of it.
And it's not going away, and economics isn't going away. All these fields aren't going away, because they're useful and practically we can't reduce them. It doesn't mean in principle it's not possible, or that some superintelligent agent couldn't reduce it down, but practically speaking it's just not reducible. And most of neuroscience isn't formal or understood enough to reduce anyway. I just wish instead we would focus on what people are trying to explain, the relevant data, what the relevant findings are, and how good a job they do actually explaining that data. Then a lot of these debates, even discussion of levels, just kind of melts away by being a little bit more clear about those things.
[00:24:56] Speaker B: My own personal comfort level as a neuroscientist, I decided at some point, okay, I feel comfortable not worrying about membrane proteins at synapses and their kinematics. And so I don't need to understand it at that level. Even though someone who studies that could say, well, you know, we wouldn't have thoughts if we didn't have membrane proteins at synapses. You know, it's like that's my favorite thing of any sort of explanatory go to. It's like in any sort of documentary. It's, you know, it's like water. We would not exist without water, you know, and it's always like, oh my God, it must be the most important thing ever, water. And it's very important. It's true, you know, so particle physics, we would not, you know, of course you can explain it. You can reduce anything to a certain level. But your view. So then on. Emergence is basically, emergence is our way of describing things just at the level that we can understand things and describe things to make sense to us as humans, because we cannot simulate every little minute detail. So we have to be comfortable with a certain level of description. And that's sort of what emergence is.
[00:26:03] Speaker A: Yeah, it could be. So basically there's sort of this weak emergence, or practical emergence, or maybe what a philosopher would call epistemic emergence or something, where it's exactly what you said. It's not saying things don't reduce down to something lower level in principle; it's just saying that practically we can't do it, because we're not smart enough or we don't have the right data or the right tools. So I think people should be careful, because a lot of times it sounds like they're making a stronger claim about the world or the brain, as opposed to a claim about where we are in our science.
[00:26:44] Speaker B: Okay, so one more thing before we move on here to concepts and switch gears. You don't like the term biological plausibility?
I'm not just trying to, you know, bring out all your qualms here.
But you don't like that term conceptually. What would you replace it with? Why don't you like it? And what would you replace it with?
[00:27:08] Speaker A: Boy, I really don't like a lot of things. It's like.
[00:27:10] Speaker B: No, we could talk about lots of things that you like.
[00:27:14] Speaker A: Yeah, no, I dislike more things than I like. You've picked up on that?
No, yeah. So this recent paper I mentioned, "Levels of Biological Plausibility," basically states the reasons why I think it's a completely vacuous, misleading and incoherent concept, I guess. What are we really studying? Stepping back, we're trying to determine what is biologically plausible, so just asserting it is almost like assuming the answer. But I think worse than that, it's really something in the eye of the beholder. And again, like what I was emphasizing before, one really has to be clear about the findings and data they're trying to explain. So I give this example in the paper. Imagine you have this one model, a deep convolutional network, and it explains activity along the ventral stream really effectively. They're like, hey, we're biologically plausible, we explain ventral stream activity. And these other people are like, no, our model's biologically plausible because we don't use backpropagation and our learning rule is consistent with pyramidal cell activity. And it's like, okay, so are they both biologically plausible? Is neither?
They're both justifying their publication in terms of biological plausibility. What's really going on is that they basically care about different data sources or findings, and they're explaining the data they care about, which is fine. So why don't we just be a little bit more forthcoming about that and just say we're doing model selection?
Model selection is very top down. You select the findings, or even the data, you care about, and then you can actually select the best model for that data. So basically the two groups in my example are doing different model selections on different data sets, different aspects of neuroscience. And so what is biological plausibility? It's nothing more than explaining the data you care about the best. It does no work beyond that, so it's just really misleading, and it's been abused through time. I mean, I kind of resonate with the idea of making things that are biologically plausible — who wouldn't? It sounds good. But then I kind of realized, reading papers — a lot of papers, not all, some are great — that some are just really junk, and they're published as "this paper is great because it's biologically plausible and it's better than that model." It's not. I'm like, why?
What is biologically plausible about your model? It's sort of face validity, almost. You know, in the 80s it'd be like: connectionist models were biologically plausible because they had a lot of little things in them and the brain has a lot of little things in it, whereas production systems weren't, because you can't open up the brain and see a lot of rules or something.
But then subsequently, what do John Anderson and his colleagues do? They apply the ACT-R production system to explaining BOLD response during mental arithmetic. So now is that model still not biologically plausible? So I think it's sort of contrary to multilevel theorizing. Just specify the relevant data sources that you're evaluating yourself against, do some model selection, and let's leave it at that. It's just not a helpful label.
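[Editor's note: a minimal sketch of what "just do model selection" can look like in practice. This is the editor's illustration with hypothetical names — `crossval_score`, the `fit`/`predict` interface, and the 1-D `neural_data` vector are all assumptions, not Brad's analysis code. The point is that you name the data you care about and report which model explains it best, rather than invoking a plausibility label.]

```python
import numpy as np

def crossval_score(model, stimuli, neural_data, n_folds=5):
    """Mean held-out correlation between model predictions and the data
    one has chosen to care about (e.g., a 1-D vector of ROI responses)."""
    folds = np.array_split(np.arange(len(stimuli)), n_folds)
    scores = []
    for held_out in folds:
        train = np.setdiff1d(np.arange(len(stimuli)), held_out)
        model.fit(stimuli[train], neural_data[train])      # assumed interface
        pred = model.predict(stimuli[held_out])
        scores.append(np.corrcoef(pred, neural_data[held_out])[0, 1])
    return float(np.mean(scores))

def select_model(candidate_models, stimuli, neural_data):
    # "Biological plausibility" reduces to: which model best explains the
    # data sources you selected? Two labs selecting different benchmarks
    # (ventral-stream responses vs. learning-rule constraints) can each
    # win on their own -- so report the benchmark, not the label.
    return max(candidate_models,
               key=lambda m: crossval_score(m, stimuli, neural_data))
```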
[00:30:28] Speaker B: I don't think it's a badge that people wear to elevate, I guess, their research perhaps. But, you know, it's interesting that you use back propagation as an example, and maybe this is where — so there's this big back and forth now about back propagation specifically, of whether it's biologically plausible. And maybe you've hit on the point. On the one hand, the idea of supervised learning and that synapses can adjust based on some feedback signal, you know, is fine, and in that sense, quote unquote, backpropagation is biologically plausible. But on the other hand, we don't have these symmetric connections between neurons. So back propagation, as it's done in connectionist networks, is physically impossible because it doesn't exist in brains, and therefore is not biologically plausible on that level. But these two things are somewhat addressing different levels, like you said.
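[Editor's note: to make the "symmetric connections" point concrete, here is a toy sketch (editor's illustration, not from the conversation) of why textbook backpropagation reuses the transpose of the forward weights to send error backwards — the "weight transport" step with no obvious biological analogue — and how a relaxation like feedback alignment (Lillicrap et al.) replaces it with a fixed random matrix.]

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(20, 10))   # input -> hidden forward weights
W2 = rng.normal(size=(1, 20))    # hidden -> output forward weights
B2 = rng.normal(size=(20, 1))    # fixed random feedback weights (feedback alignment)

def forward(x):
    h = np.tanh(W1 @ x)
    return h, W2 @ h

def backprop_hidden_error(h, output_error):
    # Textbook backprop: the backward pass reuses W2 transposed, i.e. the
    # feedback pathway must mirror the feedforward synapses exactly.
    return (W2.T @ output_error) * (1 - h ** 2)

def feedback_alignment_hidden_error(h, output_error):
    # Feedback alignment: replace W2.T with a fixed random matrix B2;
    # learning still works surprisingly well despite the asymmetry.
    return (B2 @ output_error) * (1 - h ** 2)

x = rng.normal(size=10)
h, y = forward(x)
err = y - 1.0  # error against an arbitrary target
print(backprop_hidden_error(h, err).shape,
      feedback_alignment_hidden_error(h, err).shape)  # both (20,)
```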
[00:31:26] Speaker A: Yeah, I mean, it could be different levels or even just different aspects of data at the same level. Like when you make a model or a theory of something, you don't try to explain everything about the brain or behavior. And so like, it's totally fine to have a theory of the ventral stream and just be like, well, we're not explaining why the connections aren't symmetrical, but we're explaining these aspects of the data or findings because nobody explains everything about the brain. So. Yeah, so I just think this label is unhelpful because it makes it almost like you are or you aren't. It's almost like a way to rule out approaches when no approach explains all the data anyway. So if we just do this enough, we could make everything not biologically plausible.
[00:32:12] Speaker B: Yeah, I guess it also depends on how you define back propagation in this, in this context. But, you know, there's also a lot of back and forth about what deep learning is. And there are people who practice deep learning that have a definition, and it seems like the definition changes to include more and more things and people. You know, there's a new finding and people say, well, that's deep learning. You know, there's a new finding in the brain. Oh, well, that's deep learning. Deep learning covers that. And all of a sudden, somehow the deep learning definition expands. And so in the end, deep learning wins somehow, you know, So, I don't know, there's this back and forth.
[00:32:51] Speaker A: Oh, no, definitely. I mean, this has been going on my entire life, you know. So when I was kind of getting into this, connectionism was like the sort of uber theory. But of course, connectionism isn't a theory. It's a framework for building theories and models. It's like saying, "I have a theory; it's called Python."
And then, of course, Bayesian models aren't a theory. Again, it's like a really broad framework for building models and theories. And so, I mean, deep learning is just the same. It's not really. I mean, it has some associated characteristics, but you could build very different deep learning theories of the same phenomenon. So, like, it's not really. Obviously it's not really a theory. That's not like being negative. I mean, it's amazing. And you could build amazing theories from it. It's just not a theory. It's like a framework.
[00:33:50] Speaker B: It's a tool. A framework. Yeah.
[00:33:51] Speaker A: Yeah.
[00:33:52] Speaker B: Brad, you really don't like polka music? Why? Do you not like. No, I'm just kidding. Let's switch gears and talk about something that you like.
So I just.
I just interviewed Rodrigo Quian Quiroga, who is sort of famous for discovering concept cells, and we talked a little bit about — well, a lot about — concept cells, and also about lots of other things in his recent book.
[00:34:19] Speaker A: I really dislike Friends. Just kidding. Sorry.
[00:34:22] Speaker B: Yeah, so just to sort of recap, though: concepts and concept learning have been your bread and butter for a long time now. And, like I said, you have this SUSTAIN model that models how we learn concepts, how they're formed and how they map onto brains, and lots of other things. So I'd love to just talk about concepts and concept learning. And maybe you can just define what a concept is, for people who missed it last time.
[00:34:50] Speaker A: Sure, sure. I mean, it's hard to do. So, concepts — you could tie them to generalization, which, of course, every creature that has to make decisions does. A dog generalizes; it has a concept of its owner. If its owner changes her clothes or haircut, it's still the owner. The way I think about concepts is that they're not at any particular place in the processing stream, but they have to be some kind of intermediary between perception and action, with the ability to somewhat decouple from both. What I mean is, if you think of something like a zebra: if you remove its stripes, you're changing your perception of it, but you know it's still a zebra. Just because you painted the thing, you still have some concept of the zebra that persists. Likewise if the action you have to take in regard to something changes. Like, say, yesterday the price of oil went to negative 40 dollars in the US. So it goes from something like, oh, I'm going to pull this out of the ground and sell it and get rich, to, oh God, I've got to pay someone to take it away. It really changes how you view it and deal with it, but it's still oil; you still have the same concept of oil. So a concept is sort of this nice intermediary that can link perception and action, or decouple them in some ways, as the perceptions and the relevant actions change. In terms of SUSTAIN and clustering models, which you mentioned — all these models have some intermediate representation. In that model it's clusters that collapse together related experiences, and you could think of those as the concepts, in some sense, that identify the relevant perceptual chunks and link them to responses — naming, for example. So I wouldn't confuse concepts with words: you could have many different little conceptual chunks that map onto the same name or the same action.
[00:36:52] Speaker B: Well, so concepts in general, though, are kind of an abstraction, then, like you said, from sensory perception and from the actions and responses related to those perceptions. So in that sense, I don't know, it seems like a very high cognitive human skill. Like, where would you place abstraction in the hierarchy of human abilities?
[00:37:18] Speaker A: I would place it nowhere special. I mean, again, every animal that makes decisions has to do this. Like I said, your dog has concepts; pigeons show peak-shift behavior, where you kind of generalize the pattern. What I'm saying is kind of psychology 101, but we never really run across the same exact situation or stimulus twice, so you've got to have some notion. People could probably link this up with more, like, model-based learning — I don't think it maps on exactly — but I don't think it's a really special human ability.
[00:38:00] Speaker B: So are there different — just pushing on this a little bit, because Rodrigo was pretty adamant that concepts and abstraction sort of lead, like you said, to generalization, and that our level of abstraction, maybe our level of creating concepts divorced from their perceptions and actions, and their connection with lots of different areas in our brain, lots of different functions, are what give rise to our special human experience. Right? So are there different levels of abstraction that still count as concepts? So we have maybe different levels of abstraction of concepts.
[00:38:44] Speaker A: Yeah, no, that's a great question. I mean, certainly I don't want to devalue that people have reasoning and abstraction abilities that aren't seen in other species. But I just don't really see them as — maybe we just got a couple of the levers and knobs turned exactly right to eke a bit more out, but it doesn't seem like there's really a qualitative, huge step. I mean, people argue about this forever and there are BBS papers about this. But if you look at our abstract reasoning abilities, they always have remnants of more concrete knowledge or processes in them. So, for example, if you teach people some rule, they can state the rule, but when they apply the rule it'll be affected by the similarity of the test item to the previous training items. People can do abstract reasoning like deduction, but how often they make errors is a function of how believable the conclusion is, which has nothing to do with logic. So in abstract reasoning — if this, then that — it shouldn't matter what the P's and Q's are, but it does matter for how well people reason. That's why people like Cheng and Holyoak talk about pragmatic reasoning schemas; I think that's from way back in the 80s or something.
But everything's like this. Memory retrieval is like this. So why don't we make really distant analogies? You have all these great analogies in the history of science, but in practice we always just make analogies from the domain we're working in to the same domain and one really next to it. Because memory's guided by surface features that guide retrieval.
That's usually what's relevant. All these things are like this — there's a concrete underbelly to our abstract symbol processing. It's all highly contextual. So even when you do some kind of variable binding, if you think of it that way: if someone says "I love my child," "I love chocolate," love just changed meaning based on what took that slot. I'm not saying we can't do symbol processing or abstract reasoning, but it's not actually really that abstract. There's always this underbelly, which suggests it's not completely abstract in some sense, or divorced.
Maybe we just have like, you know, we just got like, not like the secret sauce, but just like the settings, like a little bit better just to get a little bit farther than like other animals. But it doesn't seem like a night and day thing. And yeah, to me, yeah, I mean.
[00:41:19] Speaker B: This is, it's an interesting question whether adding just a little bit more of something fundamentally changes the nature of what you can do with that thing. Right. So you're talking about symbol processing and we have language, and that is a very uniquely human thing. And is there something qualitatively different about the way we process that enables us to have language? The way that I see it, I barely have language, I barely have symbol processing. I don't know how you feel about, about it, but it's not like we have opened ourselves up and can explore this eternal new space of processing. With symbol processing, for instance, to me, it's very easily conceivable that we are still, like you said, using the same sorts of processes and grounded in that same evolutionary system essentially, and not doing that much different. But it's different enough that there's a different quality.
[00:42:19] Speaker A: Yeah, no, I mean, that's the way I view it. I'm sure some language people will get very upset with me, but it's just another representational format you can use, or it's another cue to meaning. A lot of times when people invoke language, you can actually disrupt their processing. So in learning studies, you ask people what they're doing, and they convert what they're actually doing into a description and start doing their description, which is actually maladaptive. So I mean, it's not like.
I mean, yeah, language could be useful. Obviously we were using it right now to communicate. I mean, I don't know if it's correct, but like, I think a lot of what drives people, I'm sure there's like special things about our brains, but it's also like we've built these really like, complex environments for ourselves that I, I think spur our development. And we have these great ways of building on the complexity from other people have figured out and passing it along. So in some sense, right, like the convolutional networks, why do they work okay, but not perfectly now? It's because they have these richer data sets to train on. And if they had more tasks to do, people are exploring things like self supervised learning, more auxiliary tasks to do with it, more ways of interacting basically ways of just linking things together and making it richer. I think that is a lot of what drives intelligence is not the device, but the input and the interaction and the richness of it. It's like you can't really have someone become super smart that gets no interesting input or has no interesting interactions with the world. So I think we discount that we always think about the device and not sort of the whole like world it's embedded in and how complex the world is that we've made for ourselves. You know, that's all like, you know, crashing down a little right now. But you know, we build these really, really complex environments and we have really complex social interactions with each other. And it just builds and builds over generations. And so, I mean, that's not the whole story, but that has to be part of the story of why we're different.
[00:44:28] Speaker B: Okay, so just getting back to concepts, because we could talk about our unique human abilities and what is unique and what's not. We could talk about that forever.
But let's take a step back.
Like I said, I just talked with Rodrigo about concepts, and he found these concept cells in the hippocampus — Jennifer Aniston neurons, as people like to call them. We already know, as you mentioned, you do not like Friends. However, these concept cells do exist in the brain. You record the cell and it responds to invariant representations of, in this case, a person, Jennifer Aniston: no matter if you write her name or show her picture, she could be at any different orientation, et cetera, et cetera. But what we didn't talk about is how those concept cells come to respond that way — in other words, how concepts are learned. And that is something that you've been working on for a long time with your SUSTAIN model, which I already mentioned a few minutes ago. I wonder if you could just kind of take us through what the SUSTAIN model is and how it addresses how concepts are learned, how they're updated, and even how they're stored.
[00:45:42] Speaker A: Sure, yeah, it'd be my pleasure. I mean, so yeah, this work goes all the way back to the '90s.
And the idea was to build a model that could move between two extremes. On one extreme you have a prototype representation, where you really just have one node in memory that collapses all your experiences of a category together — so you'd have one node in memory for birds. The other extreme would be the exemplar model, where you store a node in memory for each experience, so everything leaves a trace and there'd actually be no abstraction in memory; instead, you do the abstraction at the time of decision by activating all these traces. Clustering models navigate those two extremes, and the key to them is: when do you collapse information together and form some kind of abstraction? The way SUSTAIN works — and it seems to be how people work, more and more over the years, including at the level of brain activity — is that it assumes simplicity. It basically assumes that experiences collapse together, or are somewhat averaged together, in memory until there's some kind of surprising prediction error that's relevant to the task you're doing. The example I always use in talks is: you're learning about birds and mammals and you see a bat for the first time, and it's small and it has wings and flies, so you call it a bird. But that is surprisingly incorrect. So now you make this new node centered on that error, and that can evolve into its own kind of bat concept or prototype.
[00:47:26] Speaker B: How does that map onto the idea of a cluster?
[00:47:29] Speaker A: Yeah, so these little nodes are the clusters. The clusters are just representations in your head that are running averages of the incoming information. The key is there are really two kinds of updates when you have a new experience. You could have an incremental update: say you have a cluster for cows in your head and you see a cow that's 2% bigger than your cow cluster; you'll just slightly adjust your expectation upwards of how big cows are — you just incrementally adjust that cow cluster. There's no real surprising error there. Whereas if you have some kind of serious prediction error, like we mentioned with realizing a bat is not a bird, then you'll make a whole new node, basically a new memory, a whole new cluster.
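[Editor's note: a stripped-down sketch of the two update types Brad describes, in the spirit of SUSTAIN but not the published model — the real model also has attention weights, cluster competition, and lateral inhibition, and the similarity rule and learning rate here are placeholders. The winning cluster is nudged toward the new item unless the item produces a surprising prediction error, in which case a new cluster is recruited, centered on that item.]

```python
import numpy as np

class MiniClusterModel:
    """Toy clustering learner: incremental updates vs. cluster recruitment."""

    def __init__(self, lr=0.2):
        self.lr = lr
        self.clusters = []  # list of (center vector, label) pairs

    def _best(self, x):
        # Best-matching cluster by distance (the real model uses
        # attention-weighted similarity and cluster competition).
        return int(np.argmin([np.linalg.norm(x - c) for c, _ in self.clusters]))

    def predict(self, x):
        return self.clusters[self._best(x)][1] if self.clusters else None

    def learn(self, x, label):
        x = np.asarray(x, dtype=float)
        if self.predict(x) == label:
            # Incremental update: no surprising error, so nudge the winning
            # cluster's running average toward this experience
            # (the "cow that's 2% bigger" case).
            i = self._best(x)
            center, _ = self.clusters[i]
            self.clusters[i] = (center + self.lr * (x - center), label)
        else:
            # Surprising prediction error ("a bat is not a bird"):
            # recruit a brand-new cluster centered on this item.
            self.clusters.append((x.copy(), label))

# features: [size, has_wings, flies, has_fur]
model = MiniClusterModel()
model.learn([0.2, 1, 1, 0], "bird")    # first item -> first cluster
model.learn([0.3, 1, 1, 0], "bird")    # similar bird -> incremental update
model.learn([0.1, 1, 1, 1], "mammal")  # bat -> error -> new cluster recruited
print(len(model.clusters))             # 2
```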
It's really interesting — it's not published yet, but it should be a preprint soon. Working with Mike Mack, we actually ran this study that was so crazy. To back up: we've been using this model for like a decade to interrogate hippocampal and medial prefrontal activity. It's been doing a really good job and we just keep pushing it farther and farther. Now this thing that we haven't published yet is so ridiculous: we really are asking, can we tag, behaviorally, from the model, when someone's doing one of these incremental updates versus forming a whole new cluster in memory — or recruiting a new cluster, if you want to think of it that way? And it's really just driven by the order of examples and really minute things. But surprisingly, there is a signature of that in the brain. And Mike Mack's done a really good job of relating it to the monosynaptic versus trisynaptic pathways that I think Anna Schapiro discussed on one of your previous podcasts — basically fast learning and slow learning.
I don't know. We've done so many other things with this model in characterizing brain response that I'm excited about. But this actually just kind of seems ridiculous: you come up with this cognitive model in the 90s and it does a good job with behavior, and then around 2005 you start thinking about, well, what does this computation relate to in terms of brain systems — thinking about patient studies and animal lesion studies and so on — and then just kind of coming up with this characterization of these relatively simple computations in the model and how they might relate to the brain, then fleshing that out with model-based fMRI. And it actually works. It's really strange.
[00:50:02] Speaker B: Yeah. Maybe we should talk about just how it does seem to map onto the brain. But I should have mentioned that the concept cells were recorded in hippocampus.
[00:50:13] Speaker A: Oh, yeah. This is grafted on the hippocampus. Yeah.
[00:50:17] Speaker B: The SUSTAIN model, your concept learning model, maps partly onto the hippocampus as well. So maybe you could spell out the rough and tumble of how the SUSTAIN model maps onto the brain.
[00:50:32] Speaker A: Yeah, I mean, first, it's really simple — so much so that I think people thought I was crazy when I was doing this, because when you think of a brain model, you think it has to have a lot of complexity to it. This is really a simple cognitive model. But anyway, this idea of a cluster — just thinking about what a cluster is: a cluster bundles together related experiences. So if you have a cluster, say, to encode some surprising event, like encountering a bat and learning it's a mammal, it's binding together in memory that it's small, it has wings, it flies, and it has fur. It has to put all these things together. And to me, that seemed a lot like the function of, say, forming an episodic memory, where you have to bind together all the elements and the context.
But of course, the hippocampus doesn't work in isolation.
Its connections with medial prefrontal cortex and other areas are crucial for orienting towards surprising events and, as we propose, encoding relevant information. So there's a notion of attending to the relevant aspects of the situation. That's, again, really simple and formalized — it's really that simple. But the cluster creation operation should be dependent on the hippocampus. We don't really get into any sort of long-term consolidation; the hippocampus is, you know, not the most anatomically stable region, and usually we end up studying things not long term, not over weeks and weeks or something. Right. But if you're looking at a concept learning study, you basically know what conceptual chunks or clusters people should learn according to these clustering models, according to SUSTAIN, and you basically see the signature of them in the hippocampus. You can even see, if you teach people multiple learning problems using the same stimuli, how the items are coded differently in the hippocampus depending on whether people are doing one task or the other task. And you can also see its functional linkage — basically it's just correlation in BOLD response — with ventromedial prefrontal cortex, particularly early in learning, when the model is supposedly figuring out what the relevant aspects of the situation are.
It looks like, for this top-down attention signal — figuring out what's relevant — the hippocampus and ventromedial prefrontal cortex are cooperating, which is a lot like the original theory. And I'd be totally happy to be wrong, and we obviously are learning a lot of new things, but it's kind of surprising this is working out. It's nice.
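[Editor's note: for listeners unfamiliar with model-based fMRI, the general recipe — sketched here by the editor with hypothetical variable names and a toy HRF, not Brad's actual analysis code — is to turn a trial-by-trial quantity from the cognitive model (e.g., surprise or cluster recruitment) into a regressor, convolve it with a hemodynamic response function, and ask where it explains BOLD variance. Assumes NumPy and SciPy.]

```python
import numpy as np
from scipy.stats import gamma

def hrf(t, peak=6.0, undershoot=16.0, ratio=1 / 6):
    """Toy double-gamma-style hemodynamic response function."""
    return gamma.pdf(t, peak) - ratio * gamma.pdf(t, undershoot)

def model_based_regressor(onsets_s, model_values, n_scans, tr=2.0, dt=0.1):
    """Place the model-derived quantity at each trial onset, convolve with
    the HRF, and resample at scan times."""
    t = np.arange(0, n_scans * tr, dt)
    stick = np.zeros_like(t)
    for onset, value in zip(onsets_s, model_values):
        stick[int(onset / dt)] += value
    predicted = np.convolve(stick, hrf(np.arange(0, 32, dt)))[: len(t)]
    return predicted[:: int(tr / dt)]

def voxelwise_betas(bold, regressor):
    """OLS beta for the model regressor in each voxel (confounds omitted);
    bold is scans x voxels."""
    X = np.column_stack([regressor, np.ones_like(regressor)])
    betas, *_ = np.linalg.lstsq(X, bold, rcond=None)
    return betas[0]
```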
[00:53:21] Speaker B: The SUSTAIN model — has it changed at all over the years, or is it really still in its original form?
[00:53:29] Speaker A: Yeah, I mean, it hasn't changed at all. The model itself, again, is ridiculously simple, but it just keeps working. We could even account for individual differences in the representations and the attention weights of the model and relate them to decoding information in the BOLD response.
So it's kind of like crossing individual differences in behavior and brain response. But yeah, the models, I mean all models are really incomplete. And I think I mentioned we have this newer model, it takes the same principles, namely assuming simplicity and then doing this surprise based coding.
But it's more in a Bayesian framework. It's not really like your typical Bayesian model, though, because what it's trying to learn affects what it samples and what sort of clusters, what knowledge structures, it builds. That's really a critical aspect of SUSTAIN that at the time distinguished it from other clustering models like John Anderson's model, and it kind of got lost, but really a lot of the emphasis of that original Psych Review paper was that what you're trying to predict, what the learner cares about, is going to shape what's acquired. Basically, what discrimination one's trying to make is going to determine the internal model they build. So it's not doing some kind of generic model building based on all the information available; it's very much tailored to the task at hand, which goes back a little bit to our previous conversation, that people really aren't so abstract or complete in some sense.
It's still a little bit dirty. But yeah, the model really hasn't changed at all, not one bit. There is a successor model, but all the principles are there; it's just that it can also account for eye movements and information sampling. It's really the same theory, just extended a bit, because every model is really incomplete.
[00:55:27] Speaker B: Yeah, I'm sort of laughing because it reminds me, I did work with Gordon Logan, who came up with the race model of response inhibition, which just basically posits that your decision of whether to move or not comes down to a race between internal processes of whether to move or not.
And we're building neural models, mechanistic accumulator models. And it's like every lab meeting we'd have these ideas, and there'd be Gordon just saying, yeah, yeah, the race model already accounts for that. And that was always the answer, because it was so deceptively simple, yet still accounts for so much.
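[Editor's note: the independent-race idea behind the stop-signal model is simple enough to simulate in a few lines. This is the editor's sketch with illustrative finishing-time distributions, not Logan and Cowan's exact parameterization: the go process and the stop process race, and the response is inhibited whenever the stop process finishes first.]

```python
import numpy as np

rng = np.random.default_rng(1)

def trial_inhibited(ssd, go_mu=500, go_sd=100, stop_mu=250, stop_sd=50):
    """One stop-signal trial (times in ms): stop finishing first -> no response.
    ssd is the stop-signal delay; the normal distributions are illustrative."""
    go_finish = rng.normal(go_mu, go_sd)
    stop_finish = ssd + rng.normal(stop_mu, stop_sd)
    return stop_finish < go_finish

# The probability of successfully stopping falls as the stop signal comes later.
for ssd in (50, 150, 250, 350):
    p_inhibit = np.mean([trial_inhibited(ssd) for _ in range(10_000)])
    print(ssd, round(float(p_inhibit), 2))
```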
[00:56:10] Speaker A: Oh, no, I'm familiar with that work. Yeah, it's kind of beautiful. Yeah, it's Joel and Tom Palmeri and all these guys pursuing these. Yeah, I mean, there's a lot of models like that, sort of like the drift diffusion model.
I think Ratcliff's Psych Review paper was from the late 70s, and it was sort of popular in math psychology, but it kind of sat fallow and then neuroscience dusted it off and it took off. But I guess, yeah, maybe it's hopeful that even though the brain's really complicated in all its details, to capture some basic things there might just be some relatively straightforward, simple operations going on.
[00:56:53] Speaker B: Yeah. Hopefully it's not all just impossibly complex. Do you think that the formation of a concept — so, like you said, you have your cluster of birds, your concept of birds, which is instantiated by this cluster, and then you see a bat and, I don't know, maybe you're told it's not a bird, and that's very surprising, and then you have to make a new cluster or new concept for a bat — is that the same as the creation of new knowledge? Can we go that far?
[00:57:26] Speaker A: Yeah, I mean, it's sort of carving the environment up, like, which aspects are relevant to you, and it's almost like a reorganization of how one thinks about things. So, yeah, I think you could call that new knowledge.
[00:57:40] Speaker B: Well, so you just said reorganization, which kind of puts it.
So there's the idea of, like, well, okay, then for a bat you would put it into your mammal cluster or whatever — things that are mammals but also bird-like, right? But you might not necessarily have that cluster. So you could create a new cluster de novo, and therefore you could almost map it — I don't want to say ontologically, but — onto the creation of something new, de novo knowledge. Does that make sense?
[00:58:14] Speaker A: Yeah, no, I mean, definitely. I think I'm just being too concrete.
It could be that the creation thing sounds like a big deal, but, I mean, even right now with Rob Mok we're working on a version where basically each cluster is like a neuron — the units in the model would basically be at the neuron level, so the clusters are almost virtual or something — and then creating a cluster is really like changing the receptive properties of these units to retune them. But at a higher level it is still creating a cluster.
I think it's really just partitioning the conceptual space up and showing that, oh, no, you thought that this whole region of space was all this one thing, but you actually have to put a split in there.
You are learning something new about that part of conceptual space. That something's going on there that you didn't expect. Like, namely that these are actually strange flying mammals.
[00:59:17] Speaker B: It still maps into your existing conceptual space at that point. It's not so surprising and so new that it's something that you would otherwise have never been able to think of or. You know, I'm not trying to push too much. I just, you know, about the ontological nature of knowledge. But I'm kind of wondering how you conceive of it. So.
[00:59:38] Speaker A: Yeah, no, I mean, it's in the existing space of possibilities. It's really about how you map that space to the actions you want to take, to your goals, what you want to do to satisfy your goals. So in this discussion we're having, the only goal is to use the English words bird and mammal correctly. And that's what you're building knowledge structures for: splitting up the space of possibilities to correctly achieve that end. You could describe it in more grandiose language, but I think that's all there is to these concepts, really.
[01:00:17] Speaker B: A feature of forming concepts, or of abstracting things in general, is that you lose details that are irrelevant to the concept. Right? So for a bat, for instance, it wouldn't necessarily matter that the bat was hungry on a Tuesday. And if that was your one example of a bat, and you now have 40 different examples of bats, being hungry on a Tuesday is irrelevant with respect to the concept of a bat. I'm wondering... well, you've described how episodic and semantic knowledge can blur into each other during that process of losing irrelevant details. Can you just describe that for us?
[01:01:02] Speaker A: Yeah, it's funny. Way back when we formalized this idea, I thought it would upset people. Basically it says that, just like you said, everything starts out as an episode, and then it gets interfered with: related experiences collapse together in memory, and there's no magical point, but over time it becomes more semantic in nature. Max has actually written this up, focusing on this aspect of going from episodes to semantic knowledge, and the idea has been fleshed out. But yeah, it's exactly that. And these models do a really good job of accounting for human recognition ratings. So it's like you're saying: if things collapse together into the same cluster in memory, you have some knowledge, but you basically have almost a histogram of your experiences. You don't have:
this bat was exactly at this location, this one was at that location, it was this shade of gray versus that shade of gray. It's all just sort of averaged together in memory. It's like how our days all blur together, unless a person in a gorilla suit bursts into the classroom or something, and that'll probably get separated off as its own episode because of the surprise.
But yeah, that's basically it. You're doing this intersection, this averaging operation on your experiences, until that collapsing of information together leads to a problem. You assume everything's simple and can just be kept together as a running average, pretty much, and when that fails, you take note, and that could become a new running average in your head. So it's all about the task you're doing, the goal you're trying to satisfy, and when the way you're thinking about things isn't working, you take note. And that could just be kept in mind as an exception, or it could turn into something more semantic as time goes on.
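The "running average" language maps onto a simple incremental centroid update. Here is a minimal sketch, again illustrative rather than the published model, showing how episode-specific noise washes out over many related experiences while the shared structure remains, which is the episodic-to-semantic blur described above.

```python
import numpy as np

def update_cluster(center, experience, lr=0.1):
    """Incremental (running-average-style) update of a cluster centre.

    Each unsurprising experience pulls the centre a fraction `lr` of the way
    toward itself; over many related experiences the unique details average
    out and something more prototype-like (semantic) remains.
    """
    center = np.asarray(center, dtype=float)
    experience = np.asarray(experience, dtype=float)
    return center + lr * (experience - center)

rng = np.random.default_rng(1)
prototype = np.array([1.0, 1.0, 0.0])               # shared structure of the category
center = prototype + 0.5 * rng.standard_normal(3)    # first episode, idiosyncratic
for _ in range(200):
    episode = prototype + 0.5 * rng.standard_normal(3)  # each episode has its own noise
    center = update_cluster(center, episode)
print(np.round(center, 2))  # hovers near the prototype; episode-specific detail is gone
```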
[01:03:11] Speaker B: Can clusters get destroyed?
[01:03:14] Speaker A: Yeah, it's a really good question. It's funny, way back, when I was a graduate student making this model, looking at...
[01:03:23] Speaker B: Sorry, but it's funny how everything is "way back then." It's so interesting, and I'm sorry to interject, but so many people have this experience that, whatever their careers are, they're 50 years into their career, but everything was pretty much set. The roadmap was set by something that happened really early on. It's pretty rare that someone late in their career starts a new path. There are a few crucial years in there and then, boom, it's all set for the future. So I don't know.
[01:03:57] Speaker A: Yeah, I think I hit my high point as an undergraduate, actually.
[01:04:00] Speaker B: Yeah, there you go.
[01:04:01] Speaker A: But yeah, literally my first publication, which gets no credit, was exactly like PageRank 3 years before PageRank. So I should have just like stopped there or something.
[01:04:13] Speaker B: Wow.
[01:04:14] Speaker A: It didn't even get the best student paper award at the conference. I don't know what you got to do.
[01:04:19] Speaker B: You were too early.
[01:04:20] Speaker A: Yeah, I guess. I don't know. But I mean, that's true. I think I'd be better off if I had actually just stuck with that. I get into too many things. I mean, right now, I don't want to change the conversation, but we're still working with these models, and the only reason we stick with this general theory is because it seems like it's correct. But there are a lot of things I do and try, and things move towards and move away. Maybe we can talk about what's going on currently. But even moving towards model-based neuroscience was very different. I guess my philosophy of model-based neuroscience is that you should take the best cognitive model of the task, the one that explains behavior, and use that to figure out what the brain's doing. So if someone else had a better model of the kind of tasks we were running, well, then I would have just used their model.
[01:05:18] Speaker B: Right.
[01:05:18] Speaker A: But I think there is like some truth, but I don't know, I get bored and always want to switch it up anyway.
[01:05:25] Speaker B: But yeah, I also recently talked to Paul Cisek, and he and others like György Buzsáki have this push where they say we have it all wrong, that our labels for cognition, our concepts, if you will, are all wrong and actually have been for centuries. Concepts like attention, decision making, and so on. And we're going to have to reconceive of and relabel what we're actually studying, because the concepts, the labels, are essentially wrong. Do you think there's any merit to that? And more generally, as humans, are we too concept trigger-happy? It seems like we have a bias to compartmentalize everything and give it a label. If that bias to form concepts does indeed exist, does it lead us astray when we're in a realm of unknowns, a poorly theoretically formed domain, like studying brains and minds?
[01:06:32] Speaker A: I like this idea that we might have all-wrong frameworks for discussing everything. This might sound cavalier, but in some sense these discussions don't really inform or affect my research, because when I use words like concept or attention, I'm very specific. I've got a formal model, and if someone says that's not attention, that's okay; whatever you want to call this weight that follows this rule and explains behavior across 30 studies, fine. You could call it gobbledygook, whatever.
So I don't really know about coming up with new ontologies for this. Maybe we should just actually be a little bit more specific about what we mean or what we're claiming. It's almost like back to the model selection discussion we had earlier. So I think these are good things to think about and debate. But I hate to say this, because I know not everybody likes modeling or formal descriptions, but unfortunately I think it's really necessary for dealing with these complex issues.
[01:07:44] Speaker B: I'm not sure that everyone doesn't like them. I think that they sort of dominate neuroscience these days.
[01:07:49] Speaker A: All right, well, about time. Just kidding.
[01:07:52] Speaker B: Yeah, well, it's interesting, having read now about your sustain model.
And you kind of forget that when you developed sustain and were approaching things using cognitive models, that was not the rule of the day. It seems more like the rule of the day now, as a way to approach things. Do you experience that? Do you have that same outlook?
[01:08:19] Speaker A: Yeah, maybe my previous comments are just reflecting the time period I imprinted on. But yeah, way back in the dark ages, people in psychology were running two-by-two studies and doing ANOVAs, and there wasn't a lot of modeling. Modeling people were in a real minority and were mostly considered boring and pointless, even though a lot of the time they were doing very interesting and useful work.
[01:08:49] Speaker B: Now that you say that, I have this recollection even from graduate school. I came in very naive; my undergrad wasn't related to my graduate work. But I have this recollection that it seemed like everything we were studying had a model being built for it. And I was like, oh God, now I have to learn all the modeling stuff too. And it seemed heavy and hard, and maybe that was when modeling was coming on as such a ubiquitous frame.
[01:09:19] Speaker A: It could be.
That's a really interesting perspective. But from the other side, applying these models, simple models like the ones I work with, they don't have all these spiking neurons and tons of details, membranes and so on, so people say those models aren't real, that's just fantasy world, why would you use this? So you get it from both sides. The psychologists are like, oh, please don't make me listen to your boring talk about this stuff, just show me the mirror test or something. And from the other side it's, that's not biologically plausible.
[01:09:53] Speaker B: Is that the.
[01:09:55] Speaker A: Oh yeah.
I don't know if they cared about that, actually. It's really funny, there was almost a completely different perspective then, one that I think has just sort of died out as people retire: that cognitive psychology was at a functional level and the brain was completely irrelevant. I know it's hard to believe now, but that was huge, and there were popular philosophers at the time, like Fodor, pushing it. There are still a lot of people who feel that way, but they're not part of this community, and it's an ever-decreasing minority. So it's kind of nice that people appreciate models. Of course, models can clarify and they can also obscure. And if you do model-based work, it's only as good as the model is. So if you use the model as a lens on the brain and that lens is flawed or distorted, then so are all your results. It's not a cure-all, but it can help incorporate other constraints, especially if the model is justified from other studies, and lead to a better result. But it's nice that people are coming around to this a bit more.
[01:11:11] Speaker B: Yeah. All right, well, I want to ask about the hippocampus and then we'll move on to a little more deep learning work, because it seems like the hippocampus these days is associated with everything. We've just been talking about it in terms of forming concepts. But the latest craze in hippocampal research is its association with navigation. I shouldn't really say latest; it's been around for a long time.
So, two questions: do concepts co-opt our navigation system, or vice versa? And what does the hippocampus do?
[01:11:56] Speaker A: That's funny. Yeah. So again, way back in ancient times, when I was thinking about how this clustering model, sustain, grafted onto the mind, I just viewed it all as the same learning problem. Learning about space: space is just another concept to learn. And I was kind of scared to say that. At one point I crossed paths with Lynn Nadel and I thought he would hate it, and he was like, oh no, that seems okay.
Maybe it was just getting confidence over time and having a good postdoc who wanted to do a deep dive into these issues of navigation and the hippocampus; we finally published something on this recently. But yeah, obviously people use space to organize their thoughts at times, even in natural language: you say, I'm feeling up today, I'm feeling down.
We understand abstract concepts like time in terms of space, but that's just using representations of space; it's not a separate learning system in the brain. So I think there's really just one general clustering learning system that supports learning concepts, and space is just another concept. In some ways it's a limited concept, as it's studied in rodents in the lab. So I guess really neither: there's just one computation, one learning system, that rules it all. You change the inputs, you change the task, and you get out things that look like Jennifer Aniston or, I don't know, a rodent enclosure.
[01:13:32] Speaker B: Yeah. What about memory? Doesn't the hippocampus do memory?
[01:13:36] Speaker A: Yeah, okay, so the hippocampus does even more than you mentioned. It comes up in almost simulation-like things, imagination, what people call mental time travel and so on. But I see these clustering studies that we do as, of course, a kind of memory. It's persistent, though our studies only go on for about two hours. In terms of long-term memory, like I said before, we really don't go there. There are all kinds of ideas about consolidation involving the hippocampus, and I think you've discussed complementary learning systems on your podcast; of course that's contentious, so there are debates about that.
I mostly focus on the acquisition of knowledge and don't wade too deeply into the consolidation aspects, though I find them really interesting. That seems reasonable given, again, how much cell turnover there is in the hippocampus. But I kind of view concept learning as encompassing memory as well, because, again, each cluster starts out as an episodic memory, which the hippocampus is implicated in, and it only becomes more semantic when you have a bunch of similar, mostly redundant experiences that interfere with each other and wash out the unique aspects of the episode to create something more semantic. And I think just learning a location in space, like a place cell does, is just a concept: collapsing together a bunch of related viewpoints and experiences, sights and smells and sounds, the geometry of the room, basically a lot of inputs that are non-surprising and can collapse together into one cluster for that location. So I think all this stuff is really the same thing.
[01:15:31] Speaker B: So, just to be super nitpicky: I did have Cisek on a little while ago, and I really enjoy his approach. It's called the phylogenetic refinement approach, but basically it's looking back, tracing our evolutionary history to better explain and ask better questions about what brain areas are doing and why and how they're doing it. He didn't really talk about the hippocampus and abstraction and concept learning much, but I was trying to apply his thought process to this whole navigation versus cognitive map versus concepts and abstract conceptual space question, and thinking about the chicken-and-egg problem of these things.
So I looked back at the evolutionary development of the hippocampus. I promise I won't take us too far down this road. No, no, no, please.
So early on, before the explosion of the cortex, it kind of came out of the hypothalamus, which is like a regulatory overall control system essentially.
And the hippocampus is kind of related to the explore aspect of the explore-exploit trade-off and can be associated with longer-term, longer-range exploration mechanisms. And I thought, okay, if that's true, if the hippocampus developed from, was built on top of, some system that was used for exploring to find food and resources, long-range exploration, then that's related to navigation, and that's kind of related to the cognitive map. And that's not so far from then internalizing these things into abstract concepts, a conceptual space. Essentially, you can think of it as still being used for super long-range behavioral purposes, and in that sense a concept is a bit of a navigation, long-range exploration kind of thing. Does that sound ridiculous?
[01:17:34] Speaker A: I mean, no, I mean, not any more ridiculous. Yeah, I mean, I don't know. Evolutionary psychologists have all kinds of descriptions of why we're the way we are too.
[01:17:46] Speaker B: Sure.
[01:17:46] Speaker A: But yeah, no, I like these stories, I guess. I think it's valuable to think about where these things came from. But there are also all kinds of other angles: what is it, glucocorticoid receptors, the stress response, the HPA axis; why people with depression sometimes get memory problems, or something. I can't remember; my reading on this is all out of date. So there are all kinds of weird things that go on, but one thing I focus on is what kind of computation is being done, because you're trying to pull out something more abstract, more general, about what it's doing, and I'm totally on board with that program. For me, what's more general is that the hippocampus has the ability to bind together arbitrary elements that are relevant in the situation. Like the bat example: fur doesn't normally go with flying and wings, but boom, you can just put them together. It has this ability to put things together, which would be really useful too, like you mentioned, if you're exploring a new environment where you have to remember where the squirrel hid its acorns or something.
That's also why it would be useful for imagination or mental simulation or all these sorts of functions that are associated with it, or episodic memory. You could encounter any event and have to remember it; it's totally arbitrary.
And it's also an area that's working with medial prefrontal cortex to do this, where you have to figure out what's relevant and orient to it.
It just seems like it has the ability to learn arbitrary things, almost to build new codes or something. So that would be useful while you're exploring. But I don't know, I mostly just focus on the kind of computation it does, and it seems to link together all these different domains that are associated with the hippocampus.
[01:19:47] Speaker B: Last question about clustering and concept formation and then we'll move on, I promise.
[01:19:52] Speaker A: Sure.
[01:19:53] Speaker B: I thought... I mean, I have a billion more questions about these things, and I'm sure you thought of them when you were probably 11 years old, but is it possible that the development of expertise in a domain, let's say piano playing or whatever the expertise domain is, where other people might conceive of it as kind of one cluster, is just this: once you start to really go through your 10,000 hours of practice, maybe that's just a process of forming more clusters and having different conceptions, whereas other people would have fewer clusters to conceive of the same concept?
[01:20:38] Speaker A: Yeah, I mean, I think that could be part of the story. Like a lot of.
Even when I was working on this model originally, I was thinking about the expertise literature, the kind of work that James Tanaka and others were doing at the time. There are all kinds of experts, of course, but if you take experts in some perceptual task, bird watchers or something, they tend to form finer gradations in whatever concepts and distinctions they can draw. And so certainly you'd need more partitions, more clusters in memory, to support that. But I think what really drives a lot of expertise is changing the space that the clusters themselves reside in. Experts seem to have access to more, and more useful, features to describe things. If we're playing chess and I'm not a chess expert, it's just board positions to me, whereas a chess expert could see, oh, this is this attacking position. I think this also connects to what we talked about, memory being very superficial. The reason experts are better in their domain, but not outside it, is that they have better indexes into their knowledge. Just like as we become expert in neuroscience, we have better indexes into people's work, so we can relate papers to each other and retrieve relevant work more easily, as opposed to relying just on superficial things. So I think experts change the space. But one nice thing about the clustering model sustain, and I think it's actually unusual this way, and it's one of the findings I went after originally, is that when items in a domain become more distinctive, as they would be for an expert who has a richer feature description, a richer space, the benefits of abstraction in the model actually go down. There's a processing advantage to forming more clusters, like you're saying. So according to the model it would actually be adaptive to have more clusters when all the birds look really distinct from each other. Whereas to me they're just birds, because I'm not a bird expert, so I can just collapse them together.
[01:22:47] Speaker B: Yeah, yeah, yeah, Birds. I don't know. I'm not a bird expert either. I don't understand bird experts. I like them just fine. I know you don't like them because you don't like anything.
[01:22:56] Speaker A: I don't know. As long as they don't like friends, it's okay.
[01:22:59] Speaker B: Okay. All right, let's. You know, it's fascinating stuff to me, and that's why I keep asking more and more questions. But you've been using deep learning lately. So, you know, there's this deep learning explosion, and it's kind of colored everything in cognitive science, neuroscience, and you've been using deep learning a few different ways lately.
One thing that you've done is you've connected concept learning to different layers in a convolutional neural network.
You know, the convolutional neural networks that are used to model, as you mentioned earlier, our ventral visual stream, which underlies our ability to recognize objects.
So how do you use convolutional neural networks to better understand concept learning and how it connects with the visual stream?
[01:23:51] Speaker A: Yeah, no, thanks for asking. I think these networks are a complete game changer for the kind of research I do, so your question is great. I'm going to answer by stepping even further back. All these clustering models, all these cognitive models, rely on the fiction that the experimenter gets to say how people represent things. You have to say, oh, that's a triangle, it's small, it's red; you have to write out the features. Whereas the real hard part of the conceptual problem is how you create a description language, how you come up with the features. In the previous era of machine learning, what made a model good was having good features; we talked about experts having better features. So in some ways the most interesting problem has been sidestepped by necessity, because you can't posit what the representations are. Deep learning can.
Well, if you just train something on ImageNet, it can pull out reasonable features itself from the end-to-end optimization.
It's really exciting because it's not just one feature; like you just mentioned, there are all kinds of levels of representation, of transformations from the image to the response. So one thing I did with Olivia Guest, which we're still working on, is looking at animal learning, like pigeon learning. Pigeons can do all these things that people say are so abstract, so conceptual. But there are a lot of cases where we infer abilities that maybe aren't there, in animals and probably in people too, probably in developmental studies of children. We go, oh look, they have a sense of object permanence, but who knows? They're well-designed studies, but people read so much into things. Or the mirror test: you put a dot on a baby's head and it recognizes itself, and dolphins can do it, but then, oh, so can some stupid fish, so maybe that's not a good test anymore. And, going on a bit, the same thing happens in deep learning: oh look, this can detect cancer, and maybe it just turned out that all the serious cases were sent to a certain machine that had some artifact in it that the deep learning model picked up on. So even though it generalizes to the test set, the test set has the same confound in it, and it doesn't work well in practice. So anyway, it's a long way around: by modeling concept learning using different levels of representation from the deep learning model, we can ask whether people or pigeons are relying on very low-level, almost pixel-like information to solve the task, or whether they're doing something deeper, based on features that are a bit more conceptual and not as image-bound. So that's one thing we're doing.
[01:26:49] Speaker B: Yeah, well, I guess the way we think about it in the ventral visual stream classically is: light comes in the retina, you're looking at a bat, let's say, and our visual cortex starts to break it down into all these tiny features, and then hierarchically it gets passed to other areas of the brain and built up into more and more complex features. And then finally there's the representation of a bat in our brain. Right. And I think the next logical thing to do then is to say, okay, that's the representation we'll use to form the concept of a bat; we'll take that final step. And I guess what your work is showing here is that that's not necessarily how it needs to be done. That's one of the questions, I suppose.
[01:27:33] Speaker A: Yeah, I mean, I don't know, maybe I'm too much of a hard-ass or something, but I wouldn't think of the end as being the concept. Of course you're losing information.
What is it, the data processing inequality? You're losing information at every stage; you're really just transforming it to get to the response, to the action.
But what you said is totally right, so we can model it.
It's really a question of whether the animals are using something more processed, higher level, or whether you can actually solve the task in the stupidest way, basically by associating responses with what's coming off your retina. And maybe for modeling human learning and animal learning, this would be a good methodology: use tasks where you can apply a model and quantify what kind of information is necessary to solve the task. Because all these things look so impressive from the outside, oh, animals can do this, animals can do that, but they could just be picking up on some really superficial property of the images, just like how deep learning models look for shortcuts. You have to go through all these efforts to make sure a model is not just picking up on texture or something, and I think it's the same issue; it's not just deep learning. People and animals are just not that abstract either; like we were talking about earlier, we'll just look for whatever shortcut gets us to the answer.
[01:29:04] Speaker B: Yeah, I mean, abstraction is costly. So if you can do it in a dumb way, that's actually the smart way to do it, I suppose.
[01:29:11] Speaker A: Yeah, and also, when we say abstraction, everyone complains, oh, this is just curve fitting. But how do you infer the correct distribution other than from something that has to do with the training distribution? What magic would that be? It either has to be built into your head or you have to experience it. I just think we have rich environments and a lot of extensive interactions with fairly rich inputs that force us to build somewhat robust representations. But I really hate these curve-fitting complaints. I don't really know what other kind of learning is possible here.
[01:29:49] Speaker B: Is that, that.
[01:29:50] Speaker A: Yeah, you either have to build in the constraints or learn them from experience. I don't know. There's just no magic.
[01:29:57] Speaker B: And that's the show. Thanks everybody for joining us. No, I'm just kidding. We're not going to end up with there's just no magic.
[01:30:04] Speaker A: But it's not so sad. It's just like, oh, where's the wizard?
[01:30:09] Speaker B: Yeah, I know. That's one thing again: the more you learn, the more the magic goes away in your thinking. Which is great in some respects, because it's better to understand things. But once you realize there is no tooth fairy, it's less magical, and there's something less special about that, but something even more special about learning how things actually work. So there's a trade-off. But just to get us back on track here. Sure. Okay. So you kind of set it up. So what did you guys find in that work?
[01:30:43] Speaker A: We just actually found that pigeons are pretty much just learning from pixels.
[01:30:48] Speaker B: They're good radiologists still.
[01:30:50] Speaker A: Yeah. Unless you change some superficial property of the images, and then they... I don't know, maybe they... Well, it's inconclusive, right? Their behavior is consistent with having a deep understanding of the images or with just none at all.
[01:31:05] Speaker B: Just to actually say what you did on the technical front with the deep learning network: you basically took different layers from the convolutional neural network and used those different layers to see which matched up best with what the pigeons were actually using to do their radiologist work. Right?
[01:31:23] Speaker A: Yeah, definitely. It ties right back to the model selection story. Basically, you can make a bunch of concept learning models that work on different input representations. Like you said, you can either work on visual representations that are incredibly low level, very close to the stimulus, or on ones that are very high level, after many levels of processing. And it turns out that the lower-level ones support the behavior of the pigeons, which looks impressive on the face of it, but really maybe isn't.
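As a rough sketch of this kind of layer-wise model comparison: pull activations from several depths of a pretrained, frozen CNN and ask which depth best predicts the behavioral choices with a simple readout. The network (ResNet-18), the probed layers, and the placeholder stimuli and choice data below are stand-ins for illustration, not the actual model or pigeon dataset used in the study.

```python
import numpy as np
import torch
from torchvision import models
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Frozen, pretrained network standing in for the ventral stream.
cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Compare an early (near-pixel) stage against a late (more conceptual) one.
layers_to_probe = {"early (layer1)": cnn.layer1, "late (layer4)": cnn.layer4}

def layer_activations(module, images):
    """Run images through the frozen CNN and capture one layer's activations."""
    captured = {}
    def hook(mod, inp, out):
        captured["act"] = out.detach()
    handle = module.register_forward_hook(hook)
    with torch.no_grad():
        cnn(images)
    handle.remove()
    return captured["act"].flatten(start_dim=1).numpy()

# Placeholder stimuli and behavior: (N, 3, 224, 224) images and binary choices.
images = torch.rand(32, 3, 224, 224)
choices = np.random.randint(0, 2, size=32)

for name, module in layers_to_probe.items():
    feats = layer_activations(module, images)
    score = cross_val_score(LogisticRegression(max_iter=1000), feats, choices, cv=4).mean()
    print(f"{name}: cross-validated fit to behavior = {score:.2f}")
```

The model-selection logic is just this: if the readout on near-pixel features predicts the animals' choices as well as or better than the readout on deep features, there is no need to credit the animals with the more abstract representation.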
[01:32:02] Speaker B: Okay, so that's one way you've been using deep learning, and we're only going to... I mean, you've been doing a lot of different things, actually, but I just want to bring in one more at least. You've said that your recent work adding top-down attention to a convolutional neural network is basically an attempt to make deep learning more human.
So what do you mean by that? And how do you implement top-down attention in convolutional neural networks? Because there's a lot of talk these days about attention and adding attention to networks, attention is all you need, and this goes back to what attention is and the fact that there are about 10,000 different meanings of attention. But how do you implement top-down attention?
[01:32:49] Speaker A: Sure, sure. Yeah, you're absolutely right, there are like 10,000 different versions of attention. Actually, I think just a couple of days ago a review paper on how attention is used in neuroscience and AI came out from Grace Lindsay, who's done some work on attention and deep learning. But if there are any hardcore machine learning people listening, the way we use attention has nothing to do with the transformer architectures you mentioned. We're really talking about a kind of selective attention that picks out what's relevant given your current goal or expectation. And this goal or expectation is coming from outside the deep learning network, from outside the ventral stream or the stimulus itself. For example, if you were looking for your keys, you want to almost reconfigure your visual system on the fly to prioritize filters that respond to shiny things and small things, whereas if you're looking for your cat, you would reconfigure it a different way. So we see top-down attention as a layer, or multiple layers, that multiplicatively weight the filters: basically silencing some that are irrelevant and just add noise given the current expectations, and making others more consequential in the computation, say one related to how shiny the thing is, or its texture, if you're looking for your keys. So we train up these attention weights for different goals at different intensities and look at the trade-off, and we find that we can increase sensitivity, measured in terms of d-prime, so basically just be better, for moderate levels of attention. But you also get rising bias, just like people: like those funny websites where people see faces everywhere; if you're looking for something, you're more likely to see it. So you also see this kind of bias, or criterion shift in signal detection terms. As you turn up attention more and more, you get this trade-off, and eventually things fall apart.
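For reference, the sensitivity and bias quantities mentioned here are conventionally computed from hit and false-alarm rates in signal detection theory; a standard formulation (not specific to this particular paper) is:

```latex
d' = \Phi^{-1}(\mathrm{HR}) - \Phi^{-1}(\mathrm{FAR}), \qquad
c = -\tfrac{1}{2}\left[\Phi^{-1}(\mathrm{HR}) + \Phi^{-1}(\mathrm{FAR})\right]
```

where \(\Phi^{-1}\) is the inverse standard normal CDF, HR the hit rate, and FAR the false-alarm rate. Rising attention that boosts \(d'\) while pushing the criterion \(c\) toward more liberal (more "yes") responding corresponds to the "seeing it everywhere" bias described above.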
[01:34:59] Speaker B: The way you implemented this, though, and correct me if I'm wrong: you took a fully trained convolutional neural network and then added an extra layer into the network, the attention layer you were just talking about, which you could then train while keeping everything else the same. Is that right?
[01:35:18] Speaker A: Yeah, I think that's critical, because when you're looking for your keys, your visual system is getting modulated, but it's not permanently changing that much as a consequence of the task. So we viewed the pre-trained network as the generic system, building up all the filters, all the feature detectors, from representative experience, and then you want to hot-wire that system on the fly to repurpose it for the more specific task at hand. And this really follows from some model-based fMRI work we've done. So yeah, that's exactly how it works. Right now we have a preprint up and we're working on newer work, where we actually have a whole separate attentional network that still relies on a pre-trained convolutional neural network to do the object recognition, but a whole new network does the top-down modulation, which can support generalization. So if you know what to look for for lions and tigers, you probably also know what to look for for a liger, even if you've never done that task before. And we can also interface it with language, so if someone says, look at the black bird in the tree, that message gets transformed into a reconfiguration of the ventral stream. It's cool; I think it's what people do. And we see the same kinds of things in our concept learning studies: if fewer aspects of the stimulus are relevant to the discrimination, to solving the concept learning problem, we actually see that the dimensionality of representations in the mid-level visual system, like lateral occipital cortex, is smaller. We have a technique for measuring the dimensionality of neural representations in a NeuroImage paper with Christiane Ahlheim from a few years ago, and you can see this task-related modulation that is exactly like how we're doing it in the deep learning model.
You kind of just change the dimensionality of the problem and silence the irrelevant filters. And this isn't spatial attention; this is feature attention.
But yeah, I think this is what goes on. I think the goal, the relevancy, is coming all the way down from ventromedial prefrontal cortex, and its signals go all the way down to change these visual representations that are coming up to the hippocampus and other places. So that's the rough cartoon conception of what's going on.
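A minimal sketch of this kind of multiplicative, goal-dependent gating, with the caveat that the backbone, the layer chosen for modulation, and all hyperparameters below are assumptions for illustration rather than the lab's preprint code: freeze a pretrained CNN, insert one learnable gain per feature channel at an intermediate stage, and train only those gains for a given goal.

```python
import torch
import torch.nn as nn
from torchvision import models

class ChannelAttention(nn.Module):
    """Learnable multiplicative gain per feature channel (top-down modulation)."""
    def __init__(self, n_channels):
        super().__init__()
        self.gain = nn.Parameter(torch.ones(n_channels))  # 1.0 = no modulation

    def forward(self, x):                        # x: (N, C, H, W)
        return x * self.gain.view(1, -1, 1, 1)   # amplify or silence each filter

# Frozen, pretrained backbone: the 'generic' visual system.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in backbone.parameters():
    p.requires_grad = False

# Insert the attention layer after an intermediate stage (layer3 has 256 channels).
attention = ChannelAttention(256)
backbone.layer3 = nn.Sequential(backbone.layer3, attention)

# Train only the attention gains for one goal (placeholder stimuli and labels).
optimizer = torch.optim.Adam(attention.parameters(), lr=1e-2)
criterion = nn.CrossEntropyLoss()
images = torch.rand(8, 3, 224, 224)
targets = torch.randint(0, 1000, (8,))
for _ in range(5):
    optimizer.zero_grad()
    loss = criterion(backbone(images), targets)
    loss.backward()          # gradients update only the gains; the backbone stays fixed
    optimizer.step()
```

Training one small gain vector per goal is a simple way to realize the "reconfigure on the fly" idea: swapping gain vectors switches the network from looking for keys to looking for the cat without touching the underlying filters.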
[01:37:59] Speaker B: I'm pretty sure you're cited in Grace Lindsay's recent. I haven't read the whole thing yet, but I'm pretty sure that she cites your work in there.
[01:38:07] Speaker A: Well, it must be a really good review paper then.
[01:38:10] Speaker B: Yeah, it's excellent.
So part of your point in this paper is that in machine learning, in AI, the focus of attention is basically almost all bottom-up. And what you're doing here is top-down attention to drive the system, highlight some weights and silence others. What are your thoughts on the state of adding attention to deep learning?
[01:38:37] Speaker A: I mean, I think there's no reason not to, with all the caveats you mentioned before: there are many different varieties of attention and we have to be clear about which aspects of attention we're examining. But at least for what we're doing, I think we have pretty well worked out how this kind of selective attention works in the areas that are of interest to us, like medial prefrontal cortex, the hippocampus, and areas along the ventral stream. I think it's ready; it's not too soon to start trying out ideas. And if anything, by trying out these ideas in these networks, I think we'll get more insight into what the brain might be doing, and it might spur further understanding on the neuroscience side. So I think it could be the right point to get that virtuous cycle people like to discuss between neuroscience and AI research.
[01:39:35] Speaker B: You've said that you consider deep learning a game changer. Is it just like a candy store for you, where you have 10 different ideas of how you can start using deep networks to interact with your concept learning systems and so forth?
[01:39:53] Speaker A: Oh yeah. I almost wish I was born later or something, so that I had more years to play with these things, just because of the changes in computing; the models really aren't that much different from the 80s, there are just a couple of insights and good tricks. But it's finally come together so that you can do interesting things. And to me it's just so exciting not to have to work with handcrafted, experimenter-defined representations. I don't think people would believe what I'm going to say, but we used to sit around in graduate school in the late 90s, with professors and students, and talk about the big picture, and everybody thought it would be 200 years before we had any models that could do object recognition even decently from pixels. I wasn't doing any vision science then, but all the researchers saw their studies and all the minutiae of refinements as feeding into some eventual understanding that was so far into the future. That was so ridiculously wrong. It's very exciting to see. The same could be said for speech understanding, and face recognition would not even have been imaginable.
So when people don't get excited about this stuff, I think it's really a case of moving the goalposts. This is the history of AI too: things like sorting, which I keep mentioning, used to be considered AI. I saw a talk forever ago by Ray Mooney; basically, anything that works is no longer AI.
[01:41:37] Speaker B: There's a name for that; it's called the AI effect, I believe. But it's interesting that you say that, because most people are way short-sighted in their estimates of, for instance, when we'll have AGI or when we'll have figured out the brain. It's always 20 to 30 years away, and that was true 100 years ago and it's true today. So it's interesting that you had that different perspective on how quickly this seems to have happened.
[01:42:05] Speaker A: I don't know, it's really exciting. It kind of goes back to what we were saying: maybe everything isn't really that complicated. Obviously we're not done with vision, and these models have huge hiccups, but I think maybe we really weren't that far off. It could go back to our other conversations: maybe it's about the complexity of interactions, the richness of the input. The theories weren't really that bad, actually, and the stuff kind of works, even though a lot of the models are pretty brain-dead.
[01:42:38] Speaker B: Yeah. So, okay, you used the phrase game changer, and I don't want to harp on that, but projecting forward, let's say 20 to 30 years, or maybe 200 years like you were saying before, looking back on this latest so-called deep learning revolution, how will history view it? Will we see it as having been an essential leap forward toward understanding our own intelligence, or toward understanding intelligence in general? Or... well, I'll leave it open to you. How will history view this time?
[01:43:15] Speaker A: Yeah, sure. I imagine your guests that are astute would probably give the quick history lesson; I'll try to do it really fast so it's not tedious. People go back to Rosenblatt, who was a psychologist and made the perceptron, and it was pretty much linear regression with batch size one, and the US Navy, I think it was, piled money into it and built a hardware version of it, and that's fine. But all the newspaper articles in the late 50s had the same hype as now: oh, super intelligent machine. And of course it flamed out, because it was basically just doing linear regression, and there were devastating attacks from Minsky and Papert. Then of course there was the AI winter. And backpropagation was actually discovered much earlier; it makes sense, it's just high school mathematics, the chain rule, big deal. Anyway, it was popularized again in the 80s, and then, oh, now we can solve XOR, a nonlinear problem you can't solve with linear regression, and it took off. But then of course it went away again, at least in my opinion because it was kind of hackish, just like deep learning is now, and kernel methods like support vector machines, and even Bayesian methods, seemed to have a lot of advantages. So it largely went away. But I guess the difference is that now this stuff actually works in a way it didn't before. And so how it's viewed will just depend a lot on whether the labels are maintained or change, like you said before: oh, everything's just labeled deep learning. Maybe we start getting better representations of uncertainty in these models, and so on. But unlike in all these other AI winters, these models actually do interesting tasks now, and people are making money off them. That's maybe not of interest to academics, but it's a difference from the previous waves.
So yeah, I don't really think it'll bust. And how it's viewed... history is determined by whoever's writing the history, but I can't see this stuff not leading to the next thing, and the next thing, and the next thing that makes a huge difference. And of course neuroscience is so fad-driven that by the time people start doing really super amazing neuroscience with these models, people will probably have moved on and underappreciate their work or something, and rediscover it later.
[01:45:42] Speaker B: Yeah, yeah, it's amazing how often that sort of thing happens. I really appreciate you talking to me for so long, so thanks, Brad.
[01:45:49] Speaker A: Oh no, it's great talking with you, Paul. Thank you.
[01:46:05] Speaker B: Brain Inspired is a production of me and you. You can support the show through Patreon. For a microscopic two or four dollars per month, go to braininspired.co and find the red Patreon button there. Your contribution will help sustain and improve the show and prohibit any annoying advertisements like you hear on other shows. To get in touch with me, email paul@braininspired.co. The music you hear is by The New Year. Find [email protected]. Thanks for your support. See you next time.