BI 094 Alison Gopnik: Child-Inspired AI

Brain Inspired
January 08, 2021 | 01:19:13

Show Notes

Alison and I discuss her work on accelerating learning, and thus improving AI, by studying how children learn, as Alan Turing suggested in his famous 1950 paper. The ways children learn that we focus on are imitation, learning abstract causal models, and active learning driven by a high exploration-to-exploitation ratio. We also discuss child consciousness, psychedelics, the concept of life history, the role of grandparents and elders, and lots more.

Timestamps
0:00 – Intro
4:40 – State of the field
13:30 – Importance of learning
20:12 – Turing’s suggestion
22:49 – Patience for one’s own ideas
28:53 – Learning via imitation
31:57 – Learning abstract causal models
41:42 – Life history
43:22 – Learning via exploration
56:19 – Explore-exploit dichotomy
58:32 – Synaptic pruning
1:00:19 – Breakthrough research in careers
1:04:31 – Role of elders
1:09:08 – Child consciousness
1:11:41 – Psychedelics as child-like brain
1:16:00 – Build consciousness into AI?


Episode Transcript

[00:00:03] Speaker A: When we actually look at what kids do, what we see is that from the time that they're very, very young, even infants, they seem to have this kind of abstract, structured knowledge of the world around them. But they also seem to be modifying, changing, revising, altering that structure in the light of the data that they see. That's not the only kind of phenomenology. There's also just the phenomenology of being richly in the world and experiencing everything that's going on around you. And I think there's good reason to believe that that kind of phenomenology is more like what you're seeing in children. When do I just give up and when do I keep going? [00:00:41] Speaker B: And what's the answer? What's the answer? [00:00:43] Speaker A: Well, for what it's worth, now that I'm an old person, I can say this. I tell my students. [00:00:54] Speaker B: This is Brain Inspired. Did Alan Turing have it right when he suggested that we make AI that emulates a child instead of what many of us think of as quote, unquote, intelligent, namely, us adults? Hey, everyone, it's Paul. So Alison Gopnik has thought for a long time that Turing was right. And these days, she's taking what we know about how children learn and trying to apply it to improve some of the shortcomings of today's AI. Alison runs her lab at Berkeley, and she's the author of many books, like The Scientist in the Crib, The Philosophical Baby, and most recently, The Gardener and the Carpenter, all of which are about topics related to child development, children's minds and how they work, and, in The Gardener and the Carpenter, the relationship that we as parents and adults have with children. Of course, in child development science, there's the perpetual question of what knowledge and skills we're born with, that is, what's innate versus what we learn, also known as nativism versus empiricism. And that's a debated topic in AI also these days: how much bias and structure to build into AI models versus the current trend of training them from tabula rasa states. And we talk about that a little, and we discuss how the massive learning engine that we're born with goes to work so fast and immediately changes what we consider our innate abilities. But the main thing we discuss is Alison's research on three ways that children learn that could benefit AI. So just to prime you, here are those three ways. One, children learn via imitation of others. Not just blind physical imitation, but imitation of a person while trying to make use of knowledge of that person's intentions and whether that person seems to know what they're doing. And two, children learn by building abstract causal models. So children build mental models of the world and can infer causal elements in that model, like looking at a stack of blocks and knowing how to knock it over, for instance. And three, kids have a high exploration-versus-exploitation ratio. So children try out all sorts of things without specific goals, just to explore, just because they're curious. And as we grow older and learn more, we explore less and exploit more what we know. You may remember Ken Stanley from episode 86, who advocates an open-endedness approach to AI where we step back from objectives and objective functions and instead massively explore the space of possibilities. So kids' exploration seems directly related to this idea. Anyway, if you're a Patreon supporter, I just started a Discord server for all of us.
So I was asked by multiple people recently to start some sort of community of Brain Inspired listeners. So this is a start of that, which will likely lead, you know, down the road to some Zoom events. We'll see. But anyway, check your Patreon account and look for the Brain Inspired post for how to join that Discord community. Show notes are at BrainInspired.co, podcast 94, where I link to Alison's work, including a really nice little recent review on the topic of children as high-exploration machines. All right, enjoy the childlike yet grandmotherly mind of Alison Gopnik. Alison, I've been preparing this hundredth episode special, quote unquote, and that means I've asked a lot of the previous guests to send me replies to some common listener questions. And the reason why I'm introducing the episode this way is because I've reconnected with many of the people whom I've interviewed on the show, and I've been struck by two things. One, there's a lot of talk these days about the need for more diversity, more inclusivity, more equal opportunity, equal outcomes. And it sounds like everything is just going to hell in a handbasket in the field, in the neuroscience and cognitive sciences field. But then I'm just consistently struck with how impressive and how generous and well intentioned some of the people are in our field. I'm saying our field because it's a very broad biological sciences field, let's say. And I'm wondering what your perspective is on the state of things with regard to that. Just the broad picture and how it used to be and how you see it now. [00:05:59] Speaker A: In terms of things like diversity and equity, you mean? [00:06:01] Speaker B: Yeah, and just... there's a feeling like there's a state of emergency in the field, like we're doing it wrong and we need to do it better. But then my experience speaking to so many people is like, these are genuinely good people, well intentioned people. And so the overall feeling is not so much one of doing it wrong. [00:06:21] Speaker A: Yeah, yeah. I mean, I think it's challenging: how do you manage to get as many people as you can involved in the field? And I think it's a complicated story. And of course in academia we're not separate from the world at large. So it's an interesting question about what kinds of things we can do, in a world that is inequitable in various kinds of ways, to improve the situation in our own field. One thing that's been a kind of interesting exercise for me has been the work that I've been doing recently in AI. AI is an example of a field in which there are very few women, and women haven't been participating in the same way. And there are, I think, some very self conscious efforts now to have more women involved. And there have been some interesting consequences of that. So for instance, the work that I do about childhood and child development I think is a very good example of something that frankly has been sort of devalued over the years because it's something that's been associated with women. And you might say, well, okay, look, if women are going to be spending their time thinking about children, then that's going to affect their participation in the science. But of course, the interesting thing is that people are realizing, oh no, wait a minute, looking at children is actually incredibly informative.
And my suspicion is that they might not have heard about the things that we know about childhood learning or about computation if they hadn't been thinking, oh well, we should get some more women onto this panel, and somehow these women are doing things about children. So I think that's a good example where even if the original impetus is let's get more diversity, let's have more women on the panel, the effect has been really more diversity. You know, diversity is sort of a buzzword. But one of the things that comes out of my work, and that comes out of neuroscience and ecology, is that diversity, technically speaking, having a wider range of people and ideas, is really a good thing for any kind of system. It's a particularly good thing for any kind of learning system. And in fact, one of the things I've argued about children is that the very randomness of children gives you more cognitive diversity. So I think that's a really nice example where doing something where you're starting out at a meta level, saying, okay, we'll have diversity because we want diversity, actually ends up having real advantages you can point to, in terms of the diversity of the intellectual work and the intellectual ideas that comes about as a result of that. So I think there's going to be this kind of interaction, which isn't going to happen immediately, between wanting more diversity in terms of people and also wanting more diversity in terms of topics and ideas. So deciding, for instance, that you want to hire someone who studies childhood development in an AI group in a way that you might not have done before, and then realizing, oh, wait a minute, this is actually really informative and gives us a whole perspective we wouldn't have had before. [00:09:19] Speaker B: Yeah, I hadn't made that connection between the exploration that we're going to talk about in children and the diversity that would be beneficial just for idea generation in the field. [00:09:29] Speaker A: That's right. And there's some fascinating sociological work. I've just been talking with a sociologist at Chicago named James Evans, who's actually done this beautiful work where they use big data to look at all of the papers that have appeared in science over the last 50 years. And one of the things they show, empirically, is that the papers that have more diverse groups of authors, it's kind of interesting, are more likely to have long-term high citation rates. So if you just look immediately, the groups that are the usual suspects, everybody you know working together, seem to get cited. But if you're looking at the long term, like what are the things that really make the big revolutionary contributions that people see 20 years later, those are more likely to be from groups that didn't know each other to begin with, or groups that aren't in the same networks. And I think that's a principle that you can see at many, many levels, from thinking about ecology to thinking about childhood, to thinking about neuroscience. Right. So one of the things that we know is that an interesting thing that happens in development is you start out with this very plastic, flexible brain with many, many, many potential synaptic connections. And then what happens as you get older is that the connections that get used a lot get maintained, and the ones that don't get used get pruned.
So you literally are starting out with a brain that has more potential for diversity, more potential for exploration. And then as you interact with the environment, you end up with this brain that's much more finely tuned for one purpose. [00:11:10] Speaker B: Brittle, you might say. [00:11:12] Speaker A: Yeah, exactly. So you've got this kind of difference between brittle but effective. Right. Brittle but effective for certain things. Sure, yeah. So the idea is that, and in a sense, right, it might seem kind of puzzling, like, why would you have to go through this process? Why not just fine tune your brain to whatever it is that it needs to do to begin with? And I think it's the same idea, that having the diverse pathways early on gives you a chance to be robust in the face of environmental change and variability. [00:11:41] Speaker B: Well, everything that you were just talking about is going to come up over and again during the podcast. But I wanted to start out with a visual for you. Since this is an audio podcast, I thought it'd be great to show you a visual and then you can describe it for the audience, see if you can guess what it is here. But I'll preface this by saying it's kind of a running joke in my family, since you were talking about having women be part of the AI community. I am known as the motherly father in my family, because my grandfather said that around the dinner table one night and he meant it as a compliment. But of course I have an older brother, and so now it's more of a running joke than recognized as a compliment. But I'm trying to take it as a compliment. Okay, so here is, let's see, and I'm sorry, this is because the screen is backwards. Can you describe what you see there? And that's 51 and 49. [00:12:29] Speaker A: 51 and 49. So this is a circle with blue on one side and orange on the other and 51 and 49 on the top. [00:12:37] Speaker B: So this is a pie graph. And this is how I would explain to people the average of my feelings about whether it's worth being a parent. For about a six-year stretch, it was about 51% worth it and 49% not. So it's barely worth it. Right. Those were six, you know, six-ish kind of hard years. But then recently, this is what my pie graph looks like now, where it's more 67 to 33. So my kids are 8 and 6, and man, it's gotten a lot easier. Yeah, I don't know if you remember. [00:13:10] Speaker A: Yeah, well, I'm a grandmother now of four babies, the oldest of whom is nine. So I'm seeing a lot of the little ones, too. [00:13:20] Speaker B: Right. You're going to make your grandson Augie famous through your book, The Gardener and the Carpenter. Okay, so anyway, enough about my parenting experience. Let's talk about children. So you're in the midst of this project to develop common sense in AI. That's the big DARPA grant, I believe, correct? [00:13:40] Speaker A: That's right. Yeah. [00:13:41] Speaker B: So there's been the deep learning explosion, and machine learning is all the rage. Learning, learning, learning. It's all about learning. But then there's been a little, not backlash, but some checks on that, because there have been a few people like Liz Spelke. I've had her on the podcast, and she wouldn't denigrate learning, but she does talk about the core knowledge skills, I can't remember the exact phrase, but things that children are just about born with, that they seem to come into the world with some core set of innate abilities, or they're developed very early on.
And you also have people like Tony Zador, who talks about how relatively unimportant learning is, that the learning we do in our lifetime is a very, very tiny amount relative to the optimization process that's happened through evolution. And I'm going to guess that you're heavy on the side of learning being much more important than not important. Am I correct? [00:14:42] Speaker A: Yeah, that's right. I mean, one of the things that's been really interesting is going back to... Here's a way that I like to summarize it. Going back not just to the 80s and not just back to the beginnings of AI, but going back to the beginnings of philosophy, really, to thinking about Plato and Aristotle. There's been this really foundational kind of paradox. And here's the way I describe the paradox. We seem to know a lot about the world around us, and that knowledge seems to be very abstract and structured. And the fact that we have this abstract, structured knowledge about how space works or how objects work or how people work lets us make powerful new inferences and new predictions beyond what we immediately learn. And of course, the most dramatic example of this is that eventually we have the very abstract kinds of knowledge we have in science, but even in everyday life, that's part of the common sense. We know a lot, and our knowledge seems to have this really kind of abstract character. And yet that information all seems to come from a bunch of disturbances of air at our ears and photons hitting the back of our retinas. The data we're getting from the world is very, very incoherent, specific, particular, impoverished. So how could we ever get there from here? How could we ever end up with these kinds of abstract theories about the world, given that the data doesn't seem to have any of those characteristics? And going back to Plato, literally going back to Plato and Aristotle, there have been sort of two ways of trying to answer that question. One of them, which is the Plato way, or Liz Spelke's way or Noam Chomsky's way, is to say, look, it only looks as if we're learning this from the data. Really, the reason why we have all this abstract structure is it's just there to begin with. And the data maybe is filling in some of the details, but we have this abstract structure to begin with. And that's the approach of good old fashioned AI back in the day, somebody like McCarthy and Lisp, that was a kind of approach to AI that people had. It's the approach of philosophers like Descartes, it's the approach of people like Chomsky in linguistics. Then the alternative has always been to say, look, it only looks as if we have all this abstract structure. If we only look more closely, we'll see that what we really have are a bunch of correlations among specific kinds of data. So we can actually do all those things that we think we need abstract structure for, we can do all that just with correlations among the data. And that goes back again to Aristotle. It's the view that someone like Hume or J.S. Mill, the associationists, have. Then it's the view of behaviorists, it's the view of connectionists in AI. And most recently it's been the view that's really been the underpinning of deep learning and machine learning. And there's been something of this kind of ping-pong back and forth, where people try the
it's-all-innate-structure approach, and that runs up against problems, and then people go back to no, it's just correlations, and that runs up against different problems. And I think for developmental psychologists, and of course someone like Liz Spelke is a developmental psychologist, but as you say, I think she would even acknowledge this, neither of those has seemed like a very good, satisfactory result. Because when we actually look at what kids do, what we see is that from the time that they're very, very young, even infants, they seem to have this kind of abstract, structured knowledge of the world around them. But, and here's where I would disagree with Liz, they also seem to be modifying, changing, revising, altering that structure in the light of the data that they see. So somehow that very specific experience at your eyes and ears when you're, say, nine months old or three years old seems to actually change the kinds of abstract representations that you have. And Piaget, you know, the great founder of cognitive development, used the word constructivism to try to capture that fact. And unfortunately, he couldn't say much more about, like, what actually are the mechanisms that would enable you to do that kind of constructive process. But that, I think, is the core of what we're trying to do, at least with the Berkeley branch of this Common Sense project: to see, can we think of mechanisms that could have that kind of output? And I think people in AI are increasingly realizing that winter may be coming. There's been this wonderful AI spring, based on the fact that things like the deep learning techniques, neural net techniques, turn out to work really well when you have enormous data sets and when you have big, powerful computers. But I think there's a sense that fall is in the air. We're coming up against some of the limitations of those systems. So it takes enormous amounts every time you want to scale up GPT-3, however many billion more parameters it's using; it's an enormous investment, and it still says stupid things a lot of the time. So I think there's a sense that even though there's been a lot of progress, the kind of learning that we see in something like deep learning or deep reinforcement learning just doesn't look like the kind of learning we see in babies and kids, where we learn from very small samples, we make very big generalizations from very small samples. And the question is, how can we do that? How is that possible? [00:20:14] Speaker B: So Alan Turing, in his famous 1950 paper, said: Instead of trying to produce a program to simulate the adult mind, why not rather try to produce one which simulates the child's? And you've been quoting Turing on this idea, to create an AI child or to model AI after a child, and you've noted that it's only in the last few years that this idea has sort of been acknowledged and accepted and sort of taken off, and more people are using that quote from Turing's paper. I mean, for instance, when I started this podcast, like, two years ago, I had Dan Yamins on. And he does a lot of the deep learning work matching models to visual cortical hierarchies in the macaque brain. And that was his go-to as well. He quoted Turing, or at least mentioned that quote about needing to build a child, and thinking that that is the way to go. [00:21:08] Speaker A: Yeah, it's funny. I mean, this is a true story.
So I've been using that quote for a long time. And I always prefaced it by saying, you know, it's funny, here's this incredibly famous paper, it's the Imitation Game paper, but people only read halfway. And when he says, wait a minute, maybe this whole idea about the Turing Test is wrong, maybe we should have this other, child version of the Turing Test, people stop reading. But then I was trying to actually find the quote for a talk that I was giving about three years ago, when I was starting to do a lot more of this work, and suddenly this quote is all over the place. So everybody's using this quote. And I think obviously the reason is that this big renaissance in AI depended so much on learning, that then the obvious... And of course, Turing's point is, you know, what you really want is not a system where you've built everything in. There's the story, which is possibly apocryphal, of the AI person saying, okay, well, we're going to figure out everything that everybody knows, and we'll get a couple of interns over the summer and then we'll program that into the computer. Right. That project obviously was not successful, was coming up against a lot of boundaries. But learning turns out to really be the key to the recent successes. But then, of course, if you're thinking about learning, you're starting to think about the best learners we know of, who are children. As I say, what we see is that children are doing a bunch of things that the typical machine learning programs aren't doing, but that, interestingly, there's a sort of convergence. So, for example, two of the things that... Well, here are three of the things that the kids are doing that the typical... [00:22:48] Speaker B: We are going to step through them in a moment as well, if you are okay with that, and comment a little bit on each. But I want to ask you beforehand. I mean, you have had a long career. And you have been successful for a long time, successful and productive. But it strikes me that you've had this thought, this idea, and you've been sort of preaching this for a long time, before it has now become popular. How do you know when you have an idea that's worth maintaining despite not everyone in the outside world getting it yet? How patient do you have to be? [00:23:18] Speaker A: I think that's a great question, and it's one of the things that I talk about with my students all the time. I think it's probably one of the hardest things you have to do as a scientist. Actually doing the science, like getting the idea and figuring out the experiments and doing them, the doing them is maybe the hardest part. But one of the hardest things is: look, I have this really good idea. I'm not getting love from the rest of the world about it. When do I just give up and when do I keep going? [00:23:42] Speaker B: And what's the answer? What's the answer? [00:23:45] Speaker A: Well, for what it's worth, and now that I'm an old person, I can say this, I tell my students I think of myself as having had two really good, great big ideas in my career, one of which was theory of mind. Yeah, well, theory theory in general, theory of mind in particular, and then the sort of causal inference version of theory theory, the idea of using Bayesian models as a model for cognitive development. And on all of those, I couldn't get funded. There was a period when I first started doing it, I couldn't get it published, I couldn't get it funded.
What I think of as the great period in our lab was when Josh Tenenbaum was at Stanford, and Laura Schulz, who's now at MIT, was in my lab, and Tamar Kushnir, who's now at Cornell, was too. And it was just this great, wonderful, exciting time, which was when we were starting to do the causal inference work and started to collaborate with philosophers of science and computationalists on Bayes nets and causal graphs. None of my students had any funding, because that was the one time when I didn't have any. I had this drought for several years of not being able to get funded. Now, of course, the trouble is, as in the case of art, the fact that you're not getting funded doesn't prove that it's a good idea. But it is striking to me that, you know, that first phase of trying to think of the idea and then trying to figure out how to implement it is really difficult, and you have to stick with it. Now, I have to say, there have also been ideas that I had that just petered out, ideas that I think are good ideas, because I couldn't figure out a way to turn them into a productive research program. [00:25:27] Speaker B: But you felt the same in your gut, your intuition was the same about those ideas versus what you consider your successful ideas? [00:25:35] Speaker A: I suppose, yeah. So, you know, there are tragic examples of people who've sort of isolated themselves from the community, just off pursuing their own ideas. And that isn't, you know, terribly successful. Even though our mythology is, oh yeah, there's the brilliant genius who, you know, locks himself, him being the operative word, up in an attic somewhere and then comes up with a great idea, that's not really the way it works in science. This gets back to this diversity point. It works by having lots of people who are interacting with one another. [00:26:13] Speaker B: Is it the more, the better? The more people you can bounce your ideas off, the better, and not worry about people stealing your ideas and scooping you and all that? [00:26:19] Speaker A: That has been my way of doing things over the years, partly because, as my students will tell you, I'm very bad at being tactful at all, or saying anything other than whatever it is that I happen to be thinking at the moment, which has some other kinds of downsides. But I do think, and again, I think the science suggests, that the more open you are, the more diversity there is, the better. And, you know, the open science movement, the transparency movement, has been a really good example of how much more productive that is. But one thing that I think is interesting, that people maybe don't appreciate as much, and I don't have any recipe, I think this is an art form, is turning the good idea into an operational experiment. So, you know, my background is in philosophy. In many ways, I still think of myself as being more a philosopher than a psychologist. And you can have a wonderful idea, but if you can't turn it into an actual empirical research program, it's not productive. And sometimes what happens is you have ideas about how you're going to turn it into a real experiment, and it just doesn't work. And at some point you have to say, okay, I just need to do something else.
With developmental psychology, that's especially true, because it's very hard to anticipate what things are going to work with kids and what won't. I mean, after 30 years, you start to have some intuitions about it. But there are lots of times where I would say, oh, that's just going to be too hard, the kids aren't going to be able to do it, they're not going to be able to concentrate, et cetera, and it turns out that they do beautifully. And then other times where I think, okay, this will really get the kids engaged, and the kids just look at you like, what are you doing? I don't want to do this. This is boring and stupid, and I don't get what the point is. So you're always in this position of not just translating it into an experiment, but translating it into an experiment that three-year-olds are going to be willing to engage in. [00:28:20] Speaker B: Right. Okay, well, let's talk three-year-olds. Three and four-year-olds, I suppose, is around the age group that inspires these algorithms and these ways of learning. So you talk about three main ways that children can teach us about learning and perhaps building AI that learns better than the current AI. So let's just talk about imitation first, kind of quickly, and then we'll go into abstract causal models, and then we'll finish with a longer exploration of exploration. So imitation is the first one. Do you want to just say a few words about what, about children's behavior and way of moving through the world with regard to imitation, could be used? Why would that be useful for AI? [00:29:05] Speaker A: Well, one of the things that is really striking about humans is that we learn from other people. And in fact, you could argue that we learn more from other people than we do from any other source. Most of what we learn is something that is being conveyed to us one way or another by other people, although we learn a lot from our own experience as well. And our capacity for culture, our capacity to have each generation pass on information to the next generation, is one of our really distinctive evolutionary advantages. And it turns out that even, I mean, in this case, newborn babies already seem to be tuned to learn from other people through imitating what they do. But what we and others have discovered is that it isn't like they just sort of mindlessly imitate. They imitate in very subtle ways. So they'll imitate someone differently depending on whether they think that the person was doing something on purpose or accidentally, or whether the person was doing something in order to teach them or was just doing something in order to accomplish a particular goal. And they systematically, we have some really lovely experiments that show this, combine the information they get from other people with their own experience, and they'll do that to do things like judge how reliable another person's evidence is. And again, there's beautiful work showing that even three and four-year-olds will judge whether someone's testimony, you know, is this someone who you can rely on? Even, is this someone you can rely on in a particular domain? You know, so this is someone who knows a lot about, you know, wood, so I'm going to listen to what he says about wood, but not necessarily... Or here's a better example. Here's someone who knows a lot about physics, so I'm going to listen to what he says about physics, not necessarily about consciousness. Right? [00:30:50] Speaker B: Whoa, wait a second.
The physicists would disagree with you. [00:30:56] Speaker A: Just because you know a lot about physics doesn't mean that you know a lot about everything else. Interesting example, but three-year-olds can figure this out. So that's one: being able to imitate, but not just imitate, being able to learn culturally in general in these sophisticated ways, is obviously a big capacity that humans have. And robotics, for example, is an example of a field where they're trying very hard to get robots to learn by demonstration, because of course, things like motor skills are almost impossible to sort of explicitly describe to someone or explicitly program in. And it's interesting that it's actually hard. You'd think even just getting a robot that can imitate your physical gesture would be easy, but it turns out to be quite demanding. So I think that's a very promising line of research. [00:31:46] Speaker B: So it really is about understanding the intentions of others and being able to imitate based on not just their actions, but the intentions of their actions. [00:31:56] Speaker A: Exactly. That's right, yeah. [00:31:58] Speaker B: So the second of the three main ways is using abstract causal models. Maybe you can say a bit about that. [00:32:08] Speaker A: Yeah. So again, if you're thinking about this difference between the sort of innatist approach to thinking about knowledge and learning, one of the questions is, what kinds of models can you have about the world? And in particular, what kinds of models can you learn from your experience in the world? And this is really the start of the work that I've been doing in AI and computation now for 20 years, starting around 2000, in fact, a little before then. And this came out of this idea of the theory theory. So the idea was that children are doing things that are very much like intuitive theories. So then of course we went to the philosophers of science and people in computer science and said, okay, so tell us, what is it that scientists are doing when they have theories? And they said, well, we don't know, maybe psychologists know. But around the aughts, there was this really interesting convergence of work from people in philosophy of science and people in computer science looking in particular at understanding the causal structure of the world. That's something that theories really do. They do other things, but one of the most important things they do is they tell you about what causes what out in the world. And there was this very exciting bunch of work showing how you could actually construct causal models from data. So this was an example where, for a kind of representation that was really important and really abstract and complex, we could think of some systematic ways that you could actually build a causal model from looking at correlations between events. And these are very, very general, powerful ways of understanding the world. So the question was, are kids doing something like that? And, this is the part about, you know, figuring out the experimental technique, what we did was figure out a way to actually give kids information about a causal system, our blicket detectors, a little machine that lights up, our blickets, and then just say to them, can you make this work, or can you tell me which ones are blickets? And when we did that, it turned out the kids were incredibly sensitive to pretty complicated statistical patterns.
When we were doing this, in parallel, people like Jenny Saffran and Dick Aslin at Rochester were showing that even infants are sensitive to these statistical patterns. But not only were they sensitive to the statistical patterns, they were quite automatically and spontaneously using them to make causal inferences the way that a scientist or a statistician would. And what we've done since then, over the past 20 years, is show just how general and powerful that kind of learning is. You can not only learn specific causal relations, but you can learn abstract, higher-level causal relations. You can learn multiple causal relations with multiple variables. You can use that kind of learning to do experiments. You can learn both from observation and experiment, the way scientists do. You can postulate unobserved causal variables. So, you know, you see a pattern that doesn't make sense unless you think there's some hidden thing behind the curtain that's responsible for it. And again, even little kids are doing that. You can use analogy to try to say, okay, the causal structure in this domain is like the causal structure in this other domain, another really important kind of learning. So it's been very exciting, because it's an example of how you could do this constructivist project in real life, of taking a bunch of data and then building abstract models. And then in the past maybe 10 years, people across cognitive science have really generalized this idea to talk about probabilistic generative models in general. So even if you go beyond causality, the idea is that you can build these generative models. And when you have a generative model, it lets you make predictions. That's important. But it also means that you can then go backwards and use Bayesian methods to go from the data and decide what the right generative model is. And one of the things that I think is happening a lot in AI now, in a very exciting way, is having these kinds of hybrid models that have generative models of one sort or another, especially causal models, and yet use some of the powerful learning techniques from things like deep learning to actually learn those models from data. So you have this kind of hybrid combination of trying to make sense out of the data and then trying to build the models based on the data. [00:36:37] Speaker B: One of the things that you've said is that the reason why kids are better at learning these causal abstract models is because they have fewer priors. So they start off with a cleaner slate. And this will come up again, I think, when we talk about exploration in a second. But is there a paradox in that? So children, and let's say a low-prior causal abstract model, have to build priors over time, and then that's what learning is. Right. So it seems like it's a goal almost, or a direction, for development and learning. But is there a paradox there: can we build systems that forever learn in that way? Or does learning by definition include building the priors that then make us worse learners? [00:37:27] Speaker A: Right, exactly. [00:37:28] Speaker B: Adults, or trained-up AI systems?
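[Aside: to make the causal-learning idea above concrete, here is a minimal sketch, in Python, of Bayesian inference over which objects are "blickets" from observations of when a detector lights up. It is only an illustration of inferring causal structure from correlational data under a simple noisy-OR assumption; the object names, probabilities, and observations are made up, not taken from Gopnik's actual experiments or models.]

from itertools import product

objects = ["A", "B"]
# Each hypothesis assigns blicket-hood (True/False) to every object.
hypotheses = [dict(zip(objects, bits)) for bits in product([False, True], repeat=len(objects))]

# Illustrative noisy-OR likelihood: the detector lights up if any blicket is on it,
# with small chances of failure and of false alarms (made-up numbers).
P_LIGHT_GIVEN_BLICKET = 0.95
P_LIGHT_GIVEN_NONE = 0.05

def likelihood(hyp, on_detector, lit):
    p_lit = P_LIGHT_GIVEN_BLICKET if any(hyp[o] for o in on_detector) else P_LIGHT_GIVEN_NONE
    return p_lit if lit else 1.0 - p_lit

# Made-up observations like those a child might see: A alone lights it up, B alone does not.
observations = [({"A"}, True), ({"B"}, False), ({"A", "B"}, True)]

# Start with a flat prior over hypotheses and update on each observation (Bayes' rule).
posterior = [1.0 / len(hypotheses)] * len(hypotheses)
for on_detector, lit in observations:
    posterior = [p * likelihood(h, on_detector, lit) for p, h in zip(posterior, hypotheses)]
total = sum(posterior)
posterior = [p / total for p in posterior]

for h, p in zip(hypotheses, posterior):
    print(h, round(p, 3))
# The hypothesis "A is a blicket, B is not" ends up with most of the posterior mass,
# even though the learner only ever saw correlations between objects and the light.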
[00:37:30] Speaker A: Yeah, I mean, I think one of the things that I increasingly think we don't appreciate enough in cognitive science, and maybe don't appreciate enough in the United States in general, is the idea that there are genuine trade-offs, that you can't always optimize everything. And this is something that of course comes out of the technical work in computer science in pretty vivid ways. Right. So you can't optimize both having as much information and as strong a prior as you can, and being as open to learning as you can. Those two things are just intrinsically in tension with one another. And a thing that I want to emphasize a lot is that a lot of the traditional ways of thinking about development have had this kind of implicit teleology, that there's some place you want to get to, right? You know, whoever the 30-year-old boy genius is in your department, like that's the absolute peak of all of human cognition. Right. And development is about, can you get to be more and more and more like that mind. Right. That's really what you're trying to do. [00:38:41] Speaker B: You're not helping tear down stereotypes here. [00:38:43] Speaker A: Yeah. And then of course, as you get older, you're just falling off from that model. But that doesn't make much sense from an evolutionary point of view. Like, if it was so great to be that guy, then why not just be that guy? Right. And the idea is that there really are these genuine trade-offs, and having a period of childhood, for example, it's not that the children are just sort of defective grown-ups who have to develop priors, but the very fact that you start out with a system that has a flatter prior is actually an advance. And this is the point about the explore-exploit trade-off in general that we'll talk about in a minute. Of course, if you were just always ignorant, right, it might make you good at learning, but you're going to always be ignorant. So I mean, the whole point of learning is to be able to have enough structure so that you can quickly make decisions, for example, in a new situation. But it always has this trade-off, that as you learn more, you're going to be seeing less, you're going to be less open to information from the world around you. And when you said it's a paradox, it is, but I mean, that's what it is. That's the point. The point is that there's this intrinsic trade-off between these two developments. [00:40:01] Speaker B: But if you're using the idea that children have these low-prior causal abstract models, and that's what we should model AI after, then the AI is going to become adult too. Right. So I'm trying to picture how the AI will continue to be great AI in that low-prior structure, like in this situation that you just described, where it's a trade-off over time and you end up with specialized AIs either way. [00:40:27] Speaker A: Yeah, I mean, I think exactly, the question is... and the advantage of relying on learning is this point about flexibility. Right. So I've argued that what we need to have in AI is a kind of developmental story within AI. So we might need to actually have some differentiation, a kind of division of labor between, say, parts of our system that are the explorers and parts of our system that are actually the actors. Or an obvious thing:
start out with a system that doesn't know as much, that isn't as competent, and then let it develop into a system that is competent, which again is kind of the advantage of something like deep learning. The problem is, anytime you build something in, it's going to have the advantage that you have things built in; you're going to be more competent if the thing that you build in is true of the world you're in. But of course, you don't know that you're going to be in the world in which the thing that you built in is true. So you're going to have a disadvantage if the world turns out not to be the world in which the thing that you built in is true. And I think what biologists call life history, the developmental trajectory of an organism, is actually a way of trying to deal with those kinds of tensions and trade-offs. [00:41:42] Speaker B: What do you think about that phrase, life history? [00:41:44] Speaker A: I love it. Life history is a wonderful example of exactly this kind of interdisciplinary interaction. So if you ask most psychologists, including most developmental psychologists, they wouldn't even know what life history means. [00:41:59] Speaker B: I was going to say, like, I have a hard time with the phrase just because it doesn't sound like it means what it means to me. [00:42:05] Speaker A: Yeah, well, you know, that's what always happens when you have a technical term. If you ask evolutionary biologists, they immediately know what you're talking about, because life histories are a really, really important part of biology. You know, in biology, they're less interested in trying to differentiate between sort of the psychological and the biological, for example. So life histories include a whole lot of just physical, biological things: physically, how long does it take you to mature? How senescent are you? When do you die? In fact, in biology, a lot of the life history is about things like that. When do you reproduce, when does an organism die, and so forth. So it sort of starts out in this physical context, but I think it's actually really informative in the psychological context too, to think about the whole way our life unfolds, and to think about the way that a life unfolds for a species as a really informative piece about how that species is managing to adapt to its environment. So I've been on a campaign to get life history into psychology. We just have a paper, in a special issue of Proceedings of the Royal Society that came out, called Life History and Learning, which I think is the first time that people have tried to put together those two ideas. [00:43:21] Speaker B: Well, so let's go ahead and segue into exploration versus exploitation, because within that series of papers, this is what you really focus on. So that's the third main way that children learn that you see as being inspirational for potentially building into AI, and that is active learning, which includes play and exploration. And you've said that childhood is evolution's solution to the explore-exploit dilemma, right? Have you heard of the concept from computer science, this is from Ken Stanley, actually, who was on my podcast a while back, of open-endedness? [00:44:01] Speaker A: I mean, I'm not sure what the specific idea is supposed to be.
[00:44:05] Speaker B: Okay, so I'm going to read a quote here. So the idea of open-endedness, and I emailed him after and sent him your work, is essentially that, rather than trying to follow objectives, by using objective functions, by trying to achieve objectives in workplaces and to generate great ideas, what we actually need to do, to do great things, is remove the objectives. And so it's very in line. So here's a quote from your book The Gardener and the Carpenter: The fundamental paradox of the explore-exploit trade-off is that in order to be able to reach a variety of goals in the long run, you have to actively turn away from goal seeking in the short run. These two ideas seem to jibe pretty well together. [00:44:53] Speaker A: Yeah, I think so. And of all the things that are lessons for AI from childhood, before people in AI were thinking, I think, about children, they were starting to think about active learning and exploration, and recognizing that to get a system that can be robust, you need to be able to have it actually kind of escape from its mainframe and get out there and actually be deciding what kind of information it wants to get from the world itself. And in order to do that, you need to have objective functions that include things like curiosity. Right. So, satisfying curiosity. I mean, curiosity is fascinating. I have a grant and I've been thinking a lot about it. Curiosity is fascinating because it has all the structure of a motivational system. It comes with emotions, you know, like you're curious, you're just driven, you know, you go out and you feel like you're just not going to be happy until you solve that problem. And yet it doesn't come with any obvious utilities, right? So, you know, you get a bunch of information, but it's not at all obvious that the information is going to do anything for you. So the idea that I think people in neuroscience have had as well, and I think it's a very interesting idea, is that you could take the classic structure of reward, something like, okay, you get your goal and you get a burst of dopamine, and then you go out and try to get more of the dopamine or whatever, and apply it to something like curiosity. So you could think that part of what especially happens with humans is we get this kind of extra, sort of pseudo-motivational system that uses a lot of the same apparatus. We're motivated, we're happy when we do it, we want to do it, we are trying to accomplish it, even though the goal isn't any of the goals that you would think of as being a typical goal for an organism, right? It's not increasing its resources, it's not increasing its reproductive success. It's not doing any of those kinds of immediate things. And I think that's one way of thinking about curiosity. We've been doing a bunch of work recently about play, and an interesting idea about play is that it's a situation in which you set yourself these goals, but they're not goals that are actually useful for anything; rather, they teach you what it means to set up and accomplish goals, and they might be sub-goals in some new problem that you're going to have to face in the future. There's some very interesting work in meta reinforcement learning, where the idea is that you're learning to learn. So, learning how to accomplish something like, I don't know, like playing chess, or even more... My grandson plays what we call Addy chess.
He's five now, but when he was four, he'd watch his big brother, who plays chess, and then he'd play Addy chess, which is things like, I'm going to knock over all the pieces, or I'm going to put all the pieces in the wastebasket, or now I'm going to see if I can put all the white pieces on the black squares. And it's setting up these kinds of sub-goals that are not actually the goal of chess. But you can see why figuring out how to do all those things gives you the kind of sub-goals that you might need if you're actually going to get to the point of doing something like playing chess, or at least just playing with the world in general. [00:48:10] Speaker B: Also, that particular example has some of the other features that are part of these exploration qualities, like it's noisy and random, and I don't know if you'd call it risky, maybe if you're going to lose a piece; kind of impulsive, and it definitely involves play and just curiosity, like an intrinsic motivation for actively learning, I suppose. [00:48:32] Speaker A: Yeah. And one of the other big points that I've been making, and this is part of the trade-off point, is that things that are bugs from the perspective of exploration can be features from the perspective of exploitation, and vice versa. Or maybe a better way of putting it is that the things that are bugs from the perspective of exploitation can be features from the perspective of exploration. So being impulsive, taking risks, doing things in a kind of random way, having a lot of variability in what you do, doing things without necessarily getting any high utilities out of them, those are all things that make you look irrational if you're thinking about it from the exploit perspective. And in fact, those are the things that traditionally have led people to say that children are irrational or, you know, kind of not very functional or not very adapted. But those are all things that are exactly what you want if you want to get high levels of exploration. And the biological literature is really interesting about the fact that being in this protected period of childhood gives you a context in which you can exploit these bugs, as it were; you can be an explorer because nothing is riding on it. You don't have to worry about it. One way I put it is that babies and young children have exactly one utility function which they're extremely good at maximizing, which is: be as cute as you possibly can be. And as long as you're as cute as you possibly can be, then, and you probably know this if you've had little ones, you know, when you were showing your pie chart, right? Why do you keep going? Well, it's because, you know, somehow in the middle of all this there's this adorable smile and, you know, just amazing, overwhelming cuteness. And that just keeps you going in terms of taking care of this little creature. But aside from that, the babies are free to just be learners and thinkers. [00:50:29] Speaker B: No, I mean, yes, the cuteness works for sure. One of the things, though, that's interesting to me, because I continue to just come back to thinking of how these qualities could apply in AI, is that one of the consequences of children having that long period where nothing is riding on it is that it's a high cost to the environment, including their caretakers, be those parents or grandparents or alloparents.
And it struck me that maybe there's a pretty high cost if you're going to build the AI. So now I'm thinking, if you build this into AI, what does that mean? Are they going to be relying on the adult AIs and driving them crazy in the process, or how will that look? [00:51:18] Speaker A: Well, I think that's exactly the point, and it's one of the things that I've been thinking about recently; this is a bit of a segue. So one thing is just the research and development, being able to have the system not be very successful to begin with, but then have more success over time. Again, I think the important dimensions are robustness and variability. So the trade-off is, it's easy to design a system that does one thing really, really well, but the problem is what happens when you want it to do something else. And robotics is a nice example, I think, where in some of these collaborations this has been very vivid. Right. So if I said, yeah, okay, great, I'm going to get a robot, it doesn't matter, it'll do one thing well, like it'll pick up a nail. Well, it turns out, no, no, that's not one thing for a robot, right? What the robot might be, what you could train it to be very good at, is moving its hand in exactly this particular trajectory to get this particular object that's this particular size and pick it up. And even just being able to get it so that you could move its arm over a little bit and it could still get to the nail turns out to be really challenging, let alone putting it in an environment where it's not picking up nails, it's picking up pens, and it's in a different factory, and getting it to be able to do that. So this robustness and variability is really, really important. And an idea that, again, you see in robotics is: start out with a system that's not actually doing as good a job, but if you let it learn, then it will be able to be more robust, and especially if you let it learn in this kind of playful, exploratory way. One project that we're working on now, which I think is just a lovely project, is a collaboration with some people at Google Brain. Pierre Sermanet and Corey Lynch at Google Brain have this beautiful result, and this goes back to imitation, where they're designing robots that are supposed to be imitating people to do things like, given a desktop, put things in the drawers on the desk. And what they discovered was, the first way you might think that you would train the robot is you show it people putting things into drawers and desks, and that's the data that it's going to use to figure out what it should do itself. Well, it turns out that when you do that, you get this classic kind of overfitting, where the robot gets to be very good at doing exactly what the person did, but can't generalize to anything new. So instead, what they did was they just said to Google engineers, play with the things on the desk. Don't try and do anything, just play with them. And what they discovered was that if you gave that data to the robot, the robot was much more robust. So having this, you're not trying to do anything, you're just messing around, you're just playing, it lets you explore the space in a way that means you end up with a more robust system later on. There's also a beautiful neuroscience finding that I really like.
There's also a beautiful neuroscience finding that I really like. This is Beckett Ebitz, and they were doing single-cell recording in primates who were solving these classic reinforcement learning bandit tasks. The way a bandit task goes is, basically, you have two buttons to press or two levers to pull, and one of them gives you a certain amount of reward and the other one gives you a reward more often. What you can very rapidly learn to do is pull the one that is going to give you the most reward. The problem is, if the environment changes, maybe it's going to turn out that the other one is actually more rewarding now. So if you think about a natural foraging context, you eat all the fruit off of one tree, and the other tree is actually going to be a better bet now than it was before. So this is the classic explore-exploit framework: when do you decide that you're going to try the thing that wasn't as successful before? What they did was record from the decision-making parts of the brain as the monkeys were trying to make this decision. And one idea was, well, maybe you have to make this meta-decision where you say, okay, I've been doing this for long enough, maybe there's been a change, maybe I should try the other one. In which case you'd expect to see activation of decision making. But what they actually saw was that other parts of the brain just injected noise into the decision-making process. So if you're trying to imagine the phenomenology of the animal, it was like, oh, the hell with this, I've just been pushing this goddamn lever all day, it's really boring, I just want to do something else. And I think that "the hell with this, I don't know what the long-term effect is going to be, I'm just going to explore" is more like what you see when you see kids who are really exploring just for their own sake. [00:56:08] Speaker B: So every once in a while the brain just kicks itself. [00:56:11] Speaker A: Yeah, exactly. [00:56:13] Speaker B: So I have a few listener questions here, if you're game for it. [00:56:18] Speaker A: Sure. [00:56:19] Speaker B: This is from Sammy. Is exploration versus exploitation a valid dichotomy, given that everything is an uncertain prediction at a certain level? And he's wondering if there's evidence in brains for that, or brain structures, or if there's evidence for a kind of spectrum between explore and exploit; how dichotomous are they? [00:56:40] Speaker A: Well, there's an interesting question that again gets back to this randomness question. If you look at adults, you see two quite different kinds of exploration, which people describe as being either directed or random. So this gets back to exactly the point I was just making. You could say, all right, if I'm a scientist doing an experiment, for example, I have to actually sit down and think: which experiment should I do that will give me the most information relevant to the goal I'm attempting to accomplish later on? Sometimes when you're exploring, it's that kind of exploration, where you're self-conscious that you need information in order to accomplish a particular outcome. You could think of that as almost a sub-goal of exploiting: okay, I need information so I can exploit more effectively.
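[Editor's note: a minimal sketch of the two-armed bandit setup described above, with exploration coming from noise injected into the value comparison rather than from an explicit decision to explore, in the spirit of the finding just mentioned. The reward probabilities, noise scale, and learning rate are illustrative assumptions.]

```python
import numpy as np

rng = np.random.default_rng(0)

def run_bandit(reward_probs=(0.7, 0.3), steps=500,
               lr=0.1, noise_sd=0.5, switch_at=250):
    """Two-armed bandit. At `switch_at` the environment changes so the other
    lever becomes the better one (like the depleted fruit tree). The agent
    never makes a meta-decision to explore; occasional switches come from
    Gaussian noise added to its value estimates."""
    probs = list(reward_probs)
    q = np.zeros(2)                      # running value estimate per lever
    choices, rewards = [], []
    for t in range(steps):
        if t == switch_at:
            probs.reverse()              # the world changes under the agent
        noisy_q = q + rng.normal(0.0, noise_sd, size=2)   # injected noise
        a = int(np.argmax(noisy_q))      # greedy choice on the noisy values
        r = float(rng.random() < probs[a])
        q[a] += lr * (r - q[a])          # simple incremental value update
        choices.append(a)
        rewards.append(r)
    return np.array(choices), np.array(rewards)

choices, rewards = run_bandit()
print("mean reward after the switch:", rewards[250:].mean())
```

With the noise turned off (noise_sd=0), the greedy agent keeps pulling the formerly better lever after the switch; with noise, it drifts onto the newly better one without any explicit "time to explore" computation.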
But sometimes you're doing this more random exploration, like those monkeys. And as a scientist, too: scientists officially aren't supposed to just go on fishing expeditions, but I bet you'll never find a good scientist who doesn't say, boy, this great idea just came because I was messing around and this totally unexpected thing happened. And in fact, there are some nice sociology studies showing that the difference in the Nobel Prize-winning labs is, when something weird happens, do you say, oh, forget about it, that's just a distraction, or do you say, huh, that's weird, why did that happen? Let's figure out why that happened. So there is some sense in which, from the evolutionary perspective, in the long, long run you're talking about thriving in environments and having reproductive success, which is an exploit value. But in the short run, I think you can really discriminate between the systems that are responsible for one process and the other, and you even see that discrimination in the brain. [00:58:32] Speaker B: Is synaptic pruning in infants modeled as a gradually lowering learning rate? And does that analogy hold when modeling differences between brain regions in adults? [00:58:44] Speaker A: Yeah, that's a really good question. And one of the issues is what the relationship between the two is. This gets back to the point you were making before: you don't want to be as good a learner, in a sense, when you're an adult, exactly because you can rely on the things you've already learned. One of the things I think is interesting is that if you take an introductory cognitive psychology or cognitive science class, the adult cognitive psychology class, or look at that textbook, and then you take the cognitive development class, they talk about totally different things. The adult cognitive psychology class is going to be about attention and memory and decision making and putting things in short-term stores versus long-term stores and trying to decide what values you should have and how you calculate your utilities under uncertainty. Those are all the topics you're going to see in any standard cognitive psychology textbook. If you go and read the cognitive development textbook, you're going to hear about intuitive theory formation and grammar induction and statistical learning. Now, at the end of the grown-up textbook there might be a chapter about learning, and somewhere in the kids' textbook there might be something about attention and memory. But it really is striking that they're really, really different systems. And the adult one really is one where learning is not always going to be an advantage; that's not the main point. Whereas for the kids, learning is the main point. [01:00:18] Speaker B: Very good. Donald wonders, and this is changing topics just a little bit, whether this is relevant for when researchers tend to publish their breakthrough research. So first, is it true that breakthrough, legacy-establishing research publications come earlier in careers? And second, if so, is it because graduate students have a high exploration ratio relative to later in their careers?
[01:00:47] Speaker A: Well, I think once you're talking about adulthood, one of the interesting things is that we're more context-determined, I think, than in childhood. In childhood, the truth is, you're not going to be able to get your 3-year-old to be efficient at getting themselves together and getting to preschool no matter what you do. [01:01:06] Speaker B: Right. [01:01:06] Speaker A: So you might as well just enjoy it; you might as well just enjoy the exploration, because the other thing just isn't going to happen. But when you're talking about adults, even post-adolescence (adolescence is a really interesting period from this perspective), if you're talking about people over the age of 20, let's say, we have the capacity to do both of those things, and we can move back and forth depending on the context we find ourselves in. And again, if you're in a more protected context, like, hopefully, graduate school, that gives you an opportunity for exploration. So I think it's an interesting question within adulthood how much of it is really that as you get older you get less exploratory, and how much of it is that the context you find yourself in makes you less exploratory. One of the things I was saying before about my own career, and I think the career of lots of productive scientists, is that you go through, say, 10 years (for me it's been roughly 10 years) of really nailing down an idea, and then getting kind of bored and wanting to be a kid again, and taking something like causal Bayes nets, or reinforcement learning, or life history, something you don't know anything about to begin with. And putting yourself in that kind of challenging situation is actually the thing that gives you the inspiration; then you become an expert, and you learn, and you do a bunch of things afterwards. So I think a lot of it in adults depends on context. That's the rationale behind having a sabbatical every seven years, or spending time at the Center for Advanced Study: you pull someone out of the conventions that they have, and that enables them to think about something in a new way, to get back to what the Zen masters call beginner's mind, even when you're an adult, even when you're older. One of the things that I'm just starting to be really interested in now is that if you're thinking about human life history, there's the childhood part, which is distinctive, but then there's also what I think of as the elder part, roughly 50 to 70, which is also distinctive. Chimps are dead by the time they're 50, and humans are very unusual in that we have this postmenopausal period, and the men kind of get it for free as well, this later period of 50 to 70. One interesting idea is that there might be further adaptations to being in that niche. It's no longer the niche of exploring or exploiting; I think it's the niche of caring and teaching. And one of the things you start to see is that kids are learning some of the big structure of the environment not so much from the parents as from the grandparents.
The grandparents are the ones who are telling the stories, making up the myths, seeing and being willing to convey: here's what the big picture is. I must admit I have found myself, in the AI community of course, conveying that perhaps there was a result back in, you know, 2015. That's an important role of elders, because the students can't believe there actually was something back in the ancient olden days like 2015. But I think that's part of the idea. [01:04:31] Speaker B: Yeah, I was going to ask you about the elderly. There's so much focus on how great children are at learning and, through your work, on what we can learn about making better AI by studying children. What about the poor elderly population? They always get the short end of the stick. They don't get to be the 30-year-old white guy in his attic, and they don't get to be the child with infinite potential. So what good are they? [01:05:00] Speaker A: Yeah, no, that's interesting. I just wrote a piece in Aeon, which is a great kind of ideas journal, which I love. So I have a piece that just came out exactly about this. [01:05:11] Speaker B: Oh, okay. [01:05:12] Speaker A: And in fact my new book was originally going to be called Explore, and it was going to be about children, and it still will be. But I'm now thinking I'm going to call it Curious Children, Wise Elders. And the idea would be that there are really these three stages of man, or three stages of woman; in fact, woman is in some ways more appropriate. You have this explore period, then you have this exploit period, but then the only way the explore period will work is if you have care, if you're in a context in which you're being taken care of. But more than that, if you're a cultural species: interestingly, the only other species we know of that has postmenopausal grandmothers is the orcas, and it turns out orcas are a rare example of a cultural species, a species that has cultural transmission. If you think about cultural transmission, the niche of being the one who passes on the information to the next generation fits very naturally with the one who's also providing the safe environment in which you can explore. And I think that's really what the elders' niche is: we've accumulated the information over our lifetime, and we're kind of in charge of the information that's been accumulated over previous lifetimes, and now our job is to take care of the people who need to be taken care of and also to transmit the information to the new generation. There's a beautiful paper by Michael Gurven, in Phil Trans, that I really like, where they looked at hunters. And one of the things that is kind of an interesting contradiction is that if you're thinking about culture, you assume the young hunters are not very good at hunting and have to learn it by observing and learning from the older hunters; that's the great human advantage. But of course, as anyone who has kids knows, when we make pancakes with the grandchildren it takes four times as long as when we make pancakes without the grandchildren, because it's hard to actually get something done when you're trying to teach someone who's not very competent to do it at the same time. [01:07:17] Speaker B: That's before the cleanup. [01:07:18] Speaker A: Yeah.
And what Michael discovered was that what you saw was pairing. The 30-year-old guys were going off by themselves to hunt; that was the most efficient way for them to bring food back, and it was not really a great idea to have a bunch of 8- and 9-year-olds trailing along behind you if you just wanted the biggest hunting yield. But the 8- and 9-year-olds were with the older people, who weren't as efficient (they weren't as strong, they weren't as good at hunting anymore), but they had the knowledge. They were the ones who really had the information, and they were also the ones who had the time and resources to give it to the children. So you saw these pairs of the younger people and the older people, and the middle-aged people were off doing the things they had to do to get the resources, or were teaching the kids how to do the practical things you needed to do to get the resources. [01:08:09] Speaker B: At least the grandmothers. So what you're saying is that basically the old men are the only worthless beings left? [01:08:14] Speaker A: No, the old men were actually the ones who were hunting as well, if they could just keep themselves focused. I mean, the grandmothers are more dramatic, of course, from an evolutionary perspective. But I think the idea, and this is my possibly self-serving evolutionary just-so story, is that if this is true, if this is the niche, then remembering what happened yesterday isn't the valuable thing; everybody knows what happened yesterday. But remembering what happened 50 years ago, or in the case of AI, remembering what happened in 2000, that's actually a really useful thing to remember. You could even imagine some cognitive adaptations, so that not being as good at immediately remembering something, as opposed to being able to remember the thing that happened long before the person you're talking to was around, is a good adaptation. [01:09:11] Speaker B: Speaking of remembering, I know we're running short on time, so I have so many questions I'm not going to be able to ask you. But one I want to make sure I ask about is consciousness. Our intelligence and our learning, like we've been discussing, change over time; our entire being changes over time. Thomas Nagel has the famous "What is it like to be a bat?" way of thinking about how we can't know others' subjective experience. But you could say the same thing about being a child: what is it like to be a child? Even though we were all children, we're so different as adults. I can't remember what it was like to be a child. I can only pretend, and I don't trust my memories of what I felt like when I was a child. I can kind of see it through my children, but I'm just too far removed. It's a different lifetime, a different existence. And along with all that comes your subjective experience. So I'm wondering how you view consciousness throughout development, because I know you don't view consciousness as a single thing that's either on or off. [01:10:20] Speaker A: So in The Philosophical Baby, which is my second book, I talked a lot about this problem: what is it like to be a baby? Is there any way we could figure out what it's like to be a baby? Is there anything at all that it is like to be a baby?
And I think this is another interesting example of the explore-exploit distinction. We know a lot about the kind of consciousness that's associated with a certain exploiting kind of behavior: things like planning, or focus, or concentrated attention. And I think it's sometimes been tempting for philosophers and psychologists to think that that kind of consciousness, that self-conscious, narrow, focused attention, is the only thing that really counts as consciousness at all. For someone like Stan Dehaene, the whole idea of the workspace, for example, is essentially that idea. And I think the right way to think about it is that that's one kind of consciousness, one kind of phenomenology, but it's not the only kind of phenomenology. The philosopher Ned Block has made this point: there's also just the phenomenology of being richly in the world and experiencing everything that's going on around you, without thinking in terms of attention and focus. And I think there's good reason to believe that that kind of phenomenology is more like what you're seeing in children. Recently I've gotten interested in psychedelics because they're a good example of something really systematic, where you see systematic brain changes and systematic phenomenological changes, and exactly what happens is that the goal-directed planning, focused-attention kinds of systems seem to go offline. And psychedelics are just a dramatic example; there are various kinds of mystical experience, meditative experience, other kinds of examples you can point to that are like that, the numinous experience. And the numinous experience by definition is not the experience of going around and doing things and concentrating and focusing. It's that you're open to the things that are going on around you. And again, it's an interesting trade-off case. If you think about something like open-awareness meditation, or meditation in general, the interesting paradox is that by not doing things, by literally sitting in one place, not moving, and then using techniques like counting or breathing to keep from doing stuff even inside your head, you end up with this very striking phenomenology that's quite different from the phenomenology of your everyday life. You'd think that should just shut everything down, and it does the opposite: it turns everything up. And I think that's a good model for what's going on in childhood. So the kind of consciousness that's associated with things like plasticity, learning, and information extraction is really different from the kind that's involved in action, and I think the former is much more like what you see with babies. [01:13:25] Speaker B: I mean, I do remember the joy of being a child, and the feeling that an hour was an entire day, et cetera. So are we going to be adding isolation tanks to our labs, so that every month, for half an hour, we can drop a little acid and lie in the isolation tank, and that'll replace sabbaticals, especially if the pandemic keeps up? [01:13:46] Speaker A: Yeah, I would not suggest that. As I've suggested before, I think the thing we should be doing is hanging out with three-year-olds more. That I do think is a good idea, actually, quite seriously.
For me, part of the difficulty of the pandemic has been that I haven't been with my grandchildren. And one of the things I really have experienced, and I think other people experience as well, is that because I have a job and work to do that involves a lot of planning and a lot of thinking and a lot of organizing, in a way the only time I get to really escape from that is when I'm with my grandchildren. Because if you're with a two-year-old, you're in the present moment just because you have to be, to be with a two-year-old. So I think there's a kind of curious mindfulness in that way of being. And also because the two-year-olds are open to so much: because the two-year-old is paying attention to the airplane in the sky and the little fluff on the ground, you're paying attention to these things. So I think that's a really good way of getting this sense of a broader perspective. But also things like meditation or travel or sabbaticals, times when you're not actually working, I think are really good. And I do think one of the real sociological dangers, especially in academia and science now, is that as the kind of meritocratic ratchet has sped up, there's the sense that you could be working all the time, and you should be working all the time, and you should be writing the extra paper or getting the extra publication out. I think in the long run that's really damaging to science, and being able to be more like the people of leisure who started the Royal Society might be a better model. [01:15:36] Speaker B: Do you make time? Do you set aside time for play? [01:15:39] Speaker A: Well, as I say, it's great when I have my grandchildren around, because then I can. [01:15:42] Speaker B: Yeah, I know. But in your work life, though? [01:15:45] Speaker A: I think it's quite challenging, actually. I have done that; I think I did it more in the past than I'm able to do now. But I do think it's really important, and I try to persuade my students that that's what they should be doing. [01:16:00] Speaker B: And then you go back to work. I know you have a lot of work to do. You're a famous research-grandmother scientist, so I'm going to let you go. But one last question: plans to build consciousness into the AI, or is that just not part of the project or your interest? [01:16:15] Speaker A: I do think it's still a great puzzle what the relationship is between phenomenological experience and computation and function. I don't think it's something we can say very much about either way. I don't think there's any reason to believe we couldn't build consciousness into a functional system; after all, we're functional systems that have consciousness. On the other hand, it's not obvious to me that if we just had a sufficiently complex system, it would automatically end up having phenomenology. I do think those relationships are complicated, but I think the way we'll find out about them is not by a priori assuming that you have to, or don't have to, have a particular kind of function for consciousness. It'll be by actually trying to figure out what some of the specific examples are of the kinds of phenomenology that are associated with particular kinds of function.
My friend Peter Godfrey-Smith has a new book out, a book about octopuses, and I think that's a very nice way of approaching this: think about all the incredible variety of biological systems we know about, and think about what kinds of phenomenology might go with each kind of experience. That might tell us something about what would happen with an artificial system as well. [01:17:33] Speaker B: You haven't seen that Netflix program, My Octopus Teacher, have you? [01:17:37] Speaker A: No, I've heard about it. [01:17:39] Speaker B: I recommend it. [01:17:40] Speaker A: Yeah. I'm willing to believe that cephalopods have lots of secrets to tell us. [01:17:46] Speaker B: Yeah. Pretty amazing stuff. Alison, thanks for spending the time exploring this with me. [01:17:52] Speaker A: Yes, well, always happy to do it. And it was a really great conversation. Thanks. [01:18:11] Speaker B: Brain Inspired is a production of me and you. I don't do advertisements. You can support the show through Patreon for a trifling amount and get access to the full versions of all the episodes, plus bonus episodes that focus more on the cultural side but still have science. Go to braininspired.co and find the red Patreon button there. To get in touch with me, email paul@braininspired.co. The music you hear is by thenewyear. Thank you for your support. See you next time.
