[00:00:02] Speaker A: Maybe the pinnacle of what makes human intelligence human is its open endedness. Like not problem solving, but this tendency we have to explore.
That's the whole point. Like there wouldn't be humans if humans were the objective. That's what's so fascinating about it. And that's a kind of interesting lesson for AI. It may be a cautionary tale. You know, the process that produced human intelligence was one that wasn't trying to make it. And if you look out the window and you see all the nature around you and you think about how it was a single run of a single process that produced all of living nature. I mean, what could be more impressive than that? It's literally biblical. Like what has been created here?
[00:00:47] Speaker B: This is brain inspired.
Hey everyone, it's Paul. Deep learning is all about using some objective function to train a network. For example, minimizing the error between the correct answer and what the network produces. And often the objective is for the network to perform well on some benchmark task, like the ImageNet or MNIST datasets. Neuroscientists make models and perform experiments with the objective of answering specific questions, testing specific hypotheses about some brain function. And on we march, making hard won, steady progress, improving deep network performance by tenths of a percentage, creating models of brain processes that inch closer to accounting for some cognitive function. However, what if a better way forward, at least better for making big, fundamental progress, is to not chase objectives, but rather to let creativity and intuition drive the work that we do and take us in uncertain and potentially radically new directions? That's what Ken Stanley calls open endedness. And he thinks it's a powerful and unfortunately neglected framework to apply to really ambitious problems like developing AI. Ken hasn't always known it's called open endedness, or called it open endedness himself. But going back to the early algorithms that he developed in neuroevolution, evolving new neural networks as opposed to training a given neural network, he's always been driven by the principles of open endedness. He and Joel Lehman wrote a book about it called Why Greatness Cannot Be Planned. And Ken recently started an open endedness research team to develop AI at his present company, OpenAI. So that's what we talk about: open endedness. And this is a topic that applies to many facets of life. And I got so excited, I was a little more all over the place than usual. But that's okay. I think it went in many interesting directions. And I enlisted a couple of past podcast guests, Stefan Leijnen and Melanie Mitchell, to send some questions for Ken, so I play those. So that was fun. And this was a really fun conversation that maybe will inspire you to think about how open endedness might apply to many of your endeavors. As usual, you can find links to the things that we reference in the show notes at braininspired.co/podcast/86. If you value this podcast and you want to support it and hear the full versions of all the episodes and occasional separate bonus episodes, you can do that for next to nothing through Patreon. Go to braininspired.co and click the red Patreon button there. And here is Ken Stanley.
Ken, I have three questions for you today. And within those three questions there are an infinite amount of things to discuss. So. And I also, in addition, have two surprise guest questions for you along the way.
[00:03:58] Speaker A: Great.
[00:03:59] Speaker B: So these simple questions are: one, how do we achieve, or solve, one might say, open endedness. And of course, to get there, we'll talk about what open endedness is. Two, how is that related to achieving AI, whatever that is? And three, where, if at all, does neuroscience or understanding natural intelligence fit in? So I assume you have pretty straightforward answers to all three of those questions, right?
[00:04:26] Speaker A: I can address those questions. I don't know if it's straightforward, but I can address them.
[00:04:30] Speaker B: Okay, so let's start with open endedness, because I like this quote. You've given lots of talks on this, and in at least one of them you end with this quote: to achieve our highest goals, we must be willing to abandon them. At least this was your last sentence. So what is the field of open endedness trying to achieve? And along the way, I guess we need to define, or sort of define, what open endedness is, right?
[00:04:54] Speaker A: Right, yes. So open endedness is really inspired by things that we observe in nature, which are what we call open ended processes. And there's only a few, but they're like really amazing.
And the first, like maybe the most canonical is evolution or natural evolution, evolution on Earth.
And the particular aspect of it that's so remarkable is just that it went on and on and on for more than a billion years and kept on inventing new stuff that's interesting. And we just don't know of artificial processes that do things like that. We can't build them. Right now there are very few machine learning algorithms, which is basically the field that I work in, where you would actually want to run them more than a week or a month, let alone a year or 10 years, let alone a million or a billion years.
[00:05:44] Speaker B: Even on today's faster, more powerful compute platforms.
[00:05:48] Speaker A: That's the funny thing. Yeah, we have this powerful compute. But even so, it wouldn't be worth it to run for a billion years.
Nothing would happen, because these algorithms eventually converge. So either it's good news or it's bad news, but either way they're done. The good news is it converged to the solution, so it's good. The bad news is it got stuck. So now you're stuck; it's not good, and it's ended. But in open endedness there is never the end. That's what's interesting. That's why it's called open ended. And the point is that it's something that's creative basically forever. And hopefully, in sort of the grandest version, it's not just forever creative, but it gets even more interesting the longer it goes.
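To make "these algorithms eventually converge" concrete, here is a toy Python illustration with an assumed objective, f(x) = x squared; once the gradient vanishes, extra compute buys nothing:

```python
# Toy illustration (assumed objective, illustrative only): gradient descent
# on f(x) = x**2. Once the gradient vanishes, a billion more steps of
# compute would change nothing; the search has converged and is "done."

x, lr = 10.0, 0.1
for step in range(1_000_000):
    grad = 2 * x            # derivative of x**2
    x -= lr * grad
    if abs(grad) < 1e-12:   # converged: nothing left for extra compute to do
        print(f"converged at step {step}, x = {x}")
        break
```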
And so evolution is a bit like this, you know, with these relatively simple organisms early on, like single celled things. And then you get to stuff like human level intelligence, the flight of birds, photosynthesis. And the thing that's amazing about this is that it's all in one run. We're used to things in machine learning where, okay, I could get a bunch of interesting results, but they're probably separate experiments that are just run from scratch separately. This is all the same thing. It's all starting from the same root of the same phylogenetic tree. And so it's just a remarkable thing that this can even exist. But since we know it can, we'd like to be able to reproduce it. And there's more examples than just natural evolution. The other big one would be the history of human civilization, and human invention in particular, which includes things like art and music and science and everything that we've created. It's also a giant tree of discoveries that isn't heading anywhere in particular and is in effect kind of like one run, and it just seems to keep getting more interesting. We start out with things like wheels and fire, and here we have computers and space stations now.
And so it's just another amazing process. And so the question is, can we create artificial processes that have this property? That's what I would call open ended processes. And I guess I would say I'm sort of trying to coin a new term, strong open endedness. Strong open endedness means it never ends.
[00:07:52] Speaker B: Okay, well, is that like strong AI? Is that why the...
[00:07:55] Speaker A: Yeah, I started to think about how to make the distinction, because there are experiments that people have run where it's kind of open ended for a while. It does some surprising stuff, and that's cool, that's fun to watch. But it always just ends. It doesn't go forever. So strong open endedness would go forever, just so that we can have a distinction here and say we're really interested in achieving the strong version. Strong open endedness is just way out there, amazing. Weak open endedness, we can do that right now, but it's not quite as interesting. It's far from as interesting.
[00:08:26] Speaker B: You've been working on open endedness for a long time now. And you and Joel Lehman, is that his name? Joel Lehman, your co author, in Why Greatness Cannot Be Planned. In that book, you guys describe open endedness and you end with AI and with, I think, human innovation. Is that right? As the two. No. Is that right? No. Evolution. Yeah.
[00:08:49] Speaker A: And evolution. Yeah.
[00:08:50] Speaker B: Evolution, yeah. As the two case studies. The two. The two big examples in that book.
[00:08:54] Speaker A: Right, right.
[00:08:55] Speaker B: And I only see, like, super positive reviews of that book. And it's almost all positive.
I wonder, first of all, is that true?
And then I wonder, what do you make of just how positive the responses are, to the book and to open endedness in general? Anytime I see a talk by you, you're always interrupted by someone saying, first of all, this is blowing my mind, and secondly, here's my question. Which never happens in a talk.
What do you make of it, of the positive response and what does that say about our current era?
[00:09:33] Speaker A: Right, right. Yeah. So it's true that I've gotten a lot of positive feedback.
Not everything is positive. I mean, just to acknowledge, you can find a few bad reviews here and there. And in fact, I expected there to be some negativity, because the book is clearly polarizing. I mean, it's basically a challenge to the status quo, the status quo that most things should be objectively driven in our culture, in our society, in the way we run things. And a lot of people believe in this stuff. A lot of companies are run based on objectives. A lot of academic research is run based on objectives. And so I expected that it could ruffle some feathers to suggest that we shouldn't do that. I also expected some people would love it, because objectives are basically like a straitjacket, and people don't really like wearing one, even though we impose it on ourselves. So it's not surprising to me that there are a lot of people who are also very happy.
But it's true that mostly what I hear is that a lot of people are happy. So I guess my conclusion is that this self imposed straitjacket of objective obsession that is part of our culture is one of those things that we did to ourselves without realizing we were doing it. Most individuals don't like it. Even the people involved in those systems just thought it was necessary for some reason: I've got to satisfy my manager and somehow let that manager know that they can trust me. And so I'm just going to subscribe to this, and then I'll force my reports to do that also. And it all aligns together, and then suddenly the whole world is working this way. So I think the appeal of the book is the message that actually some things work better if you don't do it that way and take off the straitjacket. And that's a liberating kind of message.
[00:11:13] Speaker B: I mean, you must get all sorts of crazy questions and, you know, life experience stories. Of course, I listen to your talks, I read the book, and I think, oh, my slacker young self, all I was doing was being open ended. I must have been doing something right, you know. So you must get all sorts of responses.
[00:11:29] Speaker A: No, yeah, I do. It's really interesting, the life stories that I've gotten from this. It's really gratifying actually, because as a computer scientist it's completely not what you expect in your career, to have almost therapeutic interactions with people.
[00:11:47] Speaker B: Well, I also thought you must be getting all sorts of invitations from businesses. But then I thought, no, that can't be true, because your message is antithetical to their supposed progress, right? They don't want you coming in and telling all of their employees not to chase the objective. But aren't you getting inundated by the self help industry? Because this seems very self help oriented as well.
[00:12:13] Speaker A: You know, that's an interesting question. Actually, it's sort of the opposite of that, really. Businesses have approached me, or more like business conferences have asked me to come talk. There is a really unbelievable diversity of different kinds of communities that have asked me to talk. I even spoke to the retirement investment community, the people who run our retirement accounts. You might ask, what does that book have to do with that? But what I've learned is that everybody is looking for some form of disruption. People want to get out of their shell and out of the box and think of something new. And the book is just, by its nature, kind of about how to do that. And so at the top level, most people are interested in that, I think, in almost any industry.
And so it's not as resistant as you would think. There are very few bean counters who really, really believe in the system of just metrics and objectives. There are a few, but I think they're pretty rare. They're like a caricature.
Most people I've encountered are just sick of it, even the administrators who are the gatekeepers. When I get to talk to them, they're usually like, I just really don't know how to do something different here. Tell me how to reorganize. I mean, I remember I went to one huge lab where they had almost $1 billion that they were managing. And one of the first questions the leader asked me was, well, how should I reorganize how we allocate all these funds? And I was like, this is a crazy question. I'm going to tell you how to reallocate your funds in this gigantic organization?
And so I think people just hadn't realized there is an alternative. It seems like the only viable way to go: of course there needs to be accountability. How could we live in an organization without accountability? And objectives provide accountability, and metrics tell us how we're doing on those objectives. And so while we don't enjoy it and we don't like it, and we wish we could be all freewheeling and interesting, when it comes down to it, most people are really uncertain about what that would actually mean in practice.
[00:14:19] Speaker B: I mean, have there been particularly challenging criticisms that you've taken to heart, ones that have challenged you to rethink the framework of open endedness?
[00:14:31] Speaker A: Yeah, I think the biggest question, I don't know if I'd really call it criticism, but it's kind of critical, is just: how do you control it?
[00:14:38] Speaker B: Because the constraints, it relates to constraints.
[00:14:41] Speaker A: But it's sort of like the whole theme of this is just like to let you just do what you want.
Or if you think about it from an AI perspective, it's basically saying, let an algorithm just go off and dither around and find interesting stuff, but we're not going to tell it what to do. But everybody wants to tell things what to do. Ultimately, we do want to tell things what to do. I want this vacuum cleaner to clean my room, not to just explore around and find interesting things to do. That's what it's supposed to do. And so people at first are like, this is super exciting, we can find new things we didn't think of. But then they're like, but how do I control it? I want it to do X. And there's a tension there. I mean, even outside of algorithms, when we talk about the book and how you run an innovative organization, it comes up also, because people are like, well, I can let my employees kind of explore around in some way.
But then how do we get anything done? What is actually...
[00:15:37] Speaker B: Well, but isn't this like Google's 20% rule? Is it 20%? I forget the exact number.
[00:15:41] Speaker A: But it is like that, yeah, that kind of thing. It feels dangerous, because you're like, well, how do I then channel that back to our company goals and back to objectives? At some point we have some objective. And so, yeah, there's a bit of uncertainty. What does it mean to control something where the main property of that thing is that it's not something you control? That's a paradox, and it leads to tension. And, you know, I recognize that that is something to be uncomfortable about, but I think the answer is just that there are trade offs, and you have to decide where you fall along those trade offs. You're never going to get both. You can't have total control and total creativity at the same time. That just doesn't happen. So at least recognizing the trade off at the get go is useful. But then you can have compromises. It doesn't have to be all or nothing.
[00:16:33] Speaker B: Well, you repeatedly emphasize that what open endedness is really good for is ambitious goals. And in your vacuum cleaner example, that is not exactly an ambitious goal. If the solution is to clean the floor, you don't want your vacuum cleaner, like mine did last night, getting stuck and then running out of batteries because it's exploring. Right? So there's a trade off there as well. Correct?
[00:16:57] Speaker A: Yeah, and that is an important distinction. I think people who didn't like the book, that's one of the things that they missed: we're talking only about really ambitious stuff. It's true, we never said, and I never thought, that we should eliminate all objectives from the world, or that things that aren't super ambitious should somehow turn into creative activities where you just wander around aimlessly and hope something happens. If you want to make a sandwich, then make a sandwich.
[00:17:24] Speaker B: I have seen some pretty ambitious sandwich designs.
[00:17:28] Speaker A: But I mean, we even acknowledged this in the first chapter. We were aware that people would say things like that; it's a straw man that they would attack, saying this is extreme, that we're saying we should never do these things, look at all the useful things we've done with objectives. But we're not saying there's never been anything useful. We're talking about things where we're making really, really blue sky discoveries. And those are the kinds of things where objectives are not going to serve you well.
[00:17:55] Speaker B: I mean, just conceptually it's difficult to grasp. You yourself have said maybe we don't need to define it per se, but it's even somewhat difficult to characterize it. And I came up with an equation here, and you can correct me, or a recipe, we'll say. So: it has to be highly ambitious for open endedness to work. You have to have some intrinsic motivation, because you can't just sit around whittling a stick and hope something will happen, right? You have to have motivation to explore. You have to truly be seeking novelty, and we can talk about your novelty search algorithms and what that has led to, and the concept of divergence. But then the last part of the recipe is valuing interesting findings. And so we need to talk about what interesting means.
[00:18:48] Speaker A: Yeah, yeah, yeah, yeah.
[00:18:50] Speaker B: Is that close? Is that a close recipe?
[00:18:52] Speaker A: Those are good ingredients. But I think it sort of depends on what you're trying to cook, you know, because there are different places where you would do something open ended, and it might be just a subset of those that apply. For example, natural evolution isn't curious, because it's not an entity, and yet it is open ended. But when humans or reinforcement learning agents are engaged in exploratory, playful discovery, then they may be motivated intrinsically by curiosity. So there's a certain context dependence to these ingredients. Each one is a weapon in the arsenal, so to speak, that you could use, but I don't think you always need all of them.
[00:19:40] Speaker B: Well, you could say that evolution is highly motivated to explore the search space of possible lives, of possible configurations that have high fitness or whatever, within the constraints of survive and reproduce, like you've mentioned. So maybe not. Maybe the teleology of being highly motivated doesn't fit with the evolution story, but you can make a case.
[00:20:03] Speaker A: Yeah, I see that. I mean, that gets a little bit into semantics, I guess. What do we mean by motivated? Does it have to be a person, or can it be a thing? If you look at it that way, you can interpret it that way. But I think what really matters ultimately is the more algorithmic questions; whether we think of the process as being motivated to discover things is more a matter of interpretation. We could or we could not. The real question is what makes it motivated. What are the actual algorithmic mechanisms that allow something to diverge like this?
And that's when it gets into like real implementation, like how do you actually get an open ended process going? But yeah, the things you mentioned are kind of the things that we touch on. So it's a good list. Yeah, it's a little hard to digest the whole list at once probably, but.
[00:20:46] Speaker B: Yeah, yeah, I don't really want to step through them, you know, because we're going to be talking about all of it along the way.
[00:20:52] Speaker A: Yeah, yeah.
[00:20:53] Speaker B: Something you said just a bit ago: you have people in business asking you, I don't really know how to move forward, how to be creative in the work environment. And it immediately made me think of AI and the current state of the field. So there's been a deep learning explosion, and a lot of people think that it's kind of coming up against a wall. I mean, there's still a lot of progress being made, but on the whole you're still making a modicum of improvement on these benchmark tests, which is antithetical to open endedness. So in a sense this could be a blessing for the AI world as well.
[00:21:35] Speaker A: Yeah, yeah, I mean, I come from that world, so it was actually like starting with the observations in AI that eventually led to this larger theory that applies to more than just AI.
And so, you know, the thing is, it's true that in AI we tend to be very benchmark driven. That's part of the culture, and that's very objective. It's also underwhelming and not exciting, and even the people in the field know that. But they still try to beat the benchmark anyway, because sometimes it's hard to think of what else to do. But to me, that isn't really progress. Or at least it's not the kind of progress that interests me. The progress that interests me is the invention of new playgrounds. What I think is really exciting in AI is when an entirely new playground appears, which is a very rare event, and performance doesn't even matter. That's the problem with performance metrics. You know a new playground when you see one, and you can criticize the heck out of it and give all kinds of reasons it doesn't work really well and doesn't beat this thing on this benchmark. But it still created a whole new world of low hanging fruit, of ideas that nobody would have considered if it hadn't come to fruition.
I'm really interested in creating new playgrounds, which are basically stepping stones, which is related to this theory of open endedness: the discovery of a new stepping stone opens up a whole new frontier of possibilities. And those are the things that I would consider to be progress in the field of AI.
[00:22:59] Speaker B: What do you consider progress for yourself personally?
[00:23:03] Speaker A: You mean, is it professionally or just in my normal life?
[00:23:07] Speaker B: At the end of a day, what makes you feel like you've made progress? Is it if you've done something interesting or novel?
[00:23:14] Speaker A: Well, I think it's, yeah, it's the playground thing. Like if I could open up a new playground, then that's like probably the most exciting thing. Like it's like a whole new world has been created of possibilities. Yeah, I would like to do that.
[00:23:25] Speaker B: It does seem like what you do is fun. Is it fun?
[00:23:29] Speaker A: Yeah, yeah, for sure. I've definitely enjoyed most of what I've been doing. I mean, the research, I have enjoyed it. It's fun. I mean, trying to do open ended stuff. Open endedness is associated with play, so thinking about it a lot is like thinking about playing.
[00:23:44] Speaker B: I mean, you've created a bunch of evolutionary algorithms and you come from a deep background of machine learning and you've developed these novelty search algorithms along the way to push toward open endedness.
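Since novelty search comes up throughout the conversation, a minimal sketch may help show what it actually computes. This is an illustrative Python sketch, not code from Ken's papers; the behavior representation, parameters, and selection scheme are all assumptions:

```python
import random

# The canonical novelty measure (Lehman and Stanley) is the mean distance to
# the k nearest neighbors in "behavior space," computed against the current
# population plus an archive of past behaviors. The parameters, the Euclidean
# behavior distance, and the selection scheme below are illustrative choices.

K_NEAREST = 15           # neighbors used in the novelty estimate
ARCHIVE_THRESHOLD = 0.3  # behaviors at least this novel enter the archive

def behavior_distance(b1, b2):
    # Euclidean distance between two behavior descriptors (e.g., final x, y).
    return sum((a - b) ** 2 for a, b in zip(b1, b2)) ** 0.5

def novelty(behavior, current_behaviors, archive):
    dists = sorted(behavior_distance(behavior, other)
                   for other in current_behaviors + archive
                   if other is not behavior)
    nearest = dists[:K_NEAREST]
    return sum(nearest) / max(len(nearest), 1)

def novelty_search_generation(population, archive, evaluate_behavior, mutate):
    # One generation: individuals are ranked by how novel their behavior is,
    # not by how close they come to any objective.
    behaviors = [evaluate_behavior(ind) for ind in population]
    ranked = sorted(zip(population, behaviors),
                    key=lambda pair: novelty(pair[1], behaviors, archive),
                    reverse=True)
    for _, b in ranked:
        if novelty(b, behaviors, archive) > ARCHIVE_THRESHOLD:
            archive.append(b)  # remember where the search has already been
    parents = [ind for ind, _ in ranked[:max(len(population) // 2, 1)]]
    children = [mutate(random.choice(parents)) for _ in population]
    return children, archive
```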
So let's talk about creativity for a second. I mean, there are just so many different avenues that we could go down. I don't even know if we've established a good baseline for people to understand what open endedness is yet.
[00:24:11] Speaker A: Fair enough, we could address it a little more, I guess: what is open endedness? I want to acknowledge that there is a community that has been interested in open endedness for decades, maybe 30 years, who call it open ended evolution. Well, they don't use the word community, but open ended evolution. And they've been discussing a lot of these issues for a long time and they have workshops, but it was very tethered to the word evolution, and the community stayed small. So I don't want to say that I sort of own the idea or own the definition. This has been discussed for a long time.
But the realization that I had was just: why is this community so small? It seemed like the coolest topic ever. It's so interesting. And within the last five years or so, I just thought, I've got to do something about this. There should be, like, 10 times more people investigating this.
And so when it comes to definition, I think I'm more concerned with why this is so interesting than with what the particular definition is. Because in my experience in that community, there was a lot of argument about definition. There was also, in AI, a lot of argument: what is intelligence, really? And that has never been resolved. And what I think is interesting is that it didn't need to be resolved. There's been plenty of progress without resolving the question of what intelligence is. It helps to have some discussion, I'm not saying we didn't talk about it at all, but I just don't think there needs to be some definitive final answer to the definition. It's something you know when you see it. And if you look out the window and you see all the nature around you and you think about how it was a single run of a single process that produced all of living nature. I mean, what could be more impressive than that? It's literally biblical. What has been created here? I mean, it's creation.
[00:25:58] Speaker B: Yeah.
[00:25:59] Speaker A: So, yeah, we could get into the nitty gritty semantics of what it is exactly. But it's just something incredible. That's the whole point.
[00:26:05] Speaker B: Well, one of the big points is that there's no objective to that creation, to that biblical creation. There was no objective, I mean, I don't know.
Humans, right? That's the objective, right, of evolution? That's the tip of the...
[00:26:19] Speaker A: Arrow of evolution. That's one thing I strictly like to say it's not, because that's the whole point. There wouldn't be humans if humans were the objective. That's what's so fascinating about it. And that's a kind of interesting lesson for AI. It may be a cautionary tale. You know, the process that produced human intelligence was one that wasn't trying to make it. There's nothing in the ingredients of the initial run of evolution, at the beginning of time, where you could say this is trying to produce a brain. That would be crazy. You can't extrapolate that far out. It's some kind of orthogonal happenstance that isn't really directly related to the constraints of the system from the beginning, which is survive and reproduce.
[00:27:03] Speaker B: You give a lot of examples of things that have happened in the past that are examples of open endedness. Like vacuum tubes weren't created to make computers, but are necessary for computers.
I'm not going to make you recapitulate the Picbreeder story, but you put this little applet on the web where people can go and click on an image that they like, and that image becomes the parent and turns into a bunch of different child images. And you do this over and over and over, and people can share their different journeys, and you end up with these images that are very impressive but not what you were striving for. So without striving for any particular image, people ended up with Jupiter with its storm spot on it, for instance, and who knows what that person or team was striving for. You've ended up with some interesting things, like a car, I think, without intentionally doing it.
[00:27:54] Speaker A: Right? Yeah.
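For readers who have not seen Picbreeder, here is a rough, hypothetical sketch of the interactive evolution loop it implements; the genome interface below is an assumed stand-in, though the real system does evolve CPPN-encoded images with the NEAT algorithm:

```python
# Hypothetical sketch of the Picbreeder-style loop; the genome interface
# (mutate producing child images) is an assumed stand-in. The real system
# evolves CPPN-encoded images with the NEAT algorithm.

def interactive_evolution(genome, pick, children_per_round=8, rounds=20):
    # `pick` stands in for the human user: given candidate child genomes,
    # it returns whichever one the user finds most interesting.
    for _ in range(rounds):
        children = [genome.mutate() for _ in range(children_per_round)]
        # The user's sense of "interesting" is the only selection pressure;
        # there is no target image and no distance-to-goal score.
        genome = pick(children)
    return genome
```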
[00:27:55] Speaker B: So this is an open ended process, and you end up with interesting things, things you didn't expect, that might be useful for something else. But how important is it that the end product is useful? Is usefulness part of interesting?
[00:28:15] Speaker A: Well, I guess the answer to that is partly related to just what you care about. I mean, if we're going to artificially trigger an open ended process, we might ask why we're doing this. If it's just to see things that are interesting, then maybe they don't have to be useful, unless your definition of interesting means it's useful.
[00:28:36] Speaker B: Well, usefulness, you know, if you get a kick out of it, I guess that's useful.
[00:28:40] Speaker A: Yeah, yeah. I mean, if that's what you're hoping, then yeah, you're hoping that everything basically will be useful. If that's what it means and like getting a kick out of it is useful, then yeah, hopefully everything gives you a kick. Or actually I shouldn't say that. Hopefully a lot of stuff gives you a kick because we're okay with some stepping stones that don't satisfy anything.
[00:28:59] Speaker B: Right?
[00:28:59] Speaker A: Like the stepping stones are useful in their own right because they got you to that place where you got to kick out of it. So it's not like every single thing you traverse has to itself be great, but you should get to places that are really interesting at some point.
[00:29:12] Speaker B: Well, that's like an activation energy that you have to get over to get to the interesting point.
[00:29:17] Speaker A: Yeah, you could look at it that way.
[00:29:18] Speaker B: You said evolution is an open ended process. And apologies, I'm kind of all over the place, but I have about a billion thoughts on these things, as everyone does, I'm sure. But I've also heard you say that evolution sort of cheats us out of many of its potentially interesting end products, because anything it creates that does not survive and reproduce is discarded. And who knows whether that thing that didn't survive and reproduce wasn't a stepping stone to something potentially even better or more interesting.
[00:29:47] Speaker A: Yeah, yeah, I have talked about that. This is a subtle issue, but it's about the constraint. And this is something people often get confused about: the difference between a constraint and an objective. In evolution, it's true that everything has to survive and reproduce. So a lot of people think, well, that must be an objective then. The objective is to survive and reproduce, so it's like we're trying to get to that point. But I think you have to remember that an objective, at least from a machine learning perspective, is a place where you don't start out already there; it's somewhere you move towards. And the thing is, the very first organism must have survived and reproduced, by definition, right? Because here we are. So the problem was solved at step one. That doesn't really mesh with the usual conventional notion of an objective. And that's because, at least in my view, it's not an objective; rather, it's just a constraint. We're already there from the beginning, but the constraint says that we have to stay there. Every single thing that's going to perpetuate more search has to also satisfy that constraint.
And so it basically is a pruning mechanism. It's saying what we're not going to look at, rather than what we are. We're not going to look at things that don't have basically Xerox machines inside them, copying machines. Basically, we're walking copying machines, and the only things that can perpetuate are walking copying machines. If you're not a walking copying machine, we don't care how interesting you are, you're out of the game. And yeah, that means all those things that were interesting but couldn't reproduce are gone. Those lineages have never been explored.
And this is just a theoretical thing because, I mean, practically, obviously they can't be, because they can't reproduce. But theoretically, you could imagine from a computational perspective something could let it reproduce anyway. We could mutate their genome and artificially inseminate something and put it out there.
That's science fiction. But from a computational perspective, we're talking about algorithms, and there it isn't science fiction, because in an evolutionary algorithm you can make anything you want reproduce. It's completely up to us. So that means there's this lost opportunity, all these things that have been pruned out of the search. But the way to think about it is that the constraint, to me, is the thing that ensures that things are interesting.
It is possible to have a constraint that would admit lots of crap that isn't interesting. Like, what if the constraint was just that you have to get to a minimal mass in order to reproduce? In other words, if you get above a certain mass, then, like, God will come and make a child for you. You don't need to have reproductive organs or anything; you'll just get a child. Well, I'd say the world would be pretty uninteresting. There'd be all these inert masses literally everywhere. And this is still the world of DNA, I mean, it's still going to be organisms on Earth, but it would be junk. And so something about survive and reproduce is ensuring that everything's interesting. It's true that it prunes out other things that could have been interesting, because there's plenty of things that have been alive that were really interesting but just never managed to reproduce. But at least everything that did reproduce was interesting. And that's why nature is so interesting, at least to me. There's not an organism on Earth that isn't interesting. And that's because of this constraint. But if we widen it and make it less constrained, we get more stuff, but we also admit some things that perhaps aren't interesting. So there's some trade off. We could fill the world with inert blobs.
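To make the constraint-versus-objective distinction concrete, here is a hedged Python sketch of search driven only by a minimal criterion; every name and parameter is an illustrative assumption:

```python
import random

# Hedged sketch: search driven by a constraint (a minimal criterion) rather
# than an objective. `meets_criterion` is a binary test, satisfied by the
# seed from generation one, that prunes candidates instead of pointing the
# search anywhere. All names and parameters are illustrative assumptions.

def minimal_criterion_search(seed, mutate, meets_criterion,
                             generations=100, brood_size=20, cap=1000):
    population = [seed]  # the first individual already satisfies the criterion
    for _ in range(generations):
        offspring = [mutate(random.choice(population))
                     for _ in range(brood_size)]
        # No ranking and no best-so-far: anything that passes the bar may
        # persist and reproduce, so the search diverges rather than converges.
        population.extend(c for c in offspring if meets_criterion(c))
        population = population[-cap:]  # crude memory bound for the sketch
    return population
```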
[00:33:10] Speaker B: I mean if it is so useful, if open endedness is so useful, why hasn't evolution done that? That's a very teleological statement there.
[00:33:19] Speaker A: Like you mean expanded?
[00:33:21] Speaker B: It's not like allowed for uselessness for a few stepping stones in order to achieve usefulness?
[00:33:27] Speaker A: Well, yeah. I mean, I think it's because evolution doesn't care about any of this. It's not a concern to it, whether it's...
[00:33:35] Speaker B: It's not even a thing.
[00:33:36] Speaker A: Yeah, it's not trying to do anything. So it just is what it is.
But that question does come up in artificial evolution then, because then we're deciding what we care about and someone does care. So then we have to confront these kind of questions, like what do we actually want it to do?
[00:33:52] Speaker B: One of the things that I appreciate, that you appreciate, and your work has highlighted this recently, is the importance of the environment, of environmental constraints. So the question is how you achieve true open endedness that will create enough complexity forever to be able to generate new things, and potentially interesting things, forever. And you've started, at least the latest that I know of, making mazes. So it's a coevolution algorithm where your agent is trying to solve the maze in an open ended fashion, but then you're also developing new mazes. So there's a coevolution of the maze itself and the agent trying to solve it. Do I have that right?
[00:34:36] Speaker A: Yeah, yeah. That's the minimal criterion coevolution that I did with Jonathan Brant. And there's also something else that I worked on with colleagues at Uber AI Labs, which was called POET, the Paired Open Ended Trailblazer, which also evolves environments. And yeah, it's getting at this issue: if you look at things like novelty search, they're really about exploring the space of solutions. You could think of it as different behaviors. But there needs to be another side to this, which is, what are the actual things there to do in the world? Opportunities, in other words. What are the opportunities to do something new? You can look for new things to do all day, but if I lock you in a room and there's nothing in the room, there's only so many new things you can do. You have to get out of that room and find something new to do.
And a lot of the environments that we create in artificial systems are like this kind of locked room. There's only so much that can happen, and then it's kind of over. So we need something that somehow generates environmental diversity at the same time as we're getting solution diversity. And those algorithms are beginning to explore that question, recognizing just how important it is that opportunities also have to come out of the process. And evolution on Earth does this by making the organisms both opportunities and solutions. And so that's kind of cool. It's a little bit self referential, because a tree is a solution; it's a way of life and a way of surviving. But it's also an opportunity for somebody to eat leaves, so you can have a giraffe.
And so it's both, it's both a solution and an opportunity. And then the giraffe makes an opportunity for something else, I guess, to eat the giraffe.
And so that's why evolution has been able to keep going for like a billion years. Because it's not just generating solutions, it's also generating opportunities.
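A loose Python sketch of that idea follows; it is simplified from the spirit of minimal criterion coevolution and POET, and all names and thresholds are assumptions for illustration (the real POET additionally transfers agents between environment-agent pairs):

```python
# Loose sketch in the spirit of minimal criterion coevolution and POET,
# heavily simplified: environments and agents are generated together, and a
# new environment is kept only if it is neither trivial nor hopeless for a
# current agent. `mutate_env`, `optimize_agent`, `evaluate`, and the score
# bounds are assumed placeholders.

def coevolve(pairs, mutate_env, optimize_agent, evaluate,
             min_score=0.1, max_score=0.9, steps=50, max_pairs=40):
    for _ in range(steps):
        # Solutions improve against their paired environments...
        pairs = [(env, optimize_agent(agent, env)) for env, agent in pairs]
        # ...and environments reproduce too, creating new opportunities.
        for env, agent in list(pairs):
            if len(pairs) >= max_pairs:
                break
            candidate = mutate_env(env)
            score = evaluate(agent, candidate)
            if min_score < score < max_score:  # minimal criterion on worlds
                pairs.append((candidate, agent))
    return pairs
```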
[00:36:28] Speaker B: Well, it's highly complex and highly recursive, somewhat like the brain. This reminds me of Stuart Kauffman's concept of the adjacent possible. Are you familiar?
[00:36:38] Speaker A: Yeah, yeah, yeah, yeah, I see the connection.
[00:36:41] Speaker B: I mean, this is the same. You know, his more recent talks address the unboundedness of complexity, in just the way that you spoke about it: that different opportunities are afforded as things develop. He uses the swim bladder, for instance, an evolutionary development which, once it evolved, could become a habitat for a new microbe or whatever. And then that is a completely new thing in the universe. And it's generative and creative in that respect. So it really crosses with that concept.
[00:37:21] Speaker A: I think Stuart Kauffman and I would probably get along well. We've never actually spoken to each other, but obviously we have some similar interests. And I kind of think of the adjacent possible as a little bit more like the how, whereas I think I'm a little bit more talking about the what.
It's more explanatory, is how I think of it. He might disagree, maybe I'm mischaracterizing, but it's sort of like: how can you explain how all this could be possible? Well, it's because of this notion of the adjacent possible. The search space has this very intriguing property that there are these really counterintuitive adjacent hops you can take from certain points in the search space to other points, hops that just aren't what we would expect, and it's almost completely unpredictable. And this sort of explains how it's possible for all this stuff to exist. But it also raises questions philosophically, like why is the search space that way? That's just an amazing property of the universe. It's like a prior property. Evolution doesn't explain that; evolution uses it, searches it. And then I think the stuff I'm talking about is more like, well, what are the ingredients you need to actually implement something like this? So it's less philosophical and more about the algorithmic formula to really do this. And so I feel like it's complementary, which is why I think we would have a good discussion.
[00:38:40] Speaker B: Okay, so one more question about open endedness, and then I'm going to play a question for you. And this is more about creativity and the idea of an objective. So just as an example: vacuum tubes. I don't know what vacuum tubes were innovated or invented for. Do you happen to know?
[00:39:00] Speaker A: Yeah, I did research this, so I might be getting a little fuzzy, but I believe that originally they were just used for experiments with electricity.
[00:39:09] Speaker B: So they were invented for a purpose, with an objective in mind. So this is a case, though, an example where the product that eventually was used in a different framework, to build computers, actually did come from an objective driven pursuit. And I'm wondering: open endedness is a natural way to think about creativity, how to be creative, how to generate creativity. But I'm wondering if creativity would occur regardless of whether there's an objective, right?
[00:39:40] Speaker A: Yeah, that's an interesting question.
I think you can have creativity with objectives, but it's much less likely, I should say, that you would have creativity with objectives. Because, look, there's an old joke about grants: when scientists apply for grant funding, the best thing to do is propose what the panel wants to hear, which is the objective, and then just do whatever you really wanted to do once you get the money.
And I think that exposes the fraud of this objective stuff. It's basically a security blanket to make everybody feel like we're actually on some kind of track. But the truth is, the cool stuff happens off the track. Does that mean there are no exceptions? Of course there are going to be some exceptions, but the general rule, the stronger principle, is that it happens when you're getting off the track. Now, if you think about your example, where you have objectives but still arrive at something kind of interesting, you have to recognize that what really happened there is still ultimately not very objective, because usually it's serendipity. It's like, well, I was working on this thing, but it turned out that the thing that was really useful about it wasn't what I was trying to do. It wasn't the objective. The computer is not what the vacuum tube people were trying to build. So that step was actually a non objective step, not an objective step. And basically the creativity happened when you abandoned your objective, when you let it go. So a lot of people have gone down a path that was objective and just had serendipity and realized that something else was possible here.
And I think that falls into this kind of non objective interpretation.
[00:41:23] Speaker B: This goes back to being motivated, intrinsically motivated. And I'll use a quote you gave in the book. This is Pasteur's line about being prepared: chance favors the prepared mind. And that seems important, that you're at least pushing forward. I don't know, it just seems to be an important ingredient to me. The work, the doing something, seems important.
[00:41:45] Speaker A: Yeah. In the book we also say serendipity is not an accident, so it's a similar notion. And it's true that pushing forward is a form of the prepared mind. I guess I would agree with that. But there's also just, I find it really interesting when I look at Wikipedia's serendipity page, because it has a bunch of inventions and all kinds of serendipitous discoveries.
[00:42:11] Speaker B: Oh yeah.
[00:42:12] Speaker A: And it's really fascinating to look at it. Like all these things people weren't trying to do that they accidentally did.
Like microwaves, for example, microwave ovens. There was somebody doing research.
Yeah, true. Yeah. But one interesting thing about it is that everybody involved seems to be really smart and have a good track record, all these people who had accidents. So how could these be accidents? If they were just accidents, then it shouldn't correlate with how smart you are. Probably the best person for having an accident is some lunatic on the street running into the walls on the side of the road.
They're going to have lots of accidents. But that's not what serendipity is really about. I mean, the prepared mind means that you are opportunistic, is what I think. So basically, you are willing to switch on a dime and you can see when new opportunities arise. And that's what real genius is, I think. It's not vision. People like visionaries; they think geniuses are visionaries who saw 15 steps beyond the horizon. I don't think of it that way. I think the geniuses are the people who realized that there is something one stepping stone away before anyone else realized it. It's like: we have the thing we need right now, it could change the world, but no one has yet seen the connection.
[00:43:18] Speaker B: I mean, everyone has had an example. Everyone's career path is defined by this.
No one thought when they were seven that here's my path and then they followed it. But they might have started following something that was interesting which led to something else. So everyone has experienced this and yet we don't follow it in many other aspects of our life.
[00:43:37] Speaker A: It's true. It's also a personal thing. Yeah, it's not just about discovery in this kind of big grandiose form. It's about your individual life too.
A lot of it is not objectively driven. I'm not sure about everyone; maybe some people really did just stick to the plan from day one. But I think for most people there's a process of discovery. And I think life is open ended. People are open ended individuals. Not just us as a society, but your individual life, too. And because of that, it is an aspect of human intelligence that also needs to be understood and thought about and celebrated, for sure. Yeah.
[00:44:16] Speaker B: Okay, so question number one here, from an old podcast guest. This is Stefan Leijnen. He uses AI to generate creative things. So he studies creativity and wants to harness the power of AI to study creativity.
So here's his question.
[00:44:33] Speaker C: Hi Ken. In your work you talk about open endedness, both in nature and in artificial intelligence. Now, nature, of course, has been an inspiration to many, if not all, AI techniques, obvious examples being genetic algorithms that are based on evolutionary processes, and neural networks inspired by the stuff our brains are made of. Yet in these cases we use a rather simplified understanding of how nature works, typically resorting to mechanical explanations that, as you say, work towards some kind of optimization or a predefined goal. So my question to you is: to make progress in AI towards open endedness, do we need to adopt new models of biological processes that do justice to open endedness in nature? Or is the challenge even more profound: does the way we build artificial systems fundamentally obstruct open endedness, like the way we store an analog signal that contains a potentially infinite amount of information as binary digits?
[00:45:33] Speaker B: All right, did you get all that?
[00:45:34] Speaker A: Yeah, interesting, interesting question. Thanks for that question.
So the first part of the question is about do we need to change something about how our algorithms reflect what's happening in nature?
And I think so. That's the sort of easier part of the question, and I think that one's a clear yes. But we have to realize, and I think part of the point here is, that we don't understand what's happening in nature. If we did, we would just make the algorithm work that way.
[00:46:07] Speaker B: But are the current algorithms up to date with what we do understand about nature? I mean, of course it's an open ended process pushing forward, right? Because AI doesn't incorporate brain stuff at all.
[00:46:20] Speaker A: Yeah, that's true. You could ask, how up to date are we with our current understanding?
And I would just say that we're continually understanding nature better. So at any moment in time, we probably have a better understanding than we did a few years before, and the algorithms will accordingly be a little bit better. But we're still not there. We're still not at the point where I think we fully understand nature. The problem is that you have all these textbooks that explain evolution; you read them in high school or maybe in college, and that sends a message that we understand it. Here's the explanation. It's simple, it's one chapter, there you go, all of these profound things have been explained. But the problem is that an explanation is not the same as a full understanding, because an explanation is not the same as a formula or an algorithm. To me, to really understand would be to have the algorithm, and that is just elusive, the algorithm of nature. We can do evolutionary algorithms, but nobody really thinks an evolutionary algorithm is the same thing as evolution in nature. They're inspired by it, they have some reflection of it, but it's ultimately just a shadow of it. And so there's something we're missing, and we are continually gaining more insight into what we're missing. I think I know more now about what's really going on in nature than I did 15 years ago. But I still think there are things we don't fully grasp or understand, or else we would just do them; if we really understood it, we would just write it down as an algorithm, and we can't, and we haven't, and we have failed to do that. And so it leads to the next part of the question: is there something even more profound going on here? Like, we're in the world of binary and digital, but we really need to be in the world of analog, which is way deeper. And I think.
I don't know, because I don't know what I don't know. But if I had to guess, I would say no, at least for open endedness. Now, there may be other parts of nature where we're off kilter, like maybe consciousness or something; maybe that can't be done through digital computation. But for open endedness, I believe it should be doable with digital computers. I think we have the substrate we need, and I think that we can conquer this.
[00:48:36] Speaker B: Well, I think that wasn't quite his point; the digital versus analog thing was a specific example he gave. I think he was asking more about our biases, right? Just the tendency to think about everything as goal driven, and whether we can even escape that, even if we are aware of it. Right.
Whether we're asking the right questions or if we're limited by our approach, by our biases and stuff.
[00:49:02] Speaker A: Yeah, that's a different angle on it. I didn't think of it that way. Interesting. I think.
No, I mean, I don't think we are fundamentally limited by those biases. We do come with these biases, but I just think we're flexible enough to get around them. I mean, I'm getting around them. So I think there's nothing fundamental about the human mind such that it can't think in other ways.
May take a little flexibility, but I think we have it.
So I think we can do this.
[00:49:34] Speaker B: Yeah, I'm probably stretching his intended question anyway.
[00:49:38] Speaker A: Yeah, fair enough. I don't want to wrongly ascribe that to him, but it's still an interesting angle to think about.
[00:49:44] Speaker B: But do you think that we are incorporating what we do understand about nature and evolution into the algorithms? Are we incorporating it well enough?
[00:49:55] Speaker A: Well, yeah, that's a good question, because that's more of a practical question: what is the community actually doing? And, you know, I would situate myself as somewhat of a rebel in that respect, in that I really want to do that, and others are more satisfied to move in a more conventional way. So some of us are not constrained in that way. But I guess you could say that in general, AI has been more conventionally minded and less oriented towards open endedness. So if we say we, meaning the whole AI community, then no, we're not really taking this to heart yet that much. But I think we're moving in that direction, because it's becoming more and more popular to discuss, and I see the word coming up more.
And it's also true that people in AI seem to like it when they hear about it. It's like they didn't even think about it before but when they hear about it they're like, that actually is really interesting. And so I think it's going to kind of spread a little bit and you'll see it growing inside the AI community.
[00:50:59] Speaker B: It seems like it's exploding, but I'm biased because I've been learning about it.
But given all the positive responses, some of the comments are, why isn't this everywhere already? Things like that.
You must be optimistic about the progress.
[00:51:16] Speaker A: Yeah, that's an interesting question.
It's kind of like a strategic issue or a meta question about how is this going to actually spread around.
And I think it is spreading. Is it exploding? That's kind of a matter of opinion, what we call an explosion. Interest is definitely increasing, but it's still a small minority of the whole community, I think. If you look at all of deep learning, or even all of machine learning, most people are not even thinking about open endedness. So fair enough, that's still true, I think. There's still a lot of room for an explosion.
I do think that it deserves more mind share than it's getting, even now. And why is it not getting it? It's partly because there's a very practical orientation to the field, where it's just: how do we solve problem X?
And that's a lot of the way that we validate which direction to pursue and where to put money, which ultimately determines which research actually gets done. So openness doesn't really serve that Very well. Like openness is really about, like, well, let's just see what happens if we don't know what's going to happen. And that's like super interesting. But I can't guarantee you that your telephone call center is going to be 3% more efficient next year.
And so that's just doesn't align with this culture of sort of practical problem solving, but it may align with the idea of getting to something like AGI. I mean, then this is a more grandiose thing where these kinds of more philosophical issues come into play in reality.
But just the general culture of the field, it doesn't align perfectly with it. So I think that's the limiting factor.
[00:52:55] Speaker B: Let's see. Okay, so let me go ahead and play the next question, because it's related to natural processes and applying them. This is Melanie Mitchell, the complexity scientist. You're familiar with her.
I'll just play the question.
[00:53:12] Speaker A: Great.
This is Melanie Mitchell. My question is: you've brought a lot of new ideas to the field of evolutionary computation, especially as applied to neural networks. I'm wondering what you think are the most important new ideas for evolutionary computation that come from biological inspirations but haven't been used yet in the field. Thank you.
Well, I just want to say hi to Melanie, because I have met her and know her, and it's cool to get a question from her, and I really like her work. So, coming from the biological side, where are the opportunities in evolutionary computation?
I think that what's useful is to reinterpret biology outside of the conventional way of explaining it. So I don't know if that exactly answers it the way Melanie is thinking of it.
[00:54:10] Speaker B: What do you think of the conventional way? What is the conventional way?
[00:54:13] Speaker A: I think we have a sort of conventional narrative about what evolution is in nature. So thinking really about biology in nature, forgetting algorithms for the moment: we think of it as a kind of death match. You get this term, survival of the fittest, and it's like there's a competition going on. It's very competitive; the narrative is competitive.
And we don't really question that. After you go to high school, that's the idea you come away with: you are the product of millions of years of brutality, and now we've got this super hyper-optimized being that can take on the world.
And I think that narrative is not necessarily helpful algorithmically. When you think about importing what we see in nature into algorithms that are artificial and useful to us and powerful in open ended ways, that narrative isn't really what we need. I think trying to look for alternative narratives in nature is super inspiring and interesting from an algorithmic perspective. I can give an example of this, which is what I would call the Rube Goldberg machine interpretation. This is a totally alternative way of thinking about it, instead of thinking about it as a competition. And by the way, I'm not saying it can't be thought of that way; it's possible for there to be more than one valid interpretation of a system. So this is just a different interpretation. It's not saying the other one is wrong, because they're just interpretations. But the interesting thing about interpretations is that they lead to different types of ideas, and that's why it's useful to have different ones. So in this interpretation, instead of thinking of evolution as a progression, where we went from one point to better and better and better points, think about it instead as: nothing is ever changing, and we're always doing the same thing. In that view, what we have is a situation where there was a single cell, presumably the first cell on Earth, and it reproduced and made another cell. So the thing that it did, what it accomplished, was it got another cell out of the first cell. And if you look at it that way, then every single organism that's ever existed has only accomplished that at most. I mean, some have accomplished less, because they didn't reproduce.
[00:56:26] Speaker B: But in most cases, that's it, that's all they've done.
[00:56:28] Speaker A: But I mean, look at it: if you have a child and you're a human, they were a single cell. What's the use of all the rest of it? It's not necessary. You were a single cell when you started out, and then there's this huge digression, a multi-trillion-cell digression, which is human life, just to get another cell. It's ultimately just the same thing that first cell ever did. And so in this interpretation, you can think of it like a Rube Goldberg machine. We don't need all of these levers and pulleys and ramps and things falling down and fires lighting up to open your newspaper in the morning with some crazy complicated machine. I actually saw a guy who had this huge giant machine he built to open his newspaper, just for fun. We don't need any of that.
[00:57:14] Speaker B: Yeah, he's pretty famous. But we do need action comic hero movies, right?
[00:57:19] Speaker A: We may need that, so I'd have to concede that. But in the same vein as that Rube Goldberg machine that opens the newspaper, we don't need any of this stuff to get another cell. What's happening in nature is that we're riffing on this theme of making another cell, in infinite variations, for eternity. And that is why we're getting so much interesting stuff: the interesting stuff, like intelligence itself, is totally orthogonal to the constraints of the process, which just say, make one cell from another. But if you riff on that theme forever, you can get amazing things. Just like if you built machines to open newspapers forever, eventually they could be as sophisticated as the most sophisticated space station.
And so interior to that, there could be inventions that are amazingly powerful, like human intelligence. But it's a completely unnecessary digression from what actually has to get accomplished, unless...
[00:58:15] Speaker B: ...it increases the ability to accomplish it.
[00:58:17] Speaker A: Yeah, you could say that. But I would question that. It goes back to the original narrative, the competition narrative. It's really hard to drop; we want to go back to it, so people tend to go back in that direction.
But the idea that's supposed to be better for accomplishing it is this progression idea: things are getting better, we're in a competition, and that's why things are changing. But the thing you have to remember is, it's not clear what we mean when we say better.
[00:58:41] Speaker B: Well, higher probability of going from one cell to the next cell, let's say.
[00:58:45] Speaker A: Well, we don't know. I mean, a bacterium reproduces many more times than a human, so any individual human is far inferior. And just in terms of total biomass on Earth, the bacteria have us totally beat, as do the ants. So we're not winning on any objective metric. Biologists hate this kind of thing, by the way. When I say things like this, they're like, fitness isn't really meant to compare one species to another in this way. But to me that's kind of setting the rules of the game so I can't have this narrative. I think we should be open to these kinds of interspecies comparisons to make the point of the narrative: there is no superiority here. This is total Rube Goldberg.
[00:59:24] Speaker B: Damn. You mean I can't say I'm superior to the apes?
[00:59:28] Speaker A: Okay, yeah. I don't want to imply biologists think we're superior, because they would totally object to that; I know they don't think that. But it's just this idea that there's some advantage to it, that it's an advantage in some way. It doesn't have to be an advantage. All it is is that anything that can be, will be. That's the rule we need. And then we'll get to see everything that's possible within the constraints. I think about it almost like milk spilling out on a table. Eventually it'll cover the whole table; you get to see everything, except where there are obstacles in the way. It won't pour around walls. And to me the walls are a metaphor for the constraints, like the constraint that you have to be a walking Xerox machine. There are all these places the milk could spill. Think of people who died and never reproduced: maybe they were geniuses, they could have been really interesting lineages, but they died before they reproduced. Those are the walls in the way of the milk. It couldn't spill in that direction. So it doesn't go everywhere, because of those constraints, but everywhere else it will go. And that's why we see all this cool stuff. Wait a billion years and you'll get amazing stuff, because everything that's possible is being revealed.
[01:00:32] Speaker B: Well, one way to look at evolution is that it's terribly wasteful and inefficient for the objective of keeping cells alive, or passing cells along, reproducing cells, because it makes all sorts of terrible things through random mutation that maybe in a different environment could lead to something interesting. So there's the idea of Spaceship Earth, this wonderful place that's just perfectly suited for us.
The other side of that coin is a really harsh environment where, yes, it's very complex, but only a few things would be truly viable within that very constraining complexity. So then open endedness seems even more inefficient than evolution, because what you want to do is break down those constraints and really let it explore. What are your thoughts on that?
[01:01:21] Speaker A: Well, I don't completely understand, if you're saying open endedness is less efficient than evolution, because to me evolution is the canonical open ended system. That is what open endedness is: evolution.
[01:01:34] Speaker B: But I've heard you talk about how evolution doesn't let everything continue, right? Because it kills off the things that don't suit its environment.
[01:01:43] Speaker A: Yeah.
[01:01:43] Speaker B: And I know we're anthropomorphizing evolution here, which is not good. But open endedness would be much more forgiving, if I understand it right.
[01:01:51] Speaker A: I see, I see. Well, that's one dimension of open endedness: the degree to which it's open. It's true that the ultimate open ended algorithm would just explore literally everything; every single organism would get to have children.
[01:02:09] Speaker B: What's the word for that gradient, the openness? Is there a word for that?
[01:02:13] Speaker A: I mean, that's kind of like random search. Everywhere the search goes, we just keep going. But it's a population based random search. I also call it gentle Earth; it's a metaphor, in a way.
[01:02:26] Speaker B: Gentle Earth.
[01:02:26] Speaker A: Gentle Earth: an Earth where nobody fails to reproduce, where everybody reproduces. I think it's interesting to think about gentle Earth and what it would be like, because of all the branches of the tree of life that were pruned: what if they weren't? What would be on this planet right now? I think there'd be a lot of blobs that don't do anything on the planet right now. There'd also be a lot of interesting stuff that we never got to see. So there'd be both.
And so when we talk about inefficiency, it's a little fuzzy. Inefficient with respect to what? Efficiency with respect to producing viable creatures, or efficiency with respect to producing interesting new stuff?
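To make the gentle Earth idea concrete, here is a minimal sketch in Python of what such a selection-free, population-based random search might look like. It is purely illustrative; the function names and parameters are invented for this sketch, and the viability test plays the role of the walls in the milk metaphor, the only thing that ever prunes a lineage:

```python
import random

def gentle_earth(seed, mutate, viable, generations):
    """Population-based random search with no selection pressure:
    every viable individual reproduces. Nothing is ranked or compared;
    the only pruning comes from hard constraints (the 'walls')."""
    population = [seed]
    for _ in range(generations):
        # Every individual gets offspring. On a true gentle Earth each
        # lineage would branch, so the population would grow without bound;
        # one child per parent keeps this toy version small.
        children = [mutate(parent) for parent in population]
        population = [child for child in children if viable(child)]
    return population

# Toy usage: genomes are points in the plane; the only wall is a boundary.
mutate = lambda genome: tuple(x + random.gauss(0, 0.3) for x in genome)
viable = lambda genome: all(abs(x) < 10 for x in genome)
survivors = gentle_earth((0.0, 0.0), mutate, viable, generations=100)
print(len(survivors), "lineages still spilling across the table")
```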
[01:03:10] Speaker B: Well, I think just even in terms of resources, and I know that brings it very down to Earth, but just the resources. Like AI gets knocked for taking up a lot of energy, for instance. Right. But if you're going to run an open ended algorithm, if you're doing it well, I suppose it would take up almost infinitely more resources.
[01:03:32] Speaker A: I see. Yeah, okay, in terms of resources. Well, no, I don't want to grant that, because I think it's actually a better use of resources in some sense. We have this amazing computational power; say we have some amazing supercomputer.
You can't give me an algorithm that really uses it; there's nothing to do with it. What the heck is going to actually exploit this resource? Now, I know that in deep learning we can do some amazing things with really big computation, so I'm not even talking about that. I'm talking about things that are even bigger than that, things where I could run it for a thousand years and it would change everything.
There's nothing available for that. The most efficient use of that compute would be an open ended algorithm. It would actually exploit it to the maximal extent it could be exploited and show you everything you ever dreamed of.
[01:04:23] Speaker B: Would come out of it in the long term. In the ambitious long term.
[01:04:26] Speaker A: In the long term, yeah. But computation is space and time, so it should be taking advantage of space and time. That's what it should be doing.
[01:04:34] Speaker B: It requires more patience, though. You've harped on this: patience is not part of our culture right now.
[01:04:41] Speaker A: Yeah, and it's a practical problem too, because we're not going to get to see it in our lifetime if it's going to take a thousand years. So it's true that in practice, if we're going to explore open ended algorithms, we need to make ones that produce something worth our attention within our lifetime, or else we're not going to have the patience. That's just a practical reality.
And this is a real dilemma in the field, I think: you don't know when to stop your run. If I ran it for five days and it was okay, maybe if I had run it for ten it would be amazing. When do I actually stop and say, yes, this is the evidence I need? That's just an aspect of open endedness: you don't know when to stop it.
[01:05:24] Speaker B: Can I ask you about learning with respect to open endedness? I know I'm focused on deep learning, and you have plenty of deep learning experience. You even worked on neurally plausible backpropagation, like dopamine.
[01:05:40] Speaker A: What are they called?
[01:05:41] Speaker B: Dopamine.
[01:05:41] Speaker A: Oh, that. Yeah, the differentiable plasticity with Thomas Miconi. Yeah.
[01:05:47] Speaker B: Which is really his ideas.
[01:05:48] Speaker A: Yeah.
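For context, the differentiable plasticity line of work Ken mentions gives each connection a fixed weight plus a Hebbian trace that changes within a lifetime, with the plasticity parameters themselves trained by gradient descent across lifetimes. Here is a minimal forward-pass sketch of that kind of rule, with illustrative sizes and names rather than the paper's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                  # toy layer width, chosen arbitrarily
w = rng.normal(0, 0.1, (n, n))         # fixed component of each weight
alpha = rng.normal(0, 0.1, (n, n))     # how plastic each connection is
eta = 0.05                             # learning rate of the Hebbian trace
hebb = np.zeros((n, n))                # trace starts empty each "lifetime"

def step(x_pre, hebb):
    """One step: the effective weight is w + alpha * hebb, so behavior
    changes within a lifetime even though w and alpha stay fixed."""
    x_post = np.tanh(x_pre @ (w + alpha * hebb))
    # Hebbian update: outer product of pre- and post-synaptic activity,
    # with decay so the trace stays bounded. In the published work, w,
    # alpha, and eta are what gradient descent trains across episodes.
    hebb = (1 - eta) * hebb + eta * np.outer(x_pre, x_post)
    return x_post, hebb

x = rng.normal(0, 1, n)
for _ in range(10):                    # a short plastic "lifetime"
    x, hebb = step(x, hebb)
```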
[01:05:49] Speaker B: Anyway. But then you also have a heavy evolutionary algorithm and neuroevolution background.
But learning. I want you to help me, in my own mind, compare learning and open endedness. Because learning is normative: it has a direction, it by definition has a goal, because you're learning toward something, you're getting better at something. Right.
And some people think of even evolution as a really slow learning algorithm, but I think that you don't think of it that way.
I almost want to compare learning and evolution, because you think of evolution, and open endedness in general, as sort of a search process that finds interesting things, and evolution might find usefulness within the domain of life. Right. Do I need to think of learning more like that search process, or do I need to think about evolution and that searching more like a learning mechanism? Does that make sense?
[01:06:54] Speaker A: Yeah, yeah. It's a really interesting comparison to put evolution next to learning and say, well, how do they relate to each other, especially open ended evolution and learning.
I think one of the things it points to is that learning, in my view, and as you alluded, can be open ended. Open endedness is not only about evolution. And that's an important social point, because the open ended evolution community, by adopting that word evolution, is sort of implicitly excluding lots of people who work in things like deep learning. They don't mean to, I'm not saying they mean to, but the terminology excludes. So I think it's really important to open up the terminology and acknowledge that open endedness is a property of many kinds of systems that are not necessarily evolutionary systems, including learning systems, and that's what's so amazing about it. And so because of that, we have to allow for learning to be open ended. So it's slightly different from the purely normative view of learning that you're describing. And it raises interesting questions, which illustrates why it's important to at least try to think this way and explore this idea. Because if you think about it, as human beings, clearly we can learn. We're learning systems. That's why we're inspired to do machine learning.
And yet arguably our path is very open ended. We discussed this a little earlier in this program, when you pointed out how career paths can have a very open ended discovery process. But I would argue it's there even early in life, with babies and toddlers. It's not clear to me that this is an objectively driven process, when I watch my baby, who's now one, learning how to walk or something.
[01:08:37] Speaker B: Congrats. Those are memories from way long ago for me. That's awesome.
[01:08:40] Speaker A: Yeah. Yeah.
And it's not just him; I observe this generally. It doesn't seem to me that babies have a goal in mind. They're just trying whatever they can. And the thing is, they're following stepping stones. Once they're shaking their arm around, they realize it hit something. They just realize: you can hit things. That's kind of interesting. So now maybe you can hit things, and then you realize, actually, I can move things to places that are useful to me, and then maybe I can hold things. And this isn't because the baby started out with some big plan, like, I've got to figure out how to hold things. It just banged into things and found stepping stones, and once it did something, it realized it could use that to do something else.
And so you might argue that this whole developmental process, which I certainly think deserves to be called learning, is a completely open ended process. It's just that it's also inevitable. So there's this interesting notion of inevitability in open endedness: at the beginning of an open ended process, it's not necessarily the case that everything that happens is totally unpredictable. There can be open ended processes where the early stages actually are pretty predictable. You're going to run into things like holding and walking and stuff like that as a baby, even if the process is open ended. But eventually, because our whole life is open ended, it's not predictable. You could not predict as a baby that you were going to be running an interview show. That's not clear from the path.
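This stepping-stone behavior is essentially the intuition behind novelty search, the algorithm Ken developed with Joel Lehman: instead of rewarding progress toward a goal, you reward behaviors that differ from anything seen before. Here is a stripped-down, single-lineage sketch, assuming behaviors are just points in the plane; real novelty search is population-based with domain-specific behavior characterizations, and the threshold and step sizes here are made up:

```python
import math
import random

def novelty(behavior, archive, k=5):
    """Novelty = mean distance to the k nearest behaviors seen so far."""
    dists = sorted(math.dist(behavior, other) for other in archive)
    return sum(dists[:k]) / min(k, len(dists))

def novelty_search(seed, mutate, steps=1000, threshold=0.2):
    """Keep whatever is novel, with no goal at all: each sufficiently new
    behavior becomes a stepping stone the search then builds on."""
    archive, current = [seed], seed
    for _ in range(steps):
        child = mutate(current)
        if novelty(child, archive) > threshold:
            archive.append(child)   # a stepping stone worth remembering
            current = child         # ...and worth building on
    return archive

# Toy behaviors: points in the plane; mutation is a small random step.
mutate = lambda b: (b[0] + random.gauss(0, 0.1), b[1] + random.gauss(0, 0.1))
stepping_stones = novelty_search((0.0, 0.0), mutate)
```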
[01:09:56] Speaker B: Yeah, that was my point from earlier.
But even doing this, this goes back to the motivation. As long as I do what I'm doing right now and give it my all, in 10 years I'm probably not going to be doing this podcast, but it'll lead to something interesting that I value.
[01:10:11] Speaker A: Yeah, indeed. And I think that's just an illustration of learning open endedly, in the process of discovery you're going through. I think you certainly deserve to be credited with learning. We could say it's something other than learning, but I think it's a form of learning. It's learning without a curriculum, is what it is. No one has laid out before you the steps you should take along this path. You're just stumbling through them, but as you stumble through them, you're actually learning a lot and becoming wiser about all these aspects of the world, which are just the ones you encountered, which are different from the ones I encountered. All of us have a different path like that, and I think there's a lot of wisdom gained along each of those paths. And that's learning.
[01:10:54] Speaker B: All right, Ken, let's switch gears for the last little bit here.
So I know that you've had an interest in brains and in the way brains work, going back to the beginning of your interests, before machine learning. I don't know if that's what got you into the whole thing. And at the end of a talk you often say: who knows, open endedness may very well get us to AGI without trying. But then you also show a picture of a brain next to AGI, as if to say: and get us to a brain.
Do you think of AGI as human level intelligence, or is there some other way you think about it? And then I want to talk about intelligence in general eventually as well.
[01:11:36] Speaker A: Cool.
Yeah. Well, these terms are really contentious for some reason. There's AI, now there's AGI, there's human level intelligence.
[01:11:50] Speaker B: It's very strong open endedness.
[01:11:52] Speaker A: Yeah, well, there's a new one I threw out there.
I'm not necessarily tied to a specific terminology, but I think it's worth understanding what the term is really meant to articulate. Is it general intelligence, or is it human intelligence? Well, AGI refers to generality, because artificial general intelligence is what it stands for. And in my view that may be an oversimplification of what we're actually aiming for, because the word general is promoted so much there. It's not like generality is the only issue here. Generality is an issue; it's just that there are other issues.
And I understand the allure of generality, because we seem to be able to do so many things that you could say we're basically general intelligences. But what that kind of misses is the fact that we're also extreme specialists. In other words, we specialize. Our lives are about specializing. It's very unusual to find a master in two domains, let alone three.
You don't find someone who is one of the top 10 basketball players in the world and one of the top 10 physicists in the world at the same time. It just doesn't happen.
That's because people specialize over their lifetime when they become great at something.
And so just talking about general intelligence papers over, or fuzzes out, the fact that extreme specialization is also a characteristic of being human.
And we may theorize that somehow that won't be in the AGI, that the AGI we imagine is something that actually is a master of everything. So it's not like a human, because humans don't seem to do that. But I would argue that that might actually not work. Maybe there's a reason we also need to specialize. We have amazing general capabilities, I'm not denying that, but we also have amazing specialization capabilities. So it's very complex how these things mix together, and I don't think the terminology necessarily elicits all of that. But we can use that terminology if we want to have something to talk about. It's fair enough to me to call it AGI.
[01:14:06] Speaker B: Yeah, I mean, we can just call it AI. We can call it whatever you want to call it. Okay, well, considering open endedness, should AI take any guidance from looking at brains, or from any natural intelligent processes?
[01:14:22] Speaker A: Yeah, I mean, at least in the sense I mentioned about babies and open endedness throughout lifetime.
[01:14:28] Speaker B: But that's almost an objective, right? Sorry to interrupt.
[01:14:33] Speaker A: Is it almost an objective?
Well now what exactly are we saying is an objective there?
[01:14:39] Speaker B: Well, in some sense it seems antithetical to open endedness to say if we want to solve AI or create AI, what we should do is pay attention to brains, for instance.
[01:14:53] Speaker A: I see what you're saying. Yeah.
[01:14:55] Speaker B: Shouldn't we explore beyond the boundaries of what brains can do and what humans can do? Should we use that as guidance?
[01:15:01] Speaker A: Well, the confusion that's happening here is that we're actually in an open ended sandwich: there's open endedness on both sides of us. The open endedness that leads to us is very different from the open endedness that's inside of us. We are open ended; that's the open endedness inside of us. Your lifetime has an open ended aspect to it, and your intelligence is very tied in with being open ended. I think maybe the pinnacle of what makes human intelligence human is its open endedness: not problem solving, but this tendency we have to explore. We're just amazing at that, and that's why there is a history of invention and civilization and so forth. But there's also the open endedness that precedes us. That's a completely different thing. That is the explanation for how we got the brain we have, which is evolution.
And we're in the middle of that. So you can conflate those two things when you just talk about open endedness, because they're different from each other. But it's really interesting that they both are open ended. You could argue that understanding the open endedness that precedes us won't help us if we make a goal, a target, out of the open endedness that's inside of us, because that suddenly does sound like a goal. But I think that's conflating two different things. When I say that the open endedness inside of us is interesting from a neuroscientific point of view, for informing AI, it's about the question: once you get to us, what kind of structure do you expect to get? If it's like a human, it's got to have this open ended property to it; the cognitive aspect of it is going to be open ended. And I'm just saying that I think that sometimes is underemphasized in AI's interpretation of cognition, because cognition is often viewed as a problem solver, or a classifier, or something like that.
And so it might be helpful, in understanding what kind of thing we're aspiring to here, to realize that there's this really magnificent aspect of our humanity, our open endedness, which might be getting a bit of short shrift. And I would like to understand what actually accounts for it, because a lot of the metaphors we're using, things like backpropagation, are related to the idea that there's a target we're moving towards.
But what is actually the cognitive apparatus of open endedness, from an algorithmic point of view, or a neural point of view? It's an extremely hard question to answer, because reducing very abstract concepts to real neural network explanations is a who-knows-when-that-will-happen kind of problem. But it's still interesting to think about. And then on the other end, the evolutionary side, there's a whole other reason we need to think about it, because we might need it to get to that point.
[01:17:48] Speaker B: I mean, do you think that our conception of evolution, or just the ontology of how biology works... are we there? Have we solved it? I know there are remaining questions in evolution, but is there going to be a radical new theory that reframes our understanding, one that can encompass something like open endedness and our cognitive abilities?
[01:18:13] Speaker A: Well, I don't think we're there. As I hinted before, I don't think we have the full theory, whatever that is, that accounts for open endedness in evolution. And again, if we were there, then we could actually just implement it as an algorithm and we'd be done today. So there's something we still don't fully understand. But I think we're closer; we do understand some important ingredients at this point, I believe.
And that leads to this question of the grand theory, and basically a shift in evolutionary theory, and that leads to all kinds of controversy, is what I've noticed. Evolutionary theory is such a crowning achievement of science that even suggesting there might be a fundamental shift is treated as absolute heresy.
And I think what scientists are afraid of is that you're hinting that some of the main underpinnings of the current theory are wrong, and that's not necessarily true. Just because we have a fundamental shift in understanding doesn't mean the underpinnings are wrong. There's still selection going on; these things are happening.
And so somehow you have to thread that needle of preserving the parts and acknowledging that there are parts that are worth preserving here, but still saying that there are still fundamental insights to be had.
[01:19:41] Speaker B: Broadening the. I'm sorry, broadening?
[01:19:44] Speaker A: Yeah, broadening the overall narrative of what's going on.
That's why I mentioned having new narratives, like new interpretations is helpful.
But to really do that, I doubt I'm the person, because I'm not a biologist, so I'm not equipped. The politics are just so complex, and I probably don't even begin to understand them. But I've seen other people, people who thought some of our stuff was interesting and who were invested more in biology, try to push the needle a little there with some new theories, and I've seen that they run into dramatic resistance.
And it's probably appropriate, because the theory is so powerful. But I do think we're going to have to do some updates, because, again, I believe you can't really claim to understand what's going on if you can't implement it. So my bar is kind of the AI bar: all right, if you biologists really understand it so well, then just write a program, and we should see nature in all its glory inside the computer.
[01:20:49] Speaker B: It is ironic, though. You know: oh, you dare question Darwin, burn him at the stake. The new heresies. That's fun.
[01:20:57] Speaker A: Yeah, yeah, it's a little ironic.
[01:20:59] Speaker B: So, neuroscience. This podcast, at least ostensibly, is about the interface of neuroscience and AI. And neuroscience gets a lot of criticism these days for being stamp collecting, for not having enough theory driving the experiments. We're collecting more and more data, but where's the theory that will frame the narrative so we can do better experiments and understand what the data is about? Does neuroscience need open endedness, that type of pursuit? Because, you know, I'm not in it anymore, but researchers write grants, like you said earlier, pretending they're working on their grant question, right?
[01:21:43] Speaker A: Yeah.
[01:21:44] Speaker B: And then make progress that way. But does there need to be a more open ended sort of pursuit in neuroscience, do you think?
[01:21:51] Speaker A: I would guess the answer is yes, but I want to admit that I'm not a neuroscientist, so I can't really credibly critique the field. It's just that every field seems like it needs more open endedness. I would guess that in neuroscience, what that means is something like: there are some neural phenomena that I just want to look at, but I don't really know what they are. I don't know what they mean, I don't know what they explain, I don't have a theory. But I have a gut feeling that this is interesting.
And so: give me $500,000 so I can look at this. I bet you that's impossible; you can't say that. And that is a sense in which it could be more open ended. We should let people say that, because some of our discoveries are going to come from something being interesting, not because we even know what the heck it is, especially in a system as complex as this one.
And our neuroscientists, who have been trained for something like 30 years before they become a professor or whatever kind of scientist they are, deserve a little acknowledgement of that effort. They put in the effort, but society has also invested in them for decades. Imagine how much we have spent on this. Can't we just acknowledge that after all those decades, maybe their intuition is worth following, that something's interesting? They don't have to have a theory. They're mature enough now as scientists that they actually might be on to something when they have a gut feeling. I'm not saying they shouldn't have to justify their feelings at all; I wouldn't accept a grant that just said, this is cool, let's look at it.
But I can explain why something's interesting without knowing where it's going. I should be able to do that, and we don't challenge ourselves to do that enough, I think: to say, this is why I think this is really cool, and I will not tell you what's going to happen if I investigate it. It's not like we're idiots here who can't communicate with each other just because we don't have an objective. There's a lot of other stuff we could be talking about besides where we're going, because we just don't know. That's the nature of exploration, which is what science should be about. So yeah, I'm sure that neuroscience, because it's one of these fields where there's so much we don't know, and we're in such a morass of complexity, absolutely could be better served by allowing some of that kind of exploratory investigation. I'm not saying it should all be that way. That's the straw man everybody likes to attack: we can't get rid of all objectives, that's crazy. I'm not saying we should get rid of all objectives, of course not. But let's put some resources into this kind of thing and start acknowledging that we've invested enough in these people that their intuitions actually matter.
[01:24:32] Speaker B: But this is a different sort of thing. Usually when people think of a bottom up approach, they think of a data driven approach: see what's in the data, collect the data, look for patterns, and then use those patterns to map onto whatever cognitive function you think you might be working on, et cetera. But what open endedness suggests, and what you're suggesting, if I have it right, is a different kind of bottom up approach, one that explores.
I don't know if intuitions is the right word for it, but do I have that right? Do you see it as a bottom up kind of approach, not just to neuroscience, but to AI as well?
[01:25:08] Speaker A: Yeah, that's true. I see the bottom up, top down point.
It is, maybe.
Is it bottom up? I have to think about it. Is it really bottom up?
[01:25:19] Speaker B: Because it's not going from theories downward. In neuroscience there's this kind of tension between creating theories versus doing experiments, between collecting data at the implementation level versus creating theories about what's computationally going on.
[01:25:33] Speaker A: That's true. Yeah, it seems reasonable to say so. Yeah.
It's kind of like: what would happen if I did this? I don't know, but I'd like to know. But it's not just collecting tons of data. Although maybe data collection is sort of open ended, if it's just for the sake of getting the data. Like, I don't know what I'm going to see, but I'm just looking at it; then you're kind of saying, I don't know, I just want to look at this thing, that's what I want to be paid to do. So yeah, there's some degree of open endedness there.
[01:26:06] Speaker B: So, Ken, you were at the University of Central Florida, and then, I think I have this right, you went to Uber AI Labs, and now you're at OpenAI. Congratulations. It's a very new job, right? A couple months old?
[01:26:19] Speaker A: Four months. Yeah. Yeah, thanks.
[01:26:21] Speaker B: What are you doing? What's going on at OpenAI? You're heading a team of, what do you call them, open endeders? What do you call your team?
[01:26:30] Speaker A: Well, just the open endedness team, but I haven't settled on anything yet. I've got to think; maybe we should call ourselves open endeders. That's a tough term to say.
[01:26:37] Speaker B: No, that's terrible. It won't take you long to come up with something better than that.
[01:26:40] Speaker A: Not very catchy. But yeah, I started the open endedness team at OpenAI, which is great, because that's what we've been talking about here. So yeah, I'm really trying to push forward the progress that we've been discussing. And OpenAI saw the potential for open endedness to dovetail with the aspirations of AGI, which I agree with.
And I think that the amazing talent and resources there with respect to machine learning and deep learning are very compatible with and complementary to the goals of open endedness, and open endedness is complementary to their goals as well. So I think it's a really good pairing of ideas, and it makes it a great place to be exploring this topic.
[01:27:28] Speaker B: So it's more of the same. You're just on a different scale and with a different team sort of in a way.
[01:27:33] Speaker A: Yeah. It's interesting. I guess it's the first time I've really led a team that's explicitly called open endedness.
My entire career before this, I think I've been implicitly pursuing open endedness, not because I was necessarily hiding it or something, but because I don't think I had fully crystallized what I'm really interested in until maybe recently. With everything I've been doing, you could see how it had something to do with open endedness, going back to NEAT, which is about evolving increasingly complex neural networks. But I wouldn't have used that term back then. I just gradually realized that's what has really been inspiring me. I don't know why I'm so inspired by this, but now I've finally made it concrete, so it's explicit: let's pursue open endedness.
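Since NEAT comes up here: its signature move is complexification, starting from a minimal network and growing structure through mutation. Below is a toy sketch of the two structural mutations, leaving out NEAT's innovation numbers, crossover, and speciation; the genome representation is invented for illustration, though the weight convention in add_node follows the original NEAT scheme:

```python
import random

def add_connection(genome):
    """Structural mutation: wire up two nodes with a new random weight."""
    a, b = random.sample(genome["nodes"], 2)
    genome["conns"].setdefault((a, b), random.gauss(0, 1))

def add_node(genome):
    """Structural mutation: split an existing connection by inserting a
    node, so networks only ever grow more complex over evolution."""
    (a, b), weight = random.choice(list(genome["conns"].items()))
    del genome["conns"][(a, b)]
    new = max(genome["nodes"]) + 1
    genome["nodes"].append(new)
    genome["conns"][(a, new)] = 1.0     # into the new node: weight 1.0
    genome["conns"][(new, b)] = weight  # out of the new node: old weight

# Start minimal (two inputs wired straight to one output) and complexify.
genome = {"nodes": [0, 1, 2], "conns": {(0, 2): 0.5, (1, 2): -0.3}}
for _ in range(20):
    random.choice([add_connection, add_node])(genome)
print(len(genome["nodes"]), "nodes,", len(genome["conns"]), "connections")
```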
[01:28:25] Speaker B: That must be satisfying.
[01:28:26] Speaker A: Yeah, it is satisfying.
It's weird. I wrote something when I was 16, a program, because I had just read about evolution in a biology textbook, and I thought, I could write this in BASIC, which was a programming language I knew. It sounded like a program. And the thing I really wanted when I was 16 was for something to evolve that was weird or interesting. I had no goal in mind; I just thought some cool stuff might happen. And I created the craziest program ever. It's not scientifically valuable at all, but it's interesting to think back to that. Going back to when I was 16, I was pretty much pursuing open endedness right there, decades ago. So I think it's just somehow, I don't know why, what I'm interested in. And it's really great to finally do it.
[01:29:20] Speaker B: Yeah, it's like justifying your entire implicit career up to now.
[01:29:24] Speaker A: Yeah, exactly. It feels like some validation or something. Like actually doing open endedness for real now.
[01:29:30] Speaker B: Well, I'm looking forward to seeing where things go, where it takes you. If you're right, you won't still be studying open endedness sometime in the future, right?
[01:29:40] Speaker A: Why? Because it...
[01:29:41] Speaker B: Well, because the search space is so vast and you're amenable to searching within that space.
[01:29:47] Speaker A: Yeah, I agree completely. Like, yeah, who knows, I could see myself deviating. So that's true.
[01:29:53] Speaker B: There was one thing I wanted to ask you before I let you go, about open endedness and what it might relate to. I have such a long list; I should just send you my notes. Although it would take you so long to read them, because I had so many questions. I'm not going to send you my notes.
But so many things come to mind when you let yourself swim in this space a little bit, and one of those is this idea of focused thinking versus diffuse thinking. It's the thing where you're working hard on a problem, you're really focused on it, you get stuck, and you keep hammering at it. Then you walk away and make one of those super complex sandwiches we talked about earlier, or take a shower, and you're unfocused, and your unconscious processes it. Is that open endedness at work in the brain?
[01:30:41] Speaker A: Well, that's a really interesting question, another interesting question I've thought about. It has some elements of what we talk about with open endedness. For example, one element is: you're not trying to solve the problem, and you solve it. That's clearly a kind of non objective process, in that sense at least. What finally made it possible to figure it out was to stop trying to solve it. And that's the idea that to achieve your highest goals, you must be willing to abandon them; it's totally compatible with that notion. But the other aspect of it is that it's kind of mysterious, because I don't actually know what's going on subconsciously, because it's subconscious. I went into the shower and it popped out. Was the thing happening subconsciously actually open ended itself? I don't know what's going on. It's possible. I think it's plausible that your brain is following stepping stones kind of casually, and because of that it's willing to entertain options that you wouldn't normally consciously consider.
And so maybe that's what freed you and liberated you to find a different path that actually does lead to where you're hoping to go. It doesn't seem crazy to think stuff like that could be true. And I've definitely experienced it too. I actually explicitly, intentionally try not to think about things. When I realize there's a really big problem that I wish I could solve, I just shut it down for a few months. I'm like, I'm not even going to think about it. Months, yeah. Because I feel like it's not the time.
As soon as I have the feeling that I'm trying really hard, I take that as a sign that it's not the right time to do it, because it's very objective when you're trying really hard. And it's too hard a problem. Really hard problems are not like that; you can't just try at them. You've got to let it settle in some way where you don't know what's going on, but it might happen. So I actually try to do that.
[01:32:33] Speaker B: That's really great. That means my whole life is a waste, because everything seems very hard. You know the story about Edison, the way he would solve problems he was working on? He would hold two metal balls in his hand, I think, and sit down in his chair and start to doze off. And as he dozed off, he would drop the metal balls, and they would fall into a metal pan and make a loud noise and wake him up. And often he'd have the solution. Maybe that's urban legend, but it sounds nice. It's a fun one.
[01:33:04] Speaker A: Yeah, actually, I didn't know that. It sounds like something I should have known about. It's a cool legend, at least.
Yeah, the subconscious. It's a little more amorphous; it's not really algorithmic, because you can't really say what we're proposing to do here. But it seems somehow to be about the same kind of stuff, and it feels that way viscerally. When I have an idea that came from not thinking about something, it doesn't feel like I was trying. It feels like it came out of left field. That's why it's a eureka type of situation. You're like, where did that come from? It just popped in.
[01:33:45] Speaker B: Yeah. Which is beautiful and frustrating.
[01:33:47] Speaker A: Yeah, that's true. Partly because you don't know if you can ever do it again; there's no formula for it. So I always feel worried when that's how I thought of something, because I don't know what I just did. I can't repeat the process.
[01:34:00] Speaker B: Right. Well, this has been really fun. I appreciate you taking the time with me and letting me ask my silly questions of you. You must get so many silly questions from people. Interesting questions, let's say.
[01:34:14] Speaker A: I don't think they're silly. It was a great conversation. These are great questions.
[01:34:19] Speaker B: Well, I appreciate it. I wish you luck moving forward. Although, of course, you don't need it. But thanks, Ken.
[01:34:24] Speaker A: Thank you. Thanks for the opportunity. It was really fun.
[01:34:41] Speaker B: Brain Inspired is a product of me and you. I don't do advertisements. You can support the show through Patreon for a trifling amount and get access to the full versions of all the episodes, plus bonus episodes that focus more on the cultural side but still have science. Go to BrainInspired Co and find the red Patreon button there. To get in touch with me, email Paul at BrainInspired Co. The music you hear is by the New Year. Thank you for your support. See you next time.