BI 080 Daeyeol Lee: Birth of Intelligence

Brain Inspired
Aug 06 2020 | 01:31:09

Show Notes

Daeyeol and I discuss his book Birth of Intelligence: From RNA to Artificial Intelligence, which argues intelligence is a function of and inseparable from life, bound by self-replication and evolution. The book covers a ton of neuroscience related to decision making and learning, though we focused on a few theoretical frameworks and ideas like division of labor and principal-agent relationships to understand how our brains and minds are related to our genes, how AI is related to humans (for now), metacognition, consciousness, and a ton more.


Episode Transcript

[00:00:02] Speaker A: When you look around, most of the entities that are solving problems are living things, including ourselves, and there's a reason why we don't accept just any solutions. And that is because some solutions are compatible with the properties of life and some solutions are not. When you're in the social domain, recursion is a huge problem, and therefore there is a need to have an accurate model about other agents. But if you're in a society where you and others have the same hardware, like in human society, then once you actually have that good model about other agents, that means that you actually have a pretty good model about yourself. [00:00:51] Speaker B: This is Brain Inspired. Hello everyone, I'm Paul Middlebrooks. Today I speak with Daeyeol Lee, a neuroscientist who runs the Lee Lab for Learning and Decision Making at Johns Hopkins. Daeyeol and I are familiar with each other because we have overlapping interests in metacognition going back to when I was in graduate school, and I've always had great admiration for him. In his lab they study learning and decision making, and in particular how animals use different strategies to solve problems, and he often uses eye movement tasks while recording neurons in awake behaving animals. We could have talked about all that today, but instead we talk about his book Birth of Intelligence: From RNA to Artificial Intelligence. The central thesis of the book is that intelligence is inextricably linked to life and the need to self replicate. So we discuss that thesis, and the book covers a lot of the neuroscience and theory related to the learning and decision making necessary for intelligence. But our discussion focuses on a few concepts in the book that maybe aren't as well known or discussed as much when thinking about intelligence and the evolution of intelligence. Concepts like division of labor to solve problems and how to delegate so as to optimize the division of labor. We talk about a theoretical framework called the principal agent relationship and how we can use that framework to understand the roles of various functions in intelligence with respect to evolution. We discuss our sense of self and its relation to our sense of others. We talk about some of the negative emotions we experience as a consequence of having evolved multiple intelligent algorithms that are specialized for different circumstances and the need to regulate and evaluate those algorithms. And we discuss plenty of other related topics as well. I think you'll find these ideas interesting and useful. If you value this podcast and you want to support it and hear the full versions of all the episodes and occasional separate bonus episodes, you can do that for next to nothing through Patreon. Go to braininspired.co and click the red Patreon button there. Go to the show notes at braininspired.co/podcast/80, where I link to the book and where you can learn more about Daeyeol and his work. And be sure to stick around to the end for a taste of Daeyeol's DJing skills. All right, enjoy. Daeyeol, it is great to see you, and thanks for coming onto my podcast. [00:03:41] Speaker A: Thanks for having me. This is a great honor. [00:03:43] Speaker B: Birth of Intelligence: From RNA to Artificial Intelligence. So, first of all, I want to say thank you for writing the book. I very much enjoyed it, and it was a struggle on my part because it covers so much, so many topics. Today we're only going to cover a few of those topics in the book.
I picked a few that stood out to me as novel and interesting, among which there were many. But you study decision making and reinforcement learning, neuroeconomics and related fields, and you have a background in economics as well. So I'm curious: did your work in those fields, which features prominently in the book, shape the bigger picture views that you express in the book about intelligence and evolution and AI, or have those bigger picture views been in the background, informing your work throughout the years? [00:04:44] Speaker A: I think it's both, actually. I think the process was bidirectional. The reason why I got attracted to the field of decision making in the first place was because, ever since I was a kid, I was always interested in: what is intelligence? Are humans special? What is thinking? Do we really have consciousness? I think, like many neuroscientists, those are the questions that attract people to neuroscience. And when I realized that I can study decision making in animals, that this is not unique to humans, that it's fundamental to the functions of the nervous system in practically all animals, and now I think it actually goes beyond animals, that's when I was really, really attracted to the field, because it seems like decision making is really the global process that you can use to characterize everything that the brain does. So it was surprising to me a few years ago when I first asked myself: if I want to summarize the function of the brain in one word, what would that be? Is it going to be perception? No. Is it going to be memory? No. It's decision making. [00:05:47] Speaker B: Yours is decision making. Yeah. [00:05:49] Speaker A: Yeah. So that's why I studied decision making. And even though I'm not an AI researcher, I think about AI, and I pay a lot of attention to the papers that are coming out, at least in the high profile journals, about the recent developments in AI and things like that, because it's all under the same umbrella. These are the things that are fascinating to me. [00:06:11] Speaker B: And was economics just a pit stop along the way? [00:06:15] Speaker A: That was a fluke. As a high school student, I had given up science. I wanted to be a physicist when I was a kid, and I've met a few people like that in my career: they learn, either erroneously or correctly, that you have to be a genius to be a physicist, and you realize that you're not a genius and therefore you shouldn't waste your time trying to make any contributions to physics. So you give up. In fact, I won't name who it is, but one of our colleagues actually went through exactly the same path: he wanted to be a physicist, he gave up, he studied economics, and then realized, oh, decision making is actually a function of the brain, and therefore that's what I should study. [00:07:06] Speaker B: Oh, is that right? [00:07:07] Speaker A: Yeah, because decision making is something that a lot of economists are interested in as well. So you can actually come to the study of decision making through many disciplines, and economics is one of them. [00:07:17] Speaker B: So many people of, I'll say, our generation. I think I'm a little bit younger than you. Not much if I am, but a little bit, yes.
When I was an undergraduate, my university, which was the University of Texas, didn't even have a neuroscience program. It wasn't even available. So many people came from physics, from economics, from something else. I guess Ramon y Cajal may be an exception. Even those old guys and gals came from other disciplines, and that has kept happening. It's only recently that people are coming from neuroscience, which is a strange thing to think about. [00:07:56] Speaker A: That's right. [00:07:57] Speaker B: So I feel like we are extremely lucky, speaking of old people, to live after Darwin and to benefit from the theory of evolution. I can't imagine living without Darwin's theory of evolution, which is so useful as a guide to thinking about intelligence and so many things. And it's also awe inspiring to me to think of all of the other theories that have yet to come. Eventually, well, I don't know if we'll be around to think this, but people in the future will think, oh, I can't believe people used to live without that theory, theory X. Do you think that we'll discover a more fundamental theory of life beyond evolution, or do you think we'll just continue to kind of refine evolutionary theory? [00:08:45] Speaker A: I think it's closer to the latter. It's hard to imagine that we'll come up with a framework that will completely replace Darwin's framework. That just seems unthinkable to me. But there are still a lot of details that are unknown. As I covered a little bit, very superficially, in my book, we know very little about how life originated, and that's a very hard problem. I think I said in the book that the two most challenging scientific questions of our time might be the origin of the universe and the origin of life. And I'm much more interested in the origin of life because it's closer to home. We may someday learn, either by induction or deduction, how life originated or must have originated, but we don't yet. This is actually something that I discovered during the process of writing the book. I thought that there would be good consensus in the scientific community as to how life originated by now, because I started hearing about the RNA world several decades ago. But there are still a lot of details that are missing. So someday we may learn a lot more about that problem, but that will still be details in the big picture. [00:10:08] Speaker B: I think there are already some pretty fascinating theories about the origins of life. There are three or four floating around out there that are pretty attractive to me as theories, which we don't need to go into. Let's talk about intelligence. So, as any good author writing about intelligence would, you really start with: what is intelligence? Because as you point out in the book, there are just so many different definitions out there, so many versions of how people define intelligence. So I will ask you, Daeyeol, what is intelligence to you? [00:10:45] Speaker A: Right. So to me, intelligence is the ability to make decisions and solve problems, but that's not enough, because there are many life forms and even machines that can do that.
But intelligence is truly an ability to solve problems in a variety of environments, not just one specific context, because that's relatively easy: environments that are changing in an uncertain fashion. I think there's a substantial consensus among people who study intelligence that that's a really essential part of intelligence. And this actually goes to the central motive that I had when I decided to write the book, and that is that I thought there was something really important missing in the definition. And that is: how do you know that a problem is solved? You can pose all kinds of problems, and problem solving, I think, has two additional elements that are relatively ignored. I'm not the first one to make this point, but one is agency. If you're saying that somebody is solving a problem, there is somebody that's solving the problem. And then the other one is subjectivity, which is: how do you know that the problem is solved? And that obviously is closely related to the agency, because the agent that's trying to solve the problem is not going to be satisfied until certain criteria are met. But if somebody else is trying to solve the same problem, then the solution might be different. So then does that mean that anything goes, and no matter what happens, you can declare that, well, this problem is solved, at least for somebody? [00:12:24] Speaker B: It's very postmodernist. [00:12:26] Speaker A: Right. And I don't like postmodernism, because then why are we bothering to do science? As scientists, we're pursuing some objectivity. And that's when I realized that there's actually a deep connection between intelligence and life. Because when you look around, most of the entities that are solving problems are living things, including ourselves. And there is a reason why we don't accept just any solutions, and that is because some solutions are compatible with the properties of life and some solutions are not. And therefore I added one additional qualifier to the definition of intelligence: intelligence is an ability to solve problems in a variety of environments for life. [00:13:10] Speaker B: So that's a pretty important addition, I think, because it really frames the rest of the book, which we'll talk about here. But another interesting facet that you add is that problem solving has to be from the perspective of the problem solver, because, like you were just talking about, one person's problem is another person's solution, essentially. [00:13:33] Speaker A: That's right. [00:13:34] Speaker B: So it has to be taken from each life form's perspective. And this opens up a problem for AI, essentially, in that if intelligence requires life, then that means that AI is not intelligent. And you talk about this in the book. So why isn't AI intelligent? [00:14:00] Speaker A: Well, I think you can actually answer that question in more than one way. You can consider AI as a machine that solves problems for humans. So AI has a life; it's not AI's own life, it's a human life. You can consider AI just like any other tool. This is the position that I took through most of the book: that there is no fundamental difference, in my opinion, between AIs and any other tools that humans develop to enhance our productivity. In that sense, AI is just like any other tool, and I don't think that's necessarily bad news for AI. And then the other answer, this is more in the domain of sci-fi.
And that is: is it possible that someday AI will have its own life and therefore sort of begins to own its own intelligence? I think that's a question for the future. I looked into the field of artificial life a little bit while working on the book, and I think they're at a very, very elementary level. I think they're trying to come up with a sort of artificial cell membrane. [00:15:05] Speaker B: And even more elementary than us neuroscientists. [00:15:09] Speaker A: I think they're more elementary, yes. I hope that I'm not being unfair. That was my impression as an outsider. [00:15:20] Speaker B: Yeah. Artificial life is pretty interesting, though. But it makes you think: if life is necessary for intelligence, then that might be the thing that we should all be studying, artificial life. [00:15:31] Speaker A: No, it depends for what. I think that scientific research should be beneficial to human society, and artificial life is a fascinating topic, and people are studying it. I think it's a fascinating discipline, so we'll probably learn a lot more about what life is and how it originated, for example, if you try to create artificial life in a laboratory. One possibility that I speculated about a lot while working on this book is whether it is possible to have completely alternative chemical forms of life. Or is it possible that, if you want to have life using atoms and particles, within the constraints of physics, the only way that you can have life is something that's based upon nucleotides? Because until you actually find life that has a completely different chemical basis, we will not know the answer. [00:16:34] Speaker B: Yeah. [00:16:34] Speaker A: And if somebody's going to find an answer to that question, I think it'll be the people that are studying artificial life, because they'll try to come up with a life that doesn't necessarily have lipid bilayers and DNA and RNA, et cetera, et cetera. And someday they may actually be able to prove mathematically that it's not possible, that the only way to have life is to use DNA and RNA and proteins. [00:17:02] Speaker B: Or at least, maybe not mathematically impossible, but intractable, because it would be such a complicated process, because evolution has shaped us through so many twists and turns. [00:17:16] Speaker A: So if you add the constraint that it has to evolve, in other words, that it just cannot pop into being, I mean, that's part of the physical constraints that I was referring to, that it has to be compatible with the origin of the universe, where everything starts with hydrogen atoms, et cetera, et cetera. So if you suddenly require, as initial conditions, really extremely complicated molecules, that would not be reasonable. [00:17:42] Speaker B: Well, thank God you didn't start the book with hydrogen atoms and started with RNA instead. But so you're okay calling AI intelligent in that it hangs onto our lapel, because we're life and we create AI, and therefore it is intelligent via us, but not on its own. That's right. So what would satisfy you to consider AI intelligent? What would AI need to make it intelligent? [00:18:14] Speaker A: Yeah. Artificial life. Right.
So if there is a machine that has the capability of reproducing itself, because that is the essence of life. Life, I think, can be considered a machine; life is a machine, but it's a machine that can replicate itself at the cellular level, not necessarily at the whole organism level. And therefore, if someday people can successfully build a machine that can start replicating itself, then I don't see any logical objection to treating the intelligence that's controlling that self-replicating machine as its own intelligence. [00:18:59] Speaker B: Is death not a part of that story? The agent, the AI agent, for instance. Well, I'll start with humans. So humans, we have something at stake, right? Because we're going to perish, at least for now, we're going to perish eventually. And sure, we procreate and then we've successfully passed on our genes, but there's still something at stake. And if we were eternal beings, would it be enough to just self replicate? Because part of life is survival, right, and homeostasis and finding food. Is that part of the equation, that the AI agent doesn't want to be turned off, because there is that life at stake? [00:19:38] Speaker A: Well, if a machine begins to self replicate, in other words, if you have a machine that is designed to or evolved to self replicate, it's going to start having exactly the same problems that other life forms have. This is a bit hand-wavy, but there is something called the second law of thermodynamics, so even if you build it really, really strongly, over time it's going to decay, things are going to break down. So it means that it has to find parts that can replace the things that are broken. Of course, any intelligent machine will constantly need energy. So I think it's going to have fundamentally the same problems that all other life forms have. There is this movie that's really fascinating. [00:20:31] Speaker B: Oh, no. [00:20:31] Speaker A: Called Automata. [00:20:33] Speaker B: Oh, it was released in 2014. That's a new one. Sorry, go ahead. [00:20:36] Speaker A: Yeah, so the reason why I wanted to mention this is because I only watched that movie after I published the book; the first edition was in Korean in 2017. And that book kind of brought to my attention a gray area. [00:20:53] Speaker B: The movie. [00:20:55] Speaker A: Yeah, sorry, the movie. Between the machine that doesn't self replicate and the machine that replicates, something in between is a machine that can repair itself. [00:21:03] Speaker B: Yes. Yeah. [00:21:04] Speaker A: So that movie is about a future society where humans actually made it illegal for machines to repair themselves, because they figure that's a very dangerous thing. If a machine has a desire to repair itself, then that desire could actually interfere with human interests, which is fundamentally the same sort of conflict of interest that someday people might have if we start having machines that can replicate themselves. So I think there is a gray area between repair and replication. And I'd like to put that aside somewhat, because I'm still struggling with whether we should extend the notion of life to include something that can repair itself. Because fundamentally, I think repairing and replication share a lot of processes.
Both of those are basically processes that are trying to go against the force of entropy, of thermodynamics. I don't want to be teleological, but there are a lot of things that you can deduce from a thing that's trying to maintain its original shape, because that's not going to happen just passively. It's going to require energy, and it will have to do something in order to make that happen. And that's replication and repair. [00:22:28] Speaker B: Basically. In the book, you define life as self replication, and this is maybe why you're saying you want to put that gray area aside. [00:22:35] Speaker A: Yes, exactly. [00:22:36] Speaker B: Offline, you had told me that you weren't 100% confident in some of the conclusions or concluding premises in the book. Is that what you're talking about, that definition of life and the gray area? [00:22:53] Speaker A: No, I don't feel uncomfortable talking about this, because that doesn't completely destroy my argument, in that no matter what you talk about, things are not always black and white; there are always gray areas. A virus is a good example: it's neither alive nor dead. But that doesn't really change the rest of the story that I'm trying to present, and I think repair might be one of those things as well. In other words, it's in the boundaries, as I said, it's in a gray area. But that doesn't necessarily kill the entire argument, because everything that I said could be easily extended or slightly modified by including self replicating or self repairing machines. [00:23:40] Speaker B: Is that going to be the next book, the gray area? [00:23:42] Speaker A: No, I don't think so. [00:23:44] Speaker B: Okay. So you have intelligence inextricably bound to life. If something has life, is it automatically intelligent? Is there a gradient of intelligence then, the lowest being simple bacterial cells or things of that nature? [00:24:05] Speaker A: Bingo. Yes. So when I tell my colleagues that I wrote a book about intelligence, and that the main reason I wrote it is because I wanted to make this comment that intelligence requires life, that they go together, a lot of people will try to give me a counterexample: so do you think that bacteria are intelligent? And I'll say, of course, that's part of the argument. But I'm not saying that bacteria are as intelligent as human beings. [00:24:36] Speaker B: They're not fully consciously aware. [00:24:38] Speaker A: That's right. I mean, it's like any other physical quantity. We're heavier than bacteria. So what prevents us from saying that we are more intelligent than bacteria, while still acknowledging that they have a certain form of intelligence? [00:24:52] Speaker B: So intelligence is a function of life. And I'll just jump ahead: through evolution, our brains have become more and more complex. Is that complexification solely to increase our self replication efficiency? [00:25:12] Speaker A: Mostly. I'm hesitant to say solely. [00:25:15] Speaker B: Sure. [00:25:16] Speaker A: We'll go with mostly. Yeah, solely sounds a bit teleological, because evolution is a blind process. So I'm somewhat hesitant to ascribe purpose to that process. [00:25:30] Speaker B: So language is a problem. [00:25:31] Speaker A: That's right. But as a first approximation, I think that captures the essence.
One other way to put it would be: if the brain, which is hugely metabolically expensive, didn't enhance the survivability of the organism that harbors it, how would it evolve? This is why, as you mentioned in the beginning, Darwin's framework is so fundamental for everything in biology. [00:25:58] Speaker B: It is interesting, because even thinking about these things, at least if you're me, you sort of think teleologically, you catch yourself using those words in your thought language, and then you remember that, well, there's no purpose, obviously, but you have to have some way of speaking about the advancement of evolution. And it always sounds purposeful. [00:26:21] Speaker A: Right. I think that's okay. There's nothing wrong with it as long as you remember, at the end of the day, after you finish making your argument, that you used teleological explanation as a shortcut, because otherwise you have to increase the number of words to make it sound a little more precise, and that makes it harder to understand things intuitively. So I think you can have a compromise. [00:26:48] Speaker B: This idea that intelligence requires life is anti-functionalist, essentially, meaning that a functionalist, for instance, would say that there are multiple different ways to realize intelligence, and intelligence is a process independent from life and from the substrate from which it emerges. I said emerges. But have you had pushback on the requirement of life for intelligence? [00:27:19] Speaker A: No, I've been keeping it at a low profile. [00:27:24] Speaker B: What do you mean? You wrote a book about it. How's that low profile? [00:27:32] Speaker A: Part of the reason is because I'm a neuroscientist; I really didn't have formal training in AI or philosophy. I think English has the term closet philosopher: you are doing a lot of philosophy, but without proper training and without necessarily reading all the literature. So you mentioned functionalism, and that somewhat sounds inconsistent, because functionalism refers to the view that cognition doesn't depend upon a particular form of hardware. [00:28:07] Speaker B: Yeah, that's the main form of functionalism that I understand. [00:28:10] Speaker A: Yeah, exactly. And I think the last time that I heard the word functionalism in that context was probably more than 10 years ago. [00:28:20] Speaker B: So you are a closet philosopher way deep in the closet. [00:28:23] Speaker A: Yes, it's been a while. So I don't think my view is necessarily inconsistent with functionalism, because I'm not saying that the sub-modules or processes of intelligence, such as memory, object discrimination and identification, or motor control, cannot be implemented in a machine. They could be implemented in many different forms. I mean, it's been done: there are steam engines and internal combustion engines and nuclear power, many different forms of machines that can produce mechanical energy. So why should that be an exception for information processing machines? But that doesn't necessarily mean that, when you try to build the entirety of an intelligent machine that thinks, a machine made of any hardware can reproduce all the properties of intelligence, which, as I argue in the book, requires life. So unless you can complete the description of a mechanical system that can replicate itself, my view is still not inconsistent with functionalism.
Does that make sense? [00:29:27] Speaker B: Yeah, no, I can see it. I was curious because there's a lot of talk these days about things like multiple realizability, which is related to functionalism: that you can realize cognition, let's say, or behaviors, in multiple different circuits of the brain, for instance, and basically have the same kind of behavior. So I was just wondering if you'd received pushback, but I guess you're keeping it on the down low. [00:29:57] Speaker A: Does your podcast have any philosophers in the audience? [00:30:00] Speaker B: Oh, yeah, we talk philosophy. I mean, I'm not a trained philosopher, so it's ridiculous for me to. It probably drives philosophers nuts to hear someone like me talk about philosophy. But I'm reading a lot more along those lines these days and talking to a lot more people on the podcast along those lines, since I'm not in a lab doing research. So I've got to spend my time somehow. I can't just make food for my kids all day long, although that's what it seems like I do. So, yeah, we do. [00:30:29] Speaker A: So I may get pushback. I may be making some people very uncomfortable. [00:30:35] Speaker B: I don't know. I think that you made the case that it's not incompatible with functionalism. [00:30:39] Speaker A: Thank you. [00:30:40] Speaker B: Okay, so let's get into a few of the topics in the book. Like I said, there are just too many for us to go through the whole thing. But one of the things I wanted to talk about is the concept of division of labor. So, like we were talking about, the evolution of intelligence is this long entangled weave, right? And if intelligence is a product of that evolution, one could argue that it's unlikely intelligence is multiply realizable, just like we were talking about, or that a functionalist account of intelligence, the way that we have it anyway, isn't likely. So part of that story of evolution, an important part as you see it, is what you describe as an early division of labor in the service of self replication efficiency, and hence it's part of the story of intelligence. And that is the story of how RNA divided its own labor into DNA, which is a stable way to store information, and proteins to control the chemical reactions happening. If you could elaborate on that, that would be great: how you view RNA, proteins and DNA as an early division of labor story with respect to evolving brains and life. [00:31:53] Speaker A: You already gave a good summary. If you look around, all life forms on Earth basically rely on two different polymers. One is DNA, which encodes information, and the other is proteins, which, as you mentioned, catalyze all chemical reactions in a cell. And if you accept this view, that in order to have life on Earth you need these two complicated polymers cooperating with one another, then you get into this dilemma, which I think has been used a lot by creationists: this is why they think that life was created by God, because it's unimaginable, it would be a miracle. Only a miracle could come up with these two complicated molecules at the same time. And there is a resolution, one possible way to get out of this dilemma, and that is to look at the third important polymer in living cells, which is RNA. This actually has dual functions. It can store information.
And because it can have very complicated three dimensional structures, it can also be a catalyst. And that's what originally led to the hypothesis that in the beginning, life may actually have consisted of RNAs. But then why did life acquire DNA and proteins? That's because, just like with many things, if you try to do everything yourself, the efficiency is not great. So it probably happened at some point early on on Earth that RNA accidentally came up with a way to delegate the process of storing information to something that looks very similar to RNA but has a more stable chemical structure, i.e. DNA, and that improved the efficiency of self replication. And then, if you can somehow recruit another chemical that has more diverse chemical shapes, so that you can catalyze many different chemical reactions more efficiently, then you recruit another class of molecules, proteins. So that, to me, exemplifies something that I refer to many times in the book: division of labor and delegation. Delegation inevitably comes into the picture when you talk about division of labor, because it means that you are not doing that job anymore and you're relying on somebody else. So if that somebody else screws up, then you go down together. But it's a risk worth taking, because as long as you can control the agent so that the original responsibility is maintained, you're better off. [00:34:41] Speaker B: These are ideas from economics, right? [00:34:44] Speaker A: Well, yeah, I think economists get some credit, because I think many people know that Adam Smith used the pin factory as an example of division of labor as a means to maximize productivity. And because that book was so influential, I think it contributed to many people realizing that this is actually a really, really important process. Adam Smith is probably not the first one to think of the possibility that you can improve productivity by division of labor, but I think he really, really brought. [00:35:18] Speaker B: It to many people's attention, that division of labor concept. And by the way, that automatically sounds teleological when you are talking about RNA delegating. [00:35:28] Speaker A: That's right, yes. So I took the bait. And it makes it a lot easier to understand. [00:35:33] Speaker B: It does. It makes it more into a story. And as humans, we like stories, because we anthropomorphize everything as well, which we'll get into. [00:35:42] Speaker A: Exactly. Yes. [00:35:43] Speaker B: So another important concept that follows from this division of labor is the concept of the principal agent relationship. It's always a tacit assumption that brains, we always kind of say in a hand wavy way, of course, brains are there for helping us self replicate, for helping our genes self replicate. And Robert Sapolsky has this phrase, he says, sometimes a chicken is an egg's way of making another egg. And you could translate that into neuroscience, or into evolution, I guess, and say sometimes a brain is a genome's way of making another genome. But I didn't know that there was a formal theory or description for this kind of relationship that can be applied: the principal agent relationship. So what is the principal agent relationship with respect to brains and genes?
[00:36:40] Speaker A: Principal agent theory, or the principal agent relationship, is, just like many other theories in economics, a mathematical theory that tries to find a normative solution: in other words, what you should do when you're delegating your responsibility to someone else. So when there's a division of labor, this is trying to think more rigorously about how the division of labor can be optimized. Because whenever you have a division of labor, there will be some asymmetry. The principal, who is trying to accomplish his own mission and hires somebody else to delegate some responsibility to, will not be in exactly the same position as the agent. And the most important thing pointed out in this context is information asymmetry. Somewhat paradoxically, between principal and agent, the agent has a lot more information, because they're in the field. Take the example of an insurance company and a customer that's buying the product, the insurance product. Obviously, if you are driving your car around, you have a lot more information about the moment to moment changes in the road conditions, which road is safe, et cetera, whether you're drunk or not. The insurance company doesn't get that kind of information, although things are changing rapidly these days with AI. But there is that fundamental information asymmetry: somebody has more information, and it's the agent. The principal wants to somehow shape the behavior of the agent to his own liking, but the principal doesn't have all the information that the agent has access to. So what kind of incentive should the principal provide to the agent in order to guarantee that the agent's behavior will be as good as possible for the principal? That's the essence of principal agent theory, or the principal agent relationship. And I think that has many applications beyond economics. It obviously has huge implications in political science, because you can consider elected officials and voters as a principal agent problem. We elect somebody hoping and expecting that person will work on our behalf to try to maximize our interests. [00:39:01] Speaker B: But we know that the agents have their own agendas. [00:39:05] Speaker A: Exactly. And I think a few people have already pointed out that this could actually be applied to biology, because we see division of labor occurring at multiple levels. [00:39:17] Speaker B: I guess some symbiotic relationships could fall under this formalism, right? [00:39:22] Speaker A: Yes, I think so. But I think symbiosis is a relatively easy case, because there's no conflict: what's good for me is what's good for you. So it's almost like a single agent. [00:39:40] Speaker B: That's a mutual symbiosis. My biology is way back in my past as well, but I thought that there were unequal symbiotic relationships where one organism. [00:39:52] Speaker A: That's correct. [00:39:54] Speaker B: Anyway, in the book you make the case that genes and brains have this principal agent relationship, respectively. [00:40:04] Speaker A: The reason, I'd say, is that in the book I mention five assumptions of principal agent theory, and the reason why I go over those five different assumptions is because you can use the principal agent relationship as an analogy for the relationship between the brain and the genes. The relationship between brain and genes is like the relationship between employer and employee, or parents and children. That's already pretty fascinating.
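To make the incentive problem Daeyeol just described concrete, here is a minimal numeric sketch of the textbook linear-contract, moral-hazard version of principal agent theory, in which the principal picks the agent's share of output and the agent privately picks effort. This is not from the book or the episode; the quadratic effort cost, mean-variance utility, and all parameter values are standard illustrative assumptions.

```python
import numpy as np

# Toy principal-agent (moral hazard) sketch. Illustrative assumptions only:
# output = effort + noise, quadratic effort cost, risk-averse agent with
# mean-variance utility, and a linear wage: wage = base + share * output.

c = 2.0          # agent's effort cost coefficient: cost(e) = 0.5 * c * e**2
sigma = 1.0      # std of output noise the principal cannot observe
r = 1.5          # agent's risk aversion
u_reserve = 0.0  # agent's outside option (participation constraint)

def agent_effort(share):
    # Agent privately maximizes share*e - 0.5*c*e**2, so e* = share / c.
    return share / c

def principal_profit(share):
    e = agent_effort(share)
    # Base pay is set so the agent's certainty equivalent exactly equals
    # the outside option (the participation constraint binds).
    risk_premium = 0.5 * r * (share * sigma) ** 2
    base = u_reserve - share * e + 0.5 * c * e**2 + risk_premium
    return e - share * e - base  # expected output minus expected wage

shares = np.linspace(0.0, 1.0, 1001)
best = shares[np.argmax([principal_profit(s) for s in shares])]
print(f"numeric optimal share: {best:.3f}")
print(f"closed form 1/(1 + r*c*sigma^2): {1 / (1 + r * c * sigma**2):.3f}")
```

The optimal share comes out strictly between 0 and 1: the principal gives up some control through incentive pay precisely because the agent knows things the principal cannot see, which is the trade-off Daeyeol maps onto genes and brains next.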
But analogy is not science. There are many examples of analogies that have been completely false, even though in the beginning they sounded really compelling. In order to see whether you can transplant a theory from one field to another, you really have to understand and examine the underlying assumptions. That's what I wanted to do in the book: what are the assumptions of principal agent theory? And I found that you can actually make a reasonable case that all those assumptions apply to the relationship between the brain and the genes. Therefore, if you have certain theorems or theories in the area of the principal agent relationship, you might actually find analogous solutions implemented in brain evolution, for example. Those assumptions are actually relatively simple and straightforward. One is that the actions of the agent have to affect the payoff of the principal. If you translate that into something more familiar to biology and the relationship between brain and genes, what it means is that whatever the brain does is going to affect the likelihood that the genes will be successfully replicated. That is obviously true, because if the brain decides to use a contraceptive, for example, then that will have huge implications as to whether its genes are going to get replicated or not. And then another one is that, as I mentioned earlier, the agent has more information. That's also obviously true. [00:42:16] Speaker B: So in that case our brain has more information than our genes. [00:42:19] Speaker A: Of course, because it has evolved for that. Many people study the perceptual systems of the brain, and the reason why perception is important, the reason why the brain evolved, is because if the genes themselves try to acquire information about the environment directly and then change the chemical machinery inside the cell, to produce different receptors and things like that, it's going to be very limited and slow. This is why I use the analogy of the Mars rovers: the brain is a real-time machine, much more so than individual cells. And that's why that assumption also applies to the relationship between the brain and genes, et cetera, et cetera. [00:43:02] Speaker B: Well, it's interesting. I feel a lot more like a brain than I feel like genes, whatever that means. [00:43:08] Speaker A: Well, that means, I think, you're your brain. [00:43:10] Speaker B: I'm a brain. Right. But really I'm genes, and I'm just a slave to my genes. Although, like you're saying, this principal agent relationship means that I, and when I say I, I mean my brain, have some independence and more knowledge about the world, and can therefore also destroy myself and go off and do things that are unhealthy and unhelpful for my genes. [00:43:31] Speaker A: That's right. [00:43:32] Speaker B: Which is the risk you run in a principal agent relationship, right? [00:43:35] Speaker A: That's exactly right, yep. Because you have to delegate. In other words, let me use the Mars rovers again as an example to drive this message home: if humans on Earth try to control the Mars rovers using a joystick, then there's no point in having sophisticated AI on Mars.
So if you want to take advantage of AI on Mars, so that you can make fast decisions, that means that you have to give up your control. And the same thing happens in the brain-gene relationship. The genes basically have to give up certain control, so that the brain is allowed to make its own decisions, because on average, statistically speaking, that's going to be better for the genes as well. But there will always be exceptions, because as decision making researchers we all know this: the outcome doesn't tell you whether a decision was a good decision or not, because things are stochastic. The fact that you got bad results doesn't necessarily mean that you made a bad decision, because hindsight is always. [00:44:38] Speaker B: 20/20, in brains anyway. Another assumption is that the principal controls the contract between the principal and agent. What does that mean in terms of brains and genes? [00:44:52] Speaker A: So that is a superiority for the principal. The agent has more information, so then what's the role of the principal? The principal is the one that's actually filling out the details of the contract. It presents the conditions of the contract to the agent, and the agent doesn't have the ability to revise the contract to its own liking; it can only either accept or veto. And the reason why that's the case, this is my understanding, is because principal agent theory is really trying to come up with a prescription for the principal: what the principal should do to maximize the efficiency of cooperation. And again, I think this also fits well in neuroscience, because how the brain develops is, not entirely, but largely specified by the genes. That's why different animals have brains of different sizes and shapes, because they have different genes. So that, I think, is another correspondence between principal agent theory and the brain-gene relationship. [00:46:08] Speaker B: So it controls the contract, and in the contract it says you will not use contraceptives, and the brain can break that contract because it's free to break any contracts. [00:46:20] Speaker A: I suppose not exactly, because the contracts are not written to be broken. One of the things that I explain in the book is that it's not desirable, it's not beneficial, for the genes to specify what the brain should do at the level of individual behaviors, like the contraceptive example that you gave, because we very likely live in an environment where taking exactly the same action could be good in some circumstances and bad in others. And the whole reason why you develop a brain is because the brain can make that choice fast, depending upon how the situation changes. Therefore the genes should not control what the brain does at that level. You should specify to the brain what kinds of outcomes it should try to get, try to get good food or try to find a good mate, but you should not be controlling such details. [00:47:25] Speaker B: So do you see genes as slow intelligent processes? I had Tony Zador on the show, and he talks about how evolution is basically just a slow development of intelligent priors, and that throughout our life we actually learn a lot less on top of those priors than maybe we give the priors credit for.
The vast, vast majority of our learning has been programmed through our genes, through evolution. Is that the way you view it? [00:47:56] Speaker A: Yes, I completely agree with that view. Learning and evolution are, in a way, fundamentally the same process unfolding at two different timescales. If evolution could take place on a millisecond by millisecond basis, then you wouldn't need any learning, unless the learning could occur at a microsecond. [00:48:18] Speaker B: Yeah, right. [00:48:19] Speaker A: Resolution. [00:48:20] Speaker B: This stuff blows my mind. It's wonderful. Okay, so that's brains and genes. But you make the case also that there's a principal agent relationship to be had between humans and AI. Can you maybe just elaborate on that? [00:48:33] Speaker A: It's basically the same logic, right? You wouldn't build an AI, unless you're doing it in a laboratory or for fun, unless that AI has a certain advantage, unless it can solve problems in a certain context better than we can. And one thing that distinguishes AI from other machines that humans have built before, like motors and things like that, is that this is an information processing machine. And therefore, consider the amount of information that the AI must be collecting. This is already a problem for AI, right? It has too much information, more information than our brains have. So if you actually examine these assumptions of the principal agent relationship, the relationship between humans and AI also satisfies, I think, all of them. And that means that, again, this theory can actually have applications not just in economics and not just in biology, but also in AI. [00:49:38] Speaker B: So we'll move on from the principal agent relationship in just a second. I just think it's a really neat way to approach thinking about this. You talk in the book about the difference between germline cells and somatic cells, and that they have this principal agent relationship. And you make the point that when we die, our thoughts die with our somatic nervous system, whereas our genes are passed on in our germline cells. 'Our thoughts die' is kind of what jumped out as the neat conclusion there. But then I thought, well, we have language and we have memes and things like that. I suppose memes get passed on through generations, and through processes like language we can pass our thoughts on through generations. Is that an exception, or how would you account for that? [00:50:36] Speaker A: So it's not an exception. I thought about memes a lot while I was writing this book, and I hesitated a lot as to whether I should mention memes in the book. I didn't, and part of the reason is because it would have taken me a lot more time to come up with consistent explanations that I could be satisfied with. I obviously give a lot of credit to Richard Dawkins, because he's the one that brought to many people's attention that the relationship between the gene and the brain is like the principal agent relationship, and again, he's the one who developed the concept of memes. But the reason why I didn't go deeply into memes is because it's still not clear to me whether memes are just an analogy, or whether the concept actually has a valid scientific structure that can produce meaningful predictions. And I still haven't finished that thought.
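Daeyeol's earlier point, that learning and evolution are the same process unfolding at two timescales, is often pictured as a nested optimization: a slow outer loop (selection over innate "priors") wrapped around a fast inner loop (lifetime learning), with fitness measured after learning. The toy sketch below is my own construction for illustration, not a model from the book; the target vector, mutation scale, and fitness function are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Outer loop = evolution over innate starting weights ("priors").
# Inner loop = fast within-lifetime learning starting from those weights.
# Fitness is measured AFTER learning, so evolution selects good priors.

TARGET = np.array([1.0, -2.0, 0.5])  # hypothetical environmental optimum

def lifetime_learning(genome, steps=5, lr=0.1):
    w = genome.copy()
    for _ in range(steps):
        w -= lr * 2.0 * (w - TARGET)  # gradient of squared error
    return w

def fitness(genome):
    return -np.sum((lifetime_learning(genome) - TARGET) ** 2)

population = [rng.normal(size=3) for _ in range(30)]
for generation in range(50):
    survivors = sorted(population, key=fitness, reverse=True)[:10]
    population = [p + rng.normal(scale=0.05, size=3)  # mutate offspring
                  for p in survivors for _ in range(3)]

best = max(population, key=fitness)
print("evolved prior:", np.round(best, 2),
      "post-learning fitness:", round(fitness(best), 5))
```

Making the inner loop longer or faster makes the evolved prior matter less, which is the point about learning at "microsecond resolution"; removing the inner loop entirely forces the genome to encode the solution directly, which is the Zador priors point.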
[00:51:47] Speaker B: It just occurred to me, and this is completely off the cuff, so I apologize if it seems extremely naive, but language and memes could be a part of a social structure, right? So we'd have a principal agent relationship where our individual thoughts, let's say, would be the principal, and the agent would be society, in that terrible analogy. The memes and language are in the societal, the social structure, and there is back and forth communication. Like I said, it's off the cuff and naive. [00:52:24] Speaker A: Yeah, I think it's fascinating to think along those lines, because one reason why theories and analogies are helpful, especially in the case of analogy, even though an analogy is not the end goal in science, in other words, we shouldn't be satisfied if we come up with a good analogy, because we need to test it and theorize it more rigorously, is that both scientific theories and analogies generate new theoretical possibilities. In that sense, the kinds of things that you just mentioned are fascinating. In other words, is there another level where you can find the principal agent relationship, in human societies and beyond, and can you try to position things like language and cultural transmission in a more mathematically rigorous theoretical framework? That would be fascinating. [00:53:16] Speaker B: Yeah. And then you can go the other way, and then you end up at the original principal, the Lord. Right, sorry, the Lord. [00:53:25] Speaker A: I see. Yes. Okay, so I'd like to steer clear of that. [00:53:28] Speaker B: Yeah, I always like to bring it back to the Lord. All right, so that's fascinating. I really like the principal agent way of thinking about these things, and I think you make good cases in the book for this being a useful structure, a useful theory, to approach the evolution of intelligence. But let's move on, and I want to talk about one more major topic from the book. Like I said, we're skipping over so much, so it's ridiculous. But I know that you have long had, well, you even said it in the beginning, you've long had an interest in consciousness and awareness and self awareness. It's interesting to me that some people seem to have no problem accounting for our self awareness, like our phenomenal consciousness. Others, like me, seem to forever end up with this question. You talk about some cognitive function and how it's related to consciousness, and then there's always, quote unquote, the hard problem, right, where you end up asking, well, why would we actually need consciousness for that? Why would we need self awareness for that? So how do you see self awareness relating to intelligence? [00:54:39] Speaker A: Right. So I'd like to separate consciousness and self awareness. [00:54:43] Speaker B: Okay. [00:54:44] Speaker A: And you might have noticed that I never talk about consciousness in this book. [00:54:48] Speaker B: I did. It's self awareness. Yeah, self reflection and self awareness. And metacognition. [00:54:54] Speaker A: Exactly. Because, and I may change my view, but at least for now, I don't think that consciousness can be a topic of scientific investigation. And that's because it's entirely subjective. The thing that you're trying to study uniquely in consciousness is entirely subjective.
Everything else can be studied objectively: memory, attention, perception, motor control, decision making. All of these things can be probed. [00:55:25] Speaker B: Even cognition about our cognition. Metacognition. [00:55:28] Speaker A: Metacognition, exactly. So we can study self awareness. We can just ask questions of humans, and we can also study this in animals, as to whether they are aware of the expected outcomes of their decision making. We study confidence, which is a part of metacognition, in animals. We can ask them to report how confident they are about the decisions they made. Therefore, there exist operational, scientific approaches that you can take to study self awareness in experiments. But I don't think that we have a scientifically valid method to study the subjective aspect of consciousness, such as qualia. [00:56:09] Speaker B: And everyone has slightly different definitions of all of these terms, of course, but I lump awareness into phenomenological, subjective consciousness. So self awareness, to me, involves consciousness, but you're using it operationally to mean essentially metacognition, right? Cognitive functions about other cognitive functions. [00:56:35] Speaker A: That's right. [00:56:36] Speaker B: So one need not invoke phenomenological awareness. [00:56:40] Speaker A: That's right. Yes, exactly. Because to me it's conceivable that some agents, animals or humans, have self awareness without having phenomenological consciousness. [00:56:52] Speaker B: That's a definitional thing, right? [00:56:55] Speaker A: I think it is. [00:56:56] Speaker B: Yeah. [00:56:57] Speaker A: But if it's a definitional thing, and if there is no compelling empirical argument that you can make as to why you need to have that second concept, then Occam's razor is on my side. So the reason why I think that we don't need to study consciousness is because we are already studying everything else that people who study consciousness are studying, without using the term consciousness. [00:57:23] Speaker B: I know, but then there's that extra bit that is so interesting. [00:57:27] Speaker A: Except that extra bit is zero. [00:57:30] Speaker B: Okay, well, you think it's not. I thought what you were saying is that we are not at a point where we can scientifically study it yet, but maybe what you're saying is that it is not a thing. Consciousness is not a thing. [00:57:42] Speaker A: I don't know. Maybe I'm a zombie. Maybe I'm the mutant that has no consciousness, and therefore I don't understand what somebody's referring to when they say that consciousness is separate from the whole collection of everything else that neuroscientists are studying these days. [00:57:59] Speaker B: Uh oh, he opened the door of his philosophical closet and peeked out, everybody, with zombie-ness. [00:58:04] Speaker A: Exactly. Yeah. Because I can't tell whether you're a zombie or not. [00:58:11] Speaker B: But you don't think we can make meaningful progress understanding what the right question is to ask? Because there's plenty of scientific inquiry into consciousness, and people can argue about whether it's valid or actually making progress, or if it's really just studying these lower level processes like you just said, because everything we're studying is what consciousness studies study also. [00:58:42] Speaker A: Right. So you might know this privately.
I've been talking about consciousness a lot at dinners with speakers, et cetera, but I haven't really spoken about my view on consciousness in public. This may be the first time, and therefore I'd like to be a little more cautious. Sure. And to me, this actually might be somewhat similar to the problem, or the possibility, of the multiverse in physics, which I think is similarly controversial. In other words, I think a lot of physicists are hesitant to make comments about the multiverse, because how do you know whether there's an alternative universe when you can take zero measurements of it? It's merely a theoretical possibility. But what's merely a theoretical possibility may change later. [00:59:39] Speaker B: There's mathematics for it, and there's not even really mathematics for consciousness. [00:59:43] Speaker A: That's right. So in a way, consciousness is in worse shape than the multiverse, if there is already some mathematical theory about the multiverse. I'm not an expert in mathematical theories of the multiverse, but if somebody gives me a mathematical theory of consciousness that has testable elements, maybe not currently testable, but specifying the conditions in which we can test it later with better technology, I'll be much more interested in that. But currently, the only thing that I see in the discussion or explanation of consciousness that is not covered by other things, like attention, memory, perception, et cetera, is the qualia. And whether that exists, or how we would confirm its existence, is completely. [01:00:33] Speaker B: Because it's purely subjective. [01:00:35] Speaker A: Exactly. So while I was writing the book, I thought about the consciousness of bacteria a lot, even though I said nothing about it. [01:00:43] Speaker B: Yeah. Like panpsychism. [01:00:45] Speaker A: Yes, exactly. [01:00:46] Speaker B: Or biopsychism, it's called, if it's in living objects, it's biopsychism. So that we don't have to say. [01:00:50] Speaker A: But then I also thought about the consciousness of a rock too. [01:00:53] Speaker B: Did you? Okay. Yeah. [01:00:54] Speaker A: Yes, because I thought about the consciousness of a hydrogen atom, the first thing that we started talking about today. [01:00:58] Speaker B: Right. Oh, it is going to go back to hydrogen atoms. [01:01:01] Speaker A: Yes. In a way, panpsychism, biopsychism and the multiverse have some similarity too. [01:01:08] Speaker B: Yeah. The problem with panpsychism, and we don't have to go down this road much: to me, there are two issues I have with panpsychism as I understand it, because I think there are even different versions of panpsychism. One is that it doesn't explain anything. If it's true, it explains nothing. [01:01:26] Speaker A: That's my point. [01:01:27] Speaker B: But the other thing is, let's say panpsychism is real. That means not only does the rock have consciousness, but the rock plus the single atom right next to it, that entity, has consciousness, and half of the rock has consciousness, because there's no principled boundary between objects if everything has consciousness. So then everything explodes, and we end up in a multiverse again, anyway. [01:01:55] Speaker A: So that's disturbing to me, which is why I don't talk about consciousness. [01:01:59] Speaker B: Okay, okay.
Well, let's bring it back then to metacognition and what you call self awareness and self reflection. You talk about how it may be the highest form of intelligence. Why do you say that? [01:02:16] Speaker A: Well, it may be one of the highest forms of intelligence, and the reason I consider it one of the highest forms of intelligence is that it evolved to deal with social cognition. [01:02:29] Speaker B: Right. So one of the points that you make in the book is that our self reflection is likely a product of our social cognition. [01:02:38] Speaker A: Right. My hypothesis, and this is probably not my own hypothesis, I'm sure there are people who have made similar points in the past, is that social cognition is much harder than solving problems only in the physical environment. That's because, by definition, social beings have their own intelligence, so they're trying to solve their own problems. And in order to make predictions about the outcomes of your own social behaviors, you actually have to have good theories and models about how other agents will behave. This leads to recursion: what do I think about what you think about what I think about, et cetera. [01:03:18] Speaker B: I know that you know that I know that you know. [01:03:20] Speaker A: Exactly. Since those are complicated problems, I am very sympathetic to the view in social cognitive neuroscience that the default mode of operation of the human brain is actually social. Intuitively you can sort of see that, because, with the potential pitfalls of introspection, when I think about things in a free-form style, most of the things that I think about are social things. Right. I think about the podcast that I was going to have today. Most of our behaviors are social behaviors. And this is something that I think will become much more important in the field of AI as well, because it's my impression that part of the reason self-driving cars are a much harder problem than some people thought earlier is that without social cognition, this is not going to work. The car has to be able to make predictions about what another driver is going to do. Is it going to turn to the left or the right? You can't get that information only by analyzing the motions of the wheels of the other vehicles. You have to look at the driver's eye gaze position and whether that driver looks angry or not. This is all in the domain of social cognition. So again, when you're in the social domain, recursion is a huge problem, and therefore there is a need to have an accurate model of other agents. But if you're in a society where you and others have the same hardware, like in human society, then once you actually have that good model of other agents, that means that you have a pretty good model of yourself. There is a huge benefit in being able to predict the behaviors of other agents. And therefore, if you have a system or a life form that has developed that ability, then it's relatively easy to see that one benefit is that now you can understand yourself as well, because we all have the same hardware.
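[Editor's note: the recursion Dale describes has a standard toy formalization in behavioral game theory, level-k reasoning, in which each agent models everyone else as reasoning one level shallower than itself. The sketch below is ours, not from the book; the game (guess two-thirds of the average), the level-0 default guess of 50, and the function names are illustrative assumptions. Notice that the same function simulates both self and others, echoing the "same hardware" point above.]

```python
# A minimal sketch of level-k recursion in the "guess 2/3 of the
# average" game. A level-0 player ignores other agents and guesses a
# default value; a level-k player best-responds to a population of
# level-(k-1) players -- the "I think that you think that I think..."
# recursion from the conversation. All names and numbers here are
# illustrative, not from the book.

def level_k_guess(k: int, level0_guess: float = 50.0) -> float:
    """Return the guess of a level-k player."""
    if k == 0:
        return level0_guess                  # no model of other agents
    # Simulate everyone else one recursion level shallower, then
    # best-respond: guess 2/3 of their predicted average.
    others = level_k_guess(k - 1, level0_guess)
    return (2.0 / 3.0) * others

for k in range(5):
    print(k, round(level_k_guess(k), 2))     # 50.0, 33.33, 22.22, 14.81, 9.88
```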
[01:05:23] Speaker B: Well, so, understand yourself. But in that sense, that self awareness, that self reflection, that self conception, is about a simulation of other people. But those other people are you. [01:05:38] Speaker A: Exactly. [01:05:39] Speaker B: It is the self. So self reflection, to you, is a simulation of what happens to be you, or of what is essentially constructed as you through the simulation. [01:05:51] Speaker A: That's right. Exactly. So this may be harder to prove scientifically, but I think it's fascinating to think about the possibility that what you think about yourself may in fact be what you speculate others might be thinking about you. [01:06:06] Speaker B: Oh, yeah. Well, then it's an infinite recursion eventually. [01:06:10] Speaker A: Yeah, but we do things like this so naturally. In other words, it's not hard for us to think about what I think about what you might think about me. We get that immediately. [01:06:20] Speaker B: Right. That means we can also be extremely wrong about ourselves relative to what our genes want, especially to bring it back to the principal-agent relationship. It's interesting and frustrating to think that my thoughts about myself are really about a simulation I'm running about someone else that I conceive of as myself, and on and on. All right. One other thing I wanted to touch on in the book is the idea that we have to have negative emotions. Intelligence isn't all rosy, and you talk about the costs of being intelligent. So just to take negative emotions as an example, what role do negative emotions play in intelligence? [01:07:05] Speaker A: Right. So one of the things that I mention in the book is that negative emotions such as regret, disappointment, and jealousy, those are the three examples I gave in the book, have benefits similar to those of physical pain. When I was a child, an aunt of my best friend had a condition called analgesia. And it was mind boggling, because she could just go and pick up a hot pot without any facial expression. And I was like, wow, she's a superman. I wanted to be like that, because then I wouldn't be afraid of getting cut, and I could be a fearless soldier. It wasn't until many years later, when I studied psychology and biology, that I really understood the problems with that condition. And that made me realize why pain is important: it's a protective mechanism. If you don't have the ability to feel pain, you won't be trying to avoid situations where there will be physical harm, unless you're deliberately reminding yourself all the time that this is what you have to avoid. And I think that negative emotions serve the same role, depending on the kinds of decisions that you make. One of the things I talk a lot about in the book is diversity: the multitude of algorithms that the brain deploys to make different kinds of decisions. That's probably both a product of evolution and a computational necessity, because it's hard to come up with a single algorithm that can solve all problems, and therefore you have a collection of different algorithms that are specialized for different circumstances. And all of these algorithms need negative feedback. In other words, they should somehow know when something goes wrong. That's required for them to retune themselves and to change their strategies.
The reason humans have many different kinds of negative emotions is that they're all tied to different specific computational algorithms. So for example, if you're familiar with model-free versus model-based reinforcement learning algorithms, then it's easy to see the correspondence between reward prediction error and disappointment. Disappointment is an emotion that you get when the outcome of your choice is worse than what you expected, which. [01:09:54] Speaker B: Is model-free reinforcement learning, which comes. [01:09:57] Speaker A: From this class of learning algorithms that people refer to as model-free, because you don't need a complicated model of the world in order to run this algorithm. But there is another class of learning algorithms, called model-based reinforcement learning algorithms, where you actually have a pretty detailed description of the world that you're in, and you can use that model to try to predict the outcome of your behavior. [01:10:21] Speaker B: By simulating. [01:10:22] Speaker A: Yeah, exactly. By simulating what the world would be if I take this action. And that actually creates the opportunity to have a hypothetical negative prediction error. You realize, even after you take an action, that, oh, I could have gotten a better outcome had I taken a different action. This kind of counterfactual thinking, we do it all the time, which is in a way proof that everybody has the ability to do model-based reinforcement learning. And that error signal is fundamentally different. It's basically what people refer to as regret. In other words, you can get regret even when your outcome was better than what you expected. So it's easy to see that regret and disappointment are orthogonal. They can happen independently. You could have regret only, or you could have disappointment only. [01:11:10] Speaker B: You talk about some clinical evidence for this in the book as well. Yeah. [01:11:13] Speaker A: Yes. And then jealousy is another example. That's a negative emotion that's unique to social situations. You could be the happiest man in the world until you realize that somebody else got a better deal. [01:11:29] Speaker B: So then the take-home really is that negative emotions are a fallout from the variety, the multitude, of learning algorithms that are competing in our brains. [01:11:40] Speaker A: Yes.
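[Editor's note: the correspondence Dale draws between learning signals and emotions can be made concrete in a few lines. This sketch is ours, not the book's; the action names, payoffs, and expected value are made-up illustrative numbers. The model-free signal compares the obtained outcome with the expectation for the chosen action, while the model-based signal compares it with a simulated, forgone alternative, which is exactly the disappointment/regret dissociation described above.]

```python
# A minimal sketch (numbers and names are illustrative) of the two
# error signals discussed above. "Disappointment" maps onto the
# model-free reward prediction error: outcome minus expectation for the
# chosen action, with no knowledge of alternatives required. "Regret"
# is counterfactual: it needs a world model to simulate the forgone
# choice.

payoffs = {"left": 2.0, "right": 5.0}   # true outcome of each action
expected_value = {"left": 1.0}          # agent's learned value of its choice

choice = "left"
outcome = payoffs[choice]

# Model-free signal: better than expected, so a *positive* surprise.
prediction_error = outcome - expected_value[choice]   # 2 - 1 = +1

# Model-based signal: simulate the forgone action with the world model.
best_forgone = max(v for a, v in payoffs.items() if a != choice)
regret = best_forgone - outcome                       # 5 - 2 = +3

# The two signals dissociate: here the outcome beat expectations
# (no disappointment), yet regret is large.
print(prediction_error, regret)
```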
[01:11:41] Speaker B: So, Dale, finally, just to bring it back to this notion of AI and intelligence, there's a warning toward the end of the book. I'll just read this quote: if we want to remain as the principal in our relationship with AI, we should not create machines that can reproduce themselves without human intervention. So you don't want to make self-replicating AI. The fear is that once you do that, once AI acquires the status of life, then it could be beyond our control, and it would no longer be the agent in our principal-agent relationship. [01:12:22] Speaker A: That's the thought. Yes. So if I had the ability to order such machines to be manufactured right now, machines that could begin to replicate themselves, I would not do it. But that might be because I have very limited imagination. Maybe there is a much greater benefit that could be realized. For example, I don't think that you could upload your mental phenomena, I don't want to use the word consciousness, into a machine. But maybe, if that's possible, some people would want to do that. I just don't think it's possible. I think me is me. I don't think this could be transferred to another machine. Because if that were so, well, you know the teleportation that occurs in Star Trek, right? [01:13:16] Speaker B: Yeah. [01:13:16] Speaker A: One of the mysteries of teleportation in Star Trek is that, well, if they can teleport, why don't they multiply themselves? In other words, there is absolutely no reason, at least in our imagination, why the first copy should be destroyed. And I think the reason they created a teleportation where the original copy is always destroyed is that things get pretty chaotic, and our minds just cannot catch up with the possibility that you could multiply yourself. In other words, your mind, your identity. [01:13:50] Speaker B: This is self identity. This is work from Derek Parfit, actually, a philosopher who only recently passed away. If you want to go down that road philosophically, I recommend his work, because it's exactly this. [01:14:03] Speaker A: Yes. Maybe I heard about it indirectly already, but I would like to learn more about it, because I think it's really fascinating that even if you could copy yourself, I don't think your self awareness would multiply. I think you'll remain as you. You can do a thought experiment: if somebody has that technology and suddenly copies me on the opposite side of the universe, will I know that before I die? Probably not. So this is another way things like this can become very similar to the multiverse. It might be a product of limited human intelligence. In other words, we can think of these things because somehow it serves the useful purpose of simulating other minds, but it may not have any basis in reality. [01:14:53] Speaker B: Makes me think of. I was going to ask you why brains haven't just evolved even further. Why aren't we super intelligent already? And my guess is your answer would be: because there are costs to intelligence, and the agent in a principal-agent relationship can't run away with it, because they are under the contractual control of the principal. And so it actually doesn't confer benefits to the principal if a brain runs wild and becomes super, super intelligent. Am I on track? [01:15:27] Speaker A: Yes, I agree with that. [01:15:29] Speaker B: Okay, so, man, so we're kind of fundamentally limited. [01:15:32] Speaker A: I mean, our technology might open up such possibilities again someday. [01:15:37] Speaker B: Much to the chagrin of our genes, perhaps. [01:15:40] Speaker A: Right. So that's the risk that you might have to take, because, as we discussed earlier, one of the implications, one of the assumptions, of the principal-agent relationship is that you have to delegate. In other words, you have to lose some control. So do we want to give that up completely? Do we want to basically give up the ability to even write the contract? Then we're in a way getting rid of the principal-agent relationship, and we're beginning to treat AI as another principal. I think that's just too wild to think about. [01:16:14] Speaker B: It's wild. Yeah. It's a fun book that you wrote. Before we leave off talking specifically about the book, I have a few more general questions.
But we hit the main topics that I wanted to hit. And look, it took us this long already. That's why I tried not to put in everything I wanted to. But are there some main ideas, or anything we didn't talk about, that you'd like to highlight, that you think is important and that we missed? [01:16:41] Speaker A: Well, I guess I would like to add one short comment, and that is that even though we talked a lot about evolution, AI, and awareness and things like that, I'm a neuroscientist. [01:16:56] Speaker B: Yeah. The bulk of the book is so many examples from the neurosciences, and we. [01:17:04] Speaker A: Didn't cover a lot of those. But if somebody wants to understand why a neuroscientist actually has to write such a book, then I would recommend my book, because these ideas are all obviously tied to the constraints and the mechanisms in the brain. And pointing out those problems, rather than just having a pure philosophical discussion, was the reason I wrote the book. [01:17:29] Speaker B: Yeah. Like I said, we just touched on a few of the formalisms that you introduce in the book to think about these things. So, again, thanks for writing the book. I was looking at your Twitter. Is that David Hume's picture that you use? [01:17:44] Speaker A: No, actually, that's Brahe. [01:17:47] Speaker B: Oh, it's Tycho Brahe. [01:17:48] Speaker A: Yes, it's Tycho Brahe. Okay, let me explain that first. I'm not handsome, so I didn't want to put my picture on Twitter. So I was like, okay, who do I respect? Who's my role model? [01:18:02] Speaker B: Oh, I thought, what famous astronomer was super handsome? Oh, Tycho. That's. [01:18:08] Speaker A: No, I think he's actually not very handsome either. But if you're a student of the history of science, you know his significance. He was basically the last astronomer before telescopes were invented, and the data he collected basically led to the Copernican revolution and to Galileo and Newton. So he's sort of an origin of modern science. And I feel sort of like that. In other words, I don't know what I'm doing. I'm just collecting a lot of data, and I'm hoping that someday, maybe a few generations later, at least some of the data that I, or someone like me, collected might play some role. So that's my wishful thinking, and that's why I have this picture. [01:18:56] Speaker B: Let me hold you there for one second and ask you another question about collecting data, because this comes up a lot on the show as well: whether we have the right balance of data collection, experimentation, and theory. Most often, if I ask what we need more of, people say yes to all of it. But then if you push on one side or the other, everyone says, well, what we're lacking in neuroscience is theory, and that's what we need. We need better theory. And what you just said is that you don't know, you're just collecting data. Which is not true, because you even talk about theories in your book, and you have neuroeconomics, you have game theory, to back up a lot of your experimental research. So where do you land on that spectrum? [01:19:41] Speaker A: I mean, this may be completely ignorant, but I have a huge physics envy. We talked about this in the beginning, that it was pretty depressing back then, but I got over it. I'm not a genius. And if you look at 20th century physics, it just takes a few.
I mean, I may be completely wrong, but my impression is that you have a few people who are extremely bright and can figure it out, and what they need is data. But I think that for mortals like me. [01:20:16] Speaker B: Mortals? Is that what. [01:20:18] Speaker A: Yes, most people. I think you're just average. [01:20:22] Speaker B: There you go. [01:20:23] Speaker A: Yeah, exactly. So for 99.9% of scientists, I think the best you can do is to generate high quality data that will make sense to somebody who's smart enough to figure it out. Doing experiments to produce a lot of data takes a lot of money, a lot of effort, a lot of time, a lot of trial and error. And without those data, I don't think you can expect to have a good theory. [01:20:50] Speaker B: I'm going to have Steve Grossberg on soon, and I'm going to ask him about this as well. He talks about the history of physics and how physicists used to be interested in psychology too. But part of what happened was that there are nonlinearities in networks of brains, and the math of physics was vastly linear. When nonlinearities started being introduced, it kind of broke the physicists off from studying mind and psychology, because there was already a mathematical framework for them to study physical properties of the universe. So it was a lot more direct for a physicist, for a scientist, to study physical systems, because the math framework was already there; the theory was already there. He makes the point that even Einstein, with his use of tensors, just went to a math guy: look, I need something to explain this. And the mathematician introduced him to tensor geometry. But neuroscience doesn't have a mathematical framework. And I think that's super interesting to think about in those terms, where in physics there's always theory and experimentation, two separate fields, whereas in neuroscience we're kind of all both, half theorist, half experimenter. And maybe that's no good. I mean, there are pure theorists, and I don't know if there are pure experimenters, because you have to have something that you're grappling with theory-wise. Anyway, I just thought that was an interesting take on the history. He brings up people like Helmholtz, who studied both physics and psychological processes, but ended up studying mostly physics because the math wasn't there for the psychological processes. I digress, but I'd like to actually. [01:22:50] Speaker A: Make one comment, and that is that I think there is a fundamental difference between the role of math in physics and the role of math in biology, which obviously includes neuroscience. And that is that the essence of biology is diversity, whereas the essence of physics is universality. [01:23:08] Speaker B: Yes, yes. [01:23:09] Speaker A: I mean, even though I have physics envy, and most people agree that physics is the best example of science, I don't think you can extrapolate everything from physics to a more advanced mode of neuroscience, because we cannot ignore diversity. [01:23:25] Speaker B: And so we need a mathematical, theoretical basis of diversity. And that's happening with complexity theory, complexity science, and things of that nature. So that's right. Okay.
Anyway, your Twitter profile says you're a neuroscientist and a DJ, and we've talked a little bit about your DJing. In fact, I put on your music the other day when I was working, and it was delightful. It was really good background. [01:23:49] Speaker A: Thank you. [01:23:50] Speaker B: I mean, it's a little distracting because I knew it was you, so I was listening to it more than, you know... but I could kind of go in and out of focus. But you stay up all hours of the night working on this music stuff. So what role does music play? Are you an artist or are you a scientist? Or is that a meaningless distinction? [01:24:10] Speaker A: So we talked about this offline a little bit: what's the relationship between arts and science? I've continued to think about this since we talked about it last time. And I think the thing that belongs to both of these is creativity, that we are not happy with the explanation that we are given. We get tired of a song once we listen to it a few times, even if it's the best song in the world. So we constantly need novel and more beautiful theories, and music, and paintings, et cetera. And how do you create? How do you become creative? I think that is, again, related to our self awareness. In other words, before you can actually produce something, you think about it, you simulate it in your head, and then you choose among many different possibilities the one that will look good if you give it a physical substrate, whether it's a sound or a physical structure like a visual form. So I tend to think that if you take a talented artist and give them scientific training, they could probably be a decent scientist, and vice versa. But you have to specialize, because the actual techniques that you have to acquire are very different. So I go back and forth with music production. It's hard for me to say it's music production, but I'd like to produce music. [01:25:36] Speaker B: Your heart is in it, though. [01:25:38] Speaker A: My heart is definitely in it. And I see a lot of similarities with the papers that I write. Sometimes things go more easily when I'm trying to make music, sometimes when I'm trying to write a paper. But there's a pleasant aspect to both of those, both of which I enjoy. The other thing is that this may be why I had a difficult time in college deciding whether I wanted to be a musician or a scientist, because I really, really like both. As I get older, especially since my father started telling me, shortly after he turned 70, that he's getting really frustrated because his piano skills are not improving, but he didn't start practicing piano until he turned 70, that's when I realized: oh, if I want to do any music, I should start soon. [01:26:28] Speaker B: Yeah, because you're, what, 74 now? Is that. [01:26:33] Speaker A: Yeah, something like that. [01:26:34] Speaker B: Something close to that. Yeah. No, I think about that stuff too, as my guitar collects dust, you know. [01:26:39] Speaker A: Oh, you should. You should start that, too. I started practicing guitar almost exactly 10 years ago. [01:26:46] Speaker B: Wow. I remember in your office, you had your guitar there, and you were very proud of it. That was almost 10 years ago, maybe, that I was in your office. [01:26:53] Speaker A: Yes.
[01:26:54] Speaker B: Yeah. But you've transitioned, because now you have racks and racks, and cords and plugs, and everything beeps and whistles and whirs and whizzes. And in the video, you can see your hands coming in and out, and you're switching things, and it looks like chaos to me. But you know what you're doing, right? [01:27:11] Speaker A: It's called modular synthesizers. [01:27:12] Speaker B: Modular synthesizers, yep. [01:27:14] Speaker A: So I was just practicing guitar for a few years, and then I started recording music. And then I found that recording and producing music is fascinating. And I got more interested in synthesizers, because I don't have enough time to learn ten different instruments, but dealing with synthesis is a lot faster. You can learn how to use a new synthesizer in a few minutes, if you're lucky. If you want to produce a lot of sounds in your music, synthesizers obviously are the way to go. When I started reading books about the history of synthesizers, I saw a lot of parallels between music technology and neuroscience. So, for example, you might be familiar with window discriminators, and there are actually window discriminator modules in modular synthesizers. When I saw that, I was like. [01:28:06] Speaker B: Wow, these things are converging. Window discriminators, to sort your spikes, to sort your different neurons. [01:28:13] Speaker A: Exactly. It's one of the ways that you can translate from analog signals to digital signals. And a modular synthesizer is a combination of both digital and analog technology. The sound is generated using analog circuits in most cases, but controlling it is done with digital technology. It gives you a lot of interesting things to speculate about: how the brain might have evolved, and what the parallels between synthesizers and brains are. I really, really enjoy it.
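[Editor's note: for readers unfamiliar with the window discriminator mentioned here, below is a simplified software sketch of the idea: a continuous analog trace is turned into discrete digital events whenever its amplitude enters a user-set voltage window. Hardware versions in electrophysiology rigs and modular synths do this with comparators; the thresholds, function name, and toy trace are arbitrary illustrative values, and this amplitude-only version omits the timing tests that real discriminators often add.]

```python
# A minimal software sketch of a window discriminator: flag the sample
# where the signal enters the [low, high] amplitude window, producing a
# digital event train from an analog trace. Values here are arbitrary.

def window_discriminate(signal, low, high):
    """Return indices where the signal enters the [low, high] window."""
    events = []
    inside = False
    for i, v in enumerate(signal):
        now_inside = low <= v <= high
        if now_inside and not inside:   # rising edge of the digital output
            events.append(i)
        inside = now_inside
    return events

# Toy trace with two "spikes" of different amplitudes; the window keeps
# only the mid-amplitude one, which is how amplitude-based spike sorting
# crudely separates units (the big 1.5 event overshoots the window).
trace = [0.0, 0.1, 0.6, 0.1, 0.0, 0.1, 1.5, 0.2, 0.0, 0.5, 0.7, 0.1]
print(window_discriminate(trace, 0.4, 1.0))   # -> [2, 9]
```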

[01:28:43] Speaker B: I tell you what. First of all, thank you for coming on the show. I've enjoyed this conversation immensely. Instead of my usual outro music, what I could do is play some Daeyeol, maybe one of your compositions. How would you feel about that as the outro to the episode? [01:28:59] Speaker A: Oh, that sounds risky, but I'll leave it up to you. [01:29:02] Speaker B: Thanks, Dale. This was fun. [01:29:04] Speaker A: Thanks for having me. This was a lot of fun. I had to get rid of some inhibition to talk about some of the things that we discussed, but this was still fun. [01:29:25] Speaker B: Brain Inspired is a production of me and you. I don't do advertisements. You can support the show through Patreon for a trifling amount and get access to the full versions of all the episodes, plus bonus episodes that focus more on the cultural side but still have science. Go to BrainInspired Co and find the red Patreon button there. To get in touch with me, email Paul at BrainInspired Co. Thank you for your support. See you next time.