BI 199 Hessam Akhlaghpour: Natural Universal Computation

November 26, 2024 01:49:07
Brain Inspired

Show Notes

Support the show to get full episodes and join the Discord community.

The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.

Read more about our partnership.

Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released: https://www.thetransmitter.org/newsletters/

To explore more neuroscience news and perspectives, visit thetransmitter.org.

Hessam Akhlaghpour is a postdoctoral researcher at Rockefeller University in the Maimon lab. His experimental work is in fly neuroscience, mostly studying spatial memories in fruit flies. However, we are going to be talking about a different (although somewhat related) side of his postdoctoral research. This aspect of his work involves theoretical explorations of molecular computation, which are deeply inspired by Randy Gallistel and Adam King's book Memory and the Computational Brain. Randy has been on the podcast before to discuss his ideas that memory needs to be stored in something more stable than the synapses between neurons, and how that something could be genetic material like RNA. When Hessam read this book, he was re-inspired to think of the brain the way he used to think of it before experimental neuroscience challenged his views: as a computational system. But it also led to what we discuss today, the idea that RNA has the capacity for universal computation, and Hessam's development of how that might happen. So we discuss that background and story, why universal computation hasn't been discovered in organisms yet (since surely evolution has stumbled upon it), and how RNA and combinatory logic could implement universal computation in nature.

Read the transcript.

0:00 - Intro
4:44 - Hessam's background
11:50 - Randy Gallistel's book
14:43 - Information in the brain
17:51 - Hessam's turn to universal computation
35:30 - AI and universal computation
40:09 - Universal computation to solve intelligence
44:22 - Connecting sub and super molecular
50:10 - Junk DNA
56:42 - Genetic material for coding
1:06:37 - RNA and combinatory logic
1:35:14 - Outlook
1:42:11 - Reflecting on the molecular world


Episode Transcript

[00:00:03] Speaker A: One of the most important insights of the 20th century, in my opinion, was the finding that with a very simple set of rules you can achieve what's called universal computation. It's common wisdom that our models of computation achieve universality, but it's wrong and I'll explain why. When you take a step back, you see that there are these molecules within cells that resemble strings of symbols, and they also fold up into these tree-like structures that would be very useful for doing computational stuff. [00:00:54] Speaker B: This is Brain Inspired, powered by The Transmitter. Hello, I'm Paul. My guest today is Hessam Akhlaghpour. Hessam is a postdoctoral researcher at Rockefeller University in the Maimon lab. His experimental work is in fly neuroscience, mostly studying spatial memories in fruit flies. However, we're going to be talking about a different, although somewhat related, side of his postdoctoral research. This aspect of his work involves theoretical explorations of molecular computation, which are deeply inspired by Randy Gallistel and Adam King's book Memory and the Computational Brain. Randy Gallistel has been on the podcast before to discuss his ideas that memory needs to be stored in something more stable than the synapses between neurons, and how that something could be genetic material like RNA. When Hessam read this book, as you'll hear him describe, he was re-inspired to think of the brain the way he used to think of it before experimental neuroscience challenged his views. So it re-inspired him to think of the brain as a computational system. But it also led to what we discuss today, the idea that RNA has the capacity for universal computation. So we discussed that background and story, why universal computation hasn't been discovered in organisms yet, since surely evolution would have stumbled upon it by now, and how RNA and combinatory logic could implement universal computation in nature.
And a little bit about how Hessam developed the ideas for how this could all come together. Show notes are at braininspired.co/podcast/199. If you enjoyed this episode, you might also like the episodes with Randy Gallistel and David Glanzman. Those episodes are 126 and 172 respectively, which I also link to in the show notes. Thank you to all past, present and future Patreon supporters, one of whom actually just created a Brain Inspired search engine, which was shared in the Discord. So thank you for that, Brian. I hope it's a useful resource for our little community here. Okay, here we go with Hessam. Last time, I guess, we were off the boat. So I was at this conference, this workshop in Norway that you were at, and that's where we met. And you were talking combinatory logic and RNA then, and that's what we're going to talk about now. So it was fun on the boat with you, getting to know you a little bit, and good to see you again. [00:03:29] Speaker A: Yeah, good to see you, too. Yeah, I'm super excited about this opportunity to talk to you. I told you that I was a long. I was an old fan of this show. I started listening to it very early in my podcast days, and to imagine that I'd be speaking on it is a very exciting thing. [00:03:49] Speaker B: Well, I would be remiss not to mention that you actually had told me about the Brain Science podcast by Dr. Ginger Campbell and how that was an early influence. I loved her podcast, too, and that was part of the inspiration eventually when I started Brain Inspired. So shout out to Ginger. [00:04:09] Speaker A: Yeah, I love that podcast. I wish it was still going on, but. Yeah, sometimes I just catch myself going back to listening to really old episodes. [00:04:20] Speaker B: Oh, yeah. Well, she does a really good job. She's a really good host. I'll just leave it at that. Yeah, unfortunately, she doesn't make it anymore. But, I mean, I remember going on runs in Nashville, Tennessee.
You know, you have that memory of where you were when you heard something or when you were reading something, and maybe we'll talk about that with that Gallistel book that we'll mention in a few minutes. But I just. I remember specific places in Nashville listening to, you know, her podcast and just enjoying it a lot. So. But anyway, good to have you here. So what we're going to talk about is what you've been on lately, for the past few years, which is RNA and universal computation, but that's not how you came here. So I know you've worked with Drosophila. You've done a lot of experimental neuroscience work up to this point. So just what do you do in your. What's the right way to say this in real life, in your day job? [00:05:16] Speaker A: Yeah, my day job is basically doing experiments on flies. I'm in Gaby Maimon's lab here at Rockefeller, and basically I'm doing fly neuroscience. Doing behavioral experiments using genetics, imaging. Yes, I'm a postdoc. Yeah. Yeah. [00:05:38] Speaker B: Okay. So. All right. So I just wanted to bring that up because what we're going to talk about is something that you and I also shared, sort of a. Well, I want you to tell your story of how you came to this, how you came to what you're studying now, just kind of as a background, because I had the same. I wonder how many, what percentage of graduate students have this sort of, what would you call it, disillusionment? Well, a lot have that, but a very specific kind of disillusionment, in that. Like. Oh, is this all wrong? You know, that is a pretty major disillusionment, but not necessarily is this all wrong, but a conniption about what you're doing and stuff. So, yeah, tell the listeners. [00:06:19] Speaker A: Yeah, yeah, sure. So I base. I was in. I did my undergrad in computer engineering. I was really into, like, computer science. Algorithms, data structures. I felt that, you know, I was very proficient at that stuff.
And then for grad school, I decided that I want to go into neuroscience because this is the most exciting field right now. And the brain poses a very challenging problem to scientists. And it seems like, you know, I can use all of the skills that I learned in computer science to try to understand this very complex system that's mainly known for being a computational organ. And so I came in kind of naively thinking that, okay, all of this stuff that I learned about designing algorithms, data structures, figuring out what algorithmic complexity an algorithm runs at, what's the memory complexity, all this stuff, I thought that would be relevant to the study of the brain. And so I like. [00:07:26] Speaker B: Just relevant or you thought, oh, I'm going to find all these algorithms in the brain, I'm going to find the computational complexity and it'll map onto processes and stuff. Was it more direct or was it just relevant, you thought? [00:07:38] Speaker A: I don't know. I don't remember exactly what I thought, but I feel like I felt that I had the right skill set. But very quickly I was humbled to learn that actually none of this is useful. I mean, it might be useful in your data analysis or coding up some kind of behavioral experiment, but to understand the brain, classical computer science just isn't very relevant. And I guess the talking point that everyone would use is that brains are not designed, they're evolved, they're messy, they don't conform to, like, you know, engineering standards of design. And so we're going to come back. [00:08:23] Speaker B: To that very point when we talk about. Sorry to interrupt, but we're going to come back to that, of course. [00:08:26] Speaker A: Yeah, yeah, yeah, yeah, definitely. And so, yeah, so basically, the first few years of my grad school, I kind of learned that the brain is not a computer. It's just like a. It's a messy, a wet organ.
And you're going to have to understand that the way it is and not try to impose your own idea of how computation should be on that, on that organ. And then towards the end of grad school, I kind of felt that sense of disillusionment that you were talking about, about the whole field. Like, what are we doing? It didn't feel like we're making any progress. We're just, I mean, I'm saying this in a very, I guess not very generous way, but let me just say it in the most extreme way: that we're just collecting data. We're just collecting more facts about the brain and not really making any insight. [00:09:26] Speaker B: Well, that's the old criticism of biology. When it was called. Was it Ernst Mayr who called it stamp collecting? [00:09:32] Speaker A: Yeah, stamp collecting. I mean, and I agree that this is not a very generous way to frame it. I'm just expressing the feeling that I had at the time because, you know, not, I mean, people are doing amazing work. Of course not everyone's stamp collecting. Good save, good save. But, you know, it's not that it's all the same, but just the general, the general direction of the field seemed kind of aimless to me. And then that was. [00:10:06] Speaker B: Did your own work feel that way also? Because often people think that. But then except my work, what I'm doing is right on course to solve the thing that I need to solve. Right. [00:10:18] Speaker A: Yeah, I guess. I mean, I like the work that I was doing. The issue was I had high hopes for a certain direction of what I was trying to do. So I worked on rodent neuroscience in grad school with Ilana Witten at Princeton. And I started off thinking that I'm going to solve basically like some very fundamental thing about how working memory functions. I was trying to like, I was thinking in terms of, okay, I'm going to optogenetically turn off all these cells and then get the rat to forget what it. The short term memory that was in its head.
And that kind of like that was a very ambitious goal that I had in mind. And I didn't get to that. I got to something that was valuable. Understanding how the striatum is involved in working memory. I made a small contribution, I guess, to that field, but that disillusionment that I felt wasn't that related to my own stuff; it was more just about the whole field. [00:11:28] Speaker B: Well, just. Are you familiar with the quote? I'm going to misattribute every single quote I try to quote here, but I think it's Mike Tyson, or at least it's usually attributed to him: that everyone has a plan until you get punched in the face. That's kind of like how experimental neuroscience works, right? Or a lot of experimental science, I guess. [00:11:48] Speaker A: Yeah, yeah. Experimental biology. Yeah, exactly. Yeah. I hadn't heard of that, but sounds all right. [00:11:55] Speaker B: So you were doing good science, and Ilana's lab's very good, and you continued to do good experimental neuroscience research, but eventually you felt that disillusionment with the field as a whole. [00:12:06] Speaker A: Yes, until I suddenly got my hands on Randy Gallistel's book, Memory and the Computational Brain. The reason why it resonated with me was that it allowed me to unlearn what I had learned about computer science being irrelevant to the brain. And I'm talking about classical computer science, like, you know, algorithms and data structures, because the first nine chapters of the book are basically just. Honestly, I skipped the main text of those nine chapters because it was like, stuff that I already knew from, like, you know, studying computer science. I just skimmed through them and read the summaries at the end. But then the rest of the book was actually, you know, like, I was a grad student and, you know, I was kind of, you know, surrounded by very smart people, people who are very knowledgeable, that had a certain perspective about neuroscience.
And then here comes along Randy Gallistel, this professor in psychology with, you know, a very good reputation of being a serious scientist, saying that, actually, you know what, it's okay to ignore this common wisdom that everyone is saying and treat the brain as a computer and use principles of computer science and theory of computation in your study of the brain. And that got me super excited because, like, you know, that's kind of the reason I came into neuroscience. [00:13:41] Speaker B: Yeah. Wait, so I don't know if we said the name of the book. It's Memory and the Computational Brain. So I'll link to it in the show notes, of course. But so I've had Randy on, and I've had, in a similar vein, David Glanzman on. And now you will be the third person to be kind of talking about this. It could be RNA or something subcellular, something molecular. But yeah, so in that book, he talks about path navigation, how ants keep track of where they are, and some of the stories that we neuroscientists. I don't think about ants too much, but the field of neuroscience has a story about how it works. And, you know, he goes into arguments why it wouldn't work this way. Same with, like, bees, you know, so he goes through lots of examples, carefully saying, well, this would not work. And then, of course, some of the learning studies that he and others have been involved in. So that opened you up into feeling that it was okay to treat the brain like a computer again. But did that make you feel like the brain is a computer again? [00:14:47] Speaker A: I particularly remember in the last chapters of the book when they started speculating on where the solution might be. And that's where the authors of the book, Gallistel and King, brought up the idea that it could be stored in molecules the same way that we have information, genetic information, stored in DNA. Maybe that's how cognitive memories are stored. Or it could be something else.
It could be like, you know, the same way that you could have specific changes to molecules, like, I don't know, phosphorylation of some molecule, and the way that those phosphorylation rates are distributed across cells or something like that. There could be various ways that you could imagine memories being encoded. But it kind of allowed me to let go of this synaptic hypothesis that's kind of the dogma of the field. [00:15:49] Speaker B: Right. So I just want to spell that out really quickly. So throughout the book, Gallistel and King build the argument, as Gallistel does in his other works, that there are these problems for behaviors and memory and learning that we don't have solutions for in the spiking patterns of neurons, which have been sort of the hope and the assumption of neuroscience: everything is spiking and everything is how the neurons communicate with action potentials and the patterns. However, he goes to great pains to show in multiple cases that there is not a good story and that a good story doesn't even seem possible in principle. And just correct me if I'm wrong, as I'm sort of spouting this out from memory. [00:16:37] Speaker A: Yeah, that makes sense. I mean, and the arguments were. A lot of them were conceptual in that book. So, for example, there was an emphasis on the need for a read-write memory. And synapses aren't really a read-write memory. Like, you can't go in and write a specific value into a synapse or read a specific value that's stored in a synapse. Many other conceptual arguments that Gallistel has made in his other writings. How do you encode a number? What's the code? And a lot of people kind of brush those questions aside, and I kind of, you know, understand their arguments, but I just don't agree with them anymore. [00:17:26] Speaker B: Yeah, anymore. Okay. So you had this experience reading the book. You couldn't put it down.
And that book specifically, and most of Randy's work, is on memory and learning and how those could be implemented at the subcellular level with hypothesized subcellular substrates like DNA, RNA, proteins, et cetera, phosphorylation, methylation, other various possible means of doing things. But then you kind of took a different course on it because. Is this a good time to talk about your interest in universal computation and Turing equivalence, etc.? [00:18:06] Speaker A: Yeah, this is. Would be a great segue into that. So most of Randy's arguments come from an angle of understanding memory. And the concept of memory, like, not everyone agrees on what memory is. And, you know, there could be semantic debates that just kind of pop up on the side when you're discussing what is the physical substrate behind memory. And there's another angle which you could take, which is just as rigorous, if not even more rigorous, which is computation. You can ask: what is the computational scope of a system, from the lens of theory of computation? When you ask that question, that also leads you down towards molecules and RNA. And a lot of the paradigms that we have, the models for computation in neuroscience, fall short of what's called universal computation. So actually, maybe it's better for me to just go straight into what I'm talking about. What is universal computation? [00:19:42] Speaker B: Yeah. And why we care. [00:19:44] Speaker A: Yes, exactly. Yeah. Okay, so in the theory of computation, there are various levels of computation power that a system can have. Okay, so one system might be able to compute a certain set of functions. Another system might be able to compute the same set of functions as another system, but even more. So, for example, finite state machines, they can compute things like: what's the remainder of this number when you divide it by 7?
There's a single finite state machine that does that, and it will do that for any number. It doesn't matter how many digits you give it, it's always going to be correct. But then there are some problems that finite state machines can't solve. And like, I don't know. What's one example? Like, I don't know. Is this string of parentheses balanced? That's a problem where there's no finite state machine that can solve it for any given string. However, there are other systems of computation which could solve that. Basically, the point is that you can have different computation systems that are able to solve different sets of functions. Okay. Now, one of the most important insights of the 20th century, in my opinion, was the finding that with a very simple set of rules, you can achieve what's called universal computation. You can build a system that's capable of solving any solvable function, any computable function. And when I say capable of solving, it requires a description of the algorithm. So it's not like, okay, I have a universal computer and I can solve everything. No, you need to find the algorithm that solves certain problems. So when I say capable of solving, it means there is a description of a program for every computable function. [00:22:05] Speaker B: It has the capacity for that description.
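The two machine examples above can be made concrete in a few lines of Python. This is an editor's illustration, not anything from the episode; the function name and digit-by-digit framing are mine. The machine needs only seven states, one per possible remainder, no matter how long the input is:

```python
def remainder_mod_7(digits: str) -> int:
    """Finite state machine with 7 states: computes n mod 7, one digit at a time."""
    state = 0  # the state is just the remainder of the digits read so far
    for d in digits:
        # appending a digit multiplies the number read so far by 10, then adds the digit
        state = (state * 10 + int(d)) % 7
    return state

# Arbitrarily long input, fixed finite memory:
print(remainder_mod_7("123456789012345678901234567890"))
```

By contrast, checking whether a string of parentheses is balanced requires an unbounded counter (nesting can be arbitrarily deep), so no machine with a fixed, finite set of states can solve it for all inputs.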
Maybe we've found a limit to what's computable, because there are functions that you can describe that are not computable, but like functions that. Well, we'll get to it. [00:23:08] Speaker B: We'll get to it. [00:23:10] Speaker A: But like, you know, one example is Chaitin, or Chaitin's constant. Actually, I'm embarrassed to say, I don't know how to pronounce his name. But let's say Chaitin's constant. It's a number. It's a very well-defined number, but you can't compute it. But let's not get into the things that are not computable. The point is that you can really easily reach that level of computation power where for every computable function, you will have a description. That description can be the description of how a Turing machine's operations work. It could be a description of how a lambda calculus function works. But the point is, for every computable function, you're going to have a finite-length string that determines how the system operates through time, and that will lead to solving a certain function. Okay, now this is kind of the theory of computation lens. Now you can ask. We have these models of computation in neuroscience. Like, we have neural networks. How is a neural network a computation system? Well, for every function, you can have a description of a network that may be able to solve that function. The description of the network would be the set of neurons, the weights between these neurons, and the activation functions that each neuron has. I can. Like in a string, I could describe a network, and this network would be solving a function. And then you can ask, okay, well, what are the set of functions that neural networks can solve? Okay. And it's common wisdom that our models of computation achieve universality. But it's wrong, and I'll explain why. So. [00:25:19] Speaker B: Wait, our neural models of computation? [00:25:22] Speaker A: Well, our models of biological computation. [00:25:25] Speaker B: Okay, okay.
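Combinatory logic, which the episode returns to later, is a vivid instance of how small the rule set for universal computation can be: two rewrite rules, K x y → x and S x y z → x z (y z), are enough to express every computable function. A toy reducer as a sketch (illustrative only; the term representation and function name are mine, not the episode's or anyone's RNA encoding):

```python
def eval_sk(term, fuel=10_000):
    """Reduce an SK-combinator term by leftmost (normal-order) head reduction.

    A term is 'S', 'K', a variable string, or a pair (function, argument).
    `fuel` bounds the number of steps, since reduction need not terminate.
    """
    spine = []  # arguments along the leftmost application spine, innermost last
    while fuel > 0:
        fuel -= 1
        if isinstance(term, tuple):          # application: walk down to the head
            term, arg = term
            spine.append(arg)
        elif term == 'K' and len(spine) >= 2:
            term = spine.pop()               # K x y -> x
            spine.pop()                      # discard y
        elif term == 'S' and len(spine) >= 3:
            x, y, z = spine.pop(), spine.pop(), spine.pop()
            term = ((x, z), (y, z))          # S x y z -> x z (y z)
        else:
            break                            # head is irreducible
    while spine:                             # reattach any unconsumed arguments
        term = (term, spine.pop())
    return term

# S K K behaves as the identity combinator: S K K x -> K x (K x) -> x
identity = (('S', 'K'), 'K')
assert eval_sk((identity, 'x')) == 'x'
```

Everything a Turing machine can do can, in principle, be encoded as one of these nested application trees, which is part of why tree-folding molecules invite the comparison.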
[00:25:26] Speaker A: Okay. So, yeah. So back in the 1990s, there was a series of papers that showed that you could simulate a Turing machine with neural networks. The problem was that the kinds of neural networks, and dynamical systems too, that were shown to be able to simulate Turing machines are irrelevant to biology because they lack structural stability. They're even irrelevant to engineering. You couldn't even engineer these systems. They have certain. Yeah, just. [00:26:08] Speaker B: Yeah, yeah. I think you're about to kind of go into more detail on why that's the case. [00:26:12] Speaker A: Yes. Yeah. So the crux of the matter is structural stability. When you're describing a dynamical system, the system includes a number of parameters. And then you can ask, what happens if I change these parameters by some infinitesimally small amount? Will it still resemble the same dynamics of the original dynamical system? In other words, in technical terms, for those who are interested: is there a homeomorphic neighborhood of dynamical systems around this system that you're describing? If there isn't one, then you're describing a singular point in parameter space. [00:27:02] Speaker B: Fragile. It could be fragile. [00:27:03] Speaker A: Yes, exactly. The smallest error in your parameters when you're trying to implement this system would result in something that's vastly different. Okay, and this comes from. This is not my argument. This is Chris Moore's argument; he was actually the first person to show that dynamical systems can be used to simulate Turing machines. He argued that structural stability is a reasonable criterion for systems that either an engineer can build or you would be able to find to occur in nature. And then he also conjectured that no universal finite-dimensional dynamical system would be structurally stable.
[00:27:53] Speaker B: Okay, so just to pause here, make sure I'm getting this. How do you square this with the idea of degeneracy in circuits in the brain, for example? Right. So you can. You can use this exact same circuit to produce different rhythms, or you can use different parameters in the same circuit to produce the same rhythms in this case. So that seems unstable, right? Well, it's robust. [00:28:26] Speaker A: I'm not sure if that would be structurally unstable. Because the thing is, almost all of the models that people use, even in. Not just for. Not just in, like, studies of biological neural networks, but even in AI, almost all the models that people use, they are robust to a small enough amount of error in their parameters. [00:28:48] Speaker B: Okay. [00:28:49] Speaker A: Otherwise, they would just be irrelevant to deep learning. Like, in deep learning, you're searching for a network that fits a certain function, and you're just moving in parameter space. If the target. If the solution is a single point with no clues nearby, then it's hopeless. You can't find that solution. [00:29:10] Speaker B: Right? Yeah. Okay. I thought you were saying the opposite. So I misunderstood. I thought you were saying that biological systems are inherently stable. And that is what you're saying. Biological systems, yes. [00:29:26] Speaker A: Well, I would say our models of biological computation that we actually use, that we actually think might be relevant, they're all structurally stable. [00:29:40] Speaker B: Okay, fair enough. The model is stable. [00:29:44] Speaker A: Yes. We're always talking about. I think this model is how the brain computes. And those models, they don't have this weird feature of structural instability. It's like. It's as if we're paying lip service to universal computation. If we just say, okay, look, RNNs are universally powerful, and then we never even talk about that kind of neural network that is universally powerful. 
We just, like, there's a subset of RNNs that we actually study, and there's another set of RNNs which are universal, and as far as we know, those don't overlap. [00:30:36] Speaker B: So for it to be universal, it has to compute that single point. Is that correct? Well, because you have to be exact when you're computing. I'm sorry, I'm so naive here, by the way. [00:30:49] Speaker A: No, no, no, no. That's a good question. The way I would say it is, people have come up with a way to describe a single neural network for every computable function. However, each one of those networks is a single point in space. [00:31:08] Speaker B: Oh, okay. I see. And just to be clear, are you talking about, like, the universal approximation theorem? [00:31:13] Speaker A: No, no, no. That's very different. So I'm talking about Siegelmann and Sontag. They had a neural network system that basically uses the. I think it was conceived as the membrane potential of a single neuron as a unary stack. Like you can imagine, there's a string of digits after the decimal point that represent the membrane potential of a single neuron. And if you treat that as a unary stack, you can compute with it. If you have three stacks, you can compute with it. There are other ways to do it with dynamical system models that aren't necessarily like neural networks, but they essentially treat digits after the decimal point of a number as a string of symbols. Strings of symbols are really important, actually. That's one of the arguments for RNA. They really allow you to express computational power in a computational system, and you can achieve it if you treat a number as a string of symbols. But I really don't think that's how the brain works.
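The trick described above, a single real value whose decimal expansion serves as a whole stack, can be caricatured in a few lines. This is an editor's toy illustration, not Siegelmann and Sontag's actual construction. Exact rationals make it work; the structural-stability worry is precisely that with finite-precision physical quantities, the deeper stack entries live in ever-lower digits and are destroyed by the smallest perturbation.

```python
from fractions import Fraction

def push(q, symbol):
    """Prepend a digit (1-9) to the fractional expansion: the stack grows at the top."""
    return (q + symbol) / 10

def pop(q):
    """Read and remove the leading fractional digit."""
    symbol = int(q * 10)
    return symbol, q * 10 - symbol

q = Fraction(0)
for s in (3, 1, 4):
    q = push(q, s)      # q == 413/1000: the whole stack lives in one "potential"
top, q = pop(q)
assert top == 4         # last pushed, first popped
assert q == Fraction(13, 100)
```

Replace `Fraction` with ordinary floats and the bottom of a deep stack silently corrupts, which is one way to see why such constructions are fragile as physical models.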
There are discrete points of the combinations of all of those different things that lead to universal computational abilities. And every other point that's not one of those discrete points is not a universal computer. [00:33:03] Speaker A: Yes. Yeah. Basically the point is, if you're talking about the subset of RNNs that are relevant to biology, we don't know if that covers all computable functions. [00:33:17] Speaker B: Okay. Okay. We don't know if it matters either. Right, so it matters to you. Yeah. This is a big deal, right? [00:33:27] Speaker A: Yeah. Yes. Yeah, no, that's a fair point. So the thing is, I find it really hard to accept that biology would not have stumbled upon universal computation, because it's such an easy thing to accidentally stumble upon when you're working in abstract systems. There are several examples of this, like, for example, elementary cellular automata; Wolfram's Rule 110 accidentally stumbled upon universal computation. It wasn't intended to be a powerful computation system, but it was discovered to be. Wang tiles. Conway's Game of Life. There's a lot of examples of people stumbling upon these very complex, unpredictable systems and later discovering that they're universal. And so it just feels, I don't know, hard for me to believe that biology can evolve something as complex as the eye, that conforms to the principles of optics, that uses a lens and an aperture, but somehow it just doesn't care about the principles of computation and can't achieve something that's so much easier to build than an eye. And just from my intuition, it feels like, and this is an intuition-based argument, and it might not be convincing to everyone, but I just feel like a universal computation system would have enormous selective advantages for organisms that are striving to survive and reproduce and solve complex problems. Yeah.
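Rule 110, the cellular automaton mentioned above, fits in a few lines, which is part of the point: it is one of 256 simple update tables that was only later proved universal. A minimal sketch (editor's illustration, not from the episode):

```python
def rule110_step(cells):
    """One update of Wolfram's Rule 110 on a ring of binary cells.

    110 = 0b01101110: bit b of 110 gives the next state for neighborhood b,
    where b packs (left, center, right) as a 3-bit number.
    """
    n = len(cells)
    return [
        (110 >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Evolve a single live cell and print the characteristic left-growing pattern:
row = [0] * 20 + [1] + [0] * 10
for _ in range(8):
    print(''.join('#' if c else '.' for c in row))
    row = rule110_step(row)
```

Nothing in the update rule hints at universality; it emerged from exhaustively interacting local pieces, which is the sense in which such systems are "stumbled upon."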
That's why I think it's a meaningful and important question to ask. Where is life's universal computer? Where can we find it? [00:35:35] Speaker B: You had mentioned to me that you think that this is. So we've been talking basically about neuroscience and the models in neuroscience, but you think this is relevant to artificial intelligence as well? [00:35:45] Speaker A: Yes. Yeah, basically. So one of the things that I've noticed right now in the interaction between AI and neuroscience, which is actually there's no interaction. [00:35:58] Speaker B: There's no interaction. [00:36:00] Speaker A: You've talked about this before on previous episodes. I know. No. So there's this one interaction that I can confidently say exists. I can't tell you the number of times that I've spoken to someone in machine learning or just non-neuroscience AI, and I've explained how there's a problem that our current models of machine learning are learning functions in a space that's not Turing equivalent. And I can get into that in a moment because that's also something that would seem contrary to common wisdom. But yeah, I have a very similar critique of current approaches in machine learning. And my argument is that, hey, we're not taking universal computation seriously. And then the response that I get, I can't tell you how many times the response that I got was, okay, well, if the point is to create an intelligent system, aren't we intelligent and aren't we neural networks? So at the end of the day, if your argument is against neural networks, how are we intelligent? So in a sense, those people that are working in AI and working on these neural net models, they're relying on the confidence of neuroscientists that this is it. This is it. It's a neural network system that's doing this. [00:37:33] Speaker B: Oh, no, they're not. No, they're not. They're not relying on neuroscientists. They're just. They're just building their models.
[00:37:40] Speaker A: They're building their models. But there is an assumption that there's no barrier to the computational ability of neural networks if the target is an intelligent system. Because if you believe that. [00:37:58] Speaker B: But let me just end with brains, because that has nothing. I believe that from the common AI engineer's perspective, that has nothing to do with neuroscience. You disagree? [00:38:14] Speaker A: I don't know. I mean, I think. Well, I mean, at least in the discussions that I've had with people, I find them referring to the fact that we are intelligent and we are neural networks. [00:38:29] Speaker B: Oh, okay, fine. Yeah. [00:38:31] Speaker A: And so, you know, there's got to be a neural network solution to intelligence. [00:38:37] Speaker B: That's true. And that is the common assumption among neuroscientists as well. [00:38:41] Speaker A: Neuroscientists, yes. But I think the reason that. [00:38:46] Speaker B: But it's true also. Well, how we have neural networks. It's not. We are neural networks, but of course we have. But we have a few other things as well. Yeah. [00:38:56] Speaker A: What do you mean that it's true? [00:38:57] Speaker B: We have a brain. It's true. We have neurons. Yes, yes. [00:39:00] Speaker A: But the question is, the question is. [00:39:03] Speaker B: Is that enough for universal computation? [00:39:06] Speaker A: Whether it's implemented through a neural network model versus some other kind of model that might be at the molecular level. And so what I was trying to get at was this, was this mutual interdependence of neuroscience and AI, how AI researchers are relying on the confidence of neuroscientists that, okay, computation is happening through neural networks and the other way around. I feel that neuroscientists see that the most advanced, cutting edge models for, you know, AI look really like neural networks.
And, you know, maybe it's not exactly biologically plausible yet, but there's going to be some mapping at some point. That's kind of how both fields are relying on each other's confidences that neural networks by themselves can solve intelligence. [00:40:12] Speaker B: Okay, and you think universal computation is required to solve intelligence, whatever the hell that means? Because something you said earlier was, we're human, we're intelligent, so therefore we think we can solve intelligence. I wanted to jump in and say, yeah, we define what intelligence is. So it's not like intelligence is out there and we have some. And we know what it is, we actually define it. Right. So that's a semantic issue. [00:40:41] Speaker A: But yeah, I mean, and then, and then you could, you could come up with like a new definition that's not grounded on us. And I don't really want to get into the semantic argument of what intelligence. Well, but what intelligence is. [00:40:56] Speaker B: But, but my point is like, universal computation doesn't care about the needs of an organism, for example. Right. So every definition of intelligence of the million that are out there, there's something about learning in unpredictable environments, adapting to learn to do the thing that you need to do, solving the problem. Right. And these are all like problem solving things related to what you're, what you need to do. But universal computation doesn't, doesn't care about what you need to do. It's just a capacity to do anything, right? [00:41:26] Speaker A: Well, yeah, I mean, the question here is it's not about, okay, can I learn to universally compute the Question is, when I'm learning, what is learning about? It's about picking a function in the space of all possible functions. Okay? It's. It's about, you have a bunch of examples and you want to find the function that solves these examples. Um, now again, the question, the same question comes up. 
What's the space of learnable functions in your system? Is it the same space as all computable functions, or are you just leaving out a ton of functions? For example, I don't know if you're thinking about addition. Let's say you have a bunch of input and output test cases. You could solve that benchmark for addition with a lookup table. Okay? And if your learning algorithm, if your learning method is restricted to lookup tables, you're going to find a lookup table that's going to solve that benchmark, but you're not going to solve addition. If you want to solve addition, then I hope that the scope of functions that you're searching for, that you're learning in includes programs, and you might be able to stumble upon the program for addition. And that program can solve things that are not in that benchmark. It can generalize. Now, I'm not saying. I just want to be clear. I'm not saying that current methods are lookup tables, but that was just an example to illustrate the point that the space of functions that you're learning in really matters. [00:43:19] Speaker B: So shouldn't we be way better at math if we have universal computational abilities that is guiding our cognition? Sorry, it's a very naive, dumb question. [00:43:29] Speaker A: But I don't know. I mean, I don't know how to answer that. I mean, I guess we are good at math, right? I mean, there are people who are very good at math. There are examples of people, yes, but they're not running on magic, right? I mean, there has to be some kind of way. [00:43:45] Speaker B: Yeah, they are. Some. It's magic. No, but, yeah, okay, I understand that there are savants in many different areas. Maybe that's not the best example, but shouldn't we all be right? Or is the brain our neurons in our way, and if we could just get to the RNA computation, we'd all be. The brains are slowing our universal computers down, you know? [00:44:09] Speaker A: No, I mean, that's. I don't. I don't view it as like, okay, there's. 
There's the neural network and then there's the RNA, and these two things are like, very different things. That's not how I. How I would. [00:44:19] Speaker B: The neural network just won't listen to the RNA who's trying to tell it, you know? [00:44:25] Speaker A: Yeah, yeah. I mean, and, and, and related to that, there's, there's some. If anyone wants a primer for, for this, like this small field within neuroscience, I would really recommend Sam Gershman's paper. And the reason I brought that up right now is because it's an attempt to synthesize the view of synaptic based memory and molecular mechanisms for memory. In the second half of the paper, he lays out some model that would synthesize these views. How do you connect the idea that neurons are talking to each other through synapses with this idea that maybe memories are stored molecularly? The first part of that paper is the primer that I'm talking about because it has, it's, it's the best intro to this field that I know of. It covers a lot of the conceptual reasons and the empirical reasons why the synaptic story of memory doesn't really hold up. It also has a good, a good summary of something that happened back in the 1960s where there was a short period of time where a lot of people were working on what they called macromolecular engram theories where they thought that memories could be stored the same way that we have genetic information stored within molecules. Maybe memories are stored within macromolecules. And RNA was kind of one of the leading candidates for this. [00:46:14] Speaker B: Let me just define engram for those listening. It's basically a physical trace of memory in your brain, however that's instantiated. So some people think the engram is a certain set of cells that are associated with the memory. Some people think the engram is stored within the synapses. And then a growing number of people perhaps think the engram is laid out physically within these molecular structures. Macro or micro.
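[Picking back up the addition benchmark from a few minutes earlier: a toy contrast (my sketch, not anything from the conversation) between a "learner" restricted to lookup tables and one that found the program. Both pass the benchmark; only the program generalizes off it.]

```python
# Hypothetical benchmark of input/output test cases for addition.
benchmark = {(1, 2): 3, (2, 2): 4, (10, 5): 15}

# "Learner" restricted to lookup tables: memorize the benchmark.
table = dict(benchmark)
lookup_add = lambda a, b: table.get((a, b))   # None for anything off-benchmark

# "Learner" whose hypothesis space includes programs: found addition itself.
program_add = lambda a, b: a + b

# Both solve the benchmark perfectly...
assert all(lookup_add(a, b) == program_add(a, b) == out
           for (a, b), out in benchmark.items())
# ...but off the benchmark, only the program generalizes:
# lookup_add(7, 8) -> None, program_add(7, 8) -> 15
```

[The space of hypotheses you search in, not the benchmark score, is what determines generalization, which is the point being made about Turing-equivalent function spaces.]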
[00:46:40] Speaker A: Yeah, yeah. And let me just clarify something. Like, a lot of the times when I bring up the idea that molecules could be storing memories, one common response I hear is that of course molecules are storing memories. Everything is molecules. Synapses are also molecules. Like it's going to be molecular. But the real point is where is that information stored? And maybe a better, a better description of this would be an atomic theory of memory. How are genes stored? What's the mode of genetic information encoding? It's really about how atoms are arranged within molecules, not how molecules are arranged within the cell. Right. [00:47:34] Speaker B: You're taking a very reductionist approach. I mean, that was going to be my reaction to what you said about the common response was like, well, yeah, of course. And it's also in atoms and it's also in quarks, but you can do that all the way down. [00:47:47] Speaker A: But we don't say that about genes anymore. Right now we say genes are stored as sequences of nucleotides. I mean, there could be little, I don't know, tricks that organisms use to also, like, you know, carry information transgenerationally. Like, you know, there's epigenetics. There's also different ways that you can have inheritance of information across organisms. But the main way that we conceive of genes being stored is in the sequences of nucleotides. [00:48:28] Speaker B: So it's about what the right level of emergence and emergent properties is, the level that carries the most causal information about what we're talking about. Man, that was a mouthful. Sorry. [00:48:44] Speaker A: Yeah, no, I mean, yeah, that makes sense. There's a lot of parallels, and I talked to you about this when I met you a few months ago. There's a lot of parallels between how this issue is being treated now, the issue of memory engrams, how it's being treated now versus how it was treated, how genetic information was treated.
Before the discovery of DNA, people used to think that it's messy. It's like there's not going to be a clean story for it. It's in proteins. It's, you know, every cell has a different protein composition and proteins are rich in information because, I mean, they didn't use that word, information, but they're very rich. And you know, the protein composition of a cell of a turtle is going to be different from a human cell. And that's what leads to, you know, a human being formed versus a turtle being formed. [00:49:40] Speaker B: But then the central dogma came about. DNA, genes, RNA to proteins. And it turns out it is messy, just in a different way. [00:49:50] Speaker A: Well, I wouldn't say. Yeah, well, actually, I don't even know if it is messy still. The messiness, right? Yeah, the messiness that we see right now still might be a result of us not understanding the system correctly. [00:50:06] Speaker B: The way we need to do that is through universal computation. Right. Is that what you're going to. [00:50:10] Speaker A: Well, I think so. I mean there's a whole debate right now. Over the past 20 years there's been a debate over the non protein coding portion of the genome, the junk. Is it functional, is it not? Yeah, it used to be called junk. Nowadays nobody really calls it junk. But one end of the spectrum believes that it's not functional, most of it. And the other end of the spectrum thinks that most of it actually may be functional. And actually when I stumbled upon this literature, it was very exciting to read. It's one of the most heated debates that I know of that's out there in papers that you can read. [00:51:00] Speaker B: What, the functional, like the junk versus non junk? [00:51:04] Speaker A: Yes, yeah, yeah. Over the past 20 years, I guess the main proponent of the idea that the non coding DNA is functional, or one of the main proponents, would be John Mattick, and he was kind of, if you look up his publications, you can find the trace of that debate.
But the idea is that. Hey, okay, well there's a couple arguments here. So the people who say that most of the, most of the DNA is non functional, they usually rely on things like conservation. And if you look at, if you use conservation as a criterion for what's functional or not, you come up with like an upper bound of let's say 20% of our genome would be functional. [00:51:56] Speaker B: What do you mean conservation? [00:51:58] Speaker A: Sorry, it's what portion of the genome is conserved across species? [00:52:07] Speaker B: Oh, that kind of conservation. Gotcha. [00:52:09] Speaker A: Yeah. [00:52:09] Speaker B: So it stays the same. Stays the same. Is the same across species. [00:52:13] Speaker A: Yes, yes, exactly. [00:52:15] Speaker B: Yeah, sorry, I'm trying to just make sure. [00:52:17] Speaker A: No, no, yeah, that makes sense. Yeah. And so the other end of the debate, they would argue that no, conservation is not a good criterion. There's many other criteria that you can use as hints for functionality. One of the arguments that John Mattick actually has brought up is that you see that the non protein coding portion of the genome, the ratio of non protein coding to protein coding, increases as a function of organismal complexity. So you know, in single cells it's a, it's a lower proportion and it, and it just increases as you go into multicellularity. And, and the criticism towards this is that, well, what is, what is complexity exactly? How do you, how can you assign complexity to organisms? And that's a fair criticism. But what happens is if you sort animals, if you sort species based on this criterion of what's the proportion of non coding to coding, it just looks intuitively like it's increasing, increasing complexity. [00:53:31] Speaker B: Do you know if you do the same thing with brain, relative brain size, it's the same. [00:53:38] Speaker A: It's not, I think ants are like above us or something. Not in relative. [00:53:42] Speaker B: Yeah, okay.
On the logarithmic scale of brain complexity, size to body mass. [00:53:48] Speaker A: Yeah. [00:53:49] Speaker B: All right, all right. I'm not going to look it up at the moment, but yeah, but I. [00:53:54] Speaker A: Mean, yeah, I mean, I get the point here. It's like we're looking for like some sign, some indicator that puts us, puts humans on the top of the chart. And that's kind of a weird thing to do. And it's very human centric, the earth has to be the center of the universe kind of approach. But nevertheless, there is this problem of, okay, what is all this, what is all this non protein coding DNA doing? Is it just transcriptional noise? Because a lot of it's transcribed. That's basically what happened 20 years ago, is that we realized that these portions of the DNA that don't encode for proteins are being transcribed. There's a lot of specificity within the cell. Like you can see that a lot of them are localized in very specific ways and we don't know what they're doing. You can find correlations with certain traits and diseases. And then you would see that those non coding RNAs that are associated with a certain trait are actually expressed in tissues that are relevant to that trait. If it's some neurological problem, then it's also expressed a lot in neurons. There's a lot of little hints like that that say that there's a story about, about genetics that we don't understand. Getting back to your point about, okay, look, genetics was actually messy. The thing that wasn't messy was how proteins are encoded. There's a very clean story to that. There is, you know, there's a codon for every amino acid. There's a lookup table that this system uses and it does a very simple translation of RNA molecule strings to amino acid strings which become proteins. It's a, it's a, you know, a remarkably elegant and clean system to encode proteins.
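[The codon lookup table just described can be shown directly. The table below is a small excerpt of the standard genetic code (the full table has 64 entries), and the loop is a cartoon of the ribosome's simple mechanical job: read triplets in order until a stop codon.]

```python
CODON_TABLE = {   # excerpt of the standard genetic code
    "AUG": "Met",  # methionine; also the start codon
    "UUU": "Phe", "GGC": "Gly", "AAA": "Lys", "GCU": "Ala",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(rna):
    """Read codons left to right until a stop codon, as a ribosome would."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        aa = CODON_TABLE[rna[i:i + 3]]
        if aa == "STOP":
            break
        protein.append(aa)
    return protein

translate("AUGUUUGGCUAA")   # -> ['Met', 'Phe', 'Gly']
```

[The point of the sketch: the translation step really is just a fixed table applied triplet by triplet, which is what makes it so clean compared to everything around it.]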
Still, the story of, okay, how does this actually encode for an organism is messy, but maybe that's because we just don't understand the system well enough. In humans, less than 2% of our DNA ends up in messenger RNA, and most, actually half of messenger RNA, usually, on average, in humans, is untranslated. So it ends up being less than a percentage point of our DNA that encodes for proteins, that has sequences of nucleotides that encode for sequences of amino acids. Now right now the story is, okay, well, the rest is, there's a lot of regulation, and it's all like, it's all about how are these proteins, how is protein synthesis regulated across different cells. But you know, it seems to me that when you take a step back, you see that there are these molecules within cells that resemble strings of symbols. Okay, a string of symbols that come from a very small alphabet of four letters, and they also fold up into these tree like structures that, you know, kind of would be very useful for doing computational stuff. Could it be the case that these molecules are involved in something more than just regulating the synthesis of proteins across cells? Maybe something else is going on. Maybe some deeper explanation would actually make it, make it make sense. [00:57:35] Speaker B: Okay, hold off on that because I want to. Because you're what? I want to ask you how you even came to appreciate the combination of RNA and combining it with combinatory logic. However, let's. [00:57:50] Speaker A: Yeah. Combinatory logic. Yeah. [00:57:51] Speaker B: What am I saying? Am I. [00:57:53] Speaker A: It's. It's combinatory. Combinatorial. So there's two things. There's combinatorial logic and then there's combinatory logic. And combinatory logic is the one that. [00:58:05] Speaker B: What have I been saying? [00:58:07] Speaker A: No, you, you, you said, you said. You just pronounced it differently. You said combinatory. [00:58:12] Speaker B: Combinator.
[00:58:14] Speaker A: Combinatory, yeah. Combinatory. Yeah. [00:58:17] Speaker B: Geez. All right, I'm going to. [00:58:19] Speaker A: Combinatory logic. [00:58:19] Speaker B: Combinatory. I'm going to. Now I'll just have to go back and edit myself the whole. [00:58:23] Speaker A: Yeah, sorry. I think that the. [00:58:25] Speaker B: Combinatory. Combinatic. [00:58:27] Speaker A: Combinatory, yes, yes, that's right. The emphasis is on the first syllable. [00:58:31] Speaker B: Yeah, Aglo. Aglopore. [00:58:33] Speaker A: Yeah, exactly. Yeah. [00:58:36] Speaker B: Okay. This is staying here, by the way. [00:58:39] Speaker A: Oh, no. [00:58:40] Speaker B: Yeah, of course. Because it's just me pronouncing things like an idiot. Okay. But what I wanted to hang on to for just a minute, before I ask you about how you came to appreciate this or how you came to this idea, is the regulation story, right? What I've come to appreciate through works like Alicia Juarrero, Terrence Deacon, the autopoietic stuff, like, you know, Varela and Maturana, is that in life systems, the contextual things, the regulation things, are way more important than the thing that we think is doing the thing, right? So in, like, a water maze, the walls, or let's say a river, right? You don't have a river without the banks. Right. The river affects the banks and the banks affect the river. But when we talk about rivers, it's like the river is the thing, but the river's not the thing. It's the banks and the river and they're affecting each other. So there's top down causation, bottom up causation. And so, but the idea. So then I think. So Alicia wrote this book, I think it's called Context Changes Everything. And she has argued for this strongly, that these processes, like, are all affecting you. Like you have to appreciate the context within which a process is happening just as much as what you consider the process.
So then, and unfortunately, something like DNA to RNA to protein, that manufacturing system, if you want to call it that, could be like a huge bureaucracy where we have, like, all this super unfortunate regulation that seems somehow necessary for, like, a giant democracy to get nothing done, but maybe just, like, the very little that we can get done. I'm sorry, that's a bad analogy. But of course, the other analogy I would make is people who work on cognitive architectures have come to appreciate that, all right, you have a working memory module, you have a long term memory module, you have an executive module. Getting those things to work by themselves, not that hard. Getting them to work in concert, that's the really hard part. So it's how do you regulate, how do you make these things act together? And it turns out that the regulation part of it is a huge part of it. So I just wanted to linger on that for a second to say, well, yeah, I mean, there does need to be a lot of regulation. Maybe it's not all regulation. [01:01:25] Speaker A: Right, well, okay, I mean, now that you describe regulation like that, I think I would agree with all of what you said. I mean, but I was imagining that as computation, right? Like there's decisions that need to be made of how do I direct, you know, a protein to the membrane, the cell's function. Yes, exactly. A protein to the membrane, the cell's function. You know, where does the cell actually go in space, in the body plan? You know, what kind of, what kind of proteins do I need to. Well, so the issue is there's, there's this concept of gene regulatory networks. Okay, yeah. And that's kind of, I think, what people mean when they say gene regulation. If what you mean is something more broad, then I think I would agree with you. Because gene regulatory networks, at least the way that they're conceived right now, are as powerful as finite dimensional dynamical systems.
They're just the same issue that comes up with neural networks, comes up with gene regulatory networks, where you have this gene is inhibiting this gene and this other gene is, you know, promoting this other gene. And you know, you have a big, big network of gene interactions that define, okay, which, which genes are expressed and which are not. And it's very similar to a neural network kind of model of competition. If you think that's how it's being regulated. I think that's, I would take an issue with that because of the computational capacity of the system that you're describing. I think it needs to be. It's very likely that if biology can or has achieved universal computation at the molecular scale, it would use it for development, for implementing a body plan. [01:03:23] Speaker B: What about cognition? [01:03:25] Speaker A: Cognition, too? Yeah. I mean, the thing is, there's so many domains of life in which computation is not just useful, but essential. What is your blog called? [01:03:38] Speaker B: Life is Computation. [01:03:40] Speaker A: Yeah, that's the name of the blog. Life is Computation. [01:03:45] Speaker B: So there's this. We're in a very. Everything is computation in neuroscience. And I'm reacting generally over the past few years to that, over the course of a lot of conversations I've had on this podcast. Like, let's say it's true. Like, let's say RNA has this universal computational capacity. And man, I don't want to just blabber on here. I want to get to the story, but what if. I mean, if life is not computable, and I don't believe it is, because the universe is open, and so there are only solvable problems in closed domains. I'm not using rigorous mathematical terms here. [01:04:30] Speaker A: Yeah, I don't know if that's what people mean when they say non computable in classical theory of computation. [01:04:36] Speaker B: Okay, that's your. [01:04:37] Speaker A: The thing is. Okay, so. So you just said there's an open domain. 
Like it's an open system. It's not a closed system. [01:04:47] Speaker B: Yeah. [01:04:47] Speaker A: And in fact, that. That's. That's kind of the. That's how you get universal computation, a system that is able to recruit more dimensions to store its state. That's the key ingredient for arriving at universal computation. Like, if you think of a program. Yeah, go ahead. [01:05:08] Speaker B: No, I didn't mean to interrupt. I'll come back to it. Go ahead. [01:05:11] Speaker A: Okay, so if. Just to. Just to illustrate what I mean by that, if you think of a program that's running on a computer, it occupies a certain amount of memory, and it has the instructions that would enable it to expand in memory, to recruit more memory if it needs to. And it might run out of memory on your computer, which you can conceive of as a closed physical device. But that program isn't really. You can't describe it as a closed system because its progression in time requires it to interact with the environment and recruit more space to actually go on with the computation. And the reason I say that that's the key ingredient is that a lot of these abstract systems that stumble upon universal computation accidentally happen to have this ingredient. Like if you think of Conway's Game of Life, you can think of like some pattern of on and off cells. And it's, you know, it's always finite in size, but it can expand in the surrounding space. And if you're implementing it on a piece of paper, you might run out of space on your piece of paper, but you know that you failed to stick to the rules of the system at some point and you need to add more paper to it. You're describing a system that's essentially open. [01:06:40] Speaker B: So we have not that much time left. And I want to, because we haven't talked about combinatory logic yet really and its connection to RNA. So you have this problem. All right, let me just sum up here.
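[The Game-of-Life point about recruiting surrounding space can be made concrete. Storing live cells as a set over an unbounded plane (a sketch of mine, not from the episode) means a pattern is never told it has run out of paper; it simply claims new coordinates as needed, like a program recruiting memory.]

```python
from collections import Counter

def step(live):
    """One Game of Life generation over an unbounded plane of cells."""
    # Count how many live neighbors every candidate cell has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
# after 4 generations the glider has the same shape, shifted by (1, 1)
```

[There is no grid object anywhere, only the set of live cells, which is exactly the open-ended state the speaker is pointing at.]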
So you go through this disillusionment in graduate school, you stumble across Gallistel's book, you start thinking about computation, and you realize, well, maybe you return to the idea. Maybe I can think of the brain as a computing system or a computer. How did you come across the idea that RNA is universal, that it has universal computational capacity? And then how did you, how did you connect that with combinatory logic? [01:07:32] Speaker A: Yeah, so RNA was like kind of in the spotlight already just from like all these other people that are in the field that have put it forth as a candidate for storing memories or for computation. I mean it was in the spotlight for me like since grad school. [01:07:51] Speaker B: And that's because it's kind of stable enough that it could last, like not a protein that would get degraded over a day or whatever, and for other reasons. [01:08:01] Speaker A: Yeah, I mean we've learned that it's stable recently, I guess 10 years ago. I mean there's like, there's a paper that came out earlier this year that shows that you can have RNA strands in the nucleus that last for years, for the lifetime of the animal. But I can send you that paper if you want to link that in the show notes or whatever. But I mean a lot of people think of RNA as being short lived and transient. I think the reason why RNA, well, there are many reasons that it, that it has come up, but I think just, there's something very appealing about it being a string of symbols. But yeah, I mean, it was basically in the spot. I mean in the 1960s it was like the main candidate for these macromolecular theories. They would like, you know, they would claim that we purified RNA from a planarian and injected it to another one and it worked. And that's, this is related to your interview that you had with David Glanzman. He's kind of also landed on RNA methylation as a very promising candidate for memory, an epigenetic.
[01:09:26] Speaker B: Hang on, let me ask you. So was it also in the 60s, when did the idea arise that RNA may have preceded DNA as, like, the original life molecule? Right. So people used to think, well, DNA is the original molecule, and then RNA came out of that, and then, you know, to produce proteins. [01:09:46] Speaker A: Well, I don't know if. I don't know if there was a time where people thought DNA was the first molecule. I know that the idea that there was an RNA world, that there was a world where before DNA and proteins, there was RNA, that comes from the realization that RNA both has an encoding capacity, in the sense that you could have a reader like a ribosome actually read its content and translate it to something, but it also has an enzymatic capacity, so it can serve as an enzyme to chemical reactions. And so these two capacities, people would argue, are the main components, molecular components in life. And RNA has both of them, although not as efficiently. Like, proteins are way more catalytic and DNA is, like, way better at storing information. So then these two things evolve later. I think that's the main argument for the RNA world hypothesis. I don't know actually how seriously people take it today. [01:10:58] Speaker B: You know, what else? So DNA evolved for better storage, proteins evolved for better enzymatic activity, and brains evolved for better cognition. [01:11:08] Speaker A: Yeah, that evolved later, too. But as long as you're willing to concede that RNA had the computational capacity to begin with. [01:11:17] Speaker B: Yeah, capacity is great. It's just like whether. How. Whether it's implemented and stuff. So. Yeah, yeah, capacity. I'm all about capacity. Sure. [01:11:26] Speaker A: Yeah. So where was I? [01:11:29] Speaker B: So I interrupted you and asked about the RNA progenitor story with respect to DNA in the. Like, the RNA was the early storage thing. But you were about to.
So you were talking about in the 1960s, that was the molecule that people thought could be used, like, as a symbolic string. String of symbols. Right. [01:11:53] Speaker A: Yeah, I mean. I mean, a lot of people, I guess, based on just the discovery that genes are actually encoded in molecules, maybe that was an inspiration to the theory that cognitive memories might actually also be stored in molecules. And so some people thought it was proteins. In fact, the earliest proposals were in 1950, before the discovery of the double helical structure of DNA. And those were based on proteins. Those were in a time where. Yeah. [01:12:23] Speaker B: A time when people thought proteins were more stable. Is that what you're going to say? [01:12:26] Speaker A: Well, a time that people thought proteins were genes. The genes were proteins, actually. Okay, yeah, yeah. The majority. I mean, at least according to this book, The Eighth Day of Creation, until the discovery of the double helical structure of DNA, most biologists thought that genes were proteins. Okay, all right, yeah. But anyway, the point is that in that era, in the 1960s, there are like dozens of different papers that are working on this idea or doing different kinds of experiments on this idea of memories being encoded in molecules. And they used different approaches. Not all of it was like this crazy feeding planarians to other planarians that James McConnell is known for. People, for example, showed that you have changes in RNA composition with learning. And they would do experiments where, like, I don't know, you put a rat on a tightrope and then you would show that, okay, the ratio of RNA nucleotides changes in this brain region. And so, yeah, but the issue, I mean, that field kind of died out in the beginning of the 1970s. And I guess that one of the main critiques to that theory, to that.
To that subfield, was that: how do you know that the changes you're seeing are actually encoding memories? Like, you know, you could extract some kind of chemical from a trained animal and inject it into some other animal, but that could just be like a hormone for fear or something. Like, how do you know that that's actually encoding the content of the memories? And they really didn't have the tools at the time to study that question deeply. [01:14:37] Speaker B: Okay, so I remember now where we kind of were, because I was asking you how you came to the linkage between combinatory logic and the RNA story, and you started to talk about the 60s and how this is kind of an old story that went out of favor. So then how did. Yeah, go ahead. [01:14:56] Speaker A: Yeah, so RNA was kind of already in the spotlight for me, just because, like, there's many people that have brought it up as a good candidate. And ever since reading Randy Gallistel's book, I had been thinking about, okay, what would a computation system that's really, actually universal look like at the molecular level? Because you could imagine, like, okay, it's cool, RNAs are strings of symbols. Are you going to treat it like a Turing machine tape? Like, with a Turing machine that goes in and edits this symbol and moves to the next? I mean, that can't possibly exist in cells. Compare the ribosome, which does something very simple: it just translates every triplet of nucleotides to an amino acid. And that's huge. It's visible in electron microscopy. So trying to say that something like a Turing machine might exist isn't very plausible. So what could a computation system look like? And this has been on my mind ever since. And I think the connection between combinatory logic and RNA happened when I realized, well, two things, basically.
One was, like, I just learned that RNA has a secondary structure, meaning that, the same way two DNA strands can come together and fold and form these double helices, the same thing can happen within a single RNA molecule. An RNA molecule is a string of nucleotides, and you can have one segment of that strand base-pair with another segment of that strand. And that can happen, sort of fold. [01:17:04] Speaker B: On itself, and the pairs have to be the right pairs that would match. And it can do that. Sorry, I'm trying to be crystal clear. Yes, no, it can do that. Let's say it's like 100 bases long, and if, like, the first and last four bases are the ones that match, it would fold in on itself, and then you'd have, like, a big loop of all these bases that weren't matching, and then those four that just connected at the bottom, like a little tiny knot. [01:17:29] Speaker A: Yes, but usually there's a lot of matches. It's not just. I wanted to go simple. Yeah, yes, yeah, exactly. So, you know, people do this: you can study a certain RNA strand and study its secondary structure. And usually the maps look really pretty. They're very intricate. There are many layers of, essentially, parentheses. When people want to represent the secondary structure, they use parentheses, because if you think about it, one part of the strand coming to another part of the strand is matching a nucleotide to another nucleotide. Usually it turns out it has a parenthesis structure. Sometimes you get these things that they call pseudoknots, which kind of deviate from a balanced parenthesis structure. But usually it's a balanced parenthesis structure. [01:18:27] Speaker B: Sorry, this is an aside. But in a given, I don't know, 1000-nucleotide RNA sequence, there are going to be lots of places that could attach.
So what is the possible number of secondary structures that a piece that long could form? [01:18:53] Speaker A: Yeah, that's a great question. And the answer is: a lot. And in fact, we know that a lot of RNA strands have many different conformations that they can take. It's not always static. It's not always a single structure. [01:19:15] Speaker B: Are you saying a single thing can fold up one way and then, in a given solution, relax and then fold up a different way? [01:19:23] Speaker A: Yes. Yeah, absolutely. And in fact, that's how riboswitches work. There could be an element within an RNA strand that folds differently depending on temperature. And so that can be used as a sensor for temperature. But, yeah, okay, so that's a very important feature of RNA strands. And then, just out of curiosity. I don't know, actually, I can't remember exactly how I stumbled upon combinatory logic. [01:19:58] Speaker B: Did you see lambda calculus first? I mean, you probably knew about lambda calculus. [01:20:02] Speaker A: I think, yeah, I think I saw lambda calculus first. I don't even remember the first time I was confronted with it, but, like, I was trying to learn how it works, and then at some point, it just clicked. Like, I think I remember the moment that it clicked. [01:20:22] Speaker B: Oh, what was that? You gotta tell that story. What was that? [01:20:26] Speaker A: There's nothing special about it. I was just, like, sitting at my desk, and I was on a Wikipedia page, I think the Wikipedia page for combinatory logic. And then I was like, wait a minute. It's very similar to how we represent RNA strands. There are these parentheses that you use for representing a secondary structure. And you have a limited alphabet of combinators. Usually it's S and K, or B, C, K and W. And that's it. With that limited alphabet, you can express any computable function.
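[Editor's note: the balanced-parenthesis (dot-bracket) representation of RNA secondary structure that Hessam describes can be sketched in a few lines of Python. This is an illustrative sketch added for readers, not anything from the episode; the function name and example structure are made up.]

```python
def pair_map(dot_bracket):
    """Match the parentheses in a dot-bracket string.

    '(' and ')' mark base-paired positions, '.' marks unpaired ones.
    Returns a dict mapping each paired position to its partner.
    A simple stack suffices because, ignoring pseudoknots, the pairing
    is a balanced-parenthesis structure.
    """
    stack, pairs = [], {}
    for i, ch in enumerate(dot_bracket):
        if ch == '(':
            stack.append(i)
        elif ch == ')':
            j = stack.pop()          # the opening partner is always on top
            pairs[i], pairs[j] = j, i
    if stack:
        raise ValueError("unbalanced structure")
    return pairs

# A tiny hairpin: a 4-base-pair stem closing a loop, like the
# "first and last four match" example in the conversation.
hairpin = "((((....))))"
print(pair_map(hairpin)[0])   # position 0 pairs with position 11
```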
And the rules for running that function are very simple. They're very local. [01:21:09] Speaker B: Wait, how long was this moment? [01:21:13] Speaker A: Probably a minute. Well, yeah, I mean, I guess just noticing the parallels between the two. And immediately afterwards, I was sure that somebody had written about this, because it's so obvious. [01:21:29] Speaker B: Once you. [01:21:30] Speaker A: Yeah, I was sure somebody had written about it. There had to be some paper or something. And I kept on searching. I couldn't find anything, which kind of made me more and more excited. [01:21:42] Speaker B: Wait, okay. So I'm sorry, I'm interested in this. So you had this moment at the computer and you realized it, and then how soon? So that must have been like, oh, my God. And then immediately you're like, oh, well, this must have been done. That's kind of a downer on the moment, right? [01:22:04] Speaker A: I don't know. Yeah, I guess. I was preparing myself to be very disappointed that somebody else had brought this up. [01:22:11] Speaker B: That's right. You'd been in academia for a while, in experimental neuroscience. [01:22:17] Speaker A: Yeah, okay. [01:22:17] Speaker B: Yeah, it all makes sense. Yeah, sorry. [01:22:21] Speaker A: Yeah, I know. Yeah. And so then we had this journal club with my advisor here, Gaby Maimon. He's also, like, very interested in how RNA might be involved in computation. And he shares a lot of the same ideas with this growing group of people that think molecules might be involved in computation, more so than is appreciated. And so we had this journal club with two other neuroscientists, Abbas Rizvi and Jeremy Dittman. We were, like, reading papers and just thinking about how molecules could be involved in memory and computation.
And that's kind of where I first tested the waters with these ideas. And it was very critical in polishing the theory, to get the feedback in this group. It was really nice to have this intimate group of four people sitting around and critiquing: this part doesn't make sense, or. And a lot of the perspectives that I just described, for example, that the ribosome is a very impressive molecule, and it's doing something pretty simple, and it's large and you can see it. That's something I specifically learned from Jeremy, that understanding, that appreciation of the ribosome. And so, yeah, that's kind of where I had the opportunity to flesh out the details of this idea. [01:24:21] Speaker B: So I asked you about lambda calculus because I know that combinatory logic and lambda calculus share a lot of similarities. Right. But I've heard you mention that combinatory logic precedes Turing machines and lambda calculus. So what the hell? What is that story? [01:24:43] Speaker A: Yes. I love that you brought that up, because it's not appreciated that the first mathematical system, the first abstract system that humans came up with that had a universal computation capacity, was combinatory logic. It was discovered by this mathematician named Moses Schönfinkel. And that's kind of the only thing that I know he did. I don't know if he had any other contributions. And then he was just forgotten. And then Haskell Curry rediscovered it, and then realized, oh, somebody else had worked on it. And so he gave credit to Moses Schönfinkel. But I guess even this emphasizes the point. It was discovered twice independently, before we had any other system of computation, by only two humans. [01:25:36] Speaker B: And if we have this universal computation in us, shouldn't it have happened much more?
I'm going to keep coming back to this stupid question. [01:25:42] Speaker A: I don't know. Yeah, maybe in prehistoric times. [01:25:48] Speaker B: Okay, four humans, two prehistoric and two posthistoric. Okay, but so then what's the connection? Why is it special? [01:25:58] Speaker A: I think the point I'm trying to make about it being the first one is that it's simple. It's very simple. And it's also very beautiful. It's a functional programming language. There's nothing that's not a function. Everything is a function. They call them combinators. Everything's a combinator. The inputs of these combinators are other combinators; there are no primitive data types. [01:26:31] Speaker B: So it's functions of functions of functions that take in functions and spit out functions. [01:26:37] Speaker A: Exactly, yes. And every function takes a single function and spits out another function. The way that you can build a function that takes many inputs is to say, okay, let's say I want a function that does addition. Addition takes two inputs. So the way I can do that is I can define a function that takes a number and then spits out the function that adds that number to any new input it gets. That's a technique called currying, after Haskell Curry, who was the second person to invent combinatory logic. [01:27:22] Speaker B: There's no Schönfinkeling? Is that the. What's that? [01:27:26] Speaker A: No, no, I guess this technique was actually exclusively Curry's idea. I'm not sure about that, actually. I might be wrong about that. But anyway, so it's very simple, and it also maps very nicely onto RNA biology. Because if you want to implement something like this at the molecular level, the main challenge is parenthesis matching. [01:28:00] Speaker B: Regulation, in other words.
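[Editor's note: the currying idea Hessam describes, a two-argument addition built entirely from single-argument functions, can be sketched in Python. An illustrative sketch; the names are made up.]

```python
def add(x):
    """Curried addition: takes x, returns a function still waiting for y."""
    def add_x(y):
        return x + y
    return add_x

add3 = add(3)      # a function that adds 3 to whatever it gets
print(add3(4))     # 7
print(add(1)(2))   # 3: applying one argument at a time
```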
[01:28:04] Speaker A: Yeah, I guess. I mean, if we're broadening the definition of regulation even more now. [01:28:09] Speaker B: Okay. It's like housekeeping, right? I mean, counting parentheses doesn't sound. It's not the sexy computational thing. It's like, I gotta keep track of where I am in this nested series of functions. [01:28:23] Speaker A: Yes, yes. Yeah. And so you could do that explicitly. Like, if I'm actually evaluating a combinatory logic term on paper, that's probably what I would do. I would keep track of the depth of a parenthesis and just keep on going, and then use that technique to determine what's a single term, and then you could do stuff with that term. But in RNA, because of that intrinsic secondary structure that RNAs have, you don't need an explicit machine that goes in and does this parenthesis matching, because matched parentheses are already proximal in physical space. What that allows you to do is implement every single one of this handful of application rules in combinatory logic with local operations. [01:29:26] Speaker B: Of RNA, with local. [01:29:28] Speaker A: With local operations on a part of an RNA, some. [01:29:34] Speaker B: Of which is paired with itself in a certain section, and some of which is in this open loop. [01:29:41] Speaker A: Yes, exactly. So, I mean, I guess most people will be listening to this, so there's no illustration. But I'll try to describe it in the most illustrative way. [01:29:54] Speaker B: And for people who are watching, I'll just put up a still of you giving a talk with, like, the hairpin loop structures. [01:30:02] Speaker A: Right, sure. Yeah, maybe like the rules. [01:30:06] Speaker B: I'll also. [01:30:07] Speaker A: I don't have to be in the picture.
Just showing how one of the combinatory logic rules can be implemented. So these application rules in combinatory logic, they're just very simple operations, like swap two elements, or delete an element, or add parentheses around two elements that come afterwards, or something like that. And to implement that molecularly, you only need some kind of enzyme that detects the motif that encodes that combinator. And that's a key point: you only need a handful of primitives. So let's say we have three primitive combinators. So we have three different codes for these different combinators. So some enzyme should detect that, hey, we have a motif here that encodes one of these combinators. And that enzyme also carries with it the application rule for that combinator. [01:31:13] Speaker B: But I want to stop you and say, we haven't figured out how this could be implemented. I mean, these are completely hypothetical. [01:31:21] Speaker A: Yes, yes, completely hypothetical. But I do want to stress that the point of this model is that it doesn't require extraordinarily complex molecules, very different from what we already know to exist in cells. RNA strands are frequently modified after they're transcribed. The most commonly discussed modification is splicing. There's something called the spliceosome that goes in and selects segments within the RNA strand and excises them and then attaches the two loose ends. That's a way to delete certain parts within an RNA strand. These operations exist, and we know that they're within the reach of evolution and of molecular biology. And the point here is that I can imagine a system that's very simple, that doesn't require, like, huge molecules that do complex operations.
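[Editor's note: for readers who want to see the "swap, delete, add parentheses" application rules concretely, here is an illustrative Python sketch of head reduction for the B, C, K, W combinators mentioned earlier, with terms written as nested lists. Everything here is a made-up toy representation, not the molecular scheme itself.]

```python
# A term is an atom (string) or a list [head, arg1, arg2, ...]
# meaning left-associated application: head arg1 arg2 ...

def flatten(t):
    """Normalize a term: splice a list head into its parent and
    collapse singleton applications, e.g. [[f, x], y] -> [f, x, y]."""
    if isinstance(t, list):
        if isinstance(t[0], list):
            return flatten(t[0] + t[1:])
        if len(t) == 1:
            return t[0]
    return t

def step(t):
    """Apply one B/C/K/W rule at the head, if possible.
    Returns (new_term, changed)."""
    t = flatten(t)
    if not isinstance(t, list):
        return t, False
    h, args = t[0], t[1:]
    if h == 'B' and len(args) >= 3:   # B x y z -> x (y z)  (add parentheses)
        x, y, z = args[0], args[1], args[2]
        return flatten([x, [y, z]] + args[3:]), True
    if h == 'C' and len(args) >= 3:   # C x y z -> x z y    (swap)
        x, y, z = args[0], args[1], args[2]
        return flatten([x, z, y] + args[3:]), True
    if h == 'K' and len(args) >= 2:   # K x y -> x          (delete)
        return flatten([args[0]] + args[2:]), True
    if h == 'W' and len(args) >= 2:   # W x y -> x y y      (duplicate)
        return flatten([args[0], args[1], args[1]] + args[2:]), True
    return t, False

def run(t):
    """Reduce at the head until no rule applies."""
    changed = True
    while changed:
        t, changed = step(t)
    return t

print(run(['C', 'K', 'a', 'b']))   # C K a b -> K b a -> b
```

Each rule only rearranges whole arguments without ever looking inside them, which is the point Hessam makes about the enzyme not caring how many layers of nested parentheses a term contains.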
And a system like this could have gone undetected over the many decades of molecular biology. But, yeah, going back to how it would work: basically, every enzyme would execute cleavage and ligation operations. So it would cleave a certain part of the strand and ligate a different part of the strand. Cut it. [01:32:55] Speaker B: And put it back together. [01:32:56] Speaker A: Exactly. Cut it and put it back together. And the locations of the cuts and the connections are fixed relative to the motif, because by virtue of the secondary structure bringing the parentheses together, now you can say, okay, I only need to cut at this position and connect these two parts, without caring about what's inside the parentheses. Like, I don't care about how many layers of nested parentheses are inside. This enzyme just goes in and mindlessly does this single operation. And so through this system, this hypothetical system, to emphasize, it's just a theory. It's obviously going to be wrong in its details. [01:33:38] Speaker B: I mean, it's so cool, but it's wild. [01:33:41] Speaker A: Yeah. But I want to acknowledge that it's going to be wrong in its details. I had to come up with details that allow the system to work, details that I know I just came up with. But the point is that this is a proof of principle that you can imagine something like this happening at the molecular level, implementing a computation system. And at the same time, you have all this RNA and all this DNA that we don't know what it's doing. Like, we haven't attributed a function to most of the genome, at least in humans. And so it's just very. I don't know. It's a very intriguing, compelling problem. [01:34:29] Speaker B: It's a compelling problem, but it's also a compelling hypothetical solution. [01:34:35] Speaker A: Yes. Yeah.
Well, I would call it. I mean, it's compelling me, at least, towards a research direction. Because at the end of the day, I want to study things empirically. I don't expect anyone, myself included, to believe this theory until we find evidence for it. We have to do the science. And so it's a research direction that I'm arguing for, not a specific model. And the direction being: let's figure out if RNAs are being edited in ways that can implement computation. [01:35:18] Speaker B: So, first of all, you were disillusioned, and now you're hopeful. Would you call yourself hopeful? How would you describe your outlook? [01:35:28] Speaker A: I would describe it as passionate. I would say I'm very passionate about this problem. [01:35:34] Speaker B: Sure are. [01:35:35] Speaker A: I feel like it's extremely exciting. I mean, sometimes I forget, and then I remind myself of all of the hints that are obviously pointing at RNA. And it's also heartwarming, I guess, for lack of a better word, to know that there are also many other reputable scientists that take these ideas seriously, and that that circle is growing, and I hope it grows. I hope it doesn't end up being something like that subfield in the 1960s. And it really depends on us, on us trying to make the argument that this is a worthwhile research direction. I don't want everyone to be working on it. I don't want to pour, like, 50% of all research budgets towards it. But at least in the spirit of diverse approaches, I feel like we should be able to maintain a consistent research direction along the lines of molecular computation and memory. [01:36:48] Speaker B: You just need a little drip out of the fire hose of funding. [01:36:54] Speaker A: Basically. Yeah.
[01:36:55] Speaker B: So those reputable scientists that you mentioned often don't feel fully respected in the field, and often get labeled, I don't want to say crazy, but, you know, out of the mainstream, out of the dogma. [01:37:12] Speaker A: Yeah. [01:37:13] Speaker B: Do you ever think, am I crazy? [01:37:18] Speaker A: I don't know. Actually, I might regret saying this, but sometimes I think, do people look at me the same way that I look at Roger Penrose? About, like, yes, microtubules. And like. [01:37:33] Speaker B: That's more Stuart Hameroff, but Penrose is on board with that. [01:37:36] Speaker A: Yeah, yeah. [01:37:38] Speaker B: Or the way I look at that. Right. So I look at the microtubule thing and I'm like, exactly, yes. [01:37:44] Speaker A: Yeah, I don't know. But do you feel crazy? [01:37:49] Speaker B: Not about how you think other people view you. Do you ever think, like, oh, am I crazy? [01:37:55] Speaker A: I don't think so. I think the arguments and the evidence that I'm resting on are very rigorous. And again, I want to be clear: the threshold of evidence you need to believe a theory is much higher than the threshold of evidence you need to pursue a theory. And I think it meets the latter threshold. [01:38:31] Speaker B: So I've had lots of conversations with people. Right now, Àlex Gómez-Marín comes to mind, because he's studying things that are sort of outside the norms of what the scientific community, especially in neuroscience, would consider okay to study, or to get funding for, or something. And you mentioned these people in the 1960s, right? This is not a new idea. As György Buzsáki says, there are no new ideas under the sun. These are all, like, recycled things. He says it in a different way, about how brains do computations, which.
He doesn't mention RNA, but that's beside the point. So you mentioned these people from the 60s. The idea of RNA kind of came up and then it died down. So what I wanted to ask you is, have you reflected on what this tells you about how science progresses? Because most people get into, let's say, experimental neuroscience and Drosophila, and then that's their career, just studying this fairly narrow space of problems. But now you've done that, and you've learned about an alternative framework for universal computation, which is one of your interests. And then you realized, well, this was not new. This kind of ebbed and flowed already. How does this make you feel about, like, the history and progress of science in general? [01:40:03] Speaker A: It makes me realize that we're in a fragile place. It makes me realize that it may very well be that people look back at this era as a time when some idea just kind of reemerged and died out again. Now, that's fine if this idea is wrong. But if, 100 years down the road, they realize this was all correct, but it kept on resurfacing and dying out, that's a possibility I kind of want to prevent. I think there's no inevitability when it comes to these sociologically decided things. And that's why I'm trying to get people, if they don't want to work on it, to at least agree that it's worth putting some resources into. I just want to say also that in the 1960s era, the ideas were mostly around memory, around molecules encoding memory. And I think this is not really the same thing. It's about computation, and it's about actually bringing the insights from the theory of computation to these strings of symbols. It's obviously related to memory. Memory comes up in computer science all the time. But it's a different kind of perspective.
And also, just, all of the conceptual arguments are much more mature now. If you read Gallistel's work, the arguments are much more solid and convincing than anything that anyone wrote in the 1960s. And we just have better tools. It's a very different era, and it's a different idea, but it's very related. [01:42:16] Speaker B: All right, sorry, I have to ask this. So we've talked a lot about molecules and biology, and you have a computer science, computer engineering background, and then you got into experimental neuroscience. And a lot of people who start off in neuroscience have this computational bent. Right. But then a lot of people who are in cell biology, for example, don't have that. That's where, like, the stamp collecting began, right, and speciation and things like that. Sorry, I'm trying not to bias your answer. How has this altered your appreciation, or lack of appreciation, for the molecular world, relative to your kind of computational mindset? I'm just leading you into the answer, sorry, but I wanted your reflections on it. [01:43:12] Speaker A: Yeah, I guess, I mean, in retrospect, I kind of wish that I had studied molecular biology, just because that seems like a very relevant field now for the problem I'm trying to work on. And just to say, it's not that molecular biologists or geneticists find these computational ideas completely foreign. John Mattick, for example, explicitly argues that these non-coding RNAs might be implementing a digital computation device. And there are people with whom these ideas definitely resonate. I guess it's just that right now, neither in neuroscience nor in molecular biology is anyone really trying to take universal computation seriously. Yeah. [01:44:31] Speaker B: Are you having fun?
[01:44:32] Speaker A: I don't know if that was the correct. Maybe I misunderstood your question. [01:44:37] Speaker B: No, well, my question. So I've come to have a little more awe at just how goddamn complex everything is. Right? And, like, the world of the cell, that's a whole world. You know, the brain is the most complex thing in the universe. Although Terry Sejnowski pointed out to me that his wife said to him, well, actually, two brains are more complex than one brain. So, two people talking. [01:45:06] Speaker A: Right? [01:45:07] Speaker B: Which is true. [01:45:08] Speaker A: Yeah, right. [01:45:11] Speaker B: But these stories, the story that you're working on, if it turns out to have validity in one form or another, I mean, just the capacity, the astounding results of evolution that continue on, it's just amazing to me. And so that wet biology part. So when I got into neuroscience, it was all computation, spikes, that's how they're doing it, information, blah, blah, blah. But then you look in the cell, and it's like, man, that is messy and hot. But it's doing just as awesome a job, whatever the job it has, you know? I mean, it's just amazing that anything works in biology. [01:45:53] Speaker A: Well, yeah. I mean, it's amazing until you understand how it works. [01:46:01] Speaker B: Is that what you're saying? [01:46:02] Speaker A: Well, then you've explained it. Like, until you explain it, it's a mystery. It's like, how the hell is this single cell creating a human fetus with all the intricate, you know, body parts? It looks like a miracle. Now, if somebody writes a program that draws something on the screen, I wouldn't call that a miracle, because I know how programs work. I mean, like, some complex pattern that looks really cool and is as complex as a human fetus or something. I don't know.
Maybe that was a bad example. [01:46:40] Speaker B: Well, no, but. Well, this gets back. Oh, we're out of time. Another time, perhaps. [01:46:47] Speaker A: Okay. Yeah, sure. [01:46:49] Speaker B: Because I've taken you over. I've gone over. Good to see you again. Thanks for doing this. It looks like you're having fun. You having fun? [01:46:56] Speaker A: Yes. Yeah, this was extremely fun. [01:47:00] Speaker B: No, no, not this conversation. I mean, in these research questions. [01:47:03] Speaker A: In life. Yeah. Yeah. [01:47:04] Speaker B: Hey, am I fun? Am I fun? [01:47:09] Speaker A: Yeah. No, I mean, you are definitely fun, Paul. [01:47:11] Speaker B: Yeah, thanks, but I meant, you know, it seems like you're having fun. That's a great place to be at. [01:47:18] Speaker A: Yes. As long as I know that I can survive in academia, that would be the condition that I would add to that. Yeah, yeah. [01:47:32] Speaker B: Okay. [01:47:32] Speaker A: There's that question, too. Yeah. [01:47:34] Speaker B: I wish you survival. If we are on a boat again and you go overboard, I will throw you the life vest or something like that. [01:47:42] Speaker A: Yes. Thank you. I appreciate that. [01:47:44] Speaker B: Okay, thanks. [01:47:46] Speaker A: Thank you so much, Paul. Thank you for your time. This was really fun. Take care. [01:47:55] Speaker B: Brain Inspired is powered by The Transmitter, an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. If you value Brain Inspired, support it through Patreon to access full-length episodes, join our Discord community and even influence who I invite to the podcast.
Go to braininspired.co to learn more. The music you're hearing is Little Wing, performed by Kyle Donovan. Thank you for your support. See you next time.
