BI 126 Randy Gallistel: Where Is the Engram?

Brain Inspired
January 31, 2022 | 01:19:57

Show Notes

Support the show to get full episodes and join the Discord community.

Randy and I discuss his long-standing interest in how the brain stores the information it computes with. That is, where is the engram, the physical trace of memory in the brain? Modern neuroscience is dominated by the view that memories are stored among synaptic connections in populations of neurons. Randy believes a more reasonable and reliable way to store abstract symbols, like numbers, is to write them into a code within individual neurons. Thus, the spiking code, whatever it is, functions to write and read memories into and out of intracellular substrates, like polynucleotides (e.g., DNA, RNA). He lays out his case in detail in his book with Adam King, Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience. We also talk about research and theoretical work since then that supports his views.

Transcript

Randy    00:00:03    Usually when I ask neuroscientists how you encode a number, either in a synapse or in however many synapses they think might be necessary, that's a conversation stopper. All I get is hand waves, you know? Well, you see, there are lots of synapses and it's a pattern, it's an ensemble. Well, could you say something about the pattern? I mean, how does the pattern for eleven differ from the pattern for three, for example? Can you shed a little light on that? They do not want to answer that question. The engram is the low-hanging fruit, because it has a really simple job: to store the information, just like DNA's job is to store the information.

Paul    00:00:52    What is the role of synaptic plasticity?  

Randy    00:00:57    I honestly have no idea, since I literally believe that an associative bond never formed in the brain of any animal, and since plastic synapses are transparently conceived of as associative bonds, right? I certainly don't think that's what they are. Could they play a role in the computations carried out on signals? Sure.

Speaker 3    00:01:26    This is Brain Inspired.

Paul    00:01:40    Hey everyone, it's Paul. Engram. It's a term coined by Richard Semon in the early 1900s, and it refers to the physical substrate that stores the information that makes up our memories; in other words, the trace of our memories. We still don't have a definitive answer to the question of how our brains store memories, of what makes up the engram. Many neuroscientists would say a given memory resides in a specific pattern of neurons and the activity of those neurons, and that the formation of new memories and changes in existing memories, that is, learning, depends on changes in the connections between neurons: synaptic plasticity. And of course, training deep learning artificial networks is fueled by adjusting the weights between their units to learn tasks. But not everyone agrees with this story that memories are somehow stored in neural connectivity patterns and the activity of the neurons in those patterns. As Tomas Ryan puts it (and Tomas will be on my next episode):

Paul    00:02:44    At what level does an engram lie? Is an engram in the cell, or is a cell in the engram? Randy Gallistel is my guest today. He's a distinguished professor emeritus at Rutgers, and he's been at this for over 60 years. And he's been arguing much of those 60 years that the engram must lie within the cell, not that a cell is in the engram. His argument, which we'll hear him flesh out, is that brains are computational organs, and to compute you need symbols, namely numbers. And Randy thinks the only reliable way to store numbers over long periods of time, which is necessary, and to be able to read from those numbers and write new numbers, is to use subcellular molecules like DNA or RNA or something similar. He also details his arguments in a great book, Memory and the Computational Brain, with Adam King, which was published over 10 years ago.

Paul    00:03:41    I recommend that book. I have distinct episodic memories of reading that book in my office in Nashville, for example, and I've gone back to it multiple times since then. It goes over the fundamentals of information theory and uses examples from animal behavior, like navigation and foraging, to argue his case. So today we talk about some of those ideas, some of the evidence to support those ideas, and a host of other bells and whistles, including his long, successful career studying the many abstract processes underlying our learning, memory, and behavior. You can find show notes at braininspired.co/podcast/126. On the website you can also choose to support the podcast through Patreon and join our Brain Inspired Discord community if you want, and get access to all the full episodes I publish through Patreon, or just throw a couple dollars my way each month to express your appreciation. I appreciate you listening. I hope this podcast is enriching your minds and bringing you some joy. Here's Randy. Randy, you're 80. You just told me you're 80 years old. Well, when did you turn 80?

Randy    00:04:53    Back in May.

Paul    00:04:55    Okay. Well, happy belated 80th. So I know that you have been interested in memory since the 1960s. We'll get to the big idea here in a moment, but at what point in your career did you start questioning the typical neuroscience story about memory?

Randy    00:05:20    Way back in the sixties, when I was an undergraduate in Tony Deutsch's lab and I was deciding that I wasn't going to be a social psychologist; I was going to be a physiological psychologist, as we called them in those days. Now we call them behavioral neuroscientists. And I really became an apostate while running my first experiment, which was a runway experiment with rats. I would watch them, and just watching them I became absolutely persuaded that they knew what they were doing. It wasn't habits. I had already become enamored of Hull's vision of a mathematically rigorous theory of mind and brain computation, what we would now call computational neuroscience. But I had already become an apostate from the rest of his doctrine, because with Hull it was all habits. And of course there are many computational neuroscientists for whom that's still true. But that's what I meant when I said a moment ago, before we were recording, that nothing has changed in the 60 years. I go to meetings now and I listen to some of the talks, and I think, this is the same shit I was listening to in 19…

Paul    00:06:49    Well, so, you know, one of the things that you talk about in your book, Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience, is that there is this large gap between cognitive science and neuroscience. And I heard you talk recently, and you've written about this as well, that even back then (that was 2009, 2010, when that book came out) computational neuroscience was still a small swath of neuroscience writ large. Right? But that's changed, hasn't it? Computational neuroscience, to me, seems like the majority of neuroscience now. What's your view on that? Has computational neuroscience come along?

Randy    00:07:33    Well, in terms of the number and quality of people doing it, yes. I certainly don't see it as dominating neuroscience, though. I mean, you go to the annual meeting of the Society for Neuroscience, there are 30,000 people there, right? There are two poster sessions every day, and the poster sessions are so big that even if you tried, you couldn't go by all the posters. And computational neuroscience is kind of small in that big picture. Also, when I think about computational neuroscience, my world view at least was dominated by vision people back in the day, and it still is; they've been very computational now for decades. In fact, there's a fascinating book by Hubel and Wiesel in which they reproduce their papers. It was clearly a project of David Hubel; he reproduces 25 of their classic papers, and there are introductions and epilogues to each paper by Hubel, and he repeatedly rants against the mathematicians, you know, against the fact that all the engineers and mathematicians have now come into vision. Because, like so many of the early people, he really didn't know much math.

Randy    00:09:11    And these days you cannot do cutting-edge vision without a fairly serious mathematics education, right? But that was already true 30 years ago. So I think what you're reacting to is that now, of course, there are many people doing computational neuroscience and focusing on learning and memory, which did not use to be true. I mean, those fields used to be completely non-mathematical. I've had more than one colleague and friend tell me they went into this business precisely because there was no math in it. Right.

Paul    00:09:50    That's right. Yeah. Well, it seems like these days, and again, this is my own bias, because I learned computational neuroscience through my career, my short academic career. Going in, I had some mathematics background, but I didn't have modeling background; I didn't have a good footing in the computational world. So I kind of learned that through my training. But didn't you kind of apply yourself and learn some necessary mathematics a little bit later in your career?

Randy    00:10:22    Oh yeah, for sure. I've been learning various bits of mathematics throughout the last 60 years. For example, I had the calculus as an undergraduate, but I didn't have linear algebra, and I taught the undergraduate linear algebra course at Irvine, after I was already a tenured associate professor, during my first sabbatical, when I was working with Duncan Luce and also studying linear systems theory, which I basically taught myself. Of course, Duncan was two orders of magnitude better a mathematician than I ever imagined I would ever meet, but he was incredibly good at explaining things. And I was teaching myself by reading the textbooks on linear systems theory, and there was stuff, for example, I remember I could not wrap my mind around the convolution integral. So I said, Duncan, can you explain what convolution is? And he sat me down, and I remember that a half hour later I absolutely understood what convolution was.

Paul    00:11:36    And that was, did he use a blackboard or did he use PowerPoint? I'm just kidding.

Randy    00:11:40    It was basically just verbal, although, this was a long time ago we're talking about, there may have been some recourse to the blackboard. But mostly, well, anyway, somehow he found examples that made it clear, and then I was able to use it. And that was satisfying.
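
For listeners who, like Randy then, haven't met it: the convolution integral, in the standard form used in linear systems theory (where a linear system's output is its input signal convolved with the system's impulse response), is

$$(f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau$$

That is, at each time t you slide one function past the other, multiply, and integrate.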

Paul    00:11:58    If you had to go back, would you enter by studying mathematics first? I ask because you have a deep knowledge of the behavior surrounding learning and memory, which you also had to have to get to where you are.

Randy    00:12:16    Yeah, sure. Well, first of all, that was what I took courses in. And second of all, that's what I taught for 50 years, right? The behavior, I mean. The more mathematical treatment I rarely taught at the undergraduate level, because it would take a very special undergraduate seminar to do it. I did teach it at the graduate level. And as every teacher knows, you don't really understand the subject until you've tried to teach it. You sometimes get the experience where you're busy explaining, this happened to me even when I was teaching introductory psychology, and halfway through an explanation, all of a sudden a little voice says, you know, what you're saying doesn't make sense to you.

Paul    00:13:13    That’s true. You really find out what you don’t know.  

Randy    00:13:16    Oh boy, this argument has just gone off the tracks.  

Paul    00:13:23    Well, this idea of the brain as a computing device, among other things, has dominated your thoughts for a few decades now, right?

Randy    00:13:37    Since way back.

Paul    00:13:39    Way back. Yeah.  

Randy    00:13:40    So I was in graduate school at Yale, a very behaviorist school, in Neal Miller's lab; you know, he was Hull's most prominent student. But as I said, I had already become a heretic as an undergraduate, so I wasn't buying it, nor was I buying it when I took the advanced course in learning from Allan Wagner. Meanwhile, I was building special-purpose computers to run the experiments I was running, and I was reading the theory of computation and books on how computers worked and so on. And Chomsky was coming along. I went to a talk by this guy I had barely heard of, Noam Chomsky; he came to speak at Yale. And I had just been reading the stuff that Skinner and Osgood had then written on language.

Randy    00:14:40    I didn't know anything about language, but I thought, this is rubbish. And so I went to hear this talk by Chomsky, and I was an instant convert: okay, this isn't rubbish. So I embraced the computational theory of mind. And I've thought since those days, and even in these days, that most neuroscientists pay lip service to it, at least. But many of them would immediately add: yes, the brain computes, but it doesn't compute the way a computer computes. That's the story. Having studied how computers compute, and I've programmed all the way down to the machine level, so I know what goes on under the hood and so on, I've always thought, well, wait a second, there isn't any other way to compute. I mean, tell me how it is you compute, but not the way computers do it. Right? I thought Turing settled that.

Paul    00:15:47    Well, so I had a Brain Inspired listener question about Chomsky's influence on you. So really, you remember going to a talk and having that sort of solidify your approach?

Randy    00:15:59    Oh yeah. I remember being very impressed, and then I read his, well, it didn't come out until later, but when it came out, I read his Reflections on Language. Also, at Penn, cognitive science was very much a happening thing, and I had colleagues like Lila Gleitman and Henry Gleitman and Duncan Luce, so I was strongly influenced by them. And Dick Neisser was on sabbatical there the second year I was an assistant professor. So I was influenced by all of those people, and all of those people were influenced by Chomsky. I mean, Chomskyan thinking sort of ran through the way we all thought. There's a kind of interesting story about that. Some years later, after I'd been publishing a bunch of stuff, quite a number of years later, Noam, whom I'd met once or twice and with whom I've often corresponded subsequently, wrote me a very polite letter. A letter; I think this was before email. He was gently complaining that I was channeling him without ever citing him. And I was very embarrassed, and I thought, you know, he's absolutely right. So I wrote back apologizing and saying, look, you're so much a part of the intellectual milieu in which I swim, it just didn't occur to me to acknowledge or even recognize my intellectual debts. Anyway.

Paul    00:17:37    Interesting. So, okay, maybe we can return to Chomsky later. But I know you wrote a manuscript in 2006, I believe, where you acknowledged Reflections on Language and how it also influenced you. But I assume you got the letter before 2006, because...

Randy    00:17:53    Oh yeah, for sure. It was a long time ago.

Paul    00:17:58    So, in Memory and the Computational Brain, of course, you detail your ideas, but you've also continued writing. There's a recent 2017 piece, "The Coding Question," where you revisit these ideas, and you've continued to give talks about them. So maybe just in the broadest strokes, could you summarize the idea and your position, and then we can go through some of the details as needed?

Randy    00:18:32    So, computation is operations on symbols. Before the emergence of computing machines, symbols and representations, all those things were regarded as hand waves, right? But with computing machines, when someone said, well, what do you mean by "symbol"? You see this bit register? You see that pattern of ones and zeros that's been put into those switches? That's the number eight. That's what I mean by a symbol, right? There's nothing even faintly mystical about it. It's fundamental. In this sense, symbols are the stuff of computation, where I'm using "stuff" in the physical sense, right? They're the material realization upon which computational operations operate. And once I got into information theory, I realized, yeah, right, there's an even better way of putting it, and this became apparent in the book with Adam King: these symbols carry information forward in time, in Shannon's sense of the term.

Randy    00:19:51    So you can quantify it, right? You can say, this physically realized thing is carrying this amount of information. So you could wave aside all the fears about dualism and so on that tormented the behaviorists; they were all terrified by the specter of dualism. And as far as I'm concerned, the computers just put paid to those worries, right? We had a completely physical theory. I thought then, and still think, that it gave you a viable theory of mind. When I was at Stanford and Yale, in the behaviorist age, if you said, well, the rat expects to find food at the end of the runway, you could see them thinking, well, I don't think he maybe should have been admitted, somebody who is so soft-headed as to talk about expectation.

Paul    00:20:55    Because it was related to theory of mind, or  

Randy    00:20:59    Because before the appearance of computers, I mean, Skinner denounced expectations in the most uncompromising terms, right? You couldn't see them, you couldn't feel them; they had no business in science. And of course, as soon as you began programming computers, you would set up one number that was going to be compared to another number. So then I could just turn to them and say, hey, look, here's my program. It runs on that machine. I don't think there's a ghost in that little computer. This number I built in is what it expects, and this is the operation by which it compares another number to it, to decide whether what it expects was actually the case. End of story; get off my back.
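
A minimal sketch of the point Randy is making, in Python; the scenario and names are hypothetical illustrations, not anyone's actual model:

```python
# An "expectation" in a program is just a stored number plus a
# comparison operation. Nothing ghostly required.
EXPECTED_PELLETS = 3  # the stored number: what the system "expects"

def check_expectation(observed_pellets: int) -> int:
    """Compare the observed value to the stored expectation.
    Returns the discrepancy (a prediction error): zero when the
    expectation was met, nonzero otherwise."""
    return observed_pellets - EXPECTED_PELLETS

print(check_expectation(3))  # 0  -> expectation met
print(check_expectation(1))  # -2 -> fewer than expected
```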

Paul    00:21:53    Yeah. But is that a redefinition of expectation over the years toward something more... because, you know, the word "expect" conjures a notion of someone having a subjective feeling of expectation, right? But now when someone says "expect," at least in the cognitive science and computational neuroscience world, all you think of is a predictive-processing, numerical, abstract process.

Randy    00:22:19    Sure. These days, when everybody's talking about prediction error, they're taking for granted that there's an expectation in the terms in which I'm talking about it. I never worried about these phenomenological things, right? Like, what does an expectation feel like? Not the kind of question I'm interested in. Why not? Because I don't think it's possible to get hold of it in a strong form, for just the reasons you were pointing out. That is, all I need for expectation is what I just described, right? It's perfectly clear and there's no problem with it. And now that we have computing machines, we see this going on all the time. When people ask, well, does the computer feel the way I feel when I have an expectation? I think, I don't know, and I don't care. It's not the kind of question I'm interested in. In fact, if you notice, what I've worked on almost entirely, particularly in recent years, the last few decades, is what I call the psychophysics of the abstract quantities: distance, duration, probability, numerosity, and so on. Quantities that are a fundamental part of our experience, but they have no qualia. Right? I mean...

Paul    00:23:45    What you said, quality. Yeah. Oh,  

Randy    00:23:48    Well, I said it precisely to say that if you work on those things, you don’t worry about quality because they have no quality. If people say, what is the duration feel like? Right. So all the philosophers that are beating themselves up about, uh, you know, what’s it like to be a bat. Um, and they’re all worried about quality as well. Well, you just, isn’t something my worry about, uh, because first of all, I think the quality of are the things that have quality are a relatively minor interests. If you want to know what behavior is founded on is founded on the instructions. I was just talking about the probabilities, the numbers, the durations, the directions, the distances, all, all these abstractions. There are what drive behavior all the way down to insects, right? As you probably know, I’m a huge fan of the insect navigation better, right?  

Paul    00:24:49    You liked the bees, you liked the ants. He liked the, uh, the  

Randy    00:24:52    Butterflies, the beetle the butterfly had done because the dung beetles walking backwards with their ball though, walking home backwards,  

Paul    00:25:04    Maybe the central argument or one of the central arguments, is that the story in neuroscience that the numbers and the numerical abstract symbols, I should just say, symbols are encoded in the synopsis, right? In the connections between neurons, among populations of neurons. But you have a hard time believing that that could be the case.  

Randy    00:25:27    Well, actually, I, usually when I ask neuroscience is how you code a number either in ACE and naps or however many synapses they think might be necessary. That’s a conversation stopper. Um, I don’t know if he ever viewed my YouTube of my talk at Harvard where, uh, John Lisman was discussing. And, uh, I posed that question at the end of my talk St. John, when you get up here, you’ll tell us how you store a number in a scenario. And he got out and gave a lengthy discussion in which he never brought that topic up. This was a very unusual in that I got a rebuttal. I would get another chance to speak. And I said, John, I’m going to give you another chance.  

Randy    00:26:20    How do you store a number in the center has come on, John and the audience began to laugh and he stood up and he would not, he would not answer the question. Um, and I had a somewhat similar experience with Jean-Pierre Sean ju much more recently. Uh, in fact, the question made him so angry that he wouldn’t allow the debate that we uploaded to. It’s going to say, I didn’t see that one. So, and I’ve gone so far, often in my thoughts, I say, come on guys, I can offer you two alternatives. Uh, I mean, it’s not as if it’s impossible to think of an answer or what I just sin. And I often proceed to say, well, look, the synopsis usually conceptualized by computational neuroscience is a real value of variable and distance direction. Probability. They’re all real valued variables, right? So you can always represent a real value variable by a real value variable, right?  

Randy    00:27:21    So we could say, well, if the synopsis this, if the weight is this big, then the distance is that far. Right? And if the way there’s this big, you want to go there? I found practically no one wants to go there. Oh, you don’t want to go there. Here’s a radically different alternative. Supposedly have a bank of the people who talk about the synaptic plasticity are very vague about how many states are synapse can assume, but one school of thought thinks they’re binary. All right, fine. There. I like that. That’s a switch. Okay. So we’ll have an array of binary synopses, and we throw this maps to this state and this snaps to the zero state. And now we’ve got something just like a computer registered. You liked that story. No, most people don’t like that story. All right, we’ll watch your story. Uh, and at that point, all I get is hand ways.  

Randy    00:28:29    You know? Well, you see, there are lots of synapses and it’s a pattern that’s in absence. Well, could you say something about the pattern? I mean, how does the pattern for 11 different for the pattern from three, for example, could you shed a little light on that? People do not want to answer that question because the answer to that question is to admit that there are symbols in the brain. And even to this day, many people do not want to go there. And what’s your answer? My answer is that isn’t in the synopsis. I mean, I point out that there are several labs around the world that are busy studying how to use bacterial DNA as the memory elements in a conventional computer, right? Any engineer, anybody familiar with the computing machines that actually work, uh, and that we know how they work. Once you show them a polynucleotide and explain that any nuclear that had can be adjacent to any other nucleotide, any engineer worth of says Wolf’s could store like nobody’s business. In fact, one of the people who introduced me in that talk, I gave a couple of years ago in the introduction, showed a very grainy video of a running horse where the video, the entire video had been passed through the bacterial, just to drive home the fact that, that if you’re looking for a place to store numbers, well, uh,  

Paul    00:30:14    Well we know, yeah. We know DNA stores, the genetic code. Um, but there are other possibilities as well. I’m wondering what your current, so DNA is one possibility, right? Where a code could be stored intracellularly and to you, um, the key, I don’t know, I don’t know what your current thoughts on this, because it used to be that, uh, you didn’t know, um, that there were, you know, a handful of intracellular mechanisms whereby you might store these things, proteins degrade a little too fast, right. But then there are, polymerases like RNA, uh, could be one of the, uh, substrate DNA could be a substrate, but as DNA, fast enough, what’s your current thinking on what might be the, uh, substrate?  

Randy    00:30:58    Well, my I’m still sticking with polynucleotide, so I lean much more strongly to RNA than to DNA, probably complex with a protein destabilize. It, my thinking has taken a huge boost lately from a wonderful paper, by a young guy in Gabby Nayman’s lab on the Rockefeller named the pessimist Dean, uh, flooding poor, but it’s just appeared in the journal of theoretical biology in the last couple of weeks. Uh, and, uh, he come he’s in a stylish guy because he, he has a truly deep knowledge of, um, theoretical computer science, much deeper than mine. I mean, he really knows the Lambda calculus, right. Whereas for me, it’s just kind of a name. Um, but at the same time, he really, he has a much deeper knowledge of RNA biology than I do. But the most astonishing thing is that they, I mean, those two things are about as far apart, as conceptually, as you can readily imagine.  

Randy    00:32:02    And, but he has this very rare mind that can bring those two things together. And he lays out a detailed story about computation performed at the RNA level, in which RNA is both the symbols and the machinery that operates on the symbols. And then you use a builds. It is on the, uh, on the Lambda calculus. And he sh and he lays out in his appendix in great detail on RNA machine that will add arbitrarily large numbers. Now, for all those computational out there in your audience, I claimed that that has never been done by a CNN and that it never will be done by at least by a non recurrent C a bio straight through CNN. And even if it’s done by a recurrent one, right, they’re going to result resort to that a little recycling, the, you know, cause they’re going to have to store addition is inescapably serial, right?  

Randy    00:33:12    So you’ve got the, you’ve got to do the earlier, the less significant digits first. And you have to store that as a result and then transfer the carry to the next one and so on. So you need memory and uh, so how do you get memory? Well, that’s where recurrent nets come in, right? It keeps sending them around the loop, which, uh, in this paper by Aquila poorer that I recommend in the strongest terms, uh, he also has a wonderful discussion of dynamic systems and why and why they’re not stable, right? The very guy Moore who proved that they, that they were turning complete, also argued very strongly that, uh, they weren’t stable. So they weren’t physically realizable the turn complete ones were just kind of mathematical dream and they weren’t physically stable.  

Paul    00:34:14    Well, you, you, I didn’t know about that more recent paper you used to hang your hat on and maybe you still do, uh, the per Kenji cell, uh, finding in the cerebellum. And maybe you’ll just add this more recent finding with RNA to your, um, uh, to your talks. Now  

Randy    00:34:33    You’re absolutely right. I mean, I, I still think Frederick Johansson’s discovery of the development of that preparation, which was the culmination of a 40 year effort in Jerry hustler’s lab. I still think that that, that what he has done his hand, the molecular biological community, uh, what they need on a platter. And for the first time, I think we could actually know the physical basis of memory while I’m still sentient. Uh, and, uh, that would be a miracle because he’s identified the beginning and end of an intracellular cascade. And one of the steps in that cascade clearly contains the end gram that encodes the CSU S interval. I think his PhD work proved that beyond reasonable argument and, you know, well, I can, her biologists know how to follow intercellular cascades, right? I mean, he identified that the post synaptic receptor at the start of this cascade, and this is a metabotropic receptor, right.  

Randy    00:35:37    Which means that it transfers the message from an extra cellular source to an intracellular signaling chain. And, you know, there’s almost certainly a Jeep protein on the inside of that membrane and that transforms and gooses the next thing. And so on. And molecular biologists have been chasing these cascades now for decades. And so, yeah, it’s always been, how would I know that I got to the end grid, but no Hudson has solved that problem for them. If only they realize it because he brewed that the information about the CSU S interval is not in the incoming signal that triggers this cascade. Right. But he also identified a potassium channel and inward rectifying potassium channel at the other end of the cascade, a channel that’s a key to producing the pause, the timed pause that comes out of the cell. Right. All right. So you’re following this cascade. And until you get to the end, gram the information about the duration of the interval, won’t be in any step you’re seeing. Right. And on the other side of the end gram, the information will be in the chain, right. Uh, because it’s there by the time you get to this potassium channel. So you’re following the cascade and at some points here, whoa, where does it go look at that this step is informed by Lin breath. All right. So at the end Ram lies between the preceding step and this step. Whoa.  

Paul    00:37:15    Yeah. But yeah, so, so there was, is the, is the more recent, uh, theoretical biology paper with the RNA? Uh, does it address the reading and writing mechanisms because that’s, that’s what you’d have to follow right. To address reading and writing?  

Randy    00:37:31    Well, keep in mind, in fact, I strongly suspect if I can guess how things will play out, that we will discover the in-room before we understand either the writing or the reading mechanisms. And again, I would appear here the appeal here to the history of, uh, of DNA, right? The engram is the low-hanging fruit because it has a really simple job. His only job is to store the information just like DNA’s job is to store the information. So we are still learning how that stored information gets translated into actual organisms. Right now we’ve made enormous progress in that. Uh, but there’s still a very long way to go. And this has been going on now for decades, right? 40, 50 years ever since 1953. So the DNA story that emerged pretty quickly, right. That the basic, okay. Here’s how the information is encoded. Here’s how this carried forward in time.  

Randy    00:38:39    There’s a story about, uh, how it’s red is five orders of magnitude, more complicated, right? I mean, you can explain DNA to a smart undergraduate in half an hour, right. Uh, if he then asks or she then asked, uh, oh, okay. How do you get an eye? Then you say, well, okay, come to my advanced graduate seminar. And, uh, we will spend the whole seminar, um, discussing what we understand about how you get from a gene doing not right. One of the astonishing things we’ve learned is that there is one gene that, you know, there’s a gene, you turn it on, you get an eye where we turned it on. Right. When I was being taught biology, uh, we are being taught one gene, one protein, which is of course still true, but everyone took it to be a corollary that if you thought there could be a gene for an eye, you were stupid. No one could imagine what, what you, I mean, there was this huge gap between, okay, you got, you know, we’re coding for a sequence of amino acids, right. An I isn’t a sequence of amino acids. Um, how now again, I would say the reason they couldn’t imagine how it’s done is they didn’t know enough computer science, because it turns out that the, the protein that, that gene and goats for isn’t a building block in the eye it’s transcription factor, right? It’s  

Paul    00:40:22    All transcription  

Randy    00:40:23    Factors. You have to go five or six steps down before you get past the transcription factors. Now, anybody who knows how relational databases work would say, well DOE or, or how a function works, right. When you, you know, when you call the name of a function in MATLAB that just accesses the code for that  

Paul    00:40:45    And on and on,  

Randy    00:40:48    That’s how you build complex operations out of simple operations. Right. And that’s what got the addition is all about.  

Paul    00:40:56    Let me, let me try this out on you, because I’m just thinking about this, uh, talking about the, I know you just said that the Reed and the Reed mechanism is orders of magnitude more complicated, and then the right mechanism must be even more complicated. I would imagine  

Randy    00:41:13    Until we know what the engram is. I think we, I refuse to think very long about this issue, because I think, I don’t know what it is I need to know in order to think productively about because the right mechanism has to translate from an incoming code in the spike train. And since we still, despite the Rica at all book, which I worshiped and from which I learned my information theory, I have friends, even my collaborator, Peter Lytham who thinks that’s a great book, but I think that, well, it’s just about the fly sensor. It’s the answer to how spike trains carry information period. Right. It’s, it’s all in the inner spike intervals. Well, and there’s several bits per interspike interval. Well, there’s no agreement about that. Right. So right until there’s agreement about how the information is encoded in the incoming signal and the agreement about how it’s encoded in the written thing you can’t think productively about what could the machinery looked like that would take this code and produce that code any more than you could get from, um, DNA to homeobox genes. Right. But without knowing all the very complicated stuff that goes on in between, and then knowing how homie a box genes work, right. I mean, they code for abstractions, anterior distal it’s as if, uh, somebody went to a and that dummy lesson back in the Precambrian, right. Yeah.  

Randy    00:42:58    And they said, well, we got a code, but here we got to have a code for, um, the end, but whatever it is, we’re building, we have to have another code for what anyway. You’ll get.  

Paul    00:43:09    Yeah. Well, well, let’s, let’s pretend for a moment just as a thought experiment let’s we don’t, it doesn’t have to be RNA, but there’s some in intracellular mechanism. Right. And, um, you just mentioned, uh, so this is going to be kind of a long winding, um, thought train here, but you, you had just mentioned, you know, about the receptors and how there is this, uh, enormously complex cascade from receptors to intracellular processes. And, um, that, that, anyway, that that’s a long cascade, you also mentioned convolutional neural networks in a derisive way, playfully derisive way. Um, however, thinking about a read-write mechanism. So you probably know that, um, you know, given a large enough, uh, neural network, that they are a universal function approximators right. They can transform from input to output and the mathematically proven that the universal function approximators talking about, uh, the, the cascade from, uh, extra cellular membrane protein to intracellular happenings, uh, sounds eerily like a neural network kind of process because you have all these interacting, uh, sub components. Right? The other thing, um, that you, you mentioned that we just talked about briefly is that the majority product from DNA from genes is recursive is transcription factors, which feeds back onto the DNA, which regulates the protein synthesis. And the next protein is another transcription factor. That sounds eerily like recurrent neural networks, right. Feeding back. So, so these, these processes are, um, uh, one could make a very loose argument that they are, oh, what’s the word? Not similar, not analog, uh, analogous in some fashion,  

Randy    00:45:07    They are analogous. They clearly are. Those analogies are traced out in the, um, Peter Sterling and Simon Lockland book on which they argue that compute with chemistry. It’s cheaper. I think they’re spot on by that I would add 10 orders of magnitude cheaper. Right? I think they don’t slam just on how much cheaper. Um, but they do these dynamic systems, uh, analogs. Now this same  guy has on brand new blood boats. I just got, I just saw it yesterday or day before yesterday, and which he takes up that proof of a universal function approximately there. And it shows, first of all, that it’s not really true. It’s only true on the closed interval, not the open interval. So, but second of all, he, he revisits the arrogant. And so all the processes that you’re describing are dynamic systems and he revisits why you can’t really do computation with stored information with dynamic systems.  

Randy    00:46:21    He has a much more sophisticated technique on this, uh, um, take on it anchored in a much deeper understanding of the foundations of theoretical, theoretical computer science. But my much simpler I can move to. I know he agrees with me, um, is like those proofs said, well, what do we mean by a universal, uh, by a function approximately a function approximator gets an input vector and it generates an output vector. Oh, okay. Uh, that’s the way a mathematician thinks about it, but it shows how, not the way a computer scientist thinks about it. Um, because there’s no memory in that. Right. And a computer scientist is very aware that in your average computation information comes in, some of which was acquired 50 years ago as we sit here talking, right. Uh, as I’m summoning up the words in the English language, right. I learned most of them, uh, when I was, uh, less than five years old. Right. It’s, uh, they’ve been rattling around in there and now for 75 years,  

Paul    00:47:29    However, now I’m forgetting many of them.  

Randy    00:47:32    It doesn’t get better. Let me dang it. I’m beginning to have noticeable word finding problems and someone whose verbal facility was all as one of their great strengths. That’s very painful. And I’m sorry, I couldn’t the other day I was explaining something and I couldn’t someone, the word factorial. I was, I wanted to say the Sterling approximation. I couldn’t say what it was an approximation to because I couldn’t return to the word factorial. Oh, geez. Anyway, um, the point is that real-world computations require memory because you get one bit of information, you put it in memory, you get another bit, maybe 10 days later, maybe a year later, maybe 20 years later, you put that memory. And so if you look at most of what we do, it’s putting together at a given moment information that was acquired at many different times in the past.  

Randy    00:48:35    And that’s what brains when you’re talking about real. So I hope it’s clear why this makes that proof totally irrelevant, right? Because that proof assumed that all that information had been very thoughtfully assembled for you by some genius and packaged into one humongous vector. And that we fed it to the computer in a generator in the neural net and then generated an output vector. Well, of course, that’s where you have to think about the system and when it has no memory, but that’s of course just why in throwing out the memory, they threw out the baby with the bath, right?  

Paul    00:49:15    Well, the memory would be in the distributed connections, right? The distributed weight,  

Randy    00:49:20    That’s a finite state machine in the proper definition of a finite state machine, which not, it is not that it’s finally a finite state machine is a Turing machine that cannot read what it has written. Okay. I asked the mathematically equivalent to the usual definition. Um, but it showed, but if you’re thinking about these things, it shows you what the huge difference between, um, attorney machine and a financial state machine.  

Paul    00:49:48    Um, it can only, it can only go from state to state with some transition rule and probability in, so,  

Randy    00:49:54    And we hammer on this a bit. So if your iPhone or your mobile phone with its camera or a finite state machine, then it would have stored in its wiring diagram, every picture that you’re ever going to take with that phone. I don’t think so. You can take more different pictures with that phone. Then there are elementary particles in the knowable universe, right. That’s my definition of a true infinity. Right. Okay. So we didn’t put all the possible pictures in the wiring diagram of that farmer. Right. We put in something that would convert quantum catches to switch throwers to memory elements. And of course, then the phone immediately gets busy running some compression algorithm, um, because there’s huge redundancy and the pixels, right? So, uh, uh, but a device without memory can’t do any of that, right? No, no memory, no iPhone.  

Paul    00:51:11    So just stepping back, because often on this podcast, we talk about the current successes of the deep learning, um, folks. And a lot of that is being applied to neuroscience to understand, uh, how brains function. And I know that you are aware of, um, the, that the line of deep learning wherein from like Alex Graves and so on where external memory has been supplied to the neural network. Um, but the book memory, uh, and the computational brain was actually written before the quote unquote deep learning revolution when, uh, deep learning started to dominate. So, um, for fear that, uh, this diatribe could take the rest of the time, keep it short. I’m curious about your thoughts on the, uh, success and the ubiquity now of, of deep learning, uh, and its application to understanding how neuroscience, how brains might function.  

Randy    00:52:12    Well, trying to keep it short. You remember them, you don’t  

Paul    00:52:16    Have to keep it, sorry, but I, you know, we  

Randy    00:52:18    All have us lie line from the graduate plastics. Well, my, my, uh, wisdom distilled down to a very few words would be adversarial images.  

Paul    00:52:31    Sure. But what happens when that gets solved, but okay, well, well,  

Randy    00:52:35    Yeah,  

Paul    00:52:37    Well, I am definitely.  

Randy    00:52:40    Yeah. So the last time I checked no solution wasn’t insight and it reflects a deep truth about how those systems work. Right? Most people don’t realize that when they, um, image recognitions. So system inside Elon Musk, Scott warns the rest of the system, that there’s a stop sign there, right. That system, because it’s a deep neural net. Uh, and because they don’t know how to extract shape, what is really decided is look, these pixels are stopped signage. And this region of the pixels has the statistics of a stop sign, right? If you were to, well, is it octagonal, then that would respond. What’s an octagon.  

Randy    00:53:33    And you would, if you explain what an octagon is, the net, the net would say, look, I don’t do shape. Uh, and at least I have noticed, and I think others will have noticed that the hype about we were going to have auto self-driving cars has died down very considerably because the adversarial images taught the malevolent smart, but multilevel and high school students of which there are two greatest supply, how to go out and hack every stop sign in town, right? Uh, with, uh, you get yourself a cran tape, you get a cran and you make various graffiti on the stop signs. And Elon Musk’s cars will blow right through this nonsense. Uh, okay. So, uh, Hey guys, I think it’s wonderful that you, uh, got the system to work the point where you could do, I’m not discounting this achievement, but when you start telling me, this is how the brain works, and that means the brain has no memory. I say, I don’t think so because you can’t do deep learning. I taught Jay McClellan years ago and he and I have been arguing ever since,  

Paul    00:54:50    Oh, he’s one of, he’s one of the ones who’s working on, building math and reasoning into,  

Randy    00:54:55    I keep telling him KJ, forget math and reasoning. Look the ant and the be do dead reckoning. Why don’t you try that? Uh, I want to see how dead reckoning works in a system that has no memory. I’ve been taunting. I’ve been trolling him with that challenge now for 20 years. And, uh, he doesn’t bite. Uh, cause I think like anyone you look at dead reckoning and say, whoa, uh, we are going to have to store the current location. Right. I mean, there’s no way of getting around it. Uh, and that’s going to extend over hours.  

Paul    00:55:28    Yeah. Well, and yet, okay. So over hours as a point, you might, um, bring up again here because I wanted to ask you, first of all, whether you’re aware of, and then secondly, your thoughts on, um, there, there are, there have been, uh, deep learning networks paired with reinforcement learning techniques in the AI world that have used convolutional neural networks and used LSTs that have done path integration in little maze environments, virtual amaze environments. And that’s not  

Randy    00:56:00    Toy environments in which tile, the maze c’mon in order to make it fit into the reinforcement learning thing. They say, well, look, here’s how we represent the maze. Right? You see this tie that we tile it, right? And then each tile knows, well, then it gets interesting. Uh, I think very few of them actually give the tiles metric information. That is, um, I know that the, a star algorithm, which is how the Google maps finds, uh, routes of course has metric information, right? It’s all, it’s all there and the cost function. Uh, so that’s why Google send around cars with GPS is right there. Record extremely precise metric information all over, uh, all over the world. But in the ones that I’ve seen, that the reinforcement learning ones, they, uh, you know, reinforcement learning, they say, well, when you’re in state one, you do you learn to do action one.  

Randy    00:57:08    And when you’re in state two, you learn to do action. To, first of all, this, they don’t seem to realize that this is essentially identical to Clark calls theory. Uh, that’s why when I say, Hey, I was listening to this nonsense 60 years ago by, you know, they don’t put in any metric information. Come on, I’m a sailor, I’m a Navigant, I’m a back country skier. I ski alone in the back country. Hey, uh, you know, you don’t tile the winter wilderness, you say, okay, I’m headed this way. The sun’s over there. Uh, you work the way navigation has always worked. What direction am I going? How far am I going?  

Paul    00:57:55    However, the average human is easily lost in that scenario. Whereas the average B or aunt, uh, isn’t isn’t lost, right.  

Randy    00:58:03    Well, there are plenty of the other ones aren’t lost either. I’m by no means the only one who does back country skiing, even alone. And of course, uh, Joshua Slocum sailed alone around the world, right? Uh, um, using totally traditional navigational, uh, methods, boasting with some good reason about his accomplishments. But, uh, the, the reason people don’t know how to do this in the modern world is they always live in cities and they get from one place to another on taxi cabs on subways. So they’re never, they’re never called upon to do this. But when I was in, uh, college, I worked one summer for a month until I turned them into the better business bureau with a Collier’s encyclopedia selling encyclopedias door to door under the tutorship of a, of a man who had been doing this all his life. And, uh, in those days you sold these in the newly built suburbs, which had all these twisty roads and pulled the sacks.  

Randy    00:59:07    And so, and then came in and you went all around and so on. And this guy was intensely proud of the fact that he always knew exactly how to get back out of there. And we would be driving her out. I’d be totally lost. And he’d say, which way is the entrance? No idea. He would point it at it. He knew which way it was to the interest within tender Reese, no matter how long we’d been in there. So it’s a matter of somewhat of talent. Some people have more Dalen for it, but it’s also a matter of habit, right? I mean, if you walked alone in strange foreign cities, maybe the first time you got seriously lost, but you learn something from it. Now when you leave the hotel and you walk down and you get to the first corner, you have turned around. In fact, it’s just what the bees do when they leave, they turn around and look back.  

Paul    01:00:02    But the fact that we can basically unlearn that skill and we would have, you you’d have to learn it back, right. Uh, argues it could argue multiple different things. You know, the question that I want to ask you is if you think that there could be multiple memory mechanisms, you know, obviously the quote-unquote memory, um, there are multiple types of memory defined and that is continuing to change. What kinds of memory that we have. So, you know, for example, something like episodic memory, where you can recall an event, right? And I know that you don’t care about, uh, mental phenomenon, uh,  

Randy    01:00:38    Only an episodic memory. Uh, crystal, Jonathan Bristol has demonstrated a beautifully and rats and, and of course, uh, um, Nikki Clayton and, uh, Tony Dickinson demonstrated it spectacularly in those food cashing, birds. Right,  

Paul    01:00:56    Right. On  

Randy    01:00:57    Board with episodic memory, but it’s all numbers,  

Paul    01:01:01    Right? So I’m thinking more of  

Randy    01:01:03    The right amount, uh, texture, uh, what goes in one episode, right. Numbers.  

Paul    01:01:13    So to your mind, there is one memory mechanism, uh, in all brains.  

Randy    01:01:19    That’s what I think is by far the most likely, uh, of course, I don’t know. And of course it’s conceivable that there are different learning mechanisms, but once you grasp how simple and how fundamental memory is memory understood the way I understand it. Right. Which is just, Shannon’s memory is a medium of communication. It’s, um, machinery, the medium, the material stuff, by which the past communicates with the future. Now Shannon in his opening paragraph pointed out that, Hey, look, if you’re inter if communication is what you’re about, and he might’ve stood up and said, I’m a communication engineer, and they’d pay me here at bell labs for communication. If communication is what it’s about, you don’t give a shit about the content. That was a truly profound insight. And I don’t see why that doesn’t apply just as much to the brain as it does to, to computing this.  

Randy    01:02:26    Right. When I go buy a new stick of gum to save a terabyte of information by they don’t ask me, well, are you going to use this for word processing or spreadsheets or MATLAB files? It’s all just information when it comes to communication and memory is, is communication. So it’s a really, I think DNA is, again, look, evolution solved this problem. Once it found a really good solution, that was a, probably 2 billion years ago. It same as the animals have been navigating since the Precambrian we can tell just from their tracks in the mud. Right. So, um, you can navigate without a map, without a memory, all these. So in one of your other questions you asked about how about skills, right? Motor skills and motor skills. Yeah. If it’s going, if there is going to be a case where it’s different than I would say, well, that could easily be where, but I kind of doubt it because I think skills can be, and I’m a student of the motor literature I’ve written about it at some length occasionally. Um, I think skills can be learned as parameter tuning that is, you’ve got, you’ve got a system that’s an incredibly versatile memory system. This stuff was all in my first book and the  

Paul    01:03:55    Best the organization of learning.  

Randy    01:03:57    No, they are tradition of action. There’s another book 10 years before that. Um, but I, and what I’m saying is this is not original with me. This is very much there in the literature that was in that book. And, and it’s there and say, even martyrs work right with a stigmatic gastric ganglion, right? You’ve got this set of oscillators, a very simple circuit, right. But there’s oscillators. And some feedback, feedback is important. Don’t get me wrong, but only under circumstance in certain circumstances. And there’s a, of course, inputs, inhibitory inputs, and what have you. But the way the system is basically controlled is by, um, signals that come down from higher and the nervous system and adjust the parameters. Right? So, and parameters, we’re back to numbers, right? What are parameters? There are numbers.  

Paul    01:04:59    So I have a memory from, well, I don’t know if it was three or four years ago. So my memory for times is not great, but, uh, we held my wife and I, uh, held a chili cook-off at our house. And, um, I won’t tell you how my entry did. I didn’t tell you I didn’t win the trophy, but, um, there was a particular entry, uh, that tasted a lot w the flavor was dominated by celery. Um, and I remember this and I think it got last place. It was just overwhelming celery. Uh she’s she’s, uh, she was a vegetarian and a kind of a whole holistic medicine also person. But anyway, I was talking to her about it the other day. And I can remember that, uh, she felt a little, um, you know, sad about this, but, but I, but I, but I have this episodic memory and we don’t need to go on about episodic memory. I have this, you know, experiential memory of what that was like, and the flavor of the celery and me not winning also, you know, and all that kind of, um, and I can picture our house and stuff. So I guess the question is, does the new, the intracellular numerical mechanism, uh, account for that type of experiential memory?  

Randy    01:06:11    Well, not without some spelling out of additional hypotheses. But I did address your question at considerable length in the near-final chapter of my 1990 book, entitled The Unity of Remembered Experience. That book has been cited many thousands of times, but as near as I can tell, no one ever read that chapter, or if they did, they dismissed it. Because it addresses exactly the question you're posing: how do these diverse aspects of an experience, and the experience extends over time and space and involves many different components, the taste of the celery and so on, how do they all get knit together? And I argue there that, first of all, they're not knit together by associations, because that brings you into an explosion, right? You have a combinatorial explosion; you'd get this net of arbitrarily many associations. The unity, the phenomenological unity, arises in the process of recollecting the experience, and you use time and place indices. All memories, on this story, have a timestamp and a location stamp.
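
A minimal sketch of that proposal as a data structure; the record layout and the query below are editorial assumptions for illustration, not Gallistel's actual formalism. Each memory sits in its own bin carrying a time stamp and a location stamp, and the episode is knit together only at recall, by querying those indices; no associative links among the components are ever stored.

from dataclasses import dataclass

@dataclass
class Record:
    time: float      # time stamp, e.g. hours since some origin
    place: tuple     # location stamp, e.g. (x, y)
    content: str     # whatever aspect of experience was encoded

# Separate bins; no associative links among them are stored.
store = [
    Record(100.0, (2.0, 3.0), "taste: overwhelming celery"),
    Record(100.1, (2.0, 3.0), "event: chili cook-off"),
    Record(100.2, (2.1, 3.0), "outcome: did not win the trophy"),
    Record(500.0, (9.0, 9.0), "unrelated memory, different episode"),
]

def recollect(store, t0, t1, near, radius):
    # Knit an episode together at recall time, purely by querying
    # the time and place indices (the "address" of each record).
    def close(p):
        return (p[0] - near[0]) ** 2 + (p[1] - near[1]) ** 2 <= radius ** 2
    return [r.content for r in store if t0 <= r.time <= t1 and close(r.place)]

print(recollect(store, 99.0, 101.0, near=(2.0, 3.0), radius=1.0))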

Randy    01:07:29    And I present, I review, experimental evidence for that claim. This is now, of course, 30 years old, and more evidence of a somewhat similar nature has emerged in the intervening 30 years. But I spell out in some detail how, if every memory, if they're all in separate bins, in separate neurons and so on, but they all have as one of their fundamental components a time and a location stamp, which plays the role of the operon in DNA, right, it's the address, then you can move among these memories in recollecting an experience. That is, because the episodes are always located at a time and in a place, they're located in space-time, right? And so you can retrieve the facts using those indices. As I read the hippocampus literature these days, I think they're coming around to this. Well, actually, the guy who died, Howard...

Paul    01:08:47    Howard Eichenbaum?

Randy    01:08:47    He was starting to argue this same sort of thing. And I wrote him. I said, hey, Howard, go read my chapter; this is what I was doing 30 years ago. And he wrote back and he said, yeah, I've been reading it. You're right, you were way out ahead; you were in the future. And then he died.

Paul    01:09:08    Totally aside, but this happens over and over, and you've been around long enough to have experienced it personally, where new ideas are not new ideas; they've been written about and buried in chapters. How many times has this happened to you?

Randy    01:09:24    Oh, I don't get uptight about it, for one thing, because I'm a sinner too. I have both been sinned against and sinned, witness the Chomsky story. And I wasn't upset; I just thought Howard and I could make common cause here, right? And I was deeply disappointed when he died.

Paul    01:09:47    You've got to stay alive to keep doing science. You've got to...

Randy    01:09:50    Stay with me here. I think that's the general answer. I mean, take celery, for a specific case: it has a quality of taste, but there's also color, right? And there's one thing we've known now for more than a century: color is represented in our brains with three numbers. And recently the story for both taste and odor has emerged the same; they're all vector representations. The dimensionality of the spaces is higher, but these days Doris Tsao and lots of people are pushing vector representations really hard. And of course vectors are just strings of numbers, right? And they represent faces, and Chuck Stevens has argued that the same story is true for odor, even in Drosophila. So again, the celery: it's all numbers, right? The tastes are represented in a four-dimensional space. Colors are represented in a three-dimensional space. Faces are represented in a 50-dimensional space. You get the idea.
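
A minimal sketch of the quality-as-vector idea; the particular coordinates below are invented for illustration. A color is three numbers, a taste is four, and similarity between qualities is just closeness of their vectors.

import math

def distance(u, v):
    # Similarity between two qualities = geometric closeness of their vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Color: three numbers (cone-like coordinates; values invented).
red = (0.9, 0.2, 0.1)
orange = (0.8, 0.4, 0.1)
blue = (0.1, 0.2, 0.9)

print(distance(red, orange))  # small: similar colors
print(distance(red, blue))    # large: dissimilar colors

# Taste: same story, just four numbers instead of three (values invented).
celery_heavy_chili = (0.2, 0.1, 0.7, 0.3)
ordinary_chili = (0.6, 0.2, 0.2, 0.4)
print(distance(celery_heavy_chili, ordinary_chili))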

Paul    01:10:56    Two more questions, and then I'll let you go, and I appreciate you hanging around with me. One: what is the role of synaptic plasticity?

Randy    01:11:05    No one knows, least of all me.

Paul    01:11:09    I assumed that you were going to say encoding, writing.

Randy    01:11:14    I honestly have no idea, since I literally believe that an associative bond never formed in the brain of any animal, and since the plastic synapse is transparently conceived of as an associative bond, right? I certainly don't think that's what they are. Could they play a role in the computations carried out on signals? Sure. Does it seem likely that they do? Probably. But do I have any good ideas what that role might be? No. Does anyone else? I don't know; I don't follow the literature very carefully. But everybody seems so hung up on the idea that there are associative bonds that I think, until they dig themselves out of that conceptual hole, they're never going to find out what they're really about.

Paul    01:12:05    What's keeping you up at night these days? What are you thinking hard about that's just beyond your reach?

Randy    01:12:17    Well, how to get the molecular biologists to realize that Fredrik Johansson has offered them the world on a plate.

Paul    01:12:30    How's that fight going?

Randy    01:12:32    Very slowly, and they're hung up for what, as best I can make out, are quasi-metaphysical reasons. So, for example, Tomas Ryan, who...

Paul    01:12:44    He'll be on the next episode.

Randy    01:12:47    So you can follow up on this; you can ask him, what's his problem with Randy's story? Because he and I have been arguing in correspondence. I had never heard of him; I had given talks at MIT where I imagine he was present, and I had met Tonegawa a few times, whose lab he came out of. But he emailed me the day his Science paper was embargoed, when he was still in Tonegawa's lab, the paper showing that they could make the plastic synapses go away and the information was still there. And the email said, I think you'll find this interesting. And I wrote back: yes, I find this very, very interesting indeed. Okay? So he and I agree that the information isn't stored in the synapses, in the plastic synapses, and he admits that he does not have a story about how the information is stored.

Paul    01:13:44    The engram, but...

Randy    01:13:46    He's all focused on these cell assemblies; he's focused on this sparse coding. And I say, yeah, Tomas, that's all very interesting, but we both think that the real name of the game is looking for the engram, and those cell assemblies, they aren't it. Your own work shows that it must be inside those cells. I can't get him to go there. And he's all hung up about information. He doesn't like the idea that we have to think in terms of Shannon information. He's read Dennett, and he believes that there's semantic information. And I know Dan very well; we have a lengthy email correspondence in which I'm trolling Dan and saying, Dan, the fact is you have no idea what you mean by semantic information. And he more or less admits that that's true. I said, you know, Shannon information is the only game in town; semantic information is just philosophers' hand-waving.

Paul    01:14:51    But the recent optogenetic work, where, you know, particular cells and networks of cells...

Randy    01:14:58    They can excite behavior that is informed by the stored information. They've shown that over and over again, and now people are showing it in the hippocampus, right?

Paul    01:15:08    But that doesn't change your story; it doesn't change your view?

Randy    01:15:11    Because it doesn't even address the question I'm posing, which is: all right, you excite those cells, and the output signals from that cell are informed by acquired information. Where is it? Did some neighboring cells say, oh, you need to know this, right? Or, as your own experiments tend to show, did they get that information from inside themselves? Well, once you get inside a cell, it's all molecules, right? Very big, complicated molecules and...

Paul    01:15:52    Networks of molecules,  

Randy    01:15:54    Sure, and structures built out of them, the ribosome, for example. But basically we're down to the molecular level of structure, right? And I keep saying, your own work shows that that's the case. I cannot persuade him, and it's just driving me nuts. I mean, that work is five or six years old now, and I thought, oh wow, this is the breakthrough. Now all those insanely ambitious molecular biologists, they'll jump on this, and they'll trace that cascade. And they'll use this ability to observe single molecules fluorescing inside individual cells. I mean, they've created the most astonishing tools. And once they get to the engram, they can slice and dice it with CRISPR and so on, and they can find out the code. It seems to me so obvious. I cannot...

Paul    01:16:55    You've learned multiple things throughout your career. Why don't you just go learn experimental molecular biology and start on it?

Randy    01:17:07    You know, it takes a long time to become a molecular biologist. And besides that, I would have to get a grant; I mean, that's the other thing. There's no way somebody with my background could get a grant for this effort. Although it seems to me obvious what the general strategy is, I don't mean to minimize how difficult it would be and the kind of resources it would take. I mean, you need the kind of money that only a molecular biologist can get. People like me, we get the rounding error in the molecular biology grants, right? You're not going to pursue that cascade with $20,000 a year; it's going to be more like $5 million a year. And it needs to become competitive, which it always does in molecular biology. That is, if one or two of the smartest young upstarts started doing this, then the rest of the field would say, oh shit, maybe I'm missing the train; maybe I'd better get on that train before it leaves the station. I'm trying to stir up that kind of anxiety, but so far I have not succeeded.

Paul    01:18:21    Well, you've been driving your train for a long time along those very tracks. So this is a great place to leave it. I'll play that last little clip for Tomas when we talk, perhaps, and he can respond. Thank you for the very, very fun conversation. Keep up the good fight, Randy. I appreciate it.

Randy    01:18:40    I enjoyed this. Thank you.

0:00 – Intro
6:50 – Cognitive science vs. computational neuroscience
13:23 – Brain as computing device
15:45 – Noam Chomsky’s influence
17:58 – Memory must be stored within cells
30:58 – Theoretical support for the idea
34:15 – Cerebellum evidence supporting the idea
40:56 – What is the write mechanism?
51:11 – Thoughts on deep learning
1:00:02 – Multiple memory mechanisms?
1:10:56 – The role of plasticity
1:12:06 – Trying to convince molecular biologists

View Full Transcript

Episode Transcript

Speaker 1 00:00:03 Usually when I ask neuroscience is how you and code a number either in Asian apps or however many synopsis they think might be necessary. That's a conversation stopper. All I get is hand ways, you know? Well, you see, there are lots of synopsis and it's a pattern. It's an absence. Well, could you say something about the pattern? I mean, how does the pattern for 11 different for the pattern from three, for example, can you shed a little light on that? You do not want to answer that question. The end is the low-hanging fruit, because it has a really simple job to store the information just like DNA's job is to store the information. Speaker 2 00:00:52 What is the role of synaptic plasticity? Speaker 1 00:00:57 I honestly have no idea since I literally believe that an associative bond never formed in the brain of any animal. And since the plastics inadequacies, transparently conceived of as an associate bond, right? I certainly don't think that's what they all, could they play a role in the complications carried out in signals? Sure. Speaker 3 00:01:26 This is brain inspired. Speaker 2 00:01:40 Hey everyone, it's Paul in Graham. It's a term coined by Richard semen in the early 19 hundreds. And it refers to the physical substrate that stores the information that makes up our memories. In other words, the, the trace of our memories, we still don't have a definitive answer to the question of how our brains store memories, what makes up the end gram many neuroscientists would say a given memory resides in a specific pattern of neurons and the activity of those neurons and that the formation of new memories and changes in existing memories that is learning depends on changes in the connections between neurons, synaptic, plasticity. And of course, training deep learning artificial networks is fueled by adjusting the weights between their units to learn tasks, but not everyone agrees with this story. That memories are somehow stored in neural connectivity patterns and the activity of the neurons in those patterns as Tomas Ryan puts it and Tomas will be on my next episode. Speaker 2 00:02:44 At what level does an in gram lie is an in gram in the cell or as a cell in the Ingram. Randy gala Stoll is my guest today. He's a distinguished professor emeritus at Rutgers, and he's been at this for over 60 years. And he's been arguing much of those 60 years that the in gram must lie within the cell. Not that a cell is in the, in Graham and his argument, which we'll hear him flush out is that brains are computational organs. And to compute you need symbols, namely numbers. And Randy thinks the only reliable way to store numbers over long periods of time, which is necessary. And to be able to read from those numbers and write new numbers is to use sub cellular molecules like DNA or RNA or something similar. He also detailed his arguments in a great book memory in the computational brain with Adam King, which was published over 10 years ago. Speaker 2 00:03:41 I recommend that book. I have distinct, uh, episodic memories, uh, reading that book in my office in Nashville, for example. And I've gone back to it multiple times since then it goes over the fundamentals of information theory and uses examples from animal behavior like navigation and foraging to argue his case. 
So today we talk about some of those ideas, uh, some of the evidence to support those ideas and a host of other bells and whistles, including his long successful career, studying the many abstract processes, underlying our learning memory and behavior. You can find show notes at brain inspired.co/podcast/ 126 on the website. You can also choose to support the podcast through Patrion and join our brand inspired discord community. If you want and get access to all the full episodes I publish through Patrion, or just to throw a couple dollars my way each month to express your appreciation. I appreciate you listening. I hope this podcast is enriching your minds and bringing you some joy. Here's Randy, Randy, you're 80. You just told me you're 80 years old. Yes. Well, uh, when, when, when did you turn 80? Speaker 1 00:04:53 Uh, back in may. Speaker 2 00:04:55 Okay. Well, happy belated 80th. So I know that you have been interested in memory since the 1960s, at what point. So, you know, we'll get to the, uh, the big idea, uh, here in a moment, but at what point in your career did you start questioning the, uh, typical neuroscience story about memory Speaker 1 00:05:20 Way back in the sixties? Uh, when I was an undergraduate in Tony Dutch's lab and I'm deciding that I wasn't going to be a social psychologist, I was going to be a physiological psychologist as we called them in those days. And now we call them behavioral neuroscientists. And, uh, I really became an, a pasta, um, during while running my first experiment, which was, uh, a runway experiment with rats and mine would watch them and just watching their ads. I became absolutely persuaded that they, um, they knew what they were doing. They, uh, it wasn't habits. Uh, I had already become enamored of hole's vision of a mathematically rigorous theory of mind and brain computation, what we would now call computational neuroscience. Uh, but, um, I had already become an, a positive from the rest of his doctrine because I, you know, with all it was all habits. And of course there are many competitional neuroscientists for which that's still true, but that's what I mean when I said a moment ago before we were recording that, uh, uh, nothing has changed in the 60 years. I go to meetings now and I listened to some of the talks. I think this is the same shit I was listening to in 19. Speaker 2 00:06:49 Well, well, so, you know, one of the things that, um, you talk about in your book, um, memory and the computational brain, why cognitive science will transform neuroscience is that there is this large gap between cognitive science and neuroscience. Uh, and, and I heard you talk recently and you've written, uh, about this as well, that, you know, actually even back, that was 2009, 2010, when that book came out and, uh, computational neuroscience was still a small swath of neuroscience writ large. Right. But that's changed hasn't it has, has computational neuroscience, which to me seems like is the majority of neuroscience, what's your view on that? Has computational neuroscience come along Speaker 1 00:07:33 Well in terms of the number and quality of people doing it? Yes. I, I certainly don't see it as dominating neuroscience. I mean, neuroscience, you go to the annual, you know, the meeting, uh, the society for neuroscience, there's 30,000 people there, right? 
I mean, there are two poster sessions and they, in this, uh, the poster sessions are so big that even if you try to, you couldn't go by all the posters right on there, two of them every day and so on. And, you know, competitional neuroscience is kind of small. And, uh, then that big picture. And also when I think about it, the computational neuroscience, I guess, so at least, certainly my world view was dominated by vision people back in the day. Right. I mean, there still is. They've been very computational now for decades. In fact, there's a fascinating, uh, book by, um, by Google and, and weasel, uh, which they reproduce their papers. Uh, it was clearly a, a project of David Hubel and, uh, he produces 25 of their plastic papers and there are introductions and epilogues to each per my Google, and he repeatedly rants against the mathematician. Uh, you know, the math of the fact that all the engineers and mathematicians now come in division, right? Because like so many of the early people, he really didn't know much math. Speaker 1 00:09:11 And these days you cannot do cutting edge vision without a fairly serious mathematics education. Right. Um, but that was already through 30 years ago. Um, so I think what you're reacting to is now, of course, there are many people, uh, doing computational neuroscience and focusing on learning and memory, which did not use to be true. I mean, those fields used to be completely non-mathematical, I've had more than one colleague and friend tell me they went into this business precisely because they didn't have blur. Right. Speaker 2 00:09:50 That's right. Yeah. Well, I mean, it seems like these days, and, uh, and I, again, this is my own bias because I, uh, I learned computational neuroscience through my career, my short kinda, uh, academic career, but going in, I didn't really, I had some mathematics background, but I didn't have modeling background. I didn't have, you know, a real, a good footing in the computational world. So I kind of learned that through my training. Um, but didn't, you, you kind of applied yourself and learned some necessary mathematics a little bit later in your career? No. Oh yeah, for sure. Speaker 1 00:10:22 Um, I've been learning various bits of mathematics throughout the last 60 years. Um, I, for example, I, uh, I mean, I had the calculus as an undergraduate, but I didn't, um, linear algebra and I taught the undergraduate linear algebra course at Irvine during, after I was already in a tenured associate professor during my first sabbatical, when I was, uh, we're working with Duncan Lewis and studying also linear systems theory, which I also basically taught myself. I went partly of course, Dunkin was two orders of magnitude, better mathematician than I ever imagined I would ever meet, but he was incredibly good at explaining things. And I was teaching myself by reading those textbooks on linear systems there. And there was stuff. For example, I remember I could not wrap my mind around the convolution integral. So I said, Duncan, can you explain the, what convolution is? And he sat me down and I remember it was a half hour later. I absolutely understood what conflict. Speaker 2 00:11:36 And that was on, uh, the, did he use a Blackboard or did he use PowerPoint? I'm just kidding. Speaker 1 00:11:40 It was basically just verbal, although he may, I would, this was a long time ago talking about it. 
It would have been the Blackboard, there may have been some recourse to the Blackboard, but mostly well, anyway, somehow he found there were examples that made it clear and then I was able to use it. And that was satisfying. Speaker 2 00:11:58 If you had to go back, would you enter by studying mathematics first? Because I asked because you have a deep knowledge of the behavior surrounding learning and memory, which you also had to have to get to where you are. Speaker 1 00:12:16 Yeah, sure. Well, that was, I mean, first of all, that was what I took courses in. And second of all, um, I mean, that's what I taught for 50 years, right? So, you know, the behavior I'm in the more mathematical treatment I rarely taught at the undergraduate level. Right. Because it wouldn't take a very special undergraduate seminar to do it. I did teach it at the graduate level. Uh, and as every teacher knows, you don't really understand the subject until you've tried to teach it. Right. You get sometimes as the experience where you're busy explaining this had happened to me, even when I was teaching introductory psychology, I'm halfway through an explanation. And all of a sudden the little voice says, you know what you're saying? Doesn't make sense to you. Speaker 2 00:13:13 That's true. You really find out what you don't know. Speaker 1 00:13:16 Oh boy, this argument has just gone off the tracks. Speaker 2 00:13:23 Well, this idea of the, um, the brain as a computing device among other things has dominated your thoughts for a few decades now, right. Speaker 1 00:13:37 Since way back since Speaker 2 00:13:39 Way back. Yeah. Speaker 1 00:13:40 And so I was in graduate school at Yale, a very behavior in school, um, in Neil Miller's land, uh, you know, he was hell's most prominent student. Um, but I have, as I said, I had already become a heretic, uh, as an undergraduate. So, uh, I wasn't buying it, uh, nor was I buying it when I took the advanced course and learning from Alan wagon. Meanwhile, I was building special purpose computers to run the experiments I was running. And I was reading the theory of computation and books on how computers worked and so on. And, uh, and Chomsky was coming along. I went to a talk, this guy, I barely heard of Noam Chomsky. He came to speak at year. Uh, and I had just been reading the stuff that Skinner and, uh, Osgood then written on language. Speaker 1 00:14:40 I didn't know anything by language, but I thought this is rubbish. Um, and so I went to hear this talk by John scan. I was an instant convert. Okay, this isn't rubbish. So I, uh, I embraced the computational theory of mind. And I thought since those days that I'm in many of these days, most neuroscientists pay lip service, at least to it. Right. But many of them would immediately, and yes, I think abuse, but it doesn't compete the way a computer can pitch. This is the story having studied, how computers compute and, uh, I mean, I've programmed all the way down to the machine level. So I know how it goes, what goes on under the hood and is on. And, uh, I've always thought, well, wait a second, there isn't any other way to compute. I mean, tell me how it is you compete, but not the way computers can do it. Right. I thought, I thought during settled that Speaker 2 00:15:47 Well, so, so I had a, a brain inspired listener question about Chomsky's influence on you. So it really, you remember going to a talk and, and having that sort of solidify your approach. Speaker 1 00:15:59 Oh yeah. 
I remember being very impressed and then I read his, well, it didn't come out to later, but when it came out, I read his reflections on language. Also at Penn, Ben was a cognitive science was very much a happening thing at Penn. And I had colleagues like lilac Lightman and Henry Gladman and Duncan Lewis. So I was strongly influenced, uh, by, by them. Uh, and Dick nicer was on sabbatical there. The second year I was on an assistant professor. Um, so I was influenced by all of those people and all of those people were influenced by Chomsky. I mean, Johnson, Johnson, king sort of ran through the way we all thought. There's a kind of interesting story about that. Some years later after I'd been publishing a bunch of stuff that isn't quite a number of years later, um, no mum I'd met once or twice and who I've often corresponded with subsequently, but he wrote me a very polite letter. It's a letter. I think this was before email, um, gently complaining that I was, uh, channeling him without ever citing him. And I was very embarrassed and I thought, you know, he's absolutely right. So I wrote back apologizing and saying, look, you're so much a part of the intellectual mill you in which I swim. I just didn't occur to me though, acknowledged or even recognized my intellectual debts anyway. Speaker 2 00:17:37 Interesting. So, okay. Well maybe we, maybe we can return to a Chomsky later, but because I know you wrote a manuscript in 2006, I believe where you acknowledged the reflections on language and how that also influenced you. But I assume you got the letter before 2006 because Speaker 1 00:17:53 Oh yeah, for sure. I had, it was a long time ago. Speaker 2 00:17:58 So, um, memory and the computational brain, of course you, um, you detail your ideas, uh, in that book, but you've also, you know, continued writing. Um, and you know, I, there's a recent 2017 piece on the coding question where you re revisit these, these ideas and you you've continued to give talks about them. So maybe just in the broadest strokes, uh, could you summarize your, the idea and your, uh, position, um, and then we can kind of go through some of the details as needed. Speaker 1 00:18:32 So computation is operations on symbols, um, right before the emergence of computing machines, symbols and representations, all those things were regarded as hand waves, right. Uh, but we've computing machines. So when someone said, well, what do you mean by similarly? So you see this bit registering, you see that pattern of ones and zeros that's been put into that, the, those switches that's the number eight. That's what I mean by a symbol, right? There's nothing even faintly mystical about it. It's a, it's a fundamental, um, in this sense, symbols are the stuff of con mutation where I'm using stuff in the physical sense, right. It's there the material realization upon which computational operations operate. And, uh, once I got into information area, I realized, yeah, right. And an even better way of putting it. And this will became apparent in the book with that on thing that these symbols carry forward in time information in Shannon's sense of the term. Speaker 1 00:19:51 Um, so that you can quantify, right. You can say, this physically realized thing is carrying this amount of information. So you could wave a sign, all the fears about dualism and so on that tormented that behavior. So we're all terrified by the specter of bills. Right. Um, and, uh, so what, as far as I'm concerned, the computers just put paid to those worries, right? 
Uh, we had a completely physical theory. It was, uh, I thought then, and still think gave you a viable theory of mind, get my, when I at Stanford and Yale and the behavior and stage, if you said, well, the rat expects to find food at the end of the runway, you can see there were saying, well, I don't think we maybe should have been admitted. Um, somebody who is so soft headed as to talk about expectation, Speaker 2 00:20:55 Because it was related to theory of mind, or Speaker 1 00:20:59 Because before the appearance of computers, I mean, Skinner, denounced expectations in the most uncompromising berms is on the side of defect. Right? And so you couldn't see them, you, you couldn't feel them. They had no business in science. And, uh, and of course, as soon as you began programming computers, you would set up one number that was going to be compared to another number sentence. So then I just turn them and say, Hey, look, here's my program. It runs on that machine. I don't think there's a ghost in that little computer. I built this number is what it expects. And this is the operation by which it compares another number to that to decide whether what it expects was actually the case and the story get off my back. Speaker 2 00:21:53 Yeah. W but is that a, is that a redefinition of expectation over the years toward a more cause you know, the word expect one conjures, a notion of someone having a subjective feeling of expectation. Right. But now when someone says, expect, at least in the cognitive science, computational neuroscience world, all you think of is like a predictive processing, a numerical abstract process. Sure. Speaker 1 00:22:19 Now these days where everybody's talking about prediction, error, they're taking for granted that there's an expectation and the terms in which I'm talking about it, I'm never worried about these phenomenological things, right? Like what does an expectation feel like? Not the kind of question I'm interested in, why not? Because I don't think it's possible to get a hold of it and a strong for just the reasons you were finding out. Right. That is all I need for expectation is what I just described. Right. And it's perfectly clear and there's no problem with it. And now that we have competing machines and we see this going on all the time, when people ask, well, does the computer feel the way I feel when I have an expectation? I think, I don't know. And I don't care. It's not the kind of question I'm interested in. Right. In fact, if you notice what I've worked on almost entirely, particularly in recent years, uh, the last few decades, it's what I call the psychophysics of the abstract quantities, uh, distance duration, probability, numerosity, and so on that quantities that are a fundamental part of our experience, but they have no quality. Right? I mean, Speaker 2 00:23:45 What you said, quality. Yeah. Oh, Speaker 1 00:23:48 Well, I said it precisely to say that if you work on those things, you don't worry about quality because they have no quality. If people say, what is the duration feel like? Right. So all the philosophers that are beating themselves up about, uh, you know, what's it like to be a bat. Um, and they're all worried about quality as well. Well, you just, isn't something my worry about, uh, because first of all, I think the quality of are the things that have quality are a relatively minor interests. If you want to know what behavior is founded on is founded on the instructions. 
I was just talking about the probabilities, the numbers, the durations, the directions, the distances, all, all these abstractions. There are what drive behavior all the way down to insects, right? As you probably know, I'm a huge fan of the insect navigation better, right? Speaker 2 00:24:49 You liked the bees, you liked the ants. He liked the, uh, the Speaker 1 00:24:52 Butterflies, the beetle the butterfly had done because the dung beetles walking backwards with their ball though, walking home backwards, Speaker 2 00:25:04 Maybe the central argument or one of the central arguments, is that the story in neuroscience that the numbers and the numerical abstract symbols, I should just say, symbols are encoded in the synopsis, right? In the connections between neurons, among populations of neurons. But you have a hard time believing that that could be the case. Speaker 1 00:25:27 Well, actually, I, usually when I ask neuroscience is how you code a number either in ACE and naps or however many synapses they think might be necessary. That's a conversation stopper. Um, I don't know if he ever viewed my YouTube of my talk at Harvard where, uh, John Lisman was discussing. And, uh, I posed that question at the end of my talk St. John, when you get up here, you'll tell us how you store a number in a scenario. And he got out and gave a lengthy discussion in which he never brought that topic up. This was a very unusual in that I got a rebuttal. I would get another chance to speak. And I said, John, I'm going to give you another chance. Speaker 1 00:26:20 How do you store a number in the center has come on, John and the audience began to laugh and he stood up and he would not, he would not answer the question. Um, and I had a somewhat similar experience with Jean-Pierre Sean ju much more recently. Uh, in fact, the question made him so angry that he wouldn't allow the debate that we uploaded to. It's going to say, I didn't see that one. So, and I've gone so far, often in my thoughts, I say, come on guys, I can offer you two alternatives. Uh, I mean, it's not as if it's impossible to think of an answer or what I just sin. And I often proceed to say, well, look, the synopsis usually conceptualized by computational neuroscience is a real value of variable and distance direction. Probability. They're all real valued variables, right? So you can always represent a real value variable by a real value variable, right? Speaker 1 00:27:21 So we could say, well, if the synopsis this, if the weight is this big, then the distance is that far. Right? And if the way there's this big, you want to go there? I found practically no one wants to go there. Oh, you don't want to go there. Here's a radically different alternative. Supposedly have a bank of the people who talk about the synaptic plasticity are very vague about how many states are synapse can assume, but one school of thought thinks they're binary. All right, fine. There. I like that. That's a switch. Okay. So we'll have an array of binary synopses, and we throw this maps to this state and this snaps to the zero state. And now we've got something just like a computer registered. You liked that story. No, most people don't like that story. All right, we'll watch your story. Uh, and at that point, all I get is hand ways. Speaker 1 00:28:29 You know? Well, you see, there are lots of synapses and it's a pattern that's in absence. Well, could you say something about the pattern? 
I mean, how does the pattern for 11 different for the pattern from three, for example, could you shed a little light on that? People do not want to answer that question because the answer to that question is to admit that there are symbols in the brain. And even to this day, many people do not want to go there. And what's your answer? My answer is that isn't in the synopsis. I mean, I point out that there are several labs around the world that are busy studying how to use bacterial DNA as the memory elements in a conventional computer, right? Any engineer, anybody familiar with the computing machines that actually work, uh, and that we know how they work. Once you show them a polynucleotide and explain that any nuclear that had can be adjacent to any other nucleotide, any engineer worth of says Wolf's could store like nobody's business. In fact, one of the people who introduced me in that talk, I gave a couple of years ago in the introduction, showed a very grainy video of a running horse where the video, the entire video had been passed through the bacterial, just to drive home the fact that, that if you're looking for a place to store numbers, well, uh, Speaker 2 00:30:14 Well we know, yeah. We know DNA stores, the genetic code. Um, but there are other possibilities as well. I'm wondering what your current, so DNA is one possibility, right? Where a code could be stored intracellularly and to you, um, the key, I don't know, I don't know what your current thoughts on this, because it used to be that, uh, you didn't know, um, that there were, you know, a handful of intracellular mechanisms whereby you might store these things, proteins degrade a little too fast, right. But then there are, polymerases like RNA, uh, could be one of the, uh, substrate DNA could be a substrate, but as DNA, fast enough, what's your current thinking on what might be the, uh, substrate? Speaker 1 00:30:58 Well, my I'm still sticking with polynucleotide, so I lean much more strongly to RNA than to DNA, probably complex with a protein destabilize. It, my thinking has taken a huge boost lately from a wonderful paper, by a young guy in Gabby Nayman's lab on the Rockefeller named the pessimist Dean, uh, flooding poor, but it's just appeared in the journal of theoretical biology in the last couple of weeks. Uh, and, uh, he come he's in a stylish guy because he, he has a truly deep knowledge of, um, theoretical computer science, much deeper than mine. I mean, he really knows the Lambda calculus, right. Whereas for me, it's just kind of a name. Um, but at the same time, he really, he has a much deeper knowledge of RNA biology than I do. But the most astonishing thing is that they, I mean, those two things are about as far apart, as conceptually, as you can readily imagine. Speaker 1 00:32:02 And, but he has this very rare mind that can bring those two things together. And he lays out a detailed story about computation performed at the RNA level, in which RNA is both the symbols and the machinery that operates on the symbols. And then you use a builds. It is on the, uh, on the Lambda calculus. And he sh and he lays out in his appendix in great detail on RNA machine that will add arbitrarily large numbers. Now, for all those computational out there in your audience, I claimed that that has never been done by a CNN and that it never will be done by at least by a non recurrent C a bio straight through CNN. 
And even if it's done by a recurrent one, right, they're going to result resort to that a little recycling, the, you know, cause they're going to have to store addition is inescapably serial, right? Speaker 1 00:33:12 So you've got the, you've got to do the earlier, the less significant digits first. And you have to store that as a result and then transfer the carry to the next one and so on. So you need memory and uh, so how do you get memory? Well, that's where recurrent nets come in, right? It keeps sending them around the loop, which, uh, in this paper by Aquila poorer that I recommend in the strongest terms, uh, he also has a wonderful discussion of dynamic systems and why and why they're not stable, right? The very guy Moore who proved that they, that they were turning complete, also argued very strongly that, uh, they weren't stable. So they weren't physically realizable the turn complete ones were just kind of mathematical dream and they weren't physically stable. Speaker 2 00:34:14 Well, you, you, I didn't know about that more recent paper you used to hang your hat on and maybe you still do, uh, the per Kenji cell, uh, finding in the cerebellum. And maybe you'll just add this more recent finding with RNA to your, um, uh, to your talks. Now Speaker 1 00:34:33 You're absolutely right. I mean, I, I still think Frederick Johansson's discovery of the development of that preparation, which was the culmination of a 40 year effort in Jerry hustler's lab. I still think that that, that what he has done his hand, the molecular biological community, uh, what they need on a platter. And for the first time, I think we could actually know the physical basis of memory while I'm still sentient. Uh, and, uh, that would be a miracle because he's identified the beginning and end of an intracellular cascade. And one of the steps in that cascade clearly contains the end gram that encodes the CSU S interval. I think his PhD work proved that beyond reasonable argument and, you know, well, I can, her biologists know how to follow intercellular cascades, right? I mean, he identified that the post synaptic receptor at the start of this cascade, and this is a metabotropic receptor, right. Speaker 1 00:35:37 Which means that it transfers the message from an extra cellular source to an intracellular signaling chain. And, you know, there's almost certainly a Jeep protein on the inside of that membrane and that transforms and gooses the next thing. And so on. And molecular biologists have been chasing these cascades now for decades. And so, yeah, it's always been, how would I know that I got to the end grid, but no Hudson has solved that problem for them. If only they realize it because he brewed that the information about the CSU S interval is not in the incoming signal that triggers this cascade. Right. But he also identified a potassium channel and inward rectifying potassium channel at the other end of the cascade, a channel that's a key to producing the pause, the timed pause that comes out of the cell. Right. All right. So you're following this cascade. And until you get to the end, gram the information about the duration of the interval, won't be in any step you're seeing. Right. And on the other side of the end gram, the information will be in the chain, right. Uh, because it's there by the time you get to this potassium channel. So you're following the cascade and at some points here, whoa, where does it go look at that this step is informed by Lin breath. All right. 
So at the end Ram lies between the preceding step and this step. Whoa. Speaker 2 00:37:15 Yeah. But yeah, so, so there was, is the, is the more recent, uh, theoretical biology paper with the RNA? Uh, does it address the reading and writing mechanisms because that's, that's what you'd have to follow right. To address reading and writing? Speaker 1 00:37:31 Well, keep in mind, in fact, I strongly suspect if I can guess how things will play out, that we will discover the in-room before we understand either the writing or the reading mechanisms. And again, I would appear here the appeal here to the history of, uh, of DNA, right? The engram is the low-hanging fruit because it has a really simple job. His only job is to store the information just like DNA's job is to store the information. So we are still learning how that stored information gets translated into actual organisms. Right now we've made enormous progress in that. Uh, but there's still a very long way to go. And this has been going on now for decades, right? 40, 50 years ever since 1953. So the DNA story that emerged pretty quickly, right. That the basic, okay. Here's how the information is encoded. Here's how this carried forward in time. Speaker 1 00:38:39 There's a story about, uh, how it's red is five orders of magnitude, more complicated, right? I mean, you can explain DNA to a smart undergraduate in half an hour, right. Uh, if he then asks or she then asked, uh, oh, okay. How do you get an eye? Then you say, well, okay, come to my advanced graduate seminar. And, uh, we will spend the whole seminar, um, discussing what we understand about how you get from a gene doing not right. One of the astonishing things we've learned is that there is one gene that, you know, there's a gene, you turn it on, you get an eye where we turned it on. Right. When I was being taught biology, uh, we are being taught one gene, one protein, which is of course still true, but everyone took it to be a corollary that if you thought there could be a gene for an eye, you were stupid. No one could imagine what, what you, I mean, there was this huge gap between, okay, you got, you know, we're coding for a sequence of amino acids, right. An I isn't a sequence of amino acids. Um, how now again, I would say the reason they couldn't imagine how it's done is they didn't know enough computer science, because it turns out that the, the protein that, that gene and goats for isn't a building block in the eye it's transcription factor, right? It's Speaker 2 00:40:22 All transcription Speaker 1 00:40:23 Factors. You have to go five or six steps down before you get past the transcription factors. Now, anybody who knows how relational databases work would say, well DOE or, or how a function works, right. When you, you know, when you call the name of a function in MATLAB that just accesses the code for that Speaker 2 00:40:45 And on and on, Speaker 1 00:40:48 That's how you build complex operations out of simple operations. Right. And that's what got the addition is all about. Speaker 2 00:40:56 Let me, let me try this out on you, because I'm just thinking about this, uh, talking about the, I know you just said that the Reed and the Reed mechanism is orders of magnitude more complicated, and then the right mechanism must be even more complicated. I would imagine Speaker 1 00:41:13 Until we know what the engram is. 
I think we, I refuse to think very long about this issue, because I think, I don't know what it is I need to know in order to think productively about because the right mechanism has to translate from an incoming code in the spike train. And since we still, despite the Rica at all book, which I worshiped and from which I learned my information theory, I have friends, even my collaborator, Peter Lytham who thinks that's a great book, but I think that, well, it's just about the fly sensor. It's the answer to how spike trains carry information period. Right. It's, it's all in the inner spike intervals. Well, and there's several bits per interspike interval. Well, there's no agreement about that. Right. So right until there's agreement about how the information is encoded in the incoming signal and the agreement about how it's encoded in the written thing you can't think productively about what could the machinery looked like that would take this code and produce that code any more than you could get from, um, DNA to homeobox genes. Right. But without knowing all the very complicated stuff that goes on in between, and then knowing how homie a box genes work, right. I mean, they code for abstractions, anterior distal it's as if, uh, somebody went to a and that dummy lesson back in the Precambrian, right. Yeah. Speaker 1 00:42:58 And they said, well, we got a code, but here we got to have a code for, um, the end, but whatever it is, we're building, we have to have another code for what anyway. You'll get. Speaker 2 00:43:09 Yeah. Well, well, let's, let's pretend for a moment just as a thought experiment let's we don't, it doesn't have to be RNA, but there's some in intracellular mechanism. Right. And, um, you just mentioned, uh, so this is going to be kind of a long winding, um, thought train here, but you, you had just mentioned, you know, about the receptors and how there is this, uh, enormously complex cascade from receptors to intracellular processes. And, um, that, that, anyway, that that's a long cascade, you also mentioned convolutional neural networks in a derisive way, playfully derisive way. Um, however, thinking about a read-write mechanism. So you probably know that, um, you know, given a large enough, uh, neural network, that they are a universal function approximators right. They can transform from input to output and the mathematically proven that the universal function approximators talking about, uh, the, the cascade from, uh, extra cellular membrane protein to intracellular happenings, uh, sounds eerily like a neural network kind of process because you have all these interacting, uh, sub components. Right? The other thing, um, that you, you mentioned that we just talked about briefly is that the majority product from DNA from genes is recursive is transcription factors, which feeds back onto the DNA, which regulates the protein synthesis. And the next protein is another transcription factor. That sounds eerily like recurrent neural networks, right. Feeding back. So, so these, these processes are, um, uh, one could make a very loose argument that they are, oh, what's the word? Not similar, not analog, uh, analogous in some fashion, Speaker 1 00:45:07 They are analogous. They clearly are. Those analogies are traced out in the, um, Peter Sterling and Simon Lockland book on which they argue that compute with chemistry. It's cheaper. I think they're spot on by that I would add 10 orders of magnitude cheaper. Right? I think they don't slam just on how much cheaper. 
Um, but they do these dynamic systems, uh, analogs. Now this same guy has on brand new blood boats. I just got, I just saw it yesterday or day before yesterday, and which he takes up that proof of a universal function approximately there. And it shows, first of all, that it's not really true. It's only true on the closed interval, not the open interval. So, but second of all, he, he revisits the arrogant. And so all the processes that you're describing are dynamic systems and he revisits why you can't really do computation with stored information with dynamic systems. Speaker 1 00:46:21 He has a much more sophisticated technique on this, uh, um, take on it anchored in a much deeper understanding of the foundations of theoretical, theoretical computer science. But my much simpler I can move to. I know he agrees with me, um, is like those proofs said, well, what do we mean by a universal, uh, by a function approximately a function approximator gets an input vector and it generates an output vector. Oh, okay. Uh, that's the way a mathematician thinks about it, but it shows how, not the way a computer scientist thinks about it. Um, because there's no memory in that. Right. And a computer scientist is very aware that in your average computation information comes in, some of which was acquired 50 years ago as we sit here talking, right. Uh, as I'm summoning up the words in the English language, right. I learned most of them, uh, when I was, uh, less than five years old. Right. It's, uh, they've been rattling around in there and now for 75 years, Speaker 2 00:47:29 However, now I'm forgetting many of them. Speaker 1 00:47:32 It doesn't get better. Let me dang it. I'm beginning to have noticeable word finding problems and someone whose verbal facility was all as one of their great strengths. That's very painful. And I'm sorry, I couldn't the other day I was explaining something and I couldn't someone, the word factorial. I was, I wanted to say the Sterling approximation. I couldn't say what it was an approximation to because I couldn't return to the word factorial. Oh, geez. Anyway, um, the point is that real-world computations require memory because you get one bit of information, you put it in memory, you get another bit, maybe 10 days later, maybe a year later, maybe 20 years later, you put that memory. And so if you look at most of what we do, it's putting together at a given moment information that was acquired at many different times in the past. Speaker 1 00:48:35 And that's what brains when you're talking about real. So I hope it's clear why this makes that proof totally irrelevant, right? Because that proof assumed that all that information had been very thoughtfully assembled for you by some genius and packaged into one humongous vector. And that we fed it to the computer in a generator in the neural net and then generated an output vector. Well, of course, that's where you have to think about the system and when it has no memory, but that's of course just why in throwing out the memory, they threw out the baby with the bath, right? Speaker 2 00:49:15 Well, the memory would be in the distributed connections, right? The distributed weight, Speaker 1 00:49:20 That's a finite state machine in the proper definition of a finite state machine, which not, it is not that it's finally a finite state machine is a Turing machine that cannot read what it has written. Okay. I asked the mathematically equivalent to the usual definition. 
Um, but it showed, but if you're thinking about these things, it shows you what the huge difference between, um, attorney machine and a financial state machine. Speaker 2 00:49:48 Um, it can only, it can only go from state to state with some transition rule and probability in, so, Speaker 1 00:49:54 And we hammer on this a bit. So if your iPhone or your mobile phone with its camera or a finite state machine, then it would have stored in its wiring diagram, every picture that you're ever going to take with that phone. I don't think so. You can take more different pictures with that phone. Then there are elementary particles in the knowable universe, right. That's my definition of a true infinity. Right. Okay. So we didn't put all the possible pictures in the wiring diagram of that farmer. Right. We put in something that would convert quantum catches to switch throwers to memory elements. And of course, then the phone immediately gets busy running some compression algorithm, um, because there's huge redundancy and the pixels, right? So, uh, uh, but a device without memory can't do any of that, right? No, no memory, no iPhone. Speaker 2 00:51:11 So just stepping back, because often on this podcast, we talk about the current successes of the deep learning, um, folks. And a lot of that is being applied to neuroscience to understand, uh, how brains function. And I know that you are aware of, um, the, that the line of deep learning wherein from like Alex Graves and so on where external memory has been supplied to the neural network. Um, but the book memory, uh, and the computational brain was actually written before the quote unquote deep learning revolution when, uh, deep learning started to dominate. So, um, for fear that, uh, this diatribe could take the rest of the time, keep it short. I'm curious about your thoughts on the, uh, success and the ubiquity now of, of deep learning, uh, and its application to understanding how neuroscience, how brains might function. Speaker 1 00:52:12 Well, trying to keep it short. You remember them, you don't Speaker 2 00:52:16 Have to keep it, sorry, but I, you know, we Speaker 1 00:52:18 All have us lie line from the graduate plastics. Well, my, my, uh, wisdom distilled down to a very few words would be adversarial images. Speaker 2 00:52:31 Sure. But what happens when that gets solved, but okay, well, well, Speaker 1 00:52:35 Yeah, Speaker 2 00:52:37 Well, I am definitely. Speaker 1 00:52:40 Yeah. So the last time I checked no solution wasn't insight and it reflects a deep truth about how those systems work. Right? Most people don't realize that when they, um, image recognitions. So system inside Elon Musk, Scott warns the rest of the system, that there's a stop sign there, right. That system, because it's a deep neural net. Uh, and because they don't know how to extract shape, what is really decided is look, these pixels are stopped signage. And this region of the pixels has the statistics of a stop sign, right? If you were to, well, is it octagonal, then that would respond. What's an octagon. Speaker 1 00:53:33 And you would, if you explain what an octagon is, the net, the net would say, look, I don't do shape. 
Uh, and at least I have noticed, and I think others will have noticed that the hype about we were going to have auto self-driving cars has died down very considerably because the adversarial images taught the malevolent smart, but multilevel and high school students of which there are two greatest supply, how to go out and hack every stop sign in town, right? Uh, with, uh, you get yourself a cran tape, you get a cran and you make various graffiti on the stop signs. And Elon Musk's cars will blow right through this nonsense. Uh, okay. So, uh, Hey guys, I think it's wonderful that you, uh, got the system to work the point where you could do, I'm not discounting this achievement, but when you start telling me, this is how the brain works, and that means the brain has no memory. I say, I don't think so because you can't do deep learning. I taught Jay McClellan years ago and he and I have been arguing ever since, Speaker 2 00:54:50 Oh, he's one of, he's one of the ones who's working on, building math and reasoning into, Speaker 1 00:54:55 I keep telling him KJ, forget math and reasoning. Look the ant and the be do dead reckoning. Why don't you try that? Uh, I want to see how dead reckoning works in a system that has no memory. I've been taunting. I've been trolling him with that challenge now for 20 years. And, uh, he doesn't bite. Uh, cause I think like anyone you look at dead reckoning and say, whoa, uh, we are going to have to store the current location. Right. I mean, there's no way of getting around it. Uh, and that's going to extend over hours. Speaker 2 00:55:28 Yeah. Well, and yet, okay. So over hours as a point, you might, um, bring up again here because I wanted to ask you, first of all, whether you're aware of, and then secondly, your thoughts on, um, there, there are, there have been, uh, deep learning networks paired with reinforcement learning techniques in the AI world that have used convolutional neural networks and used LSTs that have done path integration in little maze environments, virtual amaze environments. And that's not Speaker 1 00:56:00 Toy environments in which tile, the maze c'mon in order to make it fit into the reinforcement learning thing. They say, well, look, here's how we represent the maze. Right? You see this tie that we tile it, right? And then each tile knows, well, then it gets interesting. Uh, I think very few of them actually give the tiles metric information. That is, um, I know that the, a star algorithm, which is how the Google maps finds, uh, routes of course has metric information, right? It's all, it's all there and the cost function. Uh, so that's why Google send around cars with GPS is right there. Record extremely precise metric information all over, uh, all over the world. But in the ones that I've seen, that the reinforcement learning ones, they, uh, you know, reinforcement learning, they say, well, when you're in state one, you do you learn to do action one. Speaker 1 00:57:08 And when you're in state two, you learn to do action. To, first of all, this, they don't seem to realize that this is essentially identical to Clark calls theory. Uh, that's why when I say, Hey, I was listening to this nonsense 60 years ago by, you know, they don't put in any metric information. Come on, I'm a sailor, I'm a Navigant, I'm a back country skier. I ski alone in the back country. Hey, uh, you know, you don't tile the winter wilderness, you say, okay, I'm headed this way. The sun's over there. Uh, you work the way navigation has always worked. 
Speaker 2    00:57:55    However, the average human is easily lost in that scenario, whereas the average bee or ant isn't lost, right?

Speaker 1    00:58:03    Well, there are plenty of other people who aren't lost either. I'm by no means the only one who does back-country skiing, even alone. And of course Joshua Slocum sailed alone around the world using totally traditional navigational methods, boasting with some good reason about his accomplishments. The reason people don't know how to do this in the modern world is that they always live in cities, and they get from one place to another in taxi cabs and on subways, so they're never called upon to do it. But when I was in college, I worked one summer, for a month, until I turned them in to the Better Business Bureau, selling Collier's encyclopedias door to door under the tutelage of a man who had been doing it all his life. In those days you sold these in the newly built suburbs, which had all these twisty roads and cul-de-sacs.

Speaker 1    00:59:07    You went in, and around, and so on, and this guy was intensely proud of the fact that he always knew exactly how to get back out of there. We would be driving out, I'd be totally lost, and he'd say: which way is the entrance? No idea. He would point right at it. He knew which way the entrance was, within ten degrees, no matter how long we'd been in there. So it's partly a matter of talent; some people have more talent for it. But it's also a matter of habit, right? If you walk alone in strange foreign cities, maybe the first time you get seriously lost, but you learn something from it. Now, when you leave the hotel and you walk down and get to the first corner, you turn around. In fact, that's just what the bees do: when they leave, they turn around and look back.

Speaker 2    01:00:02    But the fact that we can basically unlearn that skill, and you'd have to learn it back, could argue multiple different things. The question I want to ask is whether you think there could be multiple memory mechanisms. Obviously there are multiple types of quote-unquote memory defined, and what kinds of memory we have is continuing to change. So, for example, something like episodic memory, where you can recall an event. And I know that you don't care about mental phenomena...

Speaker 1    01:00:38    On the contrary: episodic memory, Jonathan Crystal has demonstrated beautifully in rats, and of course Nicky Clayton and Tony Dickinson demonstrated it spectacularly in those food-caching birds, right?

Speaker 2    01:00:56    Right.

Speaker 1    01:00:57    I'm fully on board with episodic memory. But it's all numbers.

Speaker 2    01:01:01    Right. So I'm thinking more of...

Speaker 1    01:01:03    The time, the amount, the texture. What goes into an episode? Right: numbers.

Speaker 2    01:01:13    So to your mind there is one memory mechanism in all brains.

Speaker 1    01:01:19    That's what I think is by far the most likely. Of course I don't know, and of course it's conceivable that there are different learning mechanisms. But once you grasp how simple and how fundamental memory is, memory understood the way I understand it, which is just Shannon's: memory is a medium of communication.
It's machinery, the medium, the material stuff by which the past communicates with the future. Now, Shannon, in his opening paragraph, pointed out that, look, if communication is what you're about, and he might as well have stood up and said, "I'm a communication engineer, and they pay me here at Bell Labs for communication": if communication is what it's about, you don't give a shit about the content. That was a truly profound insight. And I don't see why it doesn't apply just as much to the brain as it does to computing.

Speaker 1    01:02:26    Right? When I go buy a new memory stick to store a terabyte of information, they don't ask me: well, are you going to use this for word processing, or spreadsheets, or MATLAB files? It's all just information when it comes to communication, and memory is communication. So again, I think DNA shows that evolution solved this problem once it found a really good solution, and that was probably two billion years ago. Likewise, animals have been navigating since the Precambrian; we can tell just from their tracks in the mud. And you can't navigate without a map, without a memory. So, in one of your other questions, you asked about skills, motor skills. If there is going to be a case where it's different, I would say that could easily be where. But I kind of doubt it, because, and I'm a student of the motor literature, I've written about it at some length occasionally, I think skills can be learned as parameter tuning. That is, you've got a system that's incredibly versatile. This stuff was all in my first book...

Speaker 2    01:03:55    The Organization of Learning?

Speaker 1    01:03:57    No, The Organization of Action; that's another book, ten years before that. And what I'm saying is not original with me; it's very much there in the literature that was in that book. It's there in, say, Marder's work with the stomatogastric ganglion, right? You've got this set of oscillators, a very simple circuit, but there are oscillators and some feedback. Feedback is important, don't get me wrong, but only in certain circumstances. And there are of course inputs, inhibitory inputs, and what have you. But the way the system is basically controlled is by signals that come down from higher in the nervous system and adjust the parameters. And with parameters we're back to numbers, right? What are parameters? They're numbers.
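A hedged sketch of "skills as parameter tuning", in the spirit of the stomatogastric-ganglion example (the circuit, the parameter names, and the numbers here are all my own illustrative choices, not a model from the literature): a fixed oscillator whose behavior changes only because descending signals write new numbers into a few parameters.

```python
# A fixed piece of 'circuitry' whose learned part is a handful of numbers.
import math

def oscillator(t, freq_hz=1.0, amplitude=1.0, phase=0.0):
    """The circuit never rewires; only its parameters change."""
    return amplitude * math.sin(2 * math.pi * freq_hz * t + phase)

# 'Learning' a faster, gentler rhythm means storing new parameter values,
# i.e. writing numbers, rather than forming associative bonds.
tuned = {"freq_hz": 2.5, "amplitude": 0.4, "phase": math.pi / 2}
print([round(oscillator(t / 10, **tuned), 3) for t in range(5)])
```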
Speaker 2    01:04:59    So I have a memory from, well, I don't know if it was three or four years ago, so my memory for times is not great. My wife and I held a chili cook-off at our house, and I won't tell you how my entry did; I didn't win the trophy. But there was a particular entry whose flavor was dominated by celery. I remember this, and I think it got last place: it was just overwhelming celery. The cook was a vegetarian, and kind of a holistic-medicine person. Anyway, I was talking to her about it the other day, and I can remember that she felt a little sad about this. But I have this episodic memory, and we don't need to go on about episodic memory. I have this experiential memory of what that was like: the flavor of the celery, and me not winning, and all of that, and I can picture our house and so on. So I guess the question is: does the intracellular numerical mechanism account for that type of experiential memory?

Speaker 1    01:06:11    Well, not without some spelling out of additional hypotheses. But I did address your question at considerable length in the near-final chapter of my 1990 book, the chapter entitled "The Unity of Remembered Experience." That book has been cited many thousands of times, but as near as I can tell, no one ever read that chapter, or if they did, they dismissed it. It addresses exactly the question you're posing: how do these diverse aspects of an experience, which extends over time and space and involves many different components, the taste of the celery and so on, all get knit together? And I argue there, first of all, that they're not knit together by associations, because that brings you to a combinatorial explosion; you'd get a net of innumerably many associations. The unity, the phenomenological unity, arises in the process of recollecting the experience, and you use time and place indices: on this story, all memories have a timestamp and a location stamp.

Speaker 1    01:07:29    And I review experimental evidence for that claim. That is now of course 30 years old, and more evidence of a somewhat similar nature has emerged in the intervening 30 years. But I spell out in some detail how, if the memories are all in separate bins, in separate neurons and so on, but they all have, as one of their fundamental components, a time and a location stamp, which plays the role of the operon in DNA, right, it's the address, then you can move among these memories in recollecting an experience. Because the episodes are always located at a time and in a place, they're located in space-time, you can retrieve the facts using those indices. As I read the hippocampus literature... well, actually, I come down on the side of the guy who died, Howard...

Speaker 2    01:08:47    Howard Eichenbaum.

Speaker 1    01:08:47    He was starting to argue this same sort of thing. And I wrote him; I said: hey, Howard, go read my chapter, this is what I was doing 30 years ago. And he wrote back and said: yeah, I've been reading it; you're right, you were way out ahead, you were in the future. And then he died.

Speaker 2    01:09:08    Totally aside, but this happens over and over, and you've been around long enough to have experienced it personally: new ideas that are not new ideas, because they've been written about and buried in chapters. How many times has this happened to you?

Speaker 1    01:09:24    Oh, I don't get uptight about it, for one thing because I'm a sinner myself: I have both been sinned against and sinned. And I wasn't banging on about priority; I just thought Howard and I could make common cause here, and I was deeply disappointed when he died.

Speaker 2    01:09:47    You've got to stay alive to keep doing science.

Speaker 1    01:09:50    Stay alive, yes; I think that's the general answer. I mean, take celery, for a specific example. It has a taste quality, but there's also color, right? And one thing we've known now for more than a century is that color is represented in our brains with three numbers. Recently the same story has emerged for both taste and odor: they're all vector representations. The dimensionality of the spaces is higher, but these days Doris Tsao and lots of other people are pushing vector representations really hard, and of course vectors are just strings of numbers. They represent faces that way, and Chuck Stevens has argued that the same story is true for odor, even in Drosophila. So again, the celery: it's all numbers. Tastes are represented in a four-dimensional space, colors in a three-dimensional space, faces in a 50-dimensional space. You get the idea.
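Putting the last two ideas together, here is a toy sketch (entirely my construction; the field names, stamps, and vectors are hypothetical) of episode components stored as vectors, strings of numbers, each carrying the time stamp and location stamp that recollection uses as retrieval indices, rather than associative links between the contents.

```python
# Episode components as time/place-stamped vector records.
from dataclasses import dataclass

@dataclass
class MemoryRecord:
    t: float                     # time stamp
    place: tuple[float, float]   # location stamp
    kind: str                    # which representational space the vector lives in
    vector: tuple                # e.g. 3 numbers for color, 4 for taste

store = [
    MemoryRecord(1800.0, (3.0, 4.0), "taste", (0.1, 0.9, 0.2, 0.0)),  # celery-ish
    MemoryRecord(1800.5, (3.0, 4.0), "color", (0.4, 0.8, 0.3)),
    MemoryRecord(9999.0, (7.0, 1.0), "taste", (0.8, 0.1, 0.1, 0.5)),  # other episode
]

def recollect(store, t, place, window=5.0):
    """Knit an episode together by querying the time/place indices,
    not by following associative links between contents."""
    return [(r.kind, r.vector) for r in store
            if abs(r.t - t) <= window and r.place == place]

print(recollect(store, t=1800.0, place=(3.0, 4.0)))
```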
Speaker 2    01:10:56    Two more questions, and then I'll let you go; I appreciate you hanging around with me. One: what is the role of synaptic plasticity?

Speaker 1    01:11:05    No one knows, least of all me.

Speaker 2    01:11:09    I assumed that you were going to say encoding, writing.

Speaker 1    01:11:14    I honestly have no idea, since I literally believe that an associative bond never formed in the brain of any animal, and since synaptic plasticity is transparently conceived of as an associative bond, right? I certainly don't think that's what synapses are. Could they play a role in the computations carried out on signals? Sure. It seems likely that they probably do. But do I have any good ideas what that role might be? No. Does anyone else? I don't know; I don't follow the literature very carefully. But everybody seems so hung up on the idea that they are associative bonds that I think until they dig themselves out of that conceptual hole, they're never going to find out what synapses are really about.

Speaker 2    01:12:05    What's keeping you up at night these days? What are you thinking hard about that's just beyond reach?

Speaker 1    01:12:17    Well: how to get the molecular biologists to realize that Fredrik Johansson has offered them the world on a plate.

Speaker 2    01:12:30    How's that fight going?

Speaker 1    01:12:32    Very slowly, and they're hung up for what, as best I can make out, are quasi-metaphysical reasons. So, for example, Tomás Ryan, who...

Speaker 2    01:12:44    He'll be on the next episode.

Speaker 1    01:12:47    So you can follow up on this; you can ask him what his problem is with Randy's story. Because he and I have been arguing in correspondence. I had never heard of him. I had given talks at MIT where I imagine he was present, and I had met Tonegawa, whose lab he came out of, a few times. But he emailed me the day his Science paper was embargoed, when he was still in Tonegawa's lab, the paper showing that they could make the synaptic plasticity go away and the information was still there. And the email said: I think you'll find this interesting. And I wrote back: yes, I find this very, very interesting indeed. So he and I agree that the information isn't stored in the synapses, in synaptic plasticity, and he admits that he does not have a story about how the information is stored.

Speaker 2    01:13:44    The engram.

Speaker 1    01:13:46    But he's focused on these cell assemblies; he's focused on sparse coding.
And I say: yeah, Tomás, that's all very interesting, but we both think the real name of the game is looking for the engram, and it isn't in those cell assemblies; your own work shows that it must be inside those cells. I can't get him to go there. And he's all hung up about information. He doesn't like the idea that we have to think in terms of Shannon information. He's read Dennett, and he believes that there's semantic information. I know Dan very well; we have a lengthy email correspondence in which I troll Dennett, saying: Dan, the fact is you have no idea what you mean by semantic information. And Dan more or less admits that that's true. I say: you know, Shannon information is the only game in town; semantic information is just philosophers' hand-waving.

Speaker 2    01:14:51    But the recent optogenetic work, where particular cells and networks of cells...

Speaker 1    01:14:58    ...can excite behavior that is informed by the stored information? They've shown that over and over again, and now people are showing it in the hippocampus, right?

Speaker 2    01:15:08    But that doesn't change your story. It doesn't change your view.

Speaker 1    01:15:11    Because it doesn't even address the question I'm posing, which is: all right, you excite those cells, and the output signals from those cells are informed by acquired information. Where is it? Did some neighboring cell say, "Oh, you need to know this"? Or, as your own experiments tend to show, did they get that information from inside themselves? Well, once you get inside a cell, it's all molecules, right? Very big, complicated molecules, and...

Speaker 2    01:15:52    Networks of molecules.

Speaker 1    01:15:54    There are of course structures built out of them, the ribosome, for example, but basically we're down to the molecular level of structure. And I keep saying: your own work shows that that's the case. I cannot persuade him, and it's just driving me nuts. I mean, that work is five or six years old now, and I thought: oh wow, this is the breakthrough. Now all those insanely ambitious molecular biologists will jump on this, and they'll trace that cascade. They'll use this ability to observe single molecules fluorescing inside individual cells; I mean, they've created the most astonishing tools. And once they get to the engram, they can slice and dice it with CRISPR and so on, and they can find out the code. It seems so clear to me. I cannot...

Speaker 2    01:16:55    You've learned multiple things throughout your career. Why don't you just go learn experimental molecular biology and start on it?

Speaker 1    01:17:07    Well, you know, it takes a long time to become a molecular biologist, and besides that, I would have to get a grant, and there is no way somebody with my background could get a grant for this effort. Although it seems to me obvious what the general strategy is, I don't mean to minimize how difficult it would be, or the kind of resources it would take; you need the kind of money that only a molecular biologist can get. People like me get the rounding error on the molecular biology grants. You're not going to pursue that cascade on $20,000 a year; it's going to be more like $5 million a year. And it needs to become competitive, which it always does in molecular biology.
That is, if one or two of the smartest young upstarts started doing this, then the rest of the field would say: oh shit, maybe I'm missing the train; maybe I'd better get on that train before it leaves the station. I'm trying to stir up that kind of anxiety, but so far I have not succeeded.

Speaker 2    01:18:21    Well, you've been driving your train for a long time along those very tracks, so this is a great place to leave it. I'll play that last little clip for Tomás when we talk, perhaps, and he can respond. Thank you for the very, very fun conversation. Keep up the good fight, Randy. I appreciate it.

Speaker 1    01:18:40    I enjoyed this. Thank you.

Speaker 2    01:18:47    Brain Inspired is a production of me and you. I don't do advertisements. You can support the show through Patreon for a trifling amount and get access to the full versions of all the episodes, plus bonus episodes that focus more on the cultural side but still have science. Go to braininspired.co and find the red Patreon button there. To get in touch with me, email [email protected]. The music you hear is by The New Year. Find [email protected]. Thank you for your support. See you next time.
