[00:00:02] Speaker A: Previous attempts to find a simplified representation of the brain, in that canonical neural computations project.
It's just, you know, showing the limitations of that attempt, that, you know, maybe the brain is just a lot more complicated than people had hoped for.
It makes me think that there is something important about, like, how the nervous system is, if you like, an extreme version of something that living cells are doing anyway. And it pushes me to be skeptical that you could replicate that kind of adaptive intelligence in a system that isn't made up of living cells.
[00:00:59] Speaker B: This is Brain Inspired.
Hey everyone, it's Paul. Today I was joined by Mazviita Chirimuuta, who is a neuroscientist turned philosopher of neuroscience, philosopher of science more broadly, and historian of neuroscience. She's been at the University of Pittsburgh for a few years, but is soon on her way to the University of Edinburgh, back closer to her original home. I found her through her recent work on a current dilemma: the growing divide between prediction and understanding. That is, we're building these deep learning models that produce impressively accurate predictions, while at the same time our understanding of how they compute those predictions is getting worse. She claims, for instance, that classic, simpler computational models in neuroscience, like receptive field models of early visual cortical neurons, in an important sense actually provide better understanding while also providing worse predictions. This is because on her account of understanding, our understanding is never really about what's truly going on, the true nature of whatever we're trying to understand; instead, it's always an abstraction, an idealization away from the truth. That disconnect from the truth does the useful work of making models more intelligible, more interpretable, and is what makes anything understandable at all. That's a simplification of her position, which she describes more clearly during our conversation. We also get into a bunch of her other work, like whether we should consider that brains at their heart are really carrying out computational functions, or whether that itself is a story we tell ourselves, when in reality the computational properties can't be separated from all the other physiology and metabolism that we often think of as noise, or as something to control away during our experiments. Go to braininspired.co to find the show notes with links to the relevant papers and some of the books that we mention. There's a handful of recent books on the topic of understanding. Plus, Mazviita a few years ago wrote a book all about color perception and what color really is; we only briefly mention it during our conversation, but I encourage you to check it out because it's a really interesting proposition. Brain Inspired Patreon supporters get the full version of this episode and all episodes on their private feeds, where they also get occasional completely separate bonus episodes. So consider supporting the show if that sounds appealing. You won't hear me reading advertisements here, because that business model fundamentally disagrees with me personally, and I want to believe it isn't necessary. You can go to braininspired.co and find the red Patreon button there if you have similar beliefs. Thanks everyone for listening, and here's our conversation.
Mazviita, the Cathedral of Learning, that's where your office is, right?
[00:04:22] Speaker A: That's right.
[00:04:22] Speaker B: Do you have a window in your office?
[00:04:24] Speaker A: Yes.
[00:04:25] Speaker B: And do you know what direction that window faces?
[00:04:29] Speaker A: It faces towards downtown, which I suppose is west.
[00:04:34] Speaker B: Okay. So you're on the opposite side from the windows where you would be looking over the Mellon Institute, where I worked in the basement for years. The basement would flood, which was fun, because we'd have to rescue all the animals, you know, and save them and stuff. So anyway, you work in a beautiful building, the Cathedral of Learning at the University of Pittsburgh, for now. Although you just told me you got a job; you'll be transferring in August to the University of Edinburgh. So congrats on the new job.
[00:05:06] Speaker A: And the Mellon Institute is also a beautiful building, and it's famous from the Batman movies.
[00:05:12] Speaker B: Oh, is that why it's famous? So it was famous for its large columns when I was there, and then they filmed Batman there. And I guess it's famous for Batman.
[00:05:21] Speaker A: Right? Right.
[00:05:21] Speaker B: Yeah.
We could talk Pittsburgh for a long time, but.
So I've really enjoyed reading your work. And I realize it's partly because I have a philosophical bent, but also because it's philosophy that's very grounded in what I have done in the past. And it made me realize that philosophy by itself is one thing, but philosophy with knowledge of the topics being discussed, because you use examples from computational neuroscience in your philosophical discussions, is much more enjoyable; you can just grasp onto it. So let me back up. You were, and still are, a neuroscientist in some sense, and made the switch to philosophy 10-ish years ago. Is that right?
[00:06:13] Speaker A: Right.
A bit longer, actually. So I got my first postdoc in philosophy back in 2005. I started 15 years ago now. But also my first degree was in philosophy. I did philosophy and psychology as my first degree.
[00:06:31] Speaker B: I see.
[00:06:32] Speaker A: So I had, like, a long-standing interest, dare I say passion, for philosophy, sort of predating my switch back, as opposed to.
[00:06:41] Speaker B: I see, so you just dipped your toes in neuroscience for many years, actually. So your book Outside Color, which is about what color really is, combines the physical attributes of color with our perception of color. We're actually not going to talk about it today, but I recommend it. It kind of followed your neuroscience in many respects. And everything we'll talk about today is your expanding reach into various areas of the philosophy of science and philosophy of neuroscience. Can you say the philosophy of neuroscience?
[00:07:17] Speaker A: Yes, sure. Let's call it the philosophy of neuroscience.
[00:07:20] Speaker B: So, okay, so 2005 is when you started transitioning to philosophy. So how was that transition?
[00:07:28] Speaker A: Well, how do you mean? How was it?
[00:07:31] Speaker B: Well, I have the feeling that, let me rephrase this. Was it a joyful transition? Is philosophy liberating to you? Or is it just as frustrating as the science and just as interesting?
What are the differences?
[00:07:49] Speaker A: Right, right. So I would say that definitely for a long time, and still a bit today, I missed the laboratory work, that kind of investigation where you're just doing experiments and finding out what's going on, really in touch with concrete things, and also working in a collaborative way. But I didn't doubt my decision to go into philosophy, because, for reasons about how I am as a kind of academic, I have broad interests. I don't like to be over-specialized. And to be a successful scientist today, you have to be willing to be very focused and specialized. You have one kind of laboratory which really digs into the nitty gritty of one line of research. Whereas in philosophy, I can not only do philosophy of neuroscience, but philosophy of science more generally, working in areas around perception, which goes a bit into psychology as well. So just because I'm the kind of person that doesn't like to specialize, philosophy suits me a lot better. And I think this is a pretty fixed trait about me. When I was at high school, so in Britain, at the age of 16, you're supposed to decide whether you're a scientist or a humanities person. And I refused to decide. I did a mixture of A levels, you know, the high school exams, from humanities and science, and then for my bachelor's degree I chose philosophy and psychology because, again, you could straddle the humanities and the sciences. And still, doing philosophy of science, especially history and philosophy of science, again it allows you.
[00:09:47] Speaker B: To straddle those different areas. You just refuse to fit in.
[00:09:51] Speaker A: Right.
[00:09:53] Speaker B: So in experimental science, as you know, you seek to answer a single sort of question. And then let's say you do some experiments and you answer that question, or maybe you don't; more likely you actually don't answer that question. But either way, the end result of that process is creating 50 more questions. And then you can kind of choose from those and move on. How do you decide what to explore next? Is it a similar sort of process?
[00:10:24] Speaker A: I would say so. I find that every paper I write has a concluding section which is another paper that needs to be written. And often what happens is the first draft of a paper will have a final section which is a bit less coherent, but it's really like: okay, this is really what this current paper is telling us, but it needs to be much expanded upon. And then in the version that gets published, I'll just say, okay, well, there's a question for another day. But really it's bugging me that there's this other thing that then has to be developed further. So I would say my research projects now grow fairly organically: I'll start on a thread and it will lead me to another thing and another thing.
You know, in my job at Pittsburgh, I've had the luxury of not having to apply for grants. And so I could just, you know, work in a way that would follow the interests, as opposed to having to say in advance what the research project would be and then follow things according to that with set goals in advance.
[00:11:38] Speaker B: That really is very similar to experimental research. Because you start with your question: I'm going to figure out, oh, I don't know, V1, let's say. Right. And then you realize that you don't actually understand the neurons and how they're working. Do I need to understand that? Everything gets shifted and shifted and shifted until finally you're a heap of tears.
But with some published papers along the way.
[00:12:05] Speaker A: Right?
[00:12:06] Speaker B: Yeah. So you've gone down the philosophy track. But that's not all. I mean, your writing is steeped in history as well.
And I know that history has really shaped your views about all of this. So how has learning history in addition to the philosophy shaped your views?
[00:12:28] Speaker A: Yeah, so with my job in Pittsburgh, because we're a history and philosophy of science department, it means that we have to teach history classes periodically. So I found that I really enjoyed reading into historical episodes in neuroscience. And then, as a philosopher, what I found about looking at historical cases is that the work is in the past. Of course there's always more that you can read about the past, but the science itself isn't a moving target in the way that contemporary science is. So contemporary neuroscience is moving so fast, there are so many things coming out all the time, it's just crazy. Since the time that I was studying neuroscience as a PhD student, at the turn of the millennium, ages ago, it is now theoretically and methodologically such a different field. Whereas if I take an episode from the neuroscience of the past, at least that is something which is not still undergoing change. Of course my ideas about it will change as I investigate it further and learn more about it, but at least the science itself is not a moving target in the same way. So for my current book project, I just recently got a contract with MIT Press.
The working title is How to Simplify the Brain, and it's about abstraction and idealization in theoretical neuroscience.
But a few of the case studies will be on what I think of as the first cohesive doctrine in theoretical neuroscience, which was the reflex theory of the late 19th and early 20th century.
And just digging into that, looking at the ways that scientists back then attempted to simplify the brain by thinking of it as just operating by this set of concatenated reflexes. In retrospect, we can look at that and think, well, that's rather oversimplified. But you can also, as a historian, think about that science from the researchers' perspective and look at why it made sense for them to make those assumptions and follow the practices that they did. And hopefully, you know, looking at that historical episode of a science which is now obsolete, there'll be some lessons for today about the motivations of simplification, and also perhaps the limitations.
[00:15:08] Speaker B: That's hard, that's a hard job to apply that lesson to whatever's going on currently. We're always kind of blind to what's happening currently.
Well, also back then, and you may or may not know this, but back then they knew less about brains, so they actually had less to go on. Right. And that's probably a lot of what your book is about. Right?
I have an interesting subtitle for your book that I'll get to in a little bit here. I'll try to sell it to you. We'll see.
I'm old now. I don't know your age, and you don't have to share it, but I know you have a couple of children. So 2005 is when you sort of switched to philosophy; we can take that as the point when you started valuing history. And, you know, I realize I value history a lot more now than I used to. For instance, when I started out in neuroscience, history just seemed to get in the way of what I needed to learn. Why do we not value history much when we're younger or less experienced? Is there just too much other stuff to learn, so it's hard to contextualize any of it at the time and really appreciate it? Or are we just young and foolish, or what?
[00:16:20] Speaker A: Oh, I don't know, because I don't recall ever not being interested in history. It wasn't a subject that I ever studied full time, but I think I was always interested.
[00:16:33] Speaker B: But did you always see, appreciate, the relevance? Because it really has shaped your view. And I want to ask you later about how philosophy has shaped your view of neuroscience, because that changes things. So, yeah, did I.
[00:16:47] Speaker A: Always see the relevance of it? I think so, even as a neuroscientist. So my advisor when I was doing my PhD was Dr. David Tolhurst of the Cambridge Department of Physiology, as it was back then. And he always encouraged me to read papers from the 60s, and he talked about work that was done in psychophysics back then and how it shaped the study of the physiology of the early visual system. So there definitely was a scientist who fostered in his own students the idea that you should know how the field evolved in order to make sense of what's going on in the neurophysiology of V1 today. And I don't remember ever being skeptical about that. So I think, yeah, we were just on the same page, really.
[00:17:41] Speaker B: As opposed to thinking, this is all quite useless, what's this old codger getting at? Yeah, my postdoctoral advisor, Jeff Schall, was also really good at that, pointing in the right ways to the right things that make you appreciate it in light of what you're doing. So I think it's a valuable exercise.
Okay, well, understanding: this is the first thing that I want to discuss.
And we won't even be able to get to all of what your current work focuses on here today. But understanding as a subject of philosophy of science has really exploded recently. And that may be partly because of the era that we're in, in terms of building these deep learning models that predict really well but that we don't understand. Before we talk about that specific predicament, let's talk about understanding itself, because it sort of underlies the rest of our discussion here. Is understanding the central epistemic aim of science, and what does that mean?
[00:18:47] Speaker A: Yeah, so the word epistemic just means related to knowledge. So an epistemic aim of science would be an aim of science in relation to the knowledge generation process. So another epistemic aim could be truth.
People like Angela Potochnik, a philosopher of science at the University of Cincinnati: she has written a book arguing that the central epistemic aim of science is understanding. One of the reasons she gives is that science relies so heavily on idealizations, representations of nature which we know from the outset include falsehoods, false descriptions of how things are. So if we say the epistemic aim of science is simply truth, then how do we reconcile that with scientists using idealization so often?
So I think that's a really important point, and it certainly influenced how I'm thinking about these things. But another line of influence on this project and the work I've done on understanding was actually a historian of science, Peter Dear. So one of the questions that bothers both historians of science and philosophers of science is: what is science? Right. So a lot of the philosophy of science, especially in the 20th century, was working at trying to figure out the essence of science, what is the scientific method? And philosophers went round and round with.
[00:20:26] Speaker B: Different proposals and never, meanwhile, science was actually being done in the background.
[00:20:31] Speaker A: Right, sure, sure. But the question, what is science? It invites an exploration of whether there's one methodology which ties all this together. Whereas historians are thinking in terms of: all right, what, of all the historical records and things that we can investigate as a historian, actually delimits the scope of the history of science? So a question is: is science only what scientists do? And then we come into this thing that the word scientist is actually a new word; it comes in in the 19th century. Before that, no one called themselves a scientist. But we think of the history of science as going way back, longer than that.
[00:21:19] Speaker B: Those were natural philosophers, Right?
[00:21:21] Speaker A: Right, right. And then also cross-culturally. I mean, is it a Eurocentric bias to think of science as something that started in Western Europe and then spread around after that? Or should we look at research aimed at knowledge of nature that was done all across the world, by people everywhere, and also include that within the scope of the history of science? So, yeah, the question what is science? is something historians also need to think about. So there's this paper by Peter Dear published in 2005 called "What Is the History of Science the History Of?" And this influenced me a lot, because the idea that he comes up with is that what we call modern science, since the 17th century, is this marriage of natural philosophy, a kind of reflection on nature which aims purely at understanding nature for its own sake, with, if you like, an engineering discipline, something aiming at control of nature, instrumentality. So if what we call science is this interplay of the aim of understanding with the aim of control, then, yeah, it makes sense to say that the central epistemic aim is understanding. And then it really resonated with the issues that are coming up in neuroscience with the development of neural networks today: if this marriage of natural philosophy and instrumentality only happened a few hundred years ago, and that's what is characteristic of modern science, then there could also be a divorce. It doesn't mean that those two tasks, of trying to understand nature and also using that understanding to try and control nature, always have to be together. Maybe they'll pull apart. And so I was looking at research in neuroscience in which the power of the research to give you understanding and the power of the research to allow you to predict, and therefore control, the brain seem to be coming apart.
[00:23:36] Speaker B: Oh, good. This is something that I want to ask you about in a few minutes. But you have a particular view on understanding called non-factive understanding, and it made me think of a famous quote about modeling, which.
This is what I'm going to pitch to you as a subtitle or maybe a chapter or something in your book, in your upcoming book. You want to hear it?
[00:24:00] Speaker A: Yeah.
[00:24:01] Speaker B: All understanding is wrong, but some is useful.
[00:24:05] Speaker A: Oh, okay. So that sounds, that sounds up my street.
[00:24:11] Speaker B: Yeah, yeah, go ahead.
[00:24:13] Speaker A: So is this a phrase you've heard elsewhere or you just came up with this in response?
[00:24:18] Speaker B: No, this is a famous modeling quote by the statistician George Box: "All models are wrong, but some are useful" is sort of the go-to version. And my take on non-factive understanding is that that's a sort of pithy way to summarize it. But so, tell us: what is non-factive understanding, and how does it differ from some of the other accounts that are floating around out there these days?
[00:24:54] Speaker A: Right, right, yeah. So the factive approach to understanding just says that we understand a phenomenon in nature by way of learning the truth about it, having an explanation of it which is true. So it puts truth as a condition on having genuine understanding. Whereas the non-factive approach really takes on board that the most successful and most useful models in science tend to be highly abstract, much more simple than the phenomena in nature themselves, and idealized, and that they include false assumptions, and yet we still think of them as offering understanding. So it just does not put truth of the model as a condition on its being able to give you understanding.
[00:25:44] Speaker B: So it doesn't exclude truth from an account of understanding; it just reduces the necessity of truth in an account of understanding.
[00:25:57] Speaker A: Right. And one of the features of the non-factive approach is that it takes on board that all scientists are human beings, finite beings; there's only so much cognitive resource that any one of us can find in any one of our brains. For now, at least; we're not dealing with future enhanced scientists, but for now. And so scientific understanding is a compromise between the overwhelming complexity of the things that are there in nature, especially biological systems, and the human ability to think through a complex system. And so you can think of non-factive understanding as saying that understanding occurs when you hit the right sweet spot between the really overwhelming complexity out there in nature and the human mind's ability to grasp the interconnections that are there, in such a way that it allows some kind of ability to deal with that phenomenon, you know, some pragmatic aim that a human would use their science for.
[00:27:15] Speaker B: And you give examples of how. Well, you describe how this account of understanding, non-factive understanding in particular, is a benefit to neuroscience. So how does it benefit neuroscience?
[00:27:30] Speaker A: How does it benefit neuroscience? Well, I pitch this as observing that there's a big debate right now amongst neuroscientists about whether the most advanced, state-of-the-art models that we're using to model the brain, especially ones using deep learning, so deep neural networks, recurrent neural networks, whether those are interpretable; that's the word that often comes up. Here, the philosophical literature on understanding is helpful. And I've referred in particular to the work of another philosopher of science, Henk de Regt, because philosophers have been thinking about this question of, okay, how does a model give you understanding? And he uses the word intelligibility for the relevant property of a model; it's a bit like how computer scientists raise the question of interpretability. So it's a question like: you've built this model, it's really complicated.
Is it intelligible to you or not?
The link to understanding is that if you have an intelligible model, that affords understanding of the phenomenon in nature. So where I use the word intelligibility, following de Regt, it's a property of the model or theory itself, whereas understanding is directed at the thing in nature. Yeah. So this question about intelligibility also has precursors in the history of science, because one realm where it was debated a lot amongst scientists was quantum physics, quantum mechanics. You had Schrödinger and Heisenberg, and physicists were talking about, well, wave mechanics, that's kind of intelligible, but matrix mechanics, that's not intelligible. Where possible, we prefer an intelligible theory. And then thinking about: what is it that makes one theory intelligible and the other not?
So I think all of these previous instances of these discussions are relevant to neuroscience today, and also thinking through philosophically: what is it about a model that makes it intelligible to the scientists? As the science progresses and the scientists themselves develop their appreciation for different modeling techniques, do the models become more intelligible? And certainly as de Regt conceives of intelligibility, it's a shifting thing; it's not something fixed for all time. To different scientists, one model might be intelligible and to others not, because of their background training, background assumptions.
[00:30:26] Speaker B: Right, yeah. That's also in de Regt.
Intelligibility has a lot to do with the skill of the researcher.
[00:30:34] Speaker A: Right, yeah, yeah.
[00:30:37] Speaker B: Just as an aside: his book Understanding Scientific Understanding, if you had to choose one of the recent handful that have come out on understanding, is that the one you would point to?
[00:30:53] Speaker A: No, I wouldn't want to pick just one.
[00:30:56] Speaker B: I can't force you to do anything.
[00:31:00] Speaker A: Yeah, no, no, no. I would say that's particularly useful for looking at these, like I say, case studies from the past.
And because his has a separate treatment of intelligibility, that's helpful if you're thinking about interpretability. The Potochnik book is nice because it has a lot of discussion about idealization and how that links to understanding.
[00:31:30] Speaker B: So those are kind of complementary then.
[00:31:34] Speaker A: Right.
[00:31:35] Speaker B: So, to understand these deep learning models, and I promise we're going to talk about them more in a second here, but the idea is you have these super complicated models, and one benefit of non-factive understanding is that you give up on the idea of understanding them in all their glorious detail. You realize we have to abstract, we have to idealize in order to understand, and you have to be okay with that. And then there's the process of actually having to do it, which is another hard thing.
[00:32:07] Speaker A: Right, right, yeah. So when I'm talking about understanding there, again, I'm talking about our understanding of a brain area. So let's talk about the early visual system, V1.
So what I'm arguing in my work on understanding in neuroscience is that we have models which afford us understanding of how simple cells work. But the thing is, to be simple enough to be intelligible to the scientists, they have to be very abstract, very idealized, so very simplified compared to all of the behavior that a V1 neuron could produce, especially if it's stimulated with the full range of stimuli that it would encounter in a natural environment.
So in order to say that we understand V1 using those classic, intelligible models, we have to give up on the idea that those models are telling us the truth about how those cells work.
One reason to think that they're not giving us the truth about how those cells work is that the current state-of-the-art models, which are much more predictively accurate of those cells' responses to a vastly wider range of stimuli, do not share the assumptions of the classic models. They don't assume that those cells are inherently performing linear computations, and that's probably why they do much better. But the mathematics is opaque. So you're giving up on the intelligibility of the models.
[00:33:54] Speaker B: So there's this fundamental. Okay, so, yeah, we'll bring in the deep learning models now. So there's this fundamental trade-off between prediction and understanding in that sense. And you suggest that these more classic models, like a simple cell model in V1, actually may provide better understanding even though they're less accurate and provide worse predictions.
Whereas the opposite would be true for the modern deep learning models, which are maybe more true to the way our brains are processing. And I just said processing, but that could be a dangerous word: the way our brains are functioning, whatever word you want to insert. But they drive us further from understanding.
So as examples you use the early visual system and models of the canonical computations that have been proposed, some of the earlier, pre-deep-learning models, you know.
[00:34:54] Speaker A: Right.
[00:34:54] Speaker B: And you also use examples from the motor system.
[00:34:54] Speaker A: Yeah, yeah. So the interest in the V1 case actually stems back to things that I was thinking about, and that were bothering me, when I was a graduate student in neuroscience. My task back then was to do psychophysics of contrast discrimination. So my experimental work was the psychophysics, and then we were modeling the psychophysical data in terms of models based on the Gabor model of V1 neurons. So it would be.
And adding to that David Heeger's normalization step. So these models did tend to predict the psychophysical data. And also, when you're looking at predictions of the neural data themselves, they do well when the stimuli that you're using for contrast discrimination are Gabor patches, or maybe a couple of Gabor patches overlaid one on another.
But one of the tasks of my project was to look at, you know, the psychophysics of contrast discrimination when you're using natural images. Oh, no.
And seeing whether the models, which were still essentially Gabor models plus normalization, would still work.
[00:36:30] Speaker B: So just to paint the picture: the difference between someone looking at a natural scene, where the entire screen is filled with a picture, versus looking at a screen that's all gray except for a small patch with these Gabor patches, these black-and-white striped gratings.
Sorry, I just want to make sure.
[00:36:55] Speaker A: Yeah, yeah, sure. Thanks for chipping in there. Yeah. So one of the things that sort of bugged me back then was, it just struck me: the brain is this very, very complicated organ. How could it possibly be that a model as simple as the Gabor model with the normalization stage added, which is not mathematically a particularly complex model, could really be capturing, inherently, what these cells are doing? But I was at the same time impressed by how far they could go for most of the psychophysical data that we were collecting with these Gabor patches and black-and-white stripy patterns. And I was also impressed by my supervisor David Tolhurst's real conviction that, as scientists, what we need to do is see how far we can get with the simplest models that we can. He didn't want to give up and complexify things more if we could just figure things out with this really quite simple theory, really quite simple model of what's going on in V1.
And so for a long time I was a sort of believer in the project of finding these canonical neural computations, finding fairly simple computational templates which get repeated in different brain areas and afford a theory of the operation. But then, with the advent of deep learning, a lot of the tasks that were not successfully modeled with the classic approach became predictively tractable with much more mathematically complex models. It made me start thinking, well, maybe my first instinct as a graduate student was right: those classic models are just way underestimating what's going on in the brain. And they work as well as they do because you've basically controlled the stimulation that you're giving the brain so much, showing it mostly a gray screen and then this little stripy patch. And it's really the simplicity that you're introducing as an experimenter, by having such a controlled visual input, that is the reason why those particular data are predictable using the simple models. But that's not telling you what the brain would do beyond that range of conditions. It's not revealing some inherent simplicity that's there in the brain.
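To make the kind of classic model being discussed concrete, here is a minimal sketch in Python of a Gabor receptive field with a Heeger-style divisive normalization stage. The parameter values and the half-squaring nonlinearity are illustrative assumptions, not the exact model from the lab's work.

```python
import numpy as np

def gabor_rf(size=32, wavelength=8.0, theta=0.0, sigma=4.0, phase=0.0):
    """2-D Gabor receptive field: a sinusoidal grating under a
    Gaussian envelope, the classic model of a V1 simple cell."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    x_rot = x * np.cos(theta) + y * np.sin(theta)   # preferred orientation
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_rot / wavelength + phase)
    return envelope * carrier

def simple_cell_response(image, rf, pool_drives, sigma_n=1.0):
    """Linear filtering, half-squaring, then division by the pooled
    activity of neighbouring units (Heeger-style normalization)."""
    drive = max(np.sum(rf * image), 0.0) ** 2       # linear stage + rectify
    pool = np.sum(np.maximum(pool_drives, 0.0) ** 2)
    return drive / (sigma_n**2 + pool)              # divisive normalization
```

The point in the conversation holds for this sketch: every stage is an equation you could write on a blackboard, which is exactly what makes the model intelligible, and exactly what limits its predictive range.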
[00:39:40] Speaker B: Yeah. So then we have these differing pictures, right? These very simple models that you used, and then these really complex models. And there's a divide, in that the very simple models might provide great understanding but don't predict well, at least in the broader scheme, and the deep learning models are at the opposite end of that spectrum. And I'm going to be talking with Jim DiCarlo soon, and you're very familiar with his work, because you use his line of work often in your own.
[00:40:13] Speaker A: Yeah. So I would just say I don't use it as one of the main case studies, but.
[00:40:18] Speaker B: Yeah, well, right. So I was going to say: in your work you sort of make an exception for Jim DiCarlo's lab's work. So the deep learning models that you talk about as being further from understanding, and as increasing the divide between prediction and understanding, don't take into account anatomical, structural features. Whereas the models that people like Jim DiCarlo use to model the ventral visual stream, or other brain regions that have these hierarchies, are at least somewhat constrained by what we know about the anatomy and structure of brains. You're careful to say that that might be somewhat of an exception to this. And we don't need to get down and dirty into his models, but how are those models unique in the prediction versus understanding dilemma?
[00:41:14] Speaker A: Right. Yeah. So I wouldn't say that they're unique.
So Josh Glaser and his colleagues have a nice paper on the different roles of machine learning in neuroscience. And one out of the four that they list is using a deep network as a sort of representation of the anatomy of different brain areas, albeit a very schematic and abstract one, but still a representation of some of the architecture of the brain. And a second role is using a deep network as a decoder. Right. So my prediction versus understanding trade-off applies to that use of deep networks for decoding, where what you're trying to do is build an encoding model of how the state of the world maps to spikes emitted by particular neurons, and then potentially using that to read off from spike trains what they signify for the brain. So that's really a quantitative problem, where what you're trying to figure out is the computation being performed by a particular neuron, say a V1 neuron or a motor cortex neuron, and you're using the model to represent the computation done by that neuron. So in the classic Gabor models that I was talking about, all of the mathematics is just handwritten; it's stuff that a person could write down on a blackboard. And it's making this assumption that there's a linear computation going on at the heart of those systems. In contrast, if you're using machine learning for that decoding task, you're using the capacity of the artificial network to be a universal function approximator. If you just give it enough data, it will learn a mapping between the input and the output. But the mathematics that's going on in the trained network is opaque; it's embedded in the network. So you've got your trained network, but you couldn't just write down what the equation is, what the function is that maps inputs to outputs in that model. So the point of the trade-off is just saying that, okay, if the thing that you're trying to understand is the computation done by the neurons, taking the deep learning approach is not going to give you understanding, because the trained network is not intelligible.
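A toy contrast between the two styles being described, using entirely synthetic data by assumption: a hand-written encoding model whose equation is explicit on the page, versus a network fit as a universal function approximator, whose learned mapping lives opaquely in its weights. The architecture and hyperparameters here are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
stimuli = rng.normal(size=(1000, 20))       # fake stimulus features

def classic_model(s, w):
    """Blackboard-style encoding model: linear filter plus rectification."""
    return np.maximum(s @ w, 0.0)

w_true = rng.normal(size=20)
rates = classic_model(stimuli, w_true) + rng.normal(scale=0.1, size=1000)

# The network can learn the same input-output mapping from data alone,
# but the fitted function cannot be read off as an equation afterwards.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(stimuli, rates)
print("network fit R^2:", round(net.score(stimuli, rates), 3))
```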
So in the DiCarlo lab, what I see going on is that there's stuff on the quantitative side. Some of what they're doing is building encoding models of ventral stream neurons, but they're also doing some qualitative work, which is building networks that map onto some of the anatomical features of brain areas. And also, when you see those receptive field maps that they're giving you of V4 neurons, that's giving you a qualitative sense of what those neurons might be responding to. But on the quantitative side, what is the computation being performed by those neurons? It's not shedding light on that. So I would say the trade-off applies to some of what goes on there, but not to everything that is going on in that lab.
[00:44:52] Speaker B: There's some mix in there. Yeah, it's interesting.
Another reason I brought up Jim is because he has his own take on understanding, and I'm going to ask him about this, but he sort of redefines it or at least operationalizes it with respect to his models.
I don't know if you want to summarize his view or I could, but I want to know your take on his operationalization of understanding, this control version of understanding, which you mentioned earlier. Maybe you can just summarize it again.
[00:45:28] Speaker A: Yeah. So the recent paper, I believe it was in Science; Bashivan was the first author. It's showing how you can train a network to find a stimulus that will drive ventral stream neurons.
[00:45:45] Speaker B: Yeah.
[00:45:46] Speaker A: Whole populations, as hard as you can. And that's a really impressive feat. So nothing I'm going to say is meant to detract from the achievement there. Yeah, it's an impressive study. Definitely what it's showing you is that they found a way, utilizing deep learning, to really get a control on neural activity that people haven't been able to get through other kinds of technologies. Certainly not by just eyeballing neurophysiological results and thinking, yeah, maybe a hand will drive this neuron hard. They've really got a very fine engineering handle on how to drive these neurons in ways that people haven't had before.
[00:46:34] Speaker B: And I'll just say that the images the network generates to drive the neurons, to control the way the neurons respond, really don't look natural at all. It's very specific lines and segments and shapes that you don't recognize as natural at all. These are very unnatural images, and yet they drive the neurons in a very particular way, more than they would ever be driven by a natural image, for instance.
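The basic logic of this stimulus-synthesis approach can be sketched as gradient ascent on the pixels of an image to maximize one model unit's response. The "neuron" below is a toy quadratic unit, not a fitted ventral-stream model, so everything here is an illustrative assumption rather than the study's actual method.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(16, 16))             # toy receptive field

def response(img):
    return np.sum(W * img) ** 2           # the unit's response to an image

def response_grad(img):
    return 2.0 * np.sum(W * img) * W      # analytic gradient of the response

img = rng.normal(scale=0.01, size=(16, 16))   # start from near-blank noise
for _ in range(200):
    img += 0.01 * response_grad(img)      # climb the response surface
    img = np.clip(img, -1.0, 1.0)         # keep pixels in a valid range
print("synthesized-stimulus response:", round(response(img), 2))
```

As in the study being discussed, the optimized input ends up matched to the unit's idiosyncrasies rather than to anything a human would guess looks natural.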
[00:47:05] Speaker A: Yeah, yeah. So that in itself is interesting: there's something the network is doing which is not where a human being would intuitively go when trying to think up what stimulus would drive these neurons. Yeah. But talking about the kind of control that this deep learning affords as just another kind of understanding; I mean, at the end of the day, you can say it's a semantic issue, and if you want to call that understanding, we can just redefine understanding.
But this just seems to be a redefinition to me. It doesn't seem to have any relation to our initial notion of understanding. So one core thing about understanding, what makes understanding the thing that we value as scientists and as people trying to understand nature, is that we would say there's understanding there when you've taken phenomena, things that occur in nature which on the surface are really complicated, where you don't know what the pattern is or how things relate to each other, and shown that there's an underlying simplicity. That underlying simplicity might be one law of nature which can show you why all of those different phenomena were to be expected, or show you what the underlying pattern is which is giving you all those different phenomena.
You're not getting that here. So if I wanted to hang my hat on the core thing about understanding, such that if you don't have it, you can't say that there's understanding, it would be something like that: showing the simplicity which is tying together all of those different complex surface features.
You know, you could compare that to dimensionality reduction. This links to this idea of understanding being about cutting complexity down into humanly manageable portions. Dimensionality reduction has obviously become so important in neuroscience with multi-unit recordings, because if you're recording 100 neurons at a time, you have a very, very high dimensional data set, and a human can't visualize a 100-dimensional space. But if, in virtue of how neural responses are correlated, you can bring it down to about 10 dimensions, you can start to get more intuitions of what's going on; and certainly three dimensions is optimal for us. So you could, like.
Yeah. So I think that unless DiCarlo can explain how something like that is going on, some process of cutting down apparent complexity to principles or an underlying pattern, or some way of showing how things actually fit into an intelligible order, then I don't think you can say that there's understanding.
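A minimal illustration of the dimensionality-reduction point above: simulated responses of 100 neurons that actually vary along only 10 underlying factors, recovered with PCA. The data are synthetic, so the 10-dimensional structure is built in by assumption.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
latents = rng.normal(size=(5000, 10))     # 10 hidden factors over time
mixing = rng.normal(size=(10, 100))       # each neuron mixes the factors
activity = latents @ mixing + rng.normal(scale=0.5, size=(5000, 100))

pca = PCA(n_components=10)
trajectory = pca.fit_transform(activity)  # 5000 x 10, human-graspable
print("variance explained by 10 PCs:",
      round(pca.explained_variance_ratio_.sum(), 2))
```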
[00:50:38] Speaker B: He does use the word understanding in quotes, and he fully recognizes, and always seems to mention, that it's a contentious issue with his colleagues. But it made me wonder. So prediction is straightforward: something is accurate or inaccurate, and if it's inaccurate, you can quantify how inaccurate it is. But understanding is a whole different thing. Somewhere between prediction and understanding is explanation, and we won't get into that; we can just focus on understanding. But it is not particularly well defined. Well, I wouldn't say it's not particularly well defined; what I might say is that maybe a typology of understanding is needed. And the reason I thought of this is, well, maybe what Jim is talking about we just classify as control-type understanding. So it's a different thing altogether, but it's among the typology of understanding. Because Henk de Regt has his version, and there are other versions that I won't go through, and you have your non-factive understanding, and they're all related, sort of in a big family. But I wonder if it's a problem at all that, with this prediction-understanding divide, prediction is so easy to quantify and understanding is not. Will there ever be a benchmark of understanding, for instance?
[00:52:13] Speaker A: Yeah. So here's how I like to think about that. I read an article in the Financial Times while I was preparing one version of this draft, which was talking about management and how a lot of the fashion in management in the last couple of decades went towards hard metrics, hard targets. So profit per quarter would be an obvious hard metric. But on the other hand you have soft metrics, things like: how cohesive is this team? How well is this CEO managing to keep people motivated? And the point of this article was that people like hard metrics because you know where you stand with them; they seem completely objective in one sense. You can just measure the data and you know whether you're meeting your targets or not. Whereas all of these soft metrics are a bit more intangible, a bit more fuzzy. One person might judge one way whether the target's been met; another might judge it another way.
But the point of the article was that just because they're not quantifiable in the same way, it doesn't mean that management should ignore them. In fact, they could be really, really crucial to how well the business is functioning. Right. And if you only stick with the hard metrics, then you could have a very dysfunctional organization.
And so I think neuroscience is now in the position where it needs to start asking itself: okay, there are all these engineering goals which can be much more easily quantified and measured than these soft metrics, like, have I figured out the brain yet? Am I really understanding what's going on in this area? But as a community, does neuroscience want to still care about those soft metrics? And I think, just speaking for myself, from things like reading neuroscience and talking to neuroscientists, most people really care about understanding. It's one of the really important intrinsic motivators that people have for doing science: they want to understand the world. And so it wouldn't be satisfactory just to reduce the soft target to the hard target, just because it's more quantifiable and because it seems like progress is going in that direction thanks to new tools like deep learning.
[00:54:55] Speaker B: Yeah, it's so tempting, though, because the hard target is so well defined, whereas with understanding you can kind of move towards something and it's amorphous.
[00:55:04] Speaker A: Right.
[00:55:05] Speaker B: You know, there's a lot of work these days trying to make artificial networks more intelligible, and that's going to continue, and on some level we're going to abstract and idealize and we will have a better non-factive understanding of artificial networks. But one of the things that you argue is that, although that is true, they will likely always be less intelligible than their simpler counterparts, these very succinct mathematical models of, let's say, a simple cell. And in some sense the simpler models, the simpler accounts, will always be preferred in an understanding sense. It made me wonder, because I agree with that, but as the intelligibility of artificial networks increases, is there going to be a threshold where they become intelligible enough that it's like, oh, that's good enough? Even though the simpler models are still more intelligible, will we then switch and say, okay, now I feel comfortable enough with my understanding of the deep learning models? Is that going to happen?
[00:56:26] Speaker A: Yeah, I don't think it's a question that has a straightforward yes or no answer, because what's good enough tends to change on a case-by-case basis, depending on what the scientist is trying to do in a particular project. Also, in terms of pedagogy, a lot of the training that you get as a neuroscientist is with these classic models; they're how you're led into.
I don't want to use the word understanding again, but they're how you're taught your basic theoretical framework of what's going on in the brain. And so I think it will be a really interesting question, as the intelligibility of these artificial neural networks increases, whether they will be able to replace the role that the classic models of the brain have had pedagogically or not. Whether those classic models will go the way of the reflex theory as the broad framework for how we think the brain is working. We wouldn't appeal to the reflex theory anymore for that, but once upon a time that was people's cohesive framework.
So maybe there'll be something that comes out of the new generation of models which replaces that. But it's hard to say. Yeah, I think it's just too soon to say how this will go.
[00:58:13] Speaker B: There's a recent push, coming from a couple of different quarters, that says: let's agree we can't understand deep learning models at the level of all their internal workings. But maybe what we should try to do is understand them in terms of the things that we control when we build them: the learning algorithm, the architecture, their development, the objective function. And there's something that I'm uneasy about there. I don't feel like that's going to be good enough. Are you familiar with that?
[00:58:49] Speaker A: Yeah, so I just read this draft paper, Lillicrap and Kording's "What does it mean to understand a neural network?"
[00:58:57] Speaker B: Yeah, that's one of them. Yeah.
[00:58:58] Speaker A: Yeah, yeah, yeah. So there again, that is a viable option, depending on whether you're happy narrowing down what you want to understand.
So if you're really hung up on the decoding problem, and you really want to know what the mathematical relationship is between world states and spike trains, then just saying, okay, I know the training rule for the network or for the brain, is not going to cut it. So in order to say that that's a satisfactory answer, you're really giving up on a lot of the questions that neuroscientists have traditionally been bothered by.
[00:59:50] Speaker B: Yeah, it seems unsatisfactory to me, but, you know, it's progress, I suppose.
[00:59:57] Speaker A: I don't know if this is a tangent, but when I read that paper, what I found I really strongly agreed with was how they put it: what we're learning from the success of deep neural networks at predicting the brain is the limitations of previous attempts to find a simplified representation of the brain, that canonical neural computations project. Maybe the brain is just a lot more complicated than people had hoped for.
[01:00:38] Speaker B: Yeah. And highlighting that there may be a real, I don't know if you just said limitation, you probably did, but a real limitation on the usefulness of those types of models. Yeah, that's a good point. Well, one more question on understanding in particular: do I understand how to ride a bike?
[01:00:58] Speaker A: I mean, one of the. The ways that you can think about, you know, your bike riding knowledge is that it's, you know, implicit as a skill. It's not like you can just articulate everything that goes into your motor control when you're riding a bike. But, yeah, you can reliably do it and you can adapt to different situations. And you know enough about riding a bike that you could teach someone else to do it.
[01:01:29] Speaker B: Well, I don't know. I just taught my son how, and the way I did it is I just pushed him.
[01:01:34] Speaker A: Yeah, yeah, but you'd be able to give some tips, right?
[01:01:41] Speaker B: No, that's true. And he'd already had some training. But the point is, what I'm wondering about is this: we have this phenomenon where, when we use something long enough and our familiarity with it increases, we feel we understand it even though we can't articulate it. Riding a bike, for instance. And I wonder
whether that sense of understanding is the same sense of understanding we're talking about, or where it fits in that scheme.
[01:02:10] Speaker A: Yeah. No. So I thought of another comparison for the use of these deep learning tools, which is more like: I know how to ride a bike, I do it every day, but I can't fix it. So I feel like that's where I'd be if I was using these networks in neuroscience. It would be: yeah, I know how to get it to do what I want for this task when everything's going well, but I don't know enough about the nuts and bolts.
It's not exactly the fixing-it task, but, you know, the limitation of my understanding of a bicycle is that I don't know enough about the mechanisms to be able to fix it. And likewise with the neural network: I wouldn't know enough about the mechanisms that make it do what it does to be able to say I could reverse engineer it and fix it. So I like that analogy for neural networks more than the implicit knowledge that I have of bike riding myself.
[01:03:12] Speaker B: Yeah, that's interesting. I don't know if it was Richard Feynman, and maybe Surya Ganguli has talked about this, but there's a conception of understanding wherein, without writing and solving the equations, you can think about how the output would change if you tweaked something in the model. That is one sense of understanding. And in that sense, you don't have to fix the model, you don't have to engineer it. Anyway. Yeah, this is all.
[01:03:44] Speaker A: Yeah, yeah, that's right. I'm glad you brought that up, because that is the notion of intelligibility of a model which is there in Henk de Regt's theory. And it came originally from physics: Feynman used it, and there was another physicist before him who first came up with it. It came out of this debate over the interpretability of quantum mechanics.
[01:04:09] Speaker B: All right, so the brain is a computer. Right, Mazviita? The computer-brain metaphor, the idea that the brain is a computation machine, is under a lot of heat lately, I feel like. But you have a solution that potentially could make everyone feel better. Why do we love the computer metaphor for brains? Is it accurate? And if it isn't accurate, how does it benefit neuroscience?
[01:04:43] Speaker A: Yeah. Can I just say I'm surprised that you say it's under heat, because everything that I've been hearing from neuroscience recently seems to buy into it a lot.
[01:04:53] Speaker B: Oh, I think. Yeah, I think you're correct. But maybe it's under heat in my mind because I've had people like Paul Cisek on the show, and we talked; he's very anti-computer-metaphor.
[01:05:07] Speaker A: Okay. I'll have to listen to that.
[01:05:09] Speaker B: Yeah. And also your work points to a lot of other people sort of singing the same song, that the computer metaphor is not correct. But I think it's at its peak in computational neuroscience; it's at peak computer metaphor right now. So maybe it's under a minor amount of heat, but only because it's so accepted and common, I think.
[01:05:33] Speaker A: Right, right. Okay. Sure. Yeah. So, okay, the first thing I want to say, as a preface to answering your question, is that the work on prediction and understanding sort of assumes the ground truth of the computational approach to the brain. It takes for granted that what neurons are doing is computations on inputs coming from the world. So it assumes that there is a computation that a V1 neuron does, and the task of the neuroscientist is to figure out what that is.
[01:06:08] Speaker B: And to do so, they have to separate it out, within the neural activity, from all of the other messy stuff: the noise and the metabolism, all the stuff that gets in the way, which you talk about at length as well.
[01:06:22] Speaker A: Right, right. So in some other papers I've been looking more critically at the computational framework for understanding the brain. This, again, was inspired by things I was reading in the history of science, historians writing about science, asking what it is that scientists are doing when they try to understand one kind of thing by drawing analogies with something else. Analogical reasoning seems to be really important throughout the history of science. Think about how the idea of sound waves can be related to observations of water waves. There are things that are directly accessible to the human senses, like waves in a pond, and there are phenomena, like sound, which rely on things that are beyond our senses. Scientists noticed that there are similarities between the observable phenomenon and the unobservable one, and by bootstrapping off an analogy, you can start the beginnings of a theoretical framework for something whose working parts are not observable. So I think analogical reasoning is really important in science, and I'm arguing that the computational framework in neuroscience is an instance of analogical reasoning. My opponent in this project is someone who says computational neuroscience works because the brain really is a computing machine: a machine which, if you like, has been designed by evolution, but a computer just as much as your desktop is. It's running algorithms. In the one case the substrate is non-living tissue, and in the brain's case it's living tissue, but abstract away from the substrate difference.
They are both computing machines.
[01:08:38] Speaker B: And that brains are trying their best to compute target functions, certain particular functions they're aiming at.
[01:08:48] Speaker A: Right. And in this paper I'm arguing against this literalist approach to what's going on in computational neuroscience and why the computational framework is successful. I'm arguing that the reason neuroscientists grasped onto the computer approach is that there is a nice analogy you can draw between brains and computers. And when you're looking at computers, at least until we get to today's deep neural networks, computers are much more understandable than brains are. So if you can draw some comparisons between a well-understood system and a not-well-understood system, that gives you the beginnings of a theoretical framework, just like water waves versus sound waves. And particular to the neuroscience case, what I argue is that the computer framework gave neuroscientists an excuse to ignore lots and lots of the messy biological details that are going on in neural tissue. So you abstract away from most of the biophysics of the cell and just say, okay, let's treat a neuron as if it is an input-output device performing a certain kind of computation. That gives you a principled framework in which to abstract away from lots of the biological details, and it gives you explanatory purchase on plenty of things that we can count as successes of computational neuroscience.
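(To make the "neuron as input-output device" abstraction concrete, here is a minimal sketch, mine rather than from the episode, of the kind of model this framework licenses: a linear filter over the stimulus followed by a rectifying nonlinearity, with all of the cell's biophysics and metabolism abstracted away. The receptive field and stimulus values are made up for illustration.)

```python
import numpy as np

def linear_nonlinear_neuron(stimulus, receptive_field, gain=1.0):
    """Treat the neuron as a pure input-output device:
    a linear stage (weighted sum over the stimulus) followed
    by rectification, since firing rates can't go negative.
    Everything else about the cell is abstracted away."""
    drive = np.dot(receptive_field, stimulus)  # linear stage
    return gain * max(drive, 0.0)              # nonlinear stage

# Hypothetical center-surround-style weighting and a stimulus patch
rf = np.array([-1.0, 2.0, -1.0])
stim = np.array([0.5, 1.0, 0.5])
print(linear_nonlinear_neuron(stim, rf))  # max(-0.5 + 2.0 - 0.5, 0) = 1.0
```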
[01:10:29] Speaker B: You distinguish between these two approaches, and I don't know how well this actually maps onto what you were just describing. What you call formal realism is this idea that if we could get rid of all the unnecessary details, the noise and the metabolism and the biophysical properties, then brains are trying their hardest to perform this particular function, and when we make neurocomputational models, the model is modeling the function that the brain is actually, really trying to perform.
Whereas formal idealism is different. I don't know if you want to just contrast these two terms that you've laid out.
[01:11:11] Speaker A: Yeah. So formal idealism is saying that your abstract model of the neural system is like the lie that reveals the truth. And I get that quotation from the Kandel and Schwartz neuroscience text.
[01:11:30] Speaker B: Oh wow, you read that?
[01:11:31] Speaker A: Yeah, no, no, someone else quoted it and then I started using it. But yeah, so no, no, I would.
[01:11:40] Speaker B: Never read the Bible.
Me either.
[01:11:45] Speaker A: Never enough time. But yeah, we're busy, busy people and there's always so much to read.
[01:11:52] Speaker B: Sorry.
[01:11:53] Speaker A: Yeah, yeah. So in the section there on theoretical neuroscience, it talks about modeling like this: we know that there's abstraction, we know that pyramidal cells are not really triangle-shaped and that there's more going on in the dendrites than most of our models are ever going to try to represent. But there's this conviction that what the model is doing is getting at this underlying computation, which is, if you like, the inherent mathematics that's there in the brain.
In contrast, what I call formal idealism says that science is about simplification: people coming along and abstracting and massaging things, both by making experimental adjustments and then by looking at patterns in the data and discarding some of it as noise. That could be contentious, because maybe there's pattern there which isn't, strictly speaking, noise, like experimentally introduced noise, but is pattern from the brain's perspective rather than the neuroscientist's. So the idealist is trying to massage some simpler structure out of all the complexity that's there in the brain, as opposed to coming in with the conviction that underneath it all there really is this function being computed.
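(As a concrete illustration of that "massaging out simpler structure" move; this is my sketch, not an example from the conversation. The analyst keeps a low-dimensional pattern and discards the residual as noise, and whether that residual really is noise, rather than structure from the brain's perspective, is exactly the contentious step.)

```python
import numpy as np

# Hypothetical population recording: 50 neurons by 500 time points,
# generated here as a 2D latent pattern plus additive "noise".
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
latents = np.stack([np.sin(t), np.cos(t)])            # the simple structure
mixing = rng.normal(size=(50, 2))
activity = mixing @ latents + 0.5 * rng.normal(size=(50, 500))

# PCA via SVD: keep the top two components, discard the rest as "noise".
centered = activity - activity.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k = 2
simplified = U[:, :k] @ np.diag(S[:k]) @ Vt[:k]  # the model's tidy story
residual = centered - simplified                 # everything labeled noise

var_kept = (S[:k] ** 2).sum() / (S ** 2).sum()
print(f"Variance captured by the 'simple structure': {var_kept:.1%}")
```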
[01:13:26] Speaker B: And it's like there's an ontological status to the function. Is it right to say that?
[01:13:32] Speaker A: Yeah, yeah, that would be right.
[01:13:33] Speaker B: Yeah.
[01:13:34] Speaker A: So it's there independently of whether the scientist represents it in that way or not. Whereas the formal idealist approach is to say, okay, it can be valid and useful to model the system in this way, but we should not assume that the properties we attribute in the model are there in the brain independently of our modeling it in this way.
[01:14:00] Speaker B: So this is all related, and it comes right back to your non-factive understanding: it's an idealization and an abstraction, and it's there in the term formal idealism, an abstraction that is necessary, but the true thing is maybe beyond our grasp. Is that fair to say?
[01:14:24] Speaker A: Yeah, I tend to think that way, and people have called me a pessimist for thinking that way. But I would say I'm more of a realist, going back to where we are as finite human beings, in a world that is far bigger and more unpredictable and more complicated than we often like to appreciate.
[01:14:48] Speaker B: So would it be accurate to say that on your view, when we're using computational models to try to understand the way some brain area works, we're not really discovering the mathematical structure of nature so much as, and I can't quote you directly, but a summary would be that we're using math, I think you used the word arduous, to arduously abstract away and chip away and come to know some partial truth of the thing?
[01:15:20] Speaker A: Right, right. Yeah, that's right. I think of the application of maths to nature as a simplification rather than, like I said, a revelation of the underlying structure of the world. And this goes back to one of the oldest issues in philosophy, which is about Platonism, and even before Plato, Pythagoras, so two Ancient Greek philosophers, with Pythagoras thinking of reality as inherently made of numbers. So thinking that underlying reality is maths. And so much of modern science carries on with that assumption, and hence the belief in computational models as the underlying truth of the brain. But if you break with that tradition, you might just say no: what reason have we to think that the metaphysics of the world is actually mathematics, as opposed to the material reality that we have around us, with mathematics as a tool for abstracting away from the complexities of those material realities? It allows for prediction and control, so it's really helpful for engineering tasks, for that engineering strand running through modern science, but it shouldn't then be taken to be the underlying reality.
[01:16:53] Speaker B: Isn't mathematics impressive, though, that we could invent such a thing that's not even real?
[01:17:00] Speaker A: Yeah, sure, sure. It's certainly a testament to the power of the human mind. I'm not trying to discount the achievement.
[01:17:10] Speaker B: Yeah, no, I'm just celebrating with you humanity's achievement of mathematics. And I'm trying to get people to stop calling you a pessimist. You know, I'm on your side here.
Well, Masrita, this has been a lot of fun for me. I really appreciate you spending so much time, and there's so much more we could have gotten to. I'll point to all of this work, and I hope you continue doing what you're doing. I really appreciate it. So thanks for talking.
[01:17:36] Speaker A: Yeah, really nice to meet you.
[01:17:51] Speaker B: Brain Inspired is a production of me and you. You can support the show through Patreon. For a microscopic two or four dollars per month, go to BrainInspired Co and find the red Patreon button there. Your contribution will help sustain and improve the show and prevent any annoying advertisements like you hear on other shows. To get in touch with me, email Paul at BrainInspired Co. The music you hear is by thenewyear. Thanks for your support. See you next time.