[00:00:03] Speaker A: I know I'm wrong. I know that it's my personal laziness that pushes me too far into that end of the spectrum, and I know this organization is there.
[00:00:16] Speaker B: Can we use the basic features of dynamical systems to understand what this complex nonlinear system is doing?
The idea of universality is that different dynamical systems have properties that are conserved, despite the fact that the equations are different, or in our parlance, despite the fact that the parameters of our neural networks are different.
[00:00:43] Speaker A: Universality is sort of a question that has to be asked. But again, it lies on a spectrum. Breaking universality gradually and using that as a constraint to link to the data can be sort of a promising way forward.
[00:01:01] Speaker B: Asking the right question is the hardest part of science.
[00:01:05] Speaker A: I want to be in a place where people ask me that question.
[00:01:15] Speaker B: This is Brain Inspired.
[00:01:28] Speaker C: What's the best way to model the functioning brain using neural networks? How do we even talk about the emergent properties of massively recurrent interacting neurons without just throwing up our hands and falling back on the vocabulary of folk psychology?
Neuroscientists and AI researchers for years have been interested in how exactly deep learning networks work, how to open the black box, so to speak, and make sense of those networks. After all, if we can't make sense of a deep learning network, what chance in hell do we have of making sense of the infinitely more complex real thing, the brain? Omri Barak and David Sussillo are two peas in a black box pod. Hmm, that was dead on arrival.
Omri and David wrote a paper about a decade ago actually called Opening the Black Box, where they used the language and tools of dynamical systems theory to describe the emergent properties of artificial recurrent neural networks, or RNNs. And they found reliable structure within the dynamics, something David has come to call the dynamical skeleton of the RNNs. Omri works out of the Israel Institute of Technology, and David, who's been on the show way back on episode five, I think, a long time ago, works out of Stanford University as an adjunct professor, among other venues. And since that black box paper, they've both continued pursuing how much we can glean about the function of RNNs using those tools of dynamical systems theory, and how it compares with, and how relevant it is for, how brains function. So this episode is basically all about that. Omri and David reflect on their journey since that original black box paper. We discuss the merits of the machine learning approach to modeling brains versus the sort of more classical computational modeling approach. We talk about the idea of recurrent nets as model organisms, similar to how we treat non-human animals as model organisms. And we get into their recent thinking about all of this. David's been studying the idea of universality, for instance, which is the idea that there may be commonalities among artificial and natural RNNs, despite their vast differences both among them and between them. And Omri, among other things, has been studying the learning process in RNNs, like how the dynamics can act as a sort of ongoing prior on the learning process. So if some of that doesn't make sense, don't worry, they describe it better in the episode. And I link, of course, to the related work in the show notes at braininspired.co/podcast/97. If you like the show, consider supporting it on Patreon, where I offer a few extra things. There's a Patreon link at braininspired.co to check that out. I am Paul. Thank you so much for listening and for your support. And here are Omri and David.
So David, you were on a super, super early episode. You must have some foolishness in you to have appeared so early in the podcast. But welcome back to you. Thanks for coming back. And Omri, I had emailed you I think shortly after maybe I interviewed David, I'm not sure around the same time. But you were I think sipping martinis and roller skating somewhere on vacation. And then it just hasn't worked out back and forth. But I'm glad that you're both here now with me. So thanks guys for joining me. Thank you.
[00:04:58] Speaker B: Good to be here.
[00:05:00] Speaker C: Maybe the place to start is that you two co-wrote a 2013 paper called Opening the Black Box, and it was about using dynamical systems theory to help understand what might be going on in neural networks. Well, in networks of actual interacting neurons, but also artificial recurrent neural networks. So it's been almost a decade, let's just push it a few years and call it almost a decade, since you co-wrote that paper, and you've both, you know, successfully been continuing to open that black box, rummage around in there and, you know, mess around and pull things out and push things around. So we're going to talk about a lot of that stuff today, where it was, how far it's come, and where you are now. But I'd like to start by asking a simple question, I hope. And maybe Omri, we'll start with you since you're new to the podcast. What's one of the, you know, best scientific moments you've had in your career?
[00:05:55] Speaker A: Well, actually, I think the black box paper might have that moment.
[00:06:03] Speaker C: What moment is that? Because publishing a paper is like such a long process. Is there a moment?
[00:06:08] Speaker A: Yeah, the particular thing, again, was working with David on this, trying to understand what's going on there and how these things work. And in particular, I remember one point, it's sort of technical, but looking at one of these trained networks and finding a fixed point that turned out to be a saddle point, with one unstable direction and 999 stable ones, which is sort of what we thought we might find there. But actually finding it was kind of cool. I mean, it was really nice to see that this idea actually works.
And I think one of the frustrating and nice things with this approach is that there are almost no guarantees.
So you have this idea that the network might operate in a certain way and then you look at it and then sometimes it does and sometimes it doesn't.
And that leads to quite a lot of frustration when it doesn't. But I guess correspondingly to quite a lot of fun when it does work.
[00:07:35] Speaker B: Yeah. So the backstory there, Omri, correct me if my memory has gone astray, but that paper is really just a textbook application of stability theory to high dimensional nonlinear neural networks. Right. Recurrent networks. And the way that it went down is, I had just finished taking that class, and I was mucking around with artificial RNNs back then, I'm just going to say RNNs now, and I said, well, geez, maybe this technique works. Now, there's a culture clash there, because the technique of stability analysis, and nonlinear dynamical systems analysis in general, is a very mathematician's approach. They like to study these two or three dimensional systems where you can actually prove something. Right. So it was not really a numerical approach, whereas what we did in that paper is very much a numerical approach. So the way it went down is I walked up to Omri and I was like, hey, do you think that fixed points might be negotiating the dynamics? The way I would say it now is: do you think the fixed point structure is the dynamical skeleton of this system? Right. And he said, yeah, maybe. But I didn't know what to do. And so Omri had the idea; he figured out how to make the optimization go. And then, once we had, like, hey, wait a minute, there's a thing here, we sat down together and started analyzing a bunch of examples.
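For readers who want the trick concretely, here is a minimal NumPy/SciPy sketch of the optimization David is describing, in the spirit of the Opening the Black Box paper but not the authors' actual code: define a scalar speed q(x) = 1/2 |F(x)|^2 that is zero exactly at fixed points (and small at slow points), descend it with an off-the-shelf optimizer, then linearize and count unstable directions. The random untrained network and all sizes are illustrative placeholders; on a trained network you would seed the search from states sampled along task trajectories.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N = 100                                        # toy network size
W = rng.normal(0.0, 1.2 / np.sqrt(N), (N, N))  # placeholder recurrent weights
b = np.zeros(N)

def F(x):
    """Continuous-time rate dynamics: dx/dt = -x + W @ tanh(x) + b."""
    return -x + W @ np.tanh(x) + b

def speed(x):
    """q(x) = 0.5 * |F(x)|^2: zero at fixed points, small at slow points."""
    return 0.5 * np.sum(F(x) ** 2)

x0 = rng.normal(0.0, 0.5, N)                   # in practice: a state from a trajectory
res = minimize(speed, x0, method="L-BFGS-B")
x_star = res.x

# Linearize around the candidate point: J = -I + W @ diag(1 - tanh(x)^2).
J = -np.eye(N) + W * (1.0 - np.tanh(x_star) ** 2)
eigs = np.linalg.eigvals(J)
print(f"q(x*) = {res.fun:.2e}, unstable directions: {int((eigs.real > 0).sum())}")
```

A point with q near zero and exactly one unstable direction is the kind of saddle Omri described above: one direction out, the remaining 999 in.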
[00:09:01] Speaker C: How often do you think that happens, where someone is just taking a class and the class sort of leads to an idea, versus having an idea through your own research? I mean, I know that they're all intertwined, but...
[00:09:12] Speaker B: You know, I think you get ideas from anywhere. Right. The problem with classes is, unless you're really in a new field, and in this case we were, it's very likely that someone will have already trod over that territory. So more likely your own research program and the people you're talking to over time are going to give you ideas that are fruitful in terms of the cutting edge. But anything goes.
[00:09:37] Speaker C: So for the record, before we proceed, are you guys. And Omri, I guess we can start with you as well. Are you more interested in brains themselves or how brains function? You know, like, you know, the mind or intelligence or the relation between brains and minds or something else. How would you describe what it is that, you know, gets you out of bed?
[00:09:56] Speaker A: Yeah, that's a surprisingly tough question.
You know, I'm in a field where identity crisis is sort of the norm, you know, sort of the. What am I?
[00:10:08] Speaker C: What are you?
[00:10:08] Speaker A: Ma, mathematics, mathematician, physicists, biologists.
I think what I'm interested in is how complex systems adapt to their environment.
In a sense, that includes artificial networks, it includes cancer and genetic networks, it includes brains.
But in a sense I'm interested in the commonalities between these and the differences, and whether studying one can inform us about the other. So my intuition is that there are things that are similar among all of them, and there are things that are certainly different. But also, technique-wise or data-wise, there are aspects that are more accessible in one or the other, and therefore sort of hopping about between them has the potential to keep providing fresh viewpoints. And so I think that's part of what wakes me up.
[00:11:21] Speaker C: You're almost saying you're interested in complexity. Is that too far a stretch? Or complexity theory, for instance. Right. Complexity science. Because it seems that's sort of...
[00:11:34] Speaker A: I would say it's not well defined enough, or, I mean, it's too coarse.
[00:11:39] Speaker C: Yeah, okay.
[00:11:40] Speaker A: If you will. I mean, you could say, you know, the properties of strange attractors, right, is that complexity? Could be. I'm not interested in that. I mean, it's fun if I bump into it, but it's not what drives me. It's the learning process, the interaction of a complex system with its environment, the adapting and how that changes the complex system. And why does the system have to be complex in order for that to work in the first place?
[00:12:13] Speaker C: David, how would you respond to that?
[00:12:16] Speaker B: So for my part, I'm very much interested in how the brain works. That's already a topic that's impossibly difficult, so I'm happy to hang out in that world. I'm very much motivated by how what we might understand about the brain can improve the lives of people.
So for me, you know, the crossover is into neural networks, because I do believe that they are a sort of, if you will, artificial model organism. We can study these things in ways that we cannot study the real thing. For someone who has a sort of technical bent, you know, the experimental difficulty of neuroscience is just impossible. And so I've been focusing on these other tools because of both a hypothesis, that is, my guess, and some very preliminary empirical evidence that artificial networks are actually pretty useful. So that's the world in which I'm hanging out.
[00:13:21] Speaker C: So one of the ways that people think about brains is that brains compute things, that they're computers. Maybe not digital computers, von Neumann style computers, but there's a lot of computational descriptions about what brains do. And I'd like to ask about this idea of computation through dynamics and how to think about that kind of computation versus the way we normally think of computation as using an abacus. Right. And moving a few pieces around and getting an answer or something. David, maybe you can speak to how computation through dynamics differs from a more traditional Turing machine like computation that we would consider.
[00:14:03] Speaker B: Right.
So just starting very high level, you know, there's this question: are brains computers? Yes, brains are computers. They're also a lot more, though, right? I mean, this is a computer that cares about its place in the world and surviving, et cetera, et cetera. But on top of all that, it is also a computer. So that's my opinion. The way that I like to say it is that brains are really crappy von Neumann computers built on amazing neural networks, and modern day artificial neural networks are really crappy neural networks built on top of amazing von Neumann machines.
[00:14:41] Speaker C: Nice.
[00:14:42] Speaker B: Right. So they're related somehow.
So that's sort of my perspective there. Now there was another question that you were asking.
[00:14:52] Speaker C: Well, just how to think about computation through dynamics, like within the dynamical systems framework, right, as a trajectory through dynamical space, versus a more static, binary, Turing machine type computation. Or if I'm even thinking about the difference in the correct way. So getting to the answer through dynamics versus getting to the answer through binary computation, let's say. Does that make sense?
[00:15:18] Speaker B: Well, yeah, I guess so. The way I would say it is, I mean, dynamics are everywhere, right? Even when you have a for loop and you want to do a sort, you are still doing something iteratively, right? So there you go. But these days, especially in the machine learning world, the conversation is often about developing representations that are useful for computation, versus, at least what we've been trying to put forward, the dynamics of a computation. And I don't view these things as necessarily opposed; rather, complementary. It's just that I don't really know of anyone who's an expert at all of it. So I'll just admit my own lack of expertise in the sort of feedforward representational world. Of course I can have a conversation about it, but I don't research it. So in the computation through dynamics framework, the idea was: hey, can we use the basic features of dynamical systems, fixed points, linearizations, all of those tools, to understand what this complex nonlinear system is doing? And what has turned out to be true, and this is why I think there's energy behind it, is that it's actually working to some degree, right? It's not perfect, but we're dealing with really hard stuff here. Empirically, it's turned out that you can actually glean insights from some of these techniques. And that's really what's driving it. It's not so much that this is the only thing; it's rather that, hey, this thing is actually yielding insight.
[00:16:55] Speaker C: Omri, I don't know if you want to comment on that.
[00:16:58] Speaker A: Yeah, just one thing, I guess it's sort of an association from that, about precision, or how exact things are. The concept that at least I associate with the Turing machine is a very precise, deterministic, if you will, trajectory that has a very discrete set of options. In a dynamical system, naively you could think of this high dimensional space where you can wander about and maybe have some noise and things of the sort. And a trained recurrent neural network, after it has formed these fixed points, is in a sense in between. If all it has, let's say in the flip flop example, if all it has is a few very strong attractors, and everything around them is the basin of attraction of one of these attractors, then in a sense it's almost like the Turing machine, because there are very few options for that dynamical system to be at.
But I think one of the nice things about these trained networks is that they're imperfect, which I think is what biology does as well.
They have to be good enough, they don't have to be perfect. And that means that you don't need a fixed point, you can have a slow point.
You don't need all the trajectories to end up in the right place. You need most of them. So the connectivity that has been obtained via training restricts the space of possibilities of this dynamical system. So you can't wander about in this high dimensional space and get lost. But it doesn't restrict it to just being two, three options like a well designed and well behaved program would. So it's sort of in between.
And I think this in between, these sort of approximate objects, are a characteristic feature of both trained networks and, I think, of biology.
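To make the flip flop example concrete, here is a minimal sketch of a 3-bit flip-flop task generator in the spirit of the 2013 paper; the trial count, duration, and pulse statistics are illustrative guesses, not the paper's values. Each input channel occasionally receives a +1 or -1 pulse, and the target is to hold the most recent pulse value on that channel. A network trained on this typically ends up with 2^3 = 8 attracting fixed points (the memories), with saddle-like structure negotiating the transitions between them.

```python
import numpy as np

def flipflop_trials(n_trials=32, T=200, n_bits=3, p_pulse=0.02, seed=0):
    """Inputs: sparse +/-1 pulses per channel. Targets: hold the last pulse."""
    rng = np.random.default_rng(seed)
    inputs = rng.choice([0.0, 1.0, -1.0], size=(n_trials, T, n_bits),
                        p=[1 - p_pulse, p_pulse / 2, p_pulse / 2])
    targets = np.zeros_like(inputs)
    for b in range(n_bits):
        last = np.ones(n_trials)              # arbitrary initial memory value
        for t in range(T):
            hit = inputs[:, t, b] != 0
            last = np.where(hit, inputs[:, t, b], last)
            targets[:, t, b] = last
    return inputs, targets

x, y = flipflop_trials()
print(x.shape, y.shape)                       # (32, 200, 3) (32, 200, 3)
```

The point of Omri's remark is that a network solving this doesn't need mathematically exact attractors everywhere, only a landscape good enough that most trajectories end up in the right memory.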
[00:19:14] Speaker C: That idea of sort of wandering around the area of what a Turing machine would find as a discrete answer, you know, whatever output you're going to eventually produce. That dynamical systems idea of wandering around in that approximate area, I mean, it's really changed the way that I think about, well, minds and brains.
And I'm wondering, for both of you, how your thinking has evolved with respect to recurrent neural networks themselves through your work applying dynamical systems theory to them, just very high level. You know, did you start in a Turing place and end up somewhere much different? How has it changed the way that you think about these things?
Go ahead, Omri or David, whomever.
[00:20:08] Speaker B: Yeah, no, I'm fired up for this one.
All right, so if you just look a little bit at the history, first there were Hopfield networks, right? And those are just attractor networks; you can really think about marbles rolling down hills, right? And so that was a very powerful insight, and people really thought that through for a long time. But it's restricted, because those dynamics are very limited. And so then, you know, this idea came out of echo state networks or liquid state networks. And that's the idea that, hey, you just throw a pebble in a pond, and whatever ripples and crazy interactions it has with the edge of the pond and other ripples happening in the pond, you can decode that and make sense of what happened. In other words, you can back out the pebble getting thrown into the pond.
So that's the idea of liquid state or echo state networks. And that's sort of the radical other side, right? This is as far unstructured as you can get. It doesn't even matter; any medium that sort of reverberates can be used for computation. So what happened for me, especially with the onset of deep learning, is that I started applying echo state ideas to neural data. And what I discovered very quickly was that real biological neural data is much more structured than echo state networks are, and that the sort of wiggles in an echo state network are just wild and out of control in ways that don't make sense empirically when looking at brains. So we started trying backprop, that is to say, no longer just a random network where you have a readout that you're training, but the whole thing gets to be trained. And all of a sudden the comparisons between the brain and the artificial networks started getting a lot closer. And I think it just speaks to what Omri said, which is that things have to be good enough, right? The computational primitives have to be good enough, but otherwise it's sloppy, and that's okay. And that's kind of the world these artificial networks live in. And apparently, if you believe the comparisons between brains and artificial networks, it's likely where the brain is hanging out a lot of the time. So my evolution has come to this place of: sloppy, but it has to be good enough. And just to add one more point, I also think that computation itself is fundamentally regularizing, or robustness making. If you want to be a successful organism and you have to integrate whatever information in order to eat, you'd better do it reliably. And the dynamics of your neural system that's making those decisions are going to reflect that. And thus, when you look at the state space and the dynamics, you're going to see something that's hopefully intelligible to me.
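Here is a hedged sketch of the echo state idea David describes: a fixed random reservoir (the pond), a fixed random input projection (the pebble), and only a linear readout trained, here by ridge regression, to reconstruct a delayed copy of the input from the ripples. Reservoir size, spectral scaling, the input signal, and the delay are all arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, delay = 300, 1000, 10
W = rng.normal(0.0, 1.0, (N, N)) * 0.9 / np.sqrt(N)  # fixed reservoir, spectral radius ~0.9
w_in = rng.normal(0.0, 1.0, N)                        # fixed random input weights

# Drive the reservoir ("throw the pebble") and collect the rippling states.
u = np.sin(0.05 * np.arange(T)) + 0.5 * np.sin(0.13 * np.arange(T))
X, x = np.zeros((T, N)), np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    X[t] = x

# Train ONLY the linear readout (ridge regression) to recover the input from
# `delay` steps earlier -- "backing out the pebble" from the ripples.
Xd, yd = X[delay:], u[:T - delay]
w_out = np.linalg.solve(Xd.T @ Xd + 1e-4 * np.eye(N), Xd.T @ yd)
print("readout MSE:", np.mean((Xd @ w_out - yd) ** 2))
```

David's point is what this sketch leaves out: nothing inside W is shaped by the task, which is exactly why the internal wiggles look unstructured compared to neural data, and why training the whole network with backprop changes the picture.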
[00:23:07] Speaker A: There's also something, I guess. Okay, I think the idea of a tension between these two ends of a spectrum that David described, the very ordered and the very chaotic, with things in the middle, also exists in neural data itself.
So again, historically, I think because people recorded single neurons, they had to make sense of these single neurons, so they gave them names and they told stories about them.
And that sort of pushes you towards the side of the spectrum where everything makes a lot of sense and every piece of the puzzle has a very specific role in the entire organism or network. Whereas this echo state approach says nothing matters and everything looks like a mess. And to me, I started looking at data during my PhD, I looked at neural data, and this, to me, was the push away from the ordered part. Because I read the papers and I saw the nice neurons in the figures of the paper, and then I looked at the data, and suddenly, how come the figures represent 10% of the data and 90% looks like junk? And then I started speaking to more and more people, who said, oh, yeah, of course, 90% looks like junk. We just pushed it in a drawer and never looked back. And then I said, okay, so the data tells me that we need to go somewhere that's less organized. And then, personally, my natural inclination is towards, let's say, the echo state approach of, you know, let's take this messy pile and everything can work. We don't need any predetermined structure. We don't need to memorize names. You know, if every neuron has a name, or every brain area has a name, or every gene has a name, I need to know these names, and I'm very bad at memorizing names. So I'm more comfortable personally with this holistic approach.
But then I know I'm wrong. I mean, I know that I'm criticizing these single neurons too harshly, in a sense. I know that it's my personal laziness that pushes me too far into that end of the spectrum, and I know places exist where, you know, this organization is there. So in a sense, to me personally...
I'm sort of using these many times as sort of thought experiments in a sense that I'm saying, let's see if something that's as wild as this can work. But I know it's too far, and then I correct myself, in a sense.
So to me, let's say the interaction with data is both to push away from that pole and to attract myself back to that pole because of my personal sort of inclinations.
[00:26:22] Speaker C: So when you guys use dynamical systems, do you feel like you're learning about brains or minds or both?
[00:26:29] Speaker B: Definitely both.
[00:26:31] Speaker C: Both. Is there a distinction? Do you distinguish between brain and mind, David?
[00:26:36] Speaker B: Boy, that question's above my pay grade, I think. I mean, what I'm trying to do is make the pieces and components lead up to computation. And whether or not computation is mind is anyone's guess, right? Certainly in its most basic forms it's not. But if you say, well, if all these pieces are functioning correctly, somehow mind arises, well, maybe I'm speaking towards that, but I really wouldn't know for sure.
[00:27:06] Speaker C: Omri, I'll ask you the same question, and then we'll get into using RNNs as model organisms.
[00:27:13] Speaker A: So I guess brain versus mind is a tiny question. But I think the way I view mind is a bit like, I don't know, temperature, or other emergent properties in physics, if you will. It's certainly a convenient name for a complex phenomenon.
And the thing is that, you know, convenience means that ignoring it can be radically, extremely inconvenient. Only saying there's a bunch of neurons and they are active, and let's not talk at all about whether there are internal representations and thoughts and things like that, might be technically correct but extremely inconvenient, and therefore not so useful. So that's just a rough picture, I guess. Are these dynamical system concepts applicable to brain and to mind? Well, to brain, certainly, because that's the dynamical system we are considering. To mind...
In some sense they have to do that.
But you might even say, if you want to be extremely radical, that the dynamical system is brain and the fixed point structure is mind, because that's the emergent phenomenon, right? You have a certain connectivity that developed through some interaction with the environment, and now you have this, let's say, fixed point landscape, or dynamical landscape. And there are many different implementations that could give rise to the same landscape, but that landscape is what allows you to function in the world, if you will.
So in a sense, that's the higher level. Of course it's a tiny component of mind, you know, there are no emotions in it; it's a very small component. But I think it does have the flavor of an emergent phenomenon, because you have a certain dissociation: you can have many different networks that will give rise to the same dynamical landscape. And to function in the world, you care about the dynamical landscape; you don't care too much about the lower level implementation.
So in that sense, probably irritating several philosophers, I think you could equate these two levels to brain and mind.
[00:30:13] Speaker C: That's why it's hard for me. I'm still wrapping my head around thinking about fixed points and dynamical landscapes because it almost, and especially the vocabulary doesn't help because the word attractor sounds causal. Right? And then you start to think about a fixed point as pulling the neural activity toward it, when, as you just said, it's more of an emergent property that these things are happening and the system happens to be configured such that there are these fixed points in the landscape. But then, you know, and, you know, like, we think of mind sort of as like, causal. And this is. Now we're going way off rail here. You know, whether mind can cause things, you know, and whether mind is epiphenomenal to brain. But it just, I don't know, it has this in between kind of feel that feels like it's getting at least closer to being right between brain and mind. But there's still a lot of stuff up in the air, I feel like. So that's why I was asking what it feels like to you guys.
[00:31:16] Speaker B: Well, to the degree, again, I feel like I'm out over my skis here, but to the degree that dynamics is related to mind, then we're on the right track.
And if you feel like the dynamics are emergent from the parameters, and the middle part is the fixed point skeleton that determines the topological flow of everything, then, you know, we may be on the right track. But that's really all I have there.
[00:31:43] Speaker C: All right, guys, so backing up. The history of neuroscience is mostly studying animal models, right? Model organisms. You study their behavior and their brains: give the animal a task, see what comes out while you're recording brain activity, et cetera. And then you infer, generally, to humans. And I guess, you know, with fMRI and a bunch of other technologies, now we're recording human brain activity, or proxies thereof.
But you both, it's interesting, you both introduce your talks often with this idea of using recurrent neural networks as the new model organism, essentially.
And you both talk about the difference between this more classical neuroscience approach to modeling, where you think about what might be going on, and you build your model that way, versus a more machine learning, modeling approach to understanding recurrent neural networks, be they artificial or natural.
David, can you just explain the difference? And then. Omri, I'll come back to you for a question as well.
[00:32:48] Speaker B: Yeah, sure. So the tried and true methodology for a good number of decades in early computational neuroscience was: you observe a phenomenon in some neural data, and then you go build a machine, that is, a neural network, by hand, that reproduces some features of that data. And if you can do that, then it's a reasonable thing to say that whatever you cooked into that network could potentially, as a hypothesis, explain the neural data that was observed. That's called building a model, right? By hand. And it's great. But given the difficulty, especially in systems neuroscience, right, when lots of things are interacting, that's when sort of reductionism starts to fall apart methodologically. Given this sort of lack of progress, it became a question: well, what if we just don't know how to do that? What if it's too hard, we don't have the right ideas, it's too high dimensional, who knows, right? We just don't have the intuitions. And so that's where the training approach comes in. The way that I would express it is: under some robustness principle, we don't want all solutions, we'd like our solutions to not be insane, but under some robustness principles, just let an optimization over all of these parameters sort of settle down into something that looks like your data, and then go study that. Right. So that's the approach.
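A minimal sketch of that train-then-study pipeline, assuming PyTorch and a 1-bit version of the flip-flop task so the block stays self-contained (the 3-bit version is sketched earlier); the architecture and hyperparameters are illustrative, not anyone's published settings.

```python
import numpy as np
import torch
import torch.nn as nn

def one_bit_flipflop(n_trials=64, T=100, p=0.05, seed=0):
    """1-bit flip-flop: input is sparse +/-1 pulses, target holds the last pulse."""
    rng = np.random.default_rng(seed)
    u = rng.choice([0.0, 1.0, -1.0], size=(n_trials, T, 1), p=[1 - p, p / 2, p / 2])
    y = np.zeros_like(u)
    last = np.ones(n_trials)                   # fixed initial memory, so it's learnable
    for t in range(T):
        hit = u[:, t, 0] != 0
        last = np.where(hit, u[:, t, 0], last)
        y[:, t, 0] = last
    return (torch.tensor(u, dtype=torch.float32),
            torch.tensor(y, dtype=torch.float32))

class VanillaRNN(nn.Module):
    def __init__(self, n_hidden=64):
        super().__init__()
        self.rnn = nn.RNN(1, n_hidden, nonlinearity="tanh", batch_first=True)
        self.readout = nn.Linear(n_hidden, 1)

    def forward(self, u):
        h, _ = self.rnn(u)                     # hidden states: (batch, time, n_hidden)
        return self.readout(h), h

u, y = one_bit_flipflop()
model = VanillaRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(1500):                       # optimize ALL parameters, not just a readout
    opt.zero_grad()
    y_hat, h = model(u)
    loss = ((y_hat - y) ** 2).mean()
    loss.backward()
    opt.step()

# The trained `model` (and its hidden states h) is now the object of study:
# e.g., sample states from h and run a fixed point search like the one above.
```

The punchline of the approach: the scientist picks the task and the robustness-flavored hyperparameters, the optimizer builds the system, and the reverse engineering happens afterwards.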
[00:34:27] Speaker C: Just to follow along there, Omri: you think recurrent neural networks and dynamics may provide the correct, or better, simple parts that we need to study to learn about the larger, more complex functioning system. Can you just elaborate on that idea?
[00:34:46] Speaker A: Yeah. One thing I just want to mention is that I think similar things have been done in evolution, basically. There are some works that try to use genetic algorithms to get, let's say, genetic circuits that would do a certain function.
I think most, again, not all, but many of these works use very low dimensional systems. They wanted to see how a negative feedback circuit arises, so you have like five nodes or something.
There are works that use larger networks, either enzymatic or genetic.
Kaneko's lab has quite a few of them. These approaches exist in other realms as well. And now I've completely lost my thread.
[00:35:45] Speaker C: Well, I mean, you've made the point that part of the entire point of science is to use simplified things and use the simple parts to understand the larger.
[00:35:54] Speaker A: So then there's the question of why you need a model at all, right? In principle, if you want to understand a system, then what does it mean to understand the system, or to have a model of a system? There are many different answers to that. Some people say that it's prediction: if you can predict how the system will respond to a novel stimulus, you understand it. Others will say that it's building: if I know how to build something that does this function, then I understand how it works.
And then of course, there's the whole, you know, birds and airplanes conundrum: I can build something that does the task, but do I understand birds? And I think it's ill defined. I don't think there's an answer to what it means to understand something. Personally, I guess I'm somewhere in between the prediction and the building parts. It's a rather subjective question, actually, what it means to understand something. I think different people will give you very different answers for the exact same system. For some people, if you don't have a pharmaceutical agent that cures a certain disease, they will say that you did not understand the system. And others will say that having a pharmaceutical agent means nothing about understanding, because it just means that you found a trick that works by accident. But if you look at it and you know what it means, then that's understanding.
[00:37:38] Speaker C: Yeah, go ahead, go ahead, David.
[00:37:39] Speaker B: Oh, yeah, I was just going to say, I think I'm speaking for Omri here as well, but speaking for myself for sure: we're definitely in the camp of, did you even understand the data at all? Right? So let's not lose sight of that. I mean, there are data sets in neuroscience where nobody has any idea what's going on. And so I feel like the progress, and I do think it is progress, however limited, that we were able to help along, is to say, hey, look, there is a way to even come up with words for what that data might be doing. Because, at least for some examples, before this reverse engineering approach, nobody knew. They literally just didn't have an idea. So I do feel like there's been some progress there, but I don't want to take it too far either.
I think there's a lot of misses that might be happening with this, the artificial neural network approach.
Something that's commonly talked about is, well, what do you have to put into it in order to make it look like your data? It turns out that you'd like it to be as simple as matching on the task: the animal does task A, you train a network to do task A, and wouldn't it be great if that was enough to make your neural system? That is to say, not its outputs, because you optimized them, so they're going to look like what the animal's doing. But the internals: wouldn't it be nice if the internals, just based on that task matching, looked like the model animal's neurons? It turns out it doesn't work, right? So that's the dirty secret of the approach: you have to add a lot of extra stuff. And what I think the subfield is coming to is an understanding, or trying to develop an understanding, of what those things are. For sure, there's a robustness principle, that is, you don't want very crazy systems; it's just not what brains are doing. Right. But there are other ones that are very particular to the system at hand that you're studying. And so one could argue that what you're really doing now is, instead of building a system, you're picking the hyperparameters such that an optimization program builds your system. So if you haven't reduced some complexity there, then you really have just kicked the can down the sidewalk for a block without learning anything. But I don't believe that. I think you have reduced the complexity, and you have carved up the space of solutions a little bit more.
[00:40:10] Speaker C: Yeah, I asked Omri that because he has expressed the notion that thus far, neuroscience has by and large failed to find useful, simple parts that you could then put back together in an explanatory or building fashion to make the complex system. So does that ring true to you, Omri?
[00:40:29] Speaker A: I mean, basically, yes. I think that my critique was of the, let's say, single neuron; in particular, the single neuron that was exposed to a limited stimulus set and then...
[00:40:46] Speaker C: Stop picking on single neurons, man, they're just fine.
[00:40:50] Speaker A: But in a sense, any.
All context dependence, basically. Right. So if you probe a system with a very limited set of stimuli, then often you'll find that it does not generalize well when you won't understand how it works otherwise. And then the question is whether these.
But if you chart this dynamical landscape, then you might have. Then these rules could be more general.
So in that sense, these could be building blocks that are. That are more useful.
But again, maybe not. I mean, maybe you do this, you train on a simple task, and then you see that if the network has another task in the background, then it will change completely. Perhaps.
And I think one thing I want to relate to what David said is that one way to think of it is that you optimize a network on the task, and then you hope that it will match the data just by virtue of that.
Another way to think of it is that maybe you don't, because there could be several solutions to the task, right? Think of it: you take one particular organism that was trained on this task, and then you take one particular network that was trained on that task. Now, it could be that two different monkeys that learned the task, or two different mice that learned the task, solve it differently.
If that happens, then what do you hope for? Do you hope for the network to match animal A? Do you hope it matches animal B? Do you hope it matches the common aspects of all animals, if there are any?
So even that hope, if you stop to think of it, is not that trivial.
[00:43:02] Speaker B: This also brings up the topic of universality here. I don't know.
[00:43:06] Speaker C: Yeah, yeah. Well, let me ask one question, then I want to pause and ask you guys how we should proceed. You mentioned the different tasks and the simplicity of many of the tasks. And these days in neuroscience, there's just crisis after crisis around every corner, it seems, about how we're doing it wrong. One of those is that we're not using ecologically valid tasks. And one of the strengths of using, let's say, recurrent neural networks as model organisms is that you can give them the same tasks that we're using in animal models, right, ask them to output some behavior, match the behavior, test the innards of the recurrent model, the dynamical landscape, for instance, and compare that to what you see in monkeys, much as, you know, you've both done. Do you worry at all that the tasks you're asking your recurrent networks to perform, which are often even simpler, right, three bit tasks and sinusoidal wave tasks, in an effort to simplify, in an effort to understand, are so far removed from what animals do in the wild, for instance, that the answers you're getting aren't applicable?
[00:44:18] Speaker B: So I think there's really two axes there, right? One is ethological relevance and the other is complexity of the task.
And I'm much more concerned about the...
[00:44:30] Speaker C: The latter? As in moving forward and getting the right answers?
[00:44:34] Speaker B: Yeah, exactly. So the last 10 years have been like, hey, let's apply these approaches and study these simple, what we now call simple, although it wasn't obvious 10 years ago, tasks, right? And so that's gone from memory to decision making to blah, blah, blah. We've just gone down the line. And the criticism, I think an extremely valid criticism, is: well, brains do lots of things. So that is really the magic: how different tasks are solved. That is to say, if task A has something to do with task B, is there a generalization aspect to that, and how is that captured in the dynamics, et cetera? This is something that's deeply concerning to me, and I work on that in my research with postdocs, pushing that idea forward of basically multitask systems. So to me that's really critical, a critical element, which is: do lots of things. And the other is generality, that's right. So the other is, like, reaching, for example. Let's say you wanted to understand arm reaches and motor control of arm reaches. It's one thing to say, hey, we do these standard center-out reaches: reach up, reach down, left, right. Versus, let's grapple with the total complexity of this, I don't know what it is, 50 degrees of freedom, arm and digits, that can lift and control force and strain and, you know, all the rest of it. And I think that's where we want to go. But there are lots of problems in the way, not the least of which is just experimentally getting that data in a rigorous way. So I think there are problems both on the experimental side and on the modeling side. But that's clearly where we need to be going.
[00:46:27] Speaker A: I think one other aspect where we might want to go, and it's sort of a tricky one, is the process of learning itself.
So in principle, I very much agree with David using the word optimization for this. And optimization, in principle, doesn't have to be a process. Okay? Optimization is finding the best solution. Whereas if you think of learning in a biological setting, it happens over time. And in many cases, I think both David and I and many others have been careful to say: look, here is the network after it has been trained, and this is the object you should be concerned with. How we got there, you know, don't ask us about it; we pulled it out of somewhere, it's of no concern to you. And in many cases that's correct, and it's a valid warning to issue. But another aspect that I think is worthwhile to push forward, and it's related to the relation and validity with respect to biology, is the process itself. So, for instance, if you know how to perform many tasks, then they are built one on top of the other.
So you have an existing schema that you generalize for many tasks, and now...
[00:48:06] Speaker C: You learn a new one compositionally.
[00:48:09] Speaker A: It can be compositional. It can be these issues of catastrophic forgetting.
But if you learn a family of tasks and you learn them sequentially, or you learn a battery of tasks and then you add another one, that already has something to do with the process by which you learn. And I think that's also somewhere we can connect better to constraints that are in the experiments themselves.
[00:48:46] Speaker B: I guess I agree. It's just too hard for me.
You know, when you look at the literature on learning in biology, it's the classic example of, like, a hundred thousand small details, with no way of figuring out how those small details come together into something that makes any sense. So I find the problem overwhelming, personally.
[00:49:11] Speaker C: So you guys are both working on what you might call a space of solutions, right, for recurrent neural networks. And that is like for a given task or a sequence of tasks. Right. Or, you know, interleaved or, you know, however you want to do it.
A family of tasks, depending on how you initialize weights and how you train: what range and types of solutions arise through training that network? And David, some of the latest work that you've been doing is on what you call universality, which is, and you can correct me, roughly the idea that different neural networks might converge to the same, universal, solutions. Right. And Omri, you're maybe more skeptical, and through your work you've seen that maybe this is not so much the case. So we can just go down that road a little bit. David, did I define universality correctly? Take it from there.
[00:50:04] Speaker B: Yeah, sure. So I'm going to defuse it a little bit at the outset. I'm going to be talking about an ideal here, and surely what happens in real life, if anything about what I'm hypothesizing is true, would not be ideal, but nonetheless, it's still a guiding idea.
So the idea of universality is that different dynamical systems have properties that are conserved despite the fact that the equations are different, or, in our parlance, despite the fact that the parameters of our neural networks are different. Right. There are lots of classic physics examples, but they're pretty complicated. The classic example in math is Feigenbaum's delta. With these little one dimensional maps, as they're called, you can show that if the map has certain properties, it's unimodal, plus three or four other properties, but basically if it has these things, it doesn't matter what the equation form is, it will have a number called Feigenbaum's delta, related to the onset of chaos. And that number, 4.669 something something something, is the same across any system that has these properties. Right. So I took a lot of inspiration from this example. First off, it's just great science, and accessible, and people should look it up, because it's amazing.
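For the curious, here is a standard numerical sketch of what David is describing, textbook material rather than anything from the episode. For the logistic map f_r(x) = r*x*(1-x), the superstable parameters R_n, where x = 0.5 sits on a cycle of period 2^n, are roots of f_r^(2^n)(0.5) = 0.5, and ratios of successive gaps (R_{n-1} - R_{n-2}) / (R_n - R_{n-1}) approach Feigenbaum's delta, 4.6692. The root brackets below are read off the map's well-known bifurcation diagram.

```python
import numpy as np
from scipy.optimize import brentq

def g(r, n):
    """f_r^(2**n)(0.5) - 0.5; zero when x = 0.5 lies on a period-2**n cycle."""
    x = 0.5
    for _ in range(2 ** n):
        x = r * x * (1 - x)
    return x - 0.5

# Brackets around the first few superstable parameters of the logistic map.
brackets = [(1.9, 2.1), (3.0, 3.4), (3.44, 3.54), (3.544, 3.564), (3.5645, 3.5687)]
R = [brentq(g, a, b, args=(n,)) for n, (a, b) in enumerate(brackets)]

# Ratios of successive gaps converge to Feigenbaum's delta, 4.6692...
for n in range(2, len(R)):
    print(f"delta estimate {n}: {(R[n-1] - R[n-2]) / (R[n] - R[n-1]):.3f}")
```

Run it and the estimates march toward 4.669, even though nothing about the logistic map's particular equation went into that number, which is the sense of "universal" David is borrowing.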
But in terms of my own work, via reading some of that and conversations with people along the way, I've started to wonder if artificial neural networks are similar to the degree that we're showing them. We're already buying into that hypothesis, and not everybody does.
But to the degree that artificial networks are similar to brains, is there not a universality property going on that is just. So to apply the universality idea directly to brains and artificial networks? That is, despite the fact that the equations of RNNs are almost surely totally different than whatever biophysical equations you'd write down for a brain, no matter what level of abstraction you try to get at, you're still going to have massive mismatches, right? So given all of that mismatch between what we're trying to model and the tool, the RNN that we're using to model it, why should anything look similar, right? And my evolution on this topic, I've basically come around, I'll admit it, I'm very much in the universality camp. And let me tell you a little bit about my evolution on the topic. I mean, in my PhD 15 years ago, when I started making networks and copying the echo state network algorithms out of the paper and studying what was going on, I thought a neuron was a neuron. And what I mean by that is an artificial neuron in my network was a one to one mapping to a real neuron in a brain. I actually bought that idea and I don't believe it anymore, not one bit.
[00:53:02] Speaker C: You mean functionally, you mean.
[00:53:04] Speaker B: Yeah, functionally. I'm sorry. Obviously there's a neuron as a physical thing in a biological animal versus the mapping.
[00:53:10] Speaker A: The mapping doesn't sort of...
[00:53:12] Speaker B: That's right. I thought the mapping was 1 to 1 at the neuron level.
[00:53:17] Speaker C: So you were naive.
[00:53:19] Speaker B: Yeah.
Basically. But when you take a closer look, you say, well, neurons have integration properties, they integrate their inputs over synapses and then decide to fire. So there is a little bit of similarity there. But by and large, artificial networks have floating point accuracy, they're rate networks, there are no spikes going on, right? There are a lot of reasons to be deeply skeptical. And then at the higher level, the artificial network just has some equation, right? At its most simplistic form, it's a linear system with maybe a saturating nonlinearity. So you have this matrix applied to a vector, and then you saturate it, and that's your network. That's just not a brain, nor is it even a network in a brain, right? There's so much other structure going on: inhibition, excitation, spikes, all kinds of things in a real brain. So that asks the question: why should there be similarity? And if you move forward through the 10 years of applying RNNs to brains and seeing a number of successes, you say, well, what's going on there?
So my thinking, coming back to universality, is that these dynamical skeletons, that is to say, the fixed point structures, are potentially the conserved element. Those are the universal numbers, if you will, the 4.669, the universal property amongst these systems. And there are lots of reasons to think that might be true. But not necessarily the particular exact fixed point location, although under very precise circumstances, maybe. Because when we look at fixed points, I don't actually care about a fixed point. I care about the topological flow of the dynamics in the high dimensional state space, and that's what these fixed point structures are negotiating. So if a task requires these properties, if it needs these dynamics to be done correctly, then you need to organize a system that has those dynamical flows. And that's the idea of universality to me, as applied to artificial networks and biological networks.
[00:55:36] Speaker C: Is it valid to say that the other way to think about this is: given the vast differences that you described between a unit and a neuron, and yet the dynamical landscape when you train on a task comes out at least in a similar way, if not exactly the same in some cases, the other way to interpret that is that the dynamics are trivial, just something that would come out of so many other things, and maybe the least important thing? Maybe that reduces its importance. Right. So that's a minor rebuttal, but it occurred to me that that's one way to think about it. Does that make sense?
[00:56:15] Speaker B: Yeah. So I think that falls into the line of criticism that what we're really studying is tasks and not animals. And I think to some extent that's true. Right. But, so, again, I want to defuse this a little bit. I don't think that everything is universal. I think it's a guiding property. And if you set up precise artificial examples, you could find, I'm guessing, this is my hypothesis, very conserved, probably universal structures of fixed point skeletons. Right. But coming back to the animal, which is what you're talking about: the animal has to do lots and lots of things, lots and lots of tasks, to survive. So my guess is that the analogy there is, when we see a dynamic in an artificial network that helps us understand the recordings, that is, the state space dynamics of animals, and we see an overlap there, the reason we're seeing an overlap is that there is some hint or substructure of universality required to do the task, for example.
[00:57:25] Speaker C: Well, and David, we don't need to go into detail, I suppose, but you've tested lots of different types. You ran a bunch of tests on hundreds and hundreds, if not thousands, of recurrent neural networks from different families, trained in different ways, compared them all, rearranged them, and found that there are clusters, depending on how you look at it. Well, let's just talk about it, and then we can go back into Omri's stuff and how it differs. So, geometrically, let's say, I'm going to summarize it and then you can maybe add a little more detail. The geometry differed among the recurrent neural networks, and you can talk about what that means, but the fixed point topology, basically the dynamical structure of the networks, all sort of overlapped in many ways. So the universal aspect is the overlapping dynamical structure, whereas the different architectures differed in the geometry of the representations. Maybe you can elaborate.
[00:58:30] Speaker B: Yeah, sure. So there we're talking about a purely artificial study; there's no biological data. Right. We were testing the idea that, even in an artificial setting, could we find structures that were highly conserved across networks that had different equational forms and different parameter initializations? That is to say, after training them to death, each system solves the task. Are the internals, as we understand them, visualized through the dynamical skeleton, conserved?
[00:59:05] Speaker C: And this is across LSTMs, vanilla RNNs...
[00:59:08] Speaker B: That's right. GRUs.
[00:59:09] Speaker C: GRUs.
[00:59:10] Speaker B: Yeah, whatever, all the standard stuff that people are using in the field right now. And keep in mind, those equations are very different. Like, a vanilla RNN is a very different equation on paper than an LSTM. Very, very, very different.
So a mathematician would say night and day, right?
They do have common elements, for example saturations and matrix multiplies, but they're still very different. So what we found was, and I don't know how precise to get here, basically the geometry of the solutions looked a little different. That is to say, some were stretched, some were rotated differently. There's also an unidentifiability there. But basically the geometry varied from network to network. But if you were to imagine the flows of the dynamics, that is to say, the dynamical skeleton is negotiating these flows, that creates a topology. In other words, to me, if you have a long ellipsoid network state space that does exactly the same set of flows, topologically, as a very compact circular one, then they're doing the same thing. Right? And what we found is that in that topological view, literally every network was basically identical. Keep in mind, there's a lot of experimental technology going on here, and even if we got a few wrong in terms of our analysis, it would still be like, well, 99.9% of these networks are all doing the same thing when viewed through this very specific dynamical systems lens. And that was the result.
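As a toy illustration of same-topology, different-geometry, my own construction rather than anything from the study: two 2-D systems implementing the same 1-bit memory, where the second is the first expressed in rotated and stretched coordinates. A numerical fixed point search reports the same skeleton for both, two attractors and one saddle, at different locations in state space.

```python
import numpy as np
from scipy.optimize import fsolve

def f1(z):
    """Bistable memory in normal form: attractors at x = +/-1, saddle at 0."""
    x, y = z
    return np.array([x - x ** 3, -2.0 * y])

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
M = R @ np.diag([3.0, 0.5])                    # rotate + stretch: new "geometry"
Minv = np.linalg.inv(M)

def f2(z):
    """Same vector field as f1, in transformed coordinates z' = M z."""
    return M @ f1(Minv @ z)

def fixed_points(f, seeds):
    """Roots of f from several seeds, each tagged with its unstable-direction count."""
    found = []
    for s in seeds:
        z = fsolve(f, s)
        if np.linalg.norm(f(z)) > 1e-8:
            continue                           # didn't converge to a root
        if any(np.linalg.norm(z - p) < 1e-4 for p, _ in found):
            continue                           # duplicate of a known fixed point
        eps = 1e-6                             # central-difference Jacobian
        J = np.column_stack([(f(z + eps * e) - f(z - eps * e)) / (2 * eps)
                             for e in np.eye(2)])
        found.append((z, int((np.linalg.eigvals(J).real > 0).sum())))
    return found

seeds = [np.array([sx, sy]) for sx in (-1.5, 0.2, 1.5) for sy in (-0.5, 0.5)]
for name, f, T in [("net1", f1, np.eye(2)), ("net2", f2, M)]:
    fps = fixed_points(f, [T @ s for s in seeds])
    print(name, [(np.round(z, 2).tolist(), u)
                 for z, u in sorted(fps, key=lambda p: p[0][0])])
# Both typically report three fixed points with unstable-direction counts
# (0, 1, 0): the same dynamical skeleton at different places in state space.
```

The analysis in the actual study is of course far more involved, but this is the shape of the claim: after quotienting out the geometry, what remains, the count, stability, and connectivity of the skeleton, matches.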
[01:00:47] Speaker A: First of all, I think I very much agree with David's view and description here.
First of all, the analogy, or in a sense, what is the proper way to compare the model data to the experimental data?
Is it the single neuron level? Is it the task level? Is it the dynamical object level, the PCA? There are many different levels at which you can do this comparison. And I'm quite convinced, again, as David said, that the single neuron is not the correct one, because in biology, a single segment of a dendrite is more complex than a full artificial RNN.
But then I think that your question, and what David said, is highlighting that, as always, there's the spectrum, right? So if it doesn't matter, then can I have a sand pile, equations of a sand pile that don't even pretend to look like a neural network, and have them also implement the same equations? And maybe I could.
But then the question is, what are we modeling here? So now, how would one approach that? You could say, well, if it is so universal that no matter which equations I put at, let's say, the implementation level, I always get the exact same fixed point topology, then have I learned anything about the specific implementation the brain is using, or have I learned something about the task? At the extreme, if the implementation doesn't matter at all, then obviously I have not learned anything about implementation, because I just proved the implementation is irrelevant.
But that's an extreme case. So I think there are at least two interesting places to go there. One is geometry might matter. So it could be that the topological landscape is identical and is dictated by the task and not by the implementation.
But you could get, let's say, geometrical insights or constraints from the neural data.
And again, that has to be done carefully. And actually I have not carefully thought about how to do that while at the same time not requiring a neuron to be a neuron. But I think that it could be done in principle.
[01:03:38] Speaker B: Well, here are two. Let me just give you two examples, right, from neuroscience.
So if you have a dynamical system and you say what you're really interested in is the topology of the dynamical flows, then you're making an argument that at least within orders of magnitude, you don't care about the dynamics. Right? Right.
Sometimes it's just about getting from state space region A to state space region B, and how that happens, whether it's a big swirl or a straight path, doesn't matter. That's a case of the dynamics not mattering where the topology does. But let me give examples where the dynamics do matter. For example, in the motor system: when I'm lifting and controlling my arm, the speed at which that happens matters, and it's precise.
Another example is out of Mehrdad Jazayeri's lab, where they talk about the geometry of a particular curve in state space as implementing a prior on some kind of decision making. That's a case where the geometry did matter. So I'm just helping with the hedge here.
[01:04:51] Speaker A: So I think that's one axis along which you can defuse this criticism of over-universality.
And the other one is that it's possible to have tasks that are not universal, tasks with multiple solutions. If that's the case, then once again you can constrain your models with the particular solution attained by a particular animal. You can also have different strategies that different animals develop, and you might match them to different strategies developed by your networks. And you can take that even further and ask: if I have 100 rats or mice that solve this task, can I do statistics and ask what the distribution of solutions is?
And if I have a million networks, can I ask which aspects of the network, its architecture, modularity, learning rule, order of trials, or whatever, affect the distribution of solutions there?
And does that inform me about some of the underlying implementation constraints in the brain? So I think universality is a question that has to be asked.
But again, it lies on a spectrum. And I think walking along that spectrum, breaking universality gradually and using that as a constraint to link to the data, can be a promising way forward.
[01:06:59] Speaker B: I totally agree with what Omri is saying, and I think there are different levels at play. Basically, if you have a couple of solutions, then you have universality classes, right, at the algorithmic level. For example, take sorting: quicksort, heapsort, bubble sort, there's a gajillion sorts. Right.
At the algorithmic level, you would ask, well, which one is someone doing? And my guess is that if you dive into the implementation details, then for everyone who's doing a quicksort the implementation might still be somewhat universal, or at least have hints of it. So that's one level. The other level is really what Omri is saying: even at the implementation level of some of these tasks, there still could be various ways in which it's done. I'm aware of some work in mice on integration tasks that shows that, and hopefully that will come out soon. So those were the comments I wanted to make.
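The sorting analogy can be made literal in a few lines: two correct sorts are indistinguishable at the task level (identical input-output behavior), while an observer probing their intermediate states, the analogue of recording neural trajectories, would see entirely different paths to the answer. A toy illustration:

```python
def bubble_sort(xs):
    """O(n^2): repeatedly swap adjacent out-of-order pairs."""
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def quick_sort(xs):
    """O(n log n) on average: partition around a pivot and recurse."""
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    return (quick_sort([x for x in rest if x < pivot]) + [pivot]
            + quick_sort([x for x in rest if x >= pivot]))

data = [5, 3, 8, 1, 9, 2]
# Same universality class at the input-output level...
assert bubble_sort(data) == quick_sort(data) == sorted(data)
# ...reached by completely different implementations.
```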
[01:07:51] Speaker A: Yeah. One example that we're now working on is the Jazayeri ready-set-go task that David mentioned earlier. We trained many networks on this task and we saw different dynamical, let's say, topologies.
[01:08:12] Speaker C: Oh, sorry. This is the ready, set, go task, is that what you said?
[01:08:15] Speaker A: Yeah, yes, this is the ready, set, go task. So that basically the task is the animal receives two stimuli that are separated by a certain delay of time and it has to reproduce ready, set and then go. And if you do ready, set, go will come later at the same delay. So you have to sort of match the ready set grade to. Yeah, so we translate ready set into set go. And it turns out if you think of it as a dynamical landscape question, it's not that obvious. So it's not that you have sort of, you know, a decision and you go right or left to this fixed point or that fixed point, but all the action is happening in how fast you move or how or where you go. And it turns out that networks that solve this task, some of them do this with, let's say a one dimensional slow manifold and others do this with a two dimensional slow manifold. Some of them curve on top of themselves, some of them do not. Some of them generate a limit cycle and some of them do not. Now the limit cycle is sort of too slow to matter when you train the network, but if you then sort of let it continue, then you see that some of them do that and some of them don't.
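To pin the task down, here is a minimal sketch of how ready-set-go trials might be generated for training an RNN. All timings, bin sizes, and the step-shaped "go" target are illustrative assumptions, not the parameters of the labs' actual experiments:

```python
import numpy as np

def make_trial(t_s, dt=10, t_ready=100, pulse_bins=2, pad=200):
    """One ready-set-go trial; times in ms, returned as (T, 1) arrays.

    The input carries two brief pulses: 'ready' at t_ready and 'set'
    at t_ready + t_s. The target asks the output to switch on at
    t_set + t_s, i.e. to reproduce the sample interval t_s."""
    t_set, t_go = t_ready + t_s, t_ready + 2 * t_s
    T = (t_go + pad) // dt
    x, y = np.zeros((T, 1)), np.zeros((T, 1))
    x[t_ready // dt : t_ready // dt + pulse_bins] = 1.0   # 'ready' pulse
    x[t_set // dt : t_set // dt + pulse_bins] = 1.0       # 'set' pulse
    y[t_go // dt :] = 1.0                                 # desired 'go'
    return x, y

# Training covers a range of delays; extrapolation probes use longer ones.
train_trials = [make_trial(t_s) for t_s in (400, 500, 600, 700, 800)]
```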
So in that case you have different variants of solutions to the same task.
And I think it raises many questions to which we have very few answers so far. For instance, we use extrapolation: you train the network on a certain set of delays, but then you challenge it with longer delays. Intuitively, if I tell you what the task is, of course you'll be able to extrapolate, because you know the rule. If I just give you some example delays, it's not clear how well you will extrapolate. There are experiments showing that you will extrapolate, but sublinearly.
As the delays I present get longer and longer, your responses fall increasingly short: you'll still respond with intervals longer than those in your training set, but not as long as you should have. There are experiments from the 80s where people used psychophysics to test extrapolation, and people don't do that.
Well, it's ill-defined, but they don't do it the way the experimenter would have expected.
And you could ask, is that even a fair thing to ask the network to do, since it's ill-defined? If I say networks are different when they extrapolate, is that a true difference or not? Should we only care about how they operate within their training regime?
Or maybe I can use that to probe the network, or to compare it to the neural data. We have some clues, but so far we have many questions as well.
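A crude way to quantify the sublinear extrapolation being described: fit a line to the intervals a trained network actually produces at delays beyond its training range. The produced values below are made up purely for illustration, standing in for whatever readout extracts the network's "go" time:

```python
import numpy as np

def extrapolation_slope(test_delays, produced):
    """Slope of produced vs. instructed interval outside the training
    range: 1.0 means lawful linear extrapolation; < 1.0 means responses
    regress toward the trained range, the sublinear pattern above."""
    slope, _intercept = np.polyfit(test_delays, produced, 1)
    return slope

# Hypothetical numbers: a network trained on 400-800 ms delays,
# probed at longer ones and undershooting the rule.
test_delays = np.array([900, 1000, 1100, 1200])
produced = np.array([880, 940, 990, 1030])          # made-up readouts
print(extrapolation_slope(test_delays, produced))   # ~0.5: sublinear
```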
[01:11:45] Speaker C: Let me jump in and ask what your thoughts are on Uri Hasson's work on direct fit. What you're saying is that humans, or animals, can extrapolate, so we could compare natural neural networks to artificial ones. But Uri's position, and his research suggests this, is that humans can't really extrapolate: we have so many neurons, our recurrent neural network is so large and has such high capacity, that everything we do well is interpolation, within the training set, and anything outside it we actually fail at, because we just haven't memorized how to do it or had examples of it. If that's true, then comparing natural and artificial neural networks in that way wouldn't tell us anything. I don't know if you have thoughts about that.
[01:12:41] Speaker A: I guess it's very hard to compare extrapolation in humans, who have a lifetime of experience, and to say whether it's really extrapolation, because of all the prior knowledge. An RNN is tabula rasa, right? You create it out of nowhere, you give it this task, and this task is its entire life experience. Then you compare it to a subject walking into the room with a college degree and a lifelong experience of being out and about. Is that a fair comparison? David spoke about the fact that real biological agents are trained on a plethora of tasks, and they share components and use those shared components to behave better in a new scenario. When you test a human on a new stimulus, it's never really new, because there's always some context, whereas you can really surprise these artificial networks in ways that I don't think you can surprise experimental subjects. I think it's not hopeless, but it's tricky to compare.
[01:14:03] Speaker B: Yeah. So I do think being able to instruct humans is also a big deal. And I think that if you really wanted to study some of this extrapolation, you could build, and in fact I did with Robert Yang, or Robert Yang did the building, networks that actually understand language. This is a side topic, but if you think about configuring a neural network to do task A, B, or C, normally you would say, well, here's a one-hot input for task A, a one-hot for task B, a one-hot for task C: a one-hot encoding. But language is, in effect, a modular, generalizing configuration for these systems. Right. So I think there's a world where we could study those things. But jumping back out to the topic of universality, I definitely agree with Omri that the specific details matter. And I don't mean to say that my little artificial RNN is universally the same as some very complicated biological mammalian organism.
But to me, the idea is at least, I guess, normative, or it helps me explain why, when these things are so different, you might see any similarities at all. And that's really where I would hang my hat in terms of leaning on that theory.
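For the configuration aside above: the one-hot scheme is just an extra block of input channels, one per task, held constant through a trial. A minimal sketch of that convention, with arbitrary sizes:

```python
import numpy as np

N_TASKS = 3        # tasks A, B, C
STIM_DIM = 4       # dimensionality of the sensory input (arbitrary)

def with_task_cue(stim, task_id):
    """Append a constant one-hot task cue to every timestep of a stimulus.

    stim: (T, STIM_DIM) array. The network then receives 'which task'
    as extra input channels that are orthogonal across tasks by
    construction; language-based instructions instead share structure
    across tasks, which is the modular, generalizing property David
    points to."""
    cue = np.zeros((stim.shape[0], N_TASKS))
    cue[:, task_id] = 1.0
    return np.concatenate([stim, cue], axis=1)

stim = np.random.default_rng(0).normal(size=(50, STIM_DIM))
x_task_b = with_task_cue(stim, task_id=1)   # shape (50, STIM_DIM + N_TASKS)
```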
[01:15:34] Speaker C: Omri, are you hanging your hat in a different saloon, or the same saloon as David?
[01:15:39] Speaker A: I think the only thing I'll add is, again, this spectrum. I want to go to the universality side and then take one step back, and ask whether I can use that small lack of universality to match the data, or to ask: I know my model is ridiculously unbiological, but, for instance, is one feature of it more ridiculous than another?
If I hold the task fixed and play with biological detail, will that bring me closer to or further from the data? So I think universality is a good place to be. But again, we're doing linearization, right? You want to perturb it a bit and see how it responds to various things.
[01:16:31] Speaker B: That's right.
Number one, do spikes matter? Number two, excitation and inhibition.
There could be all kinds of reasons. There's excitation and inhibition, that is, different cells, cell types, for biological networks that are concerned with actual metabolism and with not getting epilepsy, or whatever; pick your favorite reason why there should be things like this. Now, if those details matter to the solutions, then we should absolutely be putting them in our models. But if they don't matter, then we need to make a concerted effort, something of a PR effort, to say, hey, these things don't matter, and we're not going to put them in, because they only obfuscate the point of the modeling. Right?
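If one wanted to put the excitation/inhibition detail into a trained RNN to test whether it matters, a common trick in the E/I RNN literature is to reparameterize the recurrent weights so that each unit's outgoing weights share a single sign (Dale's law). A minimal sketch, assuming the convention that column j of W holds unit j's outgoing weights:

```python
import numpy as np

N = 100
n_exc = int(0.8 * N)                   # an 80/20 E/I split, a common choice
signs = np.ones(N)
signs[n_exc:] = -1.0                   # remaining units are inhibitory
D = np.diag(signs)

# Unconstrained parameters live in W_raw; the effective recurrent weights
# respect Dale's law because |W_raw| is nonnegative and D fixes each
# column's sign. Train W_raw; use W_eff in the dynamics.
rng = np.random.default_rng(0)
W_raw = rng.normal(0, 1 / np.sqrt(N), (N, N))
W_eff = np.abs(W_raw) @ D

assert (W_eff[:, :n_exc] >= 0).all()   # excitatory columns
assert (W_eff[:, n_exc:] <= 0).all()   # inhibitory columns
```

Training the same task with and without the constraint is one concrete version of the test David describes: if the solutions' topology survives the constraint, that detail did not matter at this level of description.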
[01:17:19] Speaker C: I'm just zooming out though. Isn't it just wondrous and exciting and amazing that there is a resemblance that we can make some headway into comparing both of these complex systems and finding some sort of structure in the randomness, some sort of potential universality? I mean, do you ever just take the time to think, oh, how awesome is that? Or are you too mired in the work to take the time?
[01:17:45] Speaker B: I think it's great. I think it's very cool. And again, this is why I don't get too caught up in "does the brain implement backprop" conversations. First off, as I admitted before, the learning stuff just seems impossibly difficult to me, so I'm picking easier problems, and much respect to those who are biting off the hard ones.
But on top of that, if you really believe in some form of universality, to some degree, not too much, then as long as these systems can wiggle into semi-optimal solutions, it's not overly important how they got there. I probably just made 17 different enemies by saying that, but I kind of believe it, right?
[01:18:32] Speaker A: Yeah, again, I think it's great. And as I hinted earlier, I enjoy looking, in a sense, even further away.
Looking at genetic networks, at the topology of genetic networks, there are surprising areas where you find similar aspects, and I really enjoy that. I'm in a job where all I have to do is have fun, right? So I try to do that.
[01:19:09] Speaker C: Oh, that's a good way to end it. Let me ask you guys a question for the audience, for the younger people out there earlier in their careers, and then we'll wrap up. Do you make time to go outside of your own expertise and think about the bigger questions, and if so, how do you fit that in? Do you take a day, like Romain Brette takes days off, or at least he used to, dedicated to thinking about the big picture, why it all matters, how it all fits together? Do you take that time, and if so, how do you do it?
[01:19:43] Speaker B: Yeah, so it depends on what you mean by big picture. I mean, that can get pretty cosmic pretty quickly.
[01:19:49] Speaker C: I don't mean tripping in the desert.
[01:19:53] Speaker B: Yeah, no, I guess not really. No, I don't take that time.
Maybe it's because I already did it.
I believe that the work that I'm doing is important, and I have empirical evidence that it's making contact with data.
And I feel like what I'm attempting to do is bridge gaps between the granular and the meso, at least, if not the macro. So I'm pretty okay with where I'm at, and I don't spend a lot of time on that question anymore, although I probably did early in my career.
[01:20:26] Speaker C: Do you think it's important to spend that time early on in your career?
[01:20:29] Speaker B: Absolutely. And more narrowly: asking the right question is the hardest part of science. You have to get the questions right. It's easy to say "I care about consciousness," but how are you going to get a crowbar into that question? Getting that correct is the most important part. In fact, when I sit down in lab meetings and we have full discussions with lots of people, the most important part of that process for me is calibrating my questions.
[01:20:58] Speaker A: So I'm lucky, I guess, in the atmosphere I'm in: there are people around me working on various things, and people talk a lot about rather big questions. One anecdote: when I gave my job talk, one of the PIs there came up and said, that was a great talk, but why are you working on the brain? I mean, who cares about the brain? There's lots of other stuff in biology that's more important.
And I said, I care about the brain, but I want to be in a place where people ask me that question.
So I think it's important to be probed, to be poked, let's say, a bit out of your comfort zone every now and then.
But as with everything in this conversation, there's a spectrum, and you don't want to spend all your time being poked, because then you do nothing.
[01:22:01] Speaker B: So, yeah, at the end of the day, you have to do things.
[01:22:30] Speaker A: Yeah, and I think that's right. There's a balance. Whether that balance is a day a week, a day a month, or a week a year doesn't matter. It's very personal, but I think it's important to check that you're not drifting to one of these ends. And I have drifted to both ends at times.
[01:22:30] Speaker C: Well, thank you guys for spending the time with me. I look forward to next time when Omri is studying ant colonies and David's still studying the brain, perhaps. But I really appreciate you guys. Thank you so much. It was fun. Thank you.
[01:22:40] Speaker B: Super fun. Thanks for having me.
[01:22:56] Speaker C: Brain Inspired is a production of me and you. I don't do advertisements. You can support the show through Patreon for a trifling amount and get access to the full versions of all the episodes, plus bonus episodes that focus more on the cultural side but still have science. Go to BrainInspired Co and find the red Patreon button there. To get in touch with me, email [email protected]. The music you hear is by The New Year. Find them at thenewyear.net. Thank you for your support. See you next time.