Episode Transcript
[00:00:03] Speaker A: I'm quite convinced, and I think we've been pushing ourselves to be convinced that manifolds are the ideal level to look at neural function.
But I don't know if we will have an understandable mapping from neural function to behavior.
There was a lot of argument more about, is this concept, like, you know, useful? And I think now we have enough evidence for that. Maybe a bit too much, because I feel like science should be more confrontational, in the sense that we have more of these discussions. You know, now there's not those types of counterarguments. It's more like, how is this not trivial? Like, of course you find this. What have we learned by comparing recordings from monkeys doing reaching and grasping tasks and mice doing reaching and grasping tasks?
And we basically showed that these manifolds were similar across different monkeys or different mice, and the degree of similarity depended on how similar their movements were.
[00:01:11] Speaker B: This is Brain Inspired, powered by The Transmitter. Hello, people. Welcome to Brain Inspired.
That is Juan Gallego. Juan runs the BeNeuro Lab, as in Behavioral and Neural Dynamics, at the Champalimaud Centre for the Unknown in Lisbon, Portugal, which is probably the most awesome name for a center that I've come across. They probably know that. Sorry to splice this in here, but just a quick correction: Juan has renamed his lab to the Neuro Cybernetics Lab, and wanted me to note that they are affiliated with the Neuroscience of Disease and Neuroscience Programs and the Center of Restorative Neurotechnology. Okay, back to the original recording. The main reason I invited Juan is because he has worked a lot on neural manifolds, the mathematical objects that neuroscience is using more and more to describe how big populations of neurons coordinate their activity to do useful things.
In fact, he recently gave a short talk that he titled the Manifold Manifesto because he was asked to be provocative.
And he was provocative, suggesting that manifolds are real, as real as chairs and tables are, that they have causal power, and that they might be a target of evolution.
Very provocative. Of course, he talked about his own work and others work to support those claims.
So today we discuss many of those themes through the lens of his own work and others work. And we talk about what keeps him up at night, about the possible limits of using manifolds to connect brain activity with behavior and with mental phenomena.
Juan is not just a manifold person, though. He's more broadly interested in motor control and how brains do it. In that vein, we also discuss his work in patients with spinal cord injuries who don't have enough nerve connections to their muscles from their brains.
They don't have enough connections to actually drive their muscles to move, but they do have enough nerve connections that some signal still gets through. And Juan and his colleagues can detect that little bit getting through and use it to infer what behaviors the patients intend to perform. And they can use that information to control actions in a computer simulation.
The hope is that this will translate to controlling prosthetics, to give spinal cord injury patients their mobility again. If you're a Patreon supporter, this is a super long episode, because I coaxed Juan back on to have another short conversation to discuss a few other topics that we didn't get to.
And I'm including a little snippet of that extra discussion here. But go to braininspired.co to become a Patreon supporter for the rest of it, and to get the full versions of all the episodes. So thank you to my Patreon supporters, as always, and thank you to The Transmitter for also helping support Brain Inspired. Go to the show notes at braininspired.co/podcast/234, where I link to many of the papers that we discuss. All right, I hope you and your manifolds are doing well out there. Here's Juan.
Juan, we're going to talk a lot about neural manifolds today, among other things.
And when I think of you now, I synonymously think of manifolds. Is that an accurate thing, or do you even like that?
Do you think that's true and do you like that?
[00:04:59] Speaker A: I think maybe I'm part-time manifold, but I think I'm also other things. I want to be other things. But I guess we've tried to be opinionated about what a manifold is, and also tried to clarify, you know, the pros and challenges of this framework. So I guess it's fair to some extent.
[00:05:17] Speaker B: Yeah. Well, so, yeah, okay, pros and challenges.
Let's just jump right in and I'm going to ask you to do something that you've probably done a lot. But what is a neural manifold to you?
Perhaps, because I know that there are different definitions as you've written about and then so just broadly and then how does the concept of a neural manifold fit within your thinking these days about brain function?
[00:05:46] Speaker A: Yeah, so I think to me a neural manifold is just like this mathematical object that captures the possible neural activity states given the constraints that a neural population has. And these constraints can be like connectivity, neuromodulations, all this. We call this in a recent paper biophysical constraints, for lack of a better word.
But of course, there's also the task constraints of what the animal is doing. Right. So that is what I would call a neural manifold.
[00:06:13] Speaker B: Well, so you already. Oh, man. Now I just immediately want to.
But a neural manifold, let's say, is defined by the spiking activity of neurons, right.
In a population of neurons. Which separates it from sort of the single-neuron doctrine, where the field used to put a lot of focus on, like, the tuning properties of single neurons. And a manifold is a way to take populations of neural activity, look at how they covary with each other, and then come up with some sort of structure to describe how the population activity relates to ongoing behavior and cognition.
[00:06:54] Speaker A: Right, exactly. So I think what you said, covariance or coordination, to me that is the key term. Because a lot of people put a lot of emphasis on talking about manifolds as being low dimensional, right? But I think this is very unfair to everyone looking at tuning curves, or to the first single-neuron recording papers, where basically people were building categories of neurons, right? Like, these are ramping neurons, tonic neurons, or these are muscle cells, force cells, velocity cells in motor control, or place cells, grid cells, and all these things. So I think in neuroscience we were always low dimensional. You're trying to build low-dimensional classes because, as philosophers say, we need low-dimensional objects or compressed concepts to reason with. Right.
But I think the important thing in manifolds is that you switch from what single neurons are doing to what the collective is doing as a whole.
And then there's a lot of things that the collective may be doing that cannot be mapped, at least very clearly, onto the constituent neurons.
So that, I think, is the key distinction.
[00:08:06] Speaker B: Well, part of that is. So, continuing on the theme of where neuroscience was in recording single neurons: the way that you would do it back in the day is lower an electrode, and you'd sort of listen for a neuron, and you wouldn't record that neuron if it seemed kind of random, or wasn't modulated relative to whatever task your organism was performing. And so you would actually bypass a lot of neurons that you might think are noise, right? And sometimes the neuron that you end up recording for an hour or 20 minutes or whatever, sometimes it's not modulated the way that you would expect it to be modulated, and it seems a little noisier. So then you end up with this kind of population of recorded neurons, and you have to think, all right, which one is relevant to the task that the animal was performing? Meanwhile, all the neurons are firing. Some of them you consider noise, some of them you consider somehow adjacently related to the task. You have to, like, make these decisions.
And these days, of course, with the population approach, we just put an electrode down. We don't try to isolate any single neurons. We just take them all. And this is a way to sort of collectively include all those neurons that maybe we couldn't make heads or tails of in the past.
[00:09:27] Speaker A: Exactly. And I fully agree. And also, you know, I think thinking sort of evolved with technology, right? And technology unlocks thinking and understanding, right?
So, you know, there were good reasons why people were doing it back in the day. Again, our textbooks were written, like, with these approaches. But I. I think that now that we can collect different kinds of neural data, we can think of different ways of making sense of neural data.
And to me, the most important change enabled by this population approach was, again, the first studies suggesting that the collective is more than the sum of the neurons. The paper that kind of made me go down this rabbit hole, if you want a quick example, is a paper by our common friend Aaron Batista and Byron Yu. In 2014, they published this paper basically showing that these collective patterns of activity constrain what a population can do.
I'll describe it very quickly for everyone in kind of lay terms, because this is a bit of a technical paper. So what they did was have monkeys doing a standard brain-computer interface task, where they would map neural activity from populations onto the movement of a cursor along the horizontal and vertical axes. Right? This is the typical BCI task.
And what they did was, like, do this through a manifold.
And their hypothesis was very nice, like, very elegant. It's like, if manifolds reflect constraints, learning something new within the manifold should be easy, and learning something new outside the manifold should be hard. And that is exactly what they found.
[00:11:08] Speaker B: So let me just pause there and clarify what inside and outside the manifold, or on or off the manifold, is. Right? So you have this population of activity, and the neurons individually all have ranges of their firing rates under different conditions. For example, if you have 10 neurons, you can think of the state space of those 10 neurons as a 10-dimensional space, where each neuron can go from 0 to 100 spikes per second or whatever. So that defines the total space of possible trajectories of that neural activity. Right? But the manifold is defined, and you mentioned low dimensional earlier.
In reality, the patterns of the individual neurons that collectively make up the population only visit certain parts of that state space. And often what we have found, especially in motor-related activity in places like the motor cortex, is that the range of possible states within that state space that the population activity actually visits is fairly limited, or constrained.
And so on a normal task, let's say an animal performing this cursor mapping task, you'll see that there is kind of a low-dimensional structure to the population activity while the monkey or animal performs the task.
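The state-space picture described here can be sketched in a few lines of Python. This is a toy simulation with made-up numbers, not data from any study mentioned in the episode: 10 simulated neurons are driven by just 2 shared latent signals, so their joint activity occupies a low-dimensional slice of the full 10-dimensional state space, which PCA (done here via an SVD) recovers.

```python
import numpy as np

rng = np.random.default_rng(0)

# 10 neurons, 1000 time bins: the full state space is 10-dimensional.
n_neurons, n_bins = 10, 1000

# Toy assumption: activity is driven by only 2 shared latent signals
# (the "manifold"), plus a little private noise on each neuron.
t = np.linspace(0, 8 * np.pi, n_bins)
latents = np.stack([np.sin(t), np.cos(0.75 * t)])        # (2, T)
mixing = rng.normal(size=(n_neurons, 2))                 # per-neuron loadings
rates = mixing @ latents + 0.1 * rng.normal(size=(n_neurons, n_bins))

# PCA via SVD on mean-centered activity: how much variance do 2 dims capture?
centered = rates - rates.mean(axis=1, keepdims=True)
svals = np.linalg.svd(centered, compute_uv=False)
var_explained = (svals[:2] ** 2).sum() / (svals ** 2).sum()
print(f"variance explained by 2 of 10 dimensions: {var_explained:.2f}")
```

On this toy data the first two principal components capture nearly all the variance; real recordings are messier, which is exactly the "Swiss cheese" worry that comes up later in the conversation.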
Um, and I think what you're saying is, when they change the task but keep it within the range of that well-defined manifold, then it's easier to... I forget the details. Maybe you should jump in now and correct me.
[00:12:57] Speaker A: Yeah, no, this was a much less compressed explanation than mine. So what I was saying is: after finding this low-dimensional manifold that Paul was describing, which is what they observed these neurons doing while the monkey was doing the task, it was easier for monkeys to generate new patterns of activity within this surface.
So respecting, if you want, the patterns of working together that they had observed, than to change the firing rates by a similar degree while asking the neurons to work together in a different way.
So, in a way, neurons could easily do different things if they were working together in the same way; but if you asked the neurons to do a different thing, working together in a different way, they couldn't.
I hope that was clear.
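A hedged linear-algebra sketch of the within- versus outside-manifold idea (the dimensions and matrices below are invented for illustration; the actual 2014 experiment perturbed a real BCI decoder): a readout confined to the observed subspace can still be driven by the existing covariance patterns, while a readout built from orthogonal directions gets essentially no signal.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy population: 20 neurons whose activity lies in a 3-D subspace (the manifold).
U, _ = np.linalg.qr(rng.normal(size=(20, 3)))     # orthonormal manifold basis
activity = U @ rng.normal(size=(3, 500))          # (neurons, time), on-manifold

# Within-manifold perturbation: the decoder reads the same 3 dimensions,
# just shuffled, so the observed covariance patterns can still drive it.
within = U[:, [2, 0, 1]]

# Outside-manifold perturbation: the decoder reads directions orthogonal
# to the manifold, which the observed activity never visits.
full_basis, _ = np.linalg.qr(np.hstack([U, rng.normal(size=(20, 17))]))
outside = full_basis[:, 3:6]

energy_within = np.linalg.norm(within.T @ activity)
energy_outside = np.linalg.norm(outside.T @ activity)
print(energy_within, energy_outside)  # the orthogonal readout sees ~0 signal
```

To actually succeed at an outside-manifold task, the population would have to generate new covariance patterns, which is what the monkeys found hard.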
[00:13:50] Speaker B: Yeah, yeah, no, that's great.
And that's what got you into the world of manifolds.
[00:13:56] Speaker A: I think this paper and a couple of other papers made me think that it was more than just a nice way to look at data. Back in 2013, 2014, there were very few papers discussing this. I'm sure if someone goes on Google Scholar, this is skyrocketing now, right?
Yeah, but it was this idea, these ideas of emergence that people have discussed a lot on the podcast: that, if you want, this coarse graining allows you to see things that maybe are not so easy to understand at a lower level.
[00:14:32] Speaker B: Well, I saw you give a talk at a recent conference, and I know that you were kind of asked to be a little speculative, a little.
What's the word?
To generate discussion. I don't know if the title of the talk was the Manifold Manifesto, but that's the phrase that you used, which is great marketing, by the way. In that talk you were discussing...
Okay, so stepping back: I kind of go back and forth about the importance and usefulness of the concept of a manifold. In one sense, it's...
It's obviously beautiful. You get these ring attractor manifolds and these things that are geometrically satisfying to describe, and it's like a miracle that the population of these noisy neurons can covary in a way that maps onto these structures.
But then also, in my own recordings, which are in motor cortex, for example, in a mouse, I think: am I looking at a manifold, or some, like, Swiss cheese, or just something low level? Like, should I really think of this as a manifold? Because it doesn't map onto those nice trajectories. So anyway, that's why at the beginning I asked you where manifolds are in your thinking these days. Because you gave that talk, said Manifold Manifesto, and listed some things that might be useful about manifolds. But I'm wondering how convinced you are that they are ontologically real objects, and how much we should be studying the manifold versus other things, you know.
[00:16:11] Speaker A: Yeah, that was a fun talk to prepare. And it was also, you know, I prepared it the week after I moved here, I think, so we were unboxing and stuff. But it was fun to do.
Um, so, because I also had to compress, I focused on what I think manifolds are and what they mean.
So the three points that I wanted to make in that talk, and we can go through them later, were: first, that manifolds are real, as in they are invariant patterns that reflect real biological constraints.
So they are ontologically real, as real as chairs and tables are. That was the first point I wanted to make.
[00:16:59] Speaker B: Yeah. Okay.
[00:17:00] Speaker A: Okay.
[00:17:01] Speaker B: No, I'll let you get through the three points because I want to jump in.
[00:17:03] Speaker A: You know, of course. The second one was that they are good for compressed understanding, like we were talking about; but in that sense, tuning curves also were, right? But with manifolds, as you were saying, you see these beautiful ring attractors, and so you get these compressed descriptions, and we can discuss whether they are useful or trivial, and which way to go, right? Depending on what you're looking at, I think, is the answer. And the third point that I was trying to make is that I think they have causal power over the organism. So they are not only, you know, real; the manifold influences what the animal is doing.
What I didn't have a lot of time to talk about is what I think are the two things that keep me awake at night with respect to manifolds; I only outlined them very quickly. The first is whether we will be able to map the concepts from, you know, psychology or psychophysics, which have taught us a lot, onto these descriptions. Like, I'm playing Tetris; oh, this means that I'm going to do this and this, I have this attractor moving to this point, right? And how to go between these two levels of description. And the other, which I think applies to any way of looking at neural data, is basically whether we are trapped in this neurological fallacy of saying that the things we do, like making decisions, initiating a movement, or thinking about what we are going to cook this evening, can be mapped onto a small part of the brain that we can understand. Even if I think there's modularity in the brain. So I think those are the three good points and the two weaknesses.
[00:18:49] Speaker B: Oh, okay. Um, yeah, okay. I mean, you've expanded on that last point more in a review.
Talking about thinking. So we have this phrenology background where we think area X does function Y.
[00:19:06] Speaker A: Right.
[00:19:07] Speaker B: But one of the things that you've written about really nicely is that with the advent of these modern technologies, and recording in more naturalistic settings and more complicated kinds of tasks, there's more of a heterarchy view: that a brain area, and people's work like Eve Marder's, whom you cite frequently, shows this, a brain area can do different things depending on the context and depending on the complications of the task. And so the idea that, like, brain area X has a ring attractor manifold, and brain area Y has a slightly more nonlinear manifold.
And then these two, and you just called them causal, these two manifolds must interact with each other, pushing and pulling on each other, and then somehow construct up to these psychological terms that we use. And so that's where your worry is: trying to combine these things, or...
[00:20:06] Speaker A: Exactly. Like can we even do it? Right. Like one thing that we were discussing in.
In the last review, I think, is the paper with my friends Matt Perich and Derrick, that came out, I think, last year.
[00:20:19] Speaker B: But anyway, it's the Manifold View of the Brain. Is that the title?
[00:20:22] Speaker A: I think so, yeah. I think that may be the title. We had a different title, and it got changed at the last minute. Okay.
So in that paper, and we discussed this a lot among the three of us, because we wrote it mostly on Zoom, which was what made it fun. I don't want to speak on their behalf, but I think we all agree that the brain is modular. Right. You know, like with a stroke: if you have a lesion in this area, you will have this type of deficit. Right.
But the problem is that maybe this doesn't necessarily mean that this area is doing all of that, right? It may just be part of this aggregate, collective brain state that does that process; the lesion just has a focal effect on that process. This is what I meant by the neurological fallacy. So maybe there's no humanly understandable way of mapping these psychological processes onto how they're implemented in the brain.
Which is basically a discussion I often have with John Krakauer, now in person these days. And that connects a bit with their paper on mental representations versus neural representations, which I recommend people read.
[00:21:39] Speaker B: Yeah, well, I mean on the other hand, I mean it's like our best bet right now.
[00:21:44] Speaker A: Right.
[00:21:44] Speaker B: Because it's an in-between. I mean, I think that's what John would argue as well, and has argued: that these sorts of emergent structures, for lack of a better term, they're not single neurons; they do have shape and characteristics, and they're like a go-between. Because, you used the word constraints earlier: they're real entities that reflect the real constraints of the system, of the underlying physiology. But they also reflect the constraints of the tasks that we're performing, of the behaviors and our psychological outputs.
So, I mean, perhaps they are the best thing that we have going right now, anyway.
[00:22:29] Speaker A: Yeah, I mean, that is what I think. Right. And I think a lot of our work, first when I was a postdoc and then in the lab, has been basically convincing ourselves that manifolds are real, in the sense of finding invariances, like over time during a learned behavior. So, you know, no matter which neurons you are recording from: if you look at the brain of a monkey that knows how to do this task today, the manifold will look the same two years later, even if we know we are not recording from the same neurons, and the monkey has had an interesting life and learned new things and stuff.
So, you know, this was one of the studies that made us think that they are real. Right. And we can talk about other examples. For example, you talked about the ring attractor manifold, right, that has been found in the thalamus, in the navigation system. So we know that this manifold is not only the task; it's not just the mouse or the rat looking around, because when they go to sleep, the activity remains confined to this ring attractor. Right.
So this goes back to your point about constraints and why I think manifolds are real, and I just think they are the best thing we have. But, you know, I'm also trying to be critical with my own ideas.
That said, as you suggested, I think it's not only this. I think they also have causal power, and we can talk about why I think that way.
So it makes me feel even a bit more convinced that they are interesting.
[00:24:02] Speaker B: But with the level of feeling convinced, if you're like me, the level of worry that you're wrong also rises.
[00:24:11] Speaker A: Right, exactly. But that is, I think, what science should be, right? Like, you want to push your beliefs and you want to try to disprove them. At least I would like to be the one who proves me wrong, if I'm wrong.
[00:24:27] Speaker B: Yeah, well, ideally, you don't have to talk about beliefs.
You mentioned the work on invariances, and some of what you have done. I mean, you've done a lot of things, so we could talk about a range of things, but one of the more beautiful areas that you've worked on is saying: okay, if these things are real, we should be able to look at the manifold at one time in an animal, and then a year later or whatever, and find the same manifold structure. But also in two different animals, even perhaps two different species, performing the same task. If that task is not some sort of ecologically or ethologically specialized task, we should be able to see essentially the same manifold, or at least map between them. And you've done some work mapping between different species and different animals, and the same animals at different times. So could you just discuss a little bit what you did and what you found there?
[00:25:36] Speaker A: Yeah, exactly.
So this was work that we did during the pandemic, led by a couple of people in the lab, with Matt Perich.
So basically, our idea started from this common observation that we can all do the same things. And the way I usually use this in talks to wake people up is to ask everyone to close their eyes and touch the tip of their nose, and then open their eyes, and everyone has done it. Right? So for this to happen, of course, there has to be some similarity in the neural activity. But there's two ways this could go, right? One is that our brains are similar in the organization of, you know, microcircuits, columns, and so forth; and because we believe that this is what shapes manifolds, the solutions that these brains adopt should be the same. The alternative hypothesis is that, because we know the genome has very limited information capacity, and a lot of people have talked about this, right, maybe each brain kind of converges onto a slightly different solution. So what we did in this study was test the first hypothesis, by comparing recordings from monkeys doing reaching and grasping tasks and mice doing reaching and grasping tasks.
And we basically showed that these manifolds were similar across different monkeys, or across different mice. And the degree of similarity depended on how similar their movements were.
And we also showed, by the way, that if you break the connectivity of the circuits, and we did this in artificial neural network models to address our hypothesis, right, if you have different underlying connectivity statistics, then you don't get similar manifolds.
So this is how we were trying to close the whole loop of our hypothesis, from connectivity to manifolds to behavior.
[00:27:31] Speaker B: And then, I can use math to do a lot of transformations on functions, right? And the way you presented it pictorially is: you see different trajectories, and they look a little bit different between, let's say, two animals, but you can perform a linear transformation on them, and all of a sudden they map onto each other. And so I could skeptically think, well, I could perform a fancy transformation on different functions and force them to map onto each other. So what makes you confident that these are the same kinds of manifolds, even though you're doing a little extra math on them?
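For what it's worth, the "little extra math" in this line of work is typically canonical correlation analysis (CCA): it finds the best linear alignment, and the resulting canonical correlations can be compared against a control. A sketch under toy assumptions; the latent trajectories, noise levels, and function name here are all fabricated for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: one shared 3-D latent trajectory, seen by two "animals" through
# different random rotations (kept well-conditioned for this toy example),
# plus small independent noise.
T = 400
shared = rng.normal(size=(T, 3))
rot_a, _ = np.linalg.qr(rng.normal(size=(3, 3)))
rot_b, _ = np.linalg.qr(rng.normal(size=(3, 3)))
latents_a = shared @ rot_a + 0.05 * rng.normal(size=(T, 3))
latents_b = shared @ rot_b + 0.05 * rng.normal(size=(T, 3))

def cca_correlations(X, Y):
    """Canonical correlations between two latent-trajectory sets."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    # Orthonormalize each set; the singular values of the cross-product of
    # the orthonormal factors are the canonical correlations (all <= 1).
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)

corrs = cca_correlations(latents_a, latents_b)                     # high
corrs_null = cca_correlations(latents_a, rng.normal(size=(T, 3)))  # control
print(corrs, corrs_null)
```

The control is the key design point: CCA will "force" some alignment onto anything, but unrelated trajectories align far worse than trajectories generated from a shared source, which is the spirit of the controls Juan describes.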
[00:28:14] Speaker A: Yeah, that is like the reviewer number two and three question.
[00:28:18] Speaker B: Oh, shit. I'm sorry.
[00:28:20] Speaker A: So we have lots of controls. One of them, and this is also why I brought up my friend Matt Perich, is something he has done in a different paper from the lab: it's very easy, or relatively easy, to have RNNs, very simple models, right, that you train to do the task, and you find different solutions.
We also have in the paper the same control.
We have lots of controls in the paper, but if you compare across two different tasks, the similarity...
Let me unpack this a bit more.
We have a control in the paper that I think is the most convincing one where we have two different monkeys doing the same task.
Okay. And then we have the same monkey doing two very similar tasks on the same day when we were looking at the same neurons.
And these tasks were as similar as I'm generating force with my wrist on different directions, or I'm basically moving my wrist on those same directions.
And the manifolds between the two monkeys doing the same task were more similar than the manifolds between the same monkey doing these two very similar tasks when we were looking at the same neurons, because we did this in blocks. So to me, that was the stellar control.
[00:29:34] Speaker B: You mentioned reviewer 2, and it just made me wonder how much pushback you get these days. I can't imagine you're getting pushback on just the concept of a manifold. It seems like there's enough evidence, enough people publishing papers, that any resistance that might have initially been there has maybe gone away. But what do you find? Is there still a lot of resistance to the concept of manifolds? Or... it's part of the daily vernacular where I am.
[00:30:04] Speaker A: So, yeah, no, I agree with you. I mean, I remember when I was a kind of junior postdoc talking about these things, you would get a lot of...
So for context, this was like maybe 2016, 17.
There was a lot of argument more about, is this concept useful? Like, will it tell us something that we haven't been able to find at the level of single neurons? And I think now we have enough evidence for that. And maybe a bit too much, because I feel like science should be more confrontational, in the sense that we have more of these discussions.
So I think, you know, now there's not those types of counterarguments. It's more like a "how is this not trivial?" kind of question. Like, of course you find this. What have we learned?
Which is, you know, like what a good study should be, right. I think it should teach us something new.
But I think I agree with you that the conversations have completely changed in the last six, seven, eight years.
[00:31:11] Speaker B: Yeah, well, I can personally tell you, any which way I look at my data... So the manifolds are beautiful, right? And so I get this data set and I think, man, I'm going to find the manifold for this behavior. And then I look at my data and I'm like, oh, it's not obvious that there are these repeatable trajectories.
And then I doubt whether the concept of a manifold applies. But you can still say, oh, it's a manifold of X dimensionality: so many dimensions explain so much of the variance if I do some linear PCA dimensionality reduction on it. And you call it a manifold, but it isn't necessarily beautiful.
So the question that you mentioned, how useful are they?
Because they're descriptive, and they're obviously beautiful; they're nice to look at, if you can say something has a ring shape, or a line attractor shape. And one of the things I wrote down to ask you about, which relates to their usefulness: what potential do manifolds have in terms of developing theory, which people often point to as a lacking thing in neuroscience, right? So we have these in-between emergent structures, and we can describe their properties, and they're invariant. But do they help with theorizing about the brain?
[00:32:41] Speaker A: Yeah, I think this is what we as a manifold community, and I count myself as part of the manifold community, right...
[00:32:51] Speaker B: You're like the leader, you're one of the leaders of the manifold community, man.
[00:32:58] Speaker A: So I think this is the next phase, right? One thing that is making me happy, in a sense, is that people are studying systems that are a bit different from ours. Because when I was a postdoc, like you, looking at our data, I was like, oh, we're going to find this manifold. And we find very beautiful, simple, understandable things. Like, we have all these tasks, and we find dimensions that, I don't know, relate to commands to these muscles and those muscles, and so on.
But on the other hand, people can study other systems, like this head direction system, for example. You know, we've talked about it: the ring is preserved in fruit flies, and we know that you can drive the activity to different points of the ring attractor that they also have, and you can basically steer the fruit fly. To start to show causality. Right.
And there are now, I think, three or four papers that we can discuss showing that this compressed description of neural data, if you take the insights that you can get from it and you causally manipulate in the way that would intuitively make sense, leads to the effect that you would predict. So this is what I meant: they are real, but they are also meaningful to the animal, because we can take these compressed objects and basically manipulate behavior by moving activity along this manifold, now that we have better technology. If you want, we can talk about one of these examples.
[00:34:36] Speaker B: Yeah, sure. But maybe before that: earlier, when we were talking about causality,
I was thinking, well, yes, okay, so the manifold sits in between the nitty-gritty physiological details and the behavioral or cognitive output. But behavior also has an effect on neural activity, and behavior itself is a constraint. So in your example of closing our eyes and touching our nose, I was going to call that an ethologically relevant behavior. Although it's not exactly that, it's a behavior we've been sculpted evolutionarily to perform.
It's like vital that we be able to touch our noses to survive.
But that could shape the neural dynamics themselves. So how do you think about that causally in a sort of. Is there a circular causation there or, you know.
[00:35:30] Speaker A: Yeah, I mean, what we saw in the paper too is that if you look at the preparation of these movements, while the monkey is not moving, you also find these preserved manifolds.
And again, there's the example during sleep in, you know, the head direction system, and also the grid cells that are related to navigation. I think this supports the reality of manifolds, but I think the causal effects have to be a bit more direct. Right. And, as you said, as dissociated from behavior as possible. Right.
Because otherwise we don't know if the manifold is a representation of the behavior or a correlate of the behavior. I shouldn't say representation; a correlate of the behavior. And then what we are doing is manipulating that. Right.
And I think there's a good example of this.
Maybe we can talk about a concrete example from my now-colleague Joe Paton here at Champalimaud, who has worked on this. He's very interested in timing in the nervous system. So they've developed this task where rats have three ports, and they poke their nose into one port to start a trial. Then an auditory tone rings, there's an interval, and a second tone rings. The rats basically have to categorize whether the interval between these two sounds was shorter or longer than one and a half seconds. And if it's shorter, they have to stick their nose in one of the other ports, and if it's longer, in another.
[00:37:19] Speaker B: An interval timing task.
[00:37:21] Speaker A: Exactly, a categorization task. And an important detail is that they have a version of the task where the rats cannot move, so they have to stay there with their nose in the port during the interval.
And what they've found is that in the striatum, so in the basal ganglia, there's a manifold that is basically a trajectory that is independent of the passage of time. It's always the same geometry. What happens is that the speed of these dynamics along this manifold relates to the passage of time.
Okay. So what they've done that is very cool, and causal, is that while the rat was waiting, they could cool down the striatum to make these dynamics go a bit slower. And then the rats would report that less time had passed.
So to me, this is really beautiful, because, as far as they could see in the paper, there's no overt correlate of this. The rat was just internally misjudging, if you want, the passage of time, based on these internal dynamics, under this manipulation. So I think this is a very compelling example that these compressed objects allow you to make predictions that you can then test causally, to show that they matter for behavior.
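[Editor's note: a toy numerical sketch of the clock-speed intuition described above. The function, numbers, and decision boundary are all invented for illustration; this is not the study's actual model or data. The idea: if a downstream readout judges elapsed time by how far activity has traveled along a fixed trajectory, assuming normal speed, then slowing the dynamics makes a given objective interval look shorter.]

```python
# Toy sketch: elapsed time is read out from distance traveled along a fixed
# trajectory, at a speed the temperature manipulation can scale.
# All values are made up for illustration.

def judged_seconds(true_seconds, speed_scale=1.0):
    """Distance traveled along the manifold, decoded assuming normal speed."""
    normal_speed = 1.0                       # arbitrary units per second
    distance = true_seconds * normal_speed * speed_scale
    return distance / normal_speed           # decoder assumes speed_scale == 1

boundary = 1.5          # rats categorize intervals as shorter/longer than this
true_interval = 1.6     # objectively a "long" interval

normal = judged_seconds(true_interval)                    # 1.6 s judged
cooled = judged_seconds(true_interval, speed_scale=0.8)   # 1.28 s judged

print(normal > boundary, cooled > boundary)  # True False: cooling flips the report
```

In this sketch, heating would be a speed_scale above 1, pushing judgments the other way, matching the direction of the temperature effects described above.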
[00:38:42] Speaker B: They physically cooled down the.
Was it basal ganglia?
[00:38:47] Speaker A: Yeah, with a temperature probe. They also showed that you can heat it up a little bit. So the paper, I think, has four different temperature manipulations, and you can see that the speed of the dynamics and the behavior move the way that you would expect.
[00:39:02] Speaker B: And, I'm so sorry, this is kind of a detailed question, but one could imagine that one or two single neurons are mainly responsible for all the variation within that loop structure, and for the speeding up and slowing down. So I guess the question is: how are we confident that they're manipulating the manifold, as opposed to a different explanation, manipulating the speed of all the neurons?
And so you can explain it by manipulating all the neurons, but I guess that kind of defines the manifold. I'm walking myself in circles here.
[00:39:39] Speaker A: Yeah, no, but this is a very good point. And I was thinking about this recently in the context of reading about emergence, a topic that also comes up in your podcast, and that I'm interested in, because it relates a lot to manifolds.
So I think the intuition here, and maybe for everyone, or at least my intuition, is that manifolds can be reduced to neurons. Right. Because manifolds reflect the collective activity of all these neurons.
But I think it's a bit like Phil Anderson's "More Is Different" paper.
Unless you've observed the manifold, you cannot predict what is going to emerge. So you can go from manifolds to neurons, but you cannot look at the neurons and expect to know what is going to happen. You have to analyze what these neurons are doing as a collective to see it.
So I think it's a really beautiful study, because it means this manipulation either makes whichever downstream region judges elapsed time go left or right, which is not what I think happens, or it moves the collective brain state that is related to the decision of left or right.
[00:40:54] Speaker B: All right, so let's say you have this beautiful timing manifold structure in the basal ganglia, which you can go slower or faster along, and you can causally manipulate the rodent's behavior, its report of its subjective timing.
So, but then you have that in one brain area, and it has to causally interact with other manifolds in other brain areas. And this is something that you were alluding to earlier, right? That this is a challenge.
But if I take the neurons from that, and instead of treating it as one brain area, I take a different neural population from a different brain area that is part of the causal story of how the behavior eventually happens, and I treat them as one, and I get a different manifold from combining the two and their interactions, how do I think about that? You know, I can potentially look at the manifolds of every brain area.
[00:42:03] Speaker A: Exactly. I think this is one of the two limitations that I was alluding to at the beginning. It's basically a different side of the same coin, in the sense that, I think, thinking about brain regions is very useful. Right. Because even if, again, I don't think the brain is 100% modular, I also don't think the brain is 100% "everything everywhere all at once."
So again, it's something we can talk about. We can say, oh, prefrontal cortex neurons mostly care about this; if we lesion prefrontal cortex, this happens; these kinds of things. Right. So in a way, regions are a useful, compressed understanding, and we have centuries of evidence on them. Right.
On the other hand, to what extent can the most interesting things that we do, like having this very abstract conversation, or creating art, be reduced to "this area is creating art" or "making a decision is this"? I think that's the big challenge for going from behavior to its neural implementation. And maybe what we have to do is what you were alluding to: take all the areas together and look at some change in the brain-wide manifold that would be the correlate of this interesting thing that we're interested in. Right. Like an interesting behavior.
[00:43:21] Speaker B: But then you can just measure the behavior and say, that is the manifold of the whole brain.
[00:43:25] Speaker A: Right, Exactly.
[00:43:26] Speaker B: What granularity do we need to be satisfied?
[00:43:31] Speaker A: Exactly. I think that is the million-dollar question. And this is what you asked me at the beginning, where I was at. Personally, I'm quite convinced, and I think we've been pushing ourselves to be convinced, that manifolds are the ideal level to look at neural function. But I don't know if we will have an understandable mapping from neural function to behavior that works across everything. And by understandable, I mean, I think you had Henk de Regt on the podcast, right?
Understanding in Henk de Regt's sense.
[00:44:09] Speaker B: Oh, yeah, Henk. Yeah. You cite Henk's work, I've seen.
[00:44:12] Speaker A: Yeah, talking about scientific understanding. Maybe we can build a transformer that would one day be able to build this mapping, but maybe we will not be able to understand it. Right. We won't be able to interpret it.
[00:44:25] Speaker B: Yeah, I'm interested in understanding too. And in some sense we're limited or trapped by these symbolic things that we use called words.
And we have this legacy of what's known as the machine metaphor for brain function, where we have to look at a little part and give a name to its function. But everything we're learning is that the name we give it can change depending on the context, on what task the animal is performing. Everything is dynamic and changing, and we just can't have a conversation about that.
We can't go to, like, some differentiable limit of function. We have to give it a static name at some point. And we can't say the hippocampus is a navigation-memory system when you're waking up in the morning, when you're 42 years old, and when you're 47 years old, when you're going to bed at night. It's abstract; you just get into an infinite run-on sentence. So this is part of the challenge, I think: we have to give names to things, basically.
[00:45:40] Speaker A: Exactly, if we want to understand them. Right. Although language and thought are not the same thing, at least we have to have, again, compressed objects we can think with. Right. To understand.
And maybe behavior is the best level for that.
[00:45:56] Speaker B: Maybe. I'm sorry, maybe behavior is the what?
[00:45:58] Speaker A: Maybe behavior, as you were alluding to, is the best level for that.
[00:46:03] Speaker B: Wait, so, okay, now I've lost again where your thought is, in terms of how you're thinking about these things. Are you with someone like John Krakauer, who says, well, there's just going to be a divide between the psychological and the neural descriptions, and we just have to get comfortable with it?
[00:46:26] Speaker A: I'm not at the same extreme as John, for sure, but I'm not convinced that we will be able to find a neural explanation of all the interesting things. I think we have good neural explanations for interesting processes. Right.
Like these things we talked about: constraints, or what the transition is between preparing and executing a movement, or this time judgment, decisions, and things like that. But I don't know if, by the time that you and I retire, we will have cracked this.
[00:47:05] Speaker B: Not me.
[00:47:05] Speaker A: Yeah. Yeah.
I think you are a bit farther away from retirement.
[00:47:12] Speaker B: One of us, at least it probably is.
[00:47:15] Speaker A: Yeah.
[00:47:15] Speaker B: So.
[00:47:16] Speaker A: Yeah. So, just to clarify: I'm optimistic that manifolds are the best level of neural description, but I'm not 100% confident that we'll be able to find a neural explanation of behavior.
[00:47:31] Speaker B: Let's talk about causality for a second.
Going back to it, I know that you just got Alicia Juarrero's book, Context Changes Everything.
So she's been on the show, and I think I mentioned this to you: some of the feedback I've gotten on her book. A lot of people really love it; I really love it. But I also sympathize with some of the neuroscientists' feedback, which is, well, okay, fine, but I don't really know what to do with this information. I can't put it into equations. There's not some formal mathematical system by which we can study how the constraints of a system affect ongoing behavior, whether you're talking about the behavior of neural populations or the behavior of the organism. But one of her driving points, and philosophers have written about this, is that there's a hazy line between what we think of as causality, like one billiard ball hitting another, and the constraints: the pool table, the composition of the wood of the cue, et cetera, all the constraints that go into making it happen as well. And I've just come to accept that context and constraints are the main story of causation, not some sideshow of causation. And I'm wondering whether you've thought about these sorts of things, and if so, how you think about them, since we were talking about causation. Yeah.
[00:49:14] Speaker A: I haven't read Alicia's book yet, because I just got it a couple of days ago. But, you know, I define manifolds as mathematical objects that reflect real constraints on neurons. Right. So this is something that I'm very interested in, and I think it has a lot of power. Right.
[00:49:32] Speaker B: But see, sorry to interrupt, in that wording, it's a mathematical object that reflects the constraints.
So the constraints themselves are sort of just there, but really the manifold is the thing, you know?
[00:49:47] Speaker A: But I think the manifold is, if you want, a consequence of the constraints. Right. Maybe I was being a bit wishy-washy with my wording. As you said, it "reflects" those constraints, in the sense that it is a consequence of those constraints. Because if we didn't have a connectivity structure that imposes constraints, and we didn't have common inputs, neuromodulators, and this stuff, we wouldn't have constraints. Right? And that, to me, delineates a lot of what a neural circuit can do. If we think, for example, about learning, we talked about this paper in monkeys where the constraints that you have shape how easy it is to learn a slightly new version of what you already know how to do. So I think constraints are very important; they have a lot of causal power. The billiard-ball example is really nice because, again, it's easy to understand, and the example I gave you from this time-manipulation study is very billiard-ball-like. Right. You manipulate, and this is the consequence.
But I think the other point, that constraints have causal power over what animals and brains can do, is super important, and it has clearly been the basis for many of the studies we've done in the lab.
[00:51:06] Speaker B: One thing I also want to make sure I ask you about in the manifold story, because we have other things to talk about: everything we've been discussing is about measuring the spiking of neurons and correlating the spikes of one neuron with those of the rest of the population. But even in your last few sentences, you mentioned, I think, the word neuromodulators. There's a lot of other things going on in the brain, and we tend to think of the spikes as being the thing. The spikes are, after all, what make up the manifold. The spikes are what we measure.
But there are neuromodulators that shape the spiking activity. So are we even measuring the right thing? Is the spike the right thing? I know this is such a dumb question, but we sometimes have to go back to those assumptions, because it's the lamppost problem, right? We could measure neurons with single electrodes a while back, and so we did, and we averaged them, et cetera. Now we can measure the spiking activity of large populations of neurons, so we do, and we have found that they have manifold kinds of structure. But is this the same sort of problem of looking under the lamppost, just because that's what we can measure?
[00:52:31] Speaker A: Yeah, that's a great question. I also wonder about this. But I think, and we discussed it a bit in the review that we were talking about, neuromodulation, for example, will change manifolds. And of course, if we had measured the neuromodulators, it would be better, because then we could hopefully establish relationships between the geometry of the manifold, or where you are on it, and, I don't know, norepinephrine release. Right.
But I think this is kind of built in, or captured by the manifold. The problem is that if you don't think about it, you may see something in your data where the animal is doing exactly the same thing, as far as you can tell, but the activity is in this part of the state space and not that one.
And maybe that's just, if you want, neuromodulatory states, or maybe arousal or other things that are also related to neuromodulators. Right. So I think it will be reflected in the manifolds. Not thinking about it may add to our confusion, or has added to our confusion, I would think.
And there's a couple of papers that hint in that direction.
Yeah, and the more we can measure, the merrier.
[00:53:52] Speaker B: Sure, the more the merrier. But I guess buried in my long-winded comments was a question: would it be better if we defined manifolds so that, let's say, neuromodulators were part of the manifold, rather than shaping the manifold? Now the manifold is the neural activity. But why can't the manifold be neural activity and neuromodulatory activity and oscillations?
You know, that's the grand goal, right? We have all of these interacting parts that we know give rise to emergent properties.
Can we tell a story across temporal and spatial scales, including all these things, not just spiking, and call that some sort of manifold? Would that be better? I don't know.
[00:54:47] Speaker A: Yeah, I don't know either. On neuromodulators, I'm sure there's a couple of studies that show, for example, correlated drift. Quote-unquote "drift," for people that are not watching the YouTube version.
[00:55:01] Speaker B: Define drift.
[00:55:02] Speaker A: Define drift: some unknown slow change in neural activity. It was found across very distant brain areas, and it was related to engagement and attention. That could just be explained by a change in neuromodulator release that makes the activity move in state space, or drift if you want, in these two areas in a correlated way. And that may just be arousal. So if this group had been measuring that neuromodulator, they could know for sure.
Going back, you talked about neuromodulators and oscillations. This is something that we actually worked on in a small paper, looking at the relationship between manifolds and intracortical local field potentials.
And we found a relationship that was kind of stable between different phases of behavior.
But one thing that I'm not sure about is whether oscillations are, like, causal for the brain, or if they just reflect underlying biophysical processes that, together with the neuroanatomy, give rise to these oscillations that we measure. And they are just epiphenomena in that sense.
[00:56:20] Speaker B: How can I.
[00:56:21] Speaker A: And I don't have a strong view on this.
[00:56:22] Speaker B: You don't? I'll argue for the fact that they're causal. Right? Okay, so let's take oscillations. You have a bunch of neurons, and you can measure the electrical field potential activity. Right.
[00:56:35] Speaker A: Of.
[00:56:36] Speaker B: So you have, sort of, positive, negative, positive, negative. Well, neural activity is affected by the voltage difference between the extracellular milieu and the inside of the cell. And if the oscillation, which is the collective whole, is changing the voltage that the single neuron is being affected by, how is that not causal?
[00:56:58] Speaker A: Yeah, I think it could be. I just don't.
What I was trying to say, I meant it in the most general sense, because there are a few papers I'm aware of where they saw that the biophysics of LFP generation, for example, depends a lot on the brain area you're looking at. Like, for example, the hippocampus has a very different structure than motor cortex. So the sources generating the oscillations that you measure may be local in some areas and not local in other areas.
So I just meant, like, I wouldn't use this idea as a blanket assumption, that they are always epiphenomenal.
And this is, I think, like a different conversation. And I think, like, to be really well informed, we have to study like, these mountains of papers that I was trying to read back then.
[00:57:49] Speaker B: Well, we'll let Earl Miller do that.
[00:57:51] Speaker A: Right?
[00:57:51] Speaker B: Is that the.
[00:57:51] Speaker A: Yeah, exactly.
He has some interesting causal evidence that if you modulate a specific band, it can have an effect on behavior.
[00:58:02] Speaker B: Well, I think this is, again, the problem: there are so many interacting parts. And you guys write about this in your review too. It really depends on what temporal scale you're looking at, on what spatial scale. And that's why, going back to the degree of modularity of the brain:
is area X always performing function Y?
No, it's not, but it always is in this one condition. So then how can we describe what area X is doing? I mean, I think the same thing can be said about oscillations and astrocyte contributions and all these interacting parts.
[00:58:38] Speaker A: Exactly. Yeah. Astrocytes are also something we should probably talk about more as a community. There are people studying astrocytes, right, but maybe astrocytes don't get enough credit.
[00:58:50] Speaker B: Right, right.
The astrocyte manifold hypothesis.
[00:58:54] Speaker A: Exactly.
[00:58:57] Speaker B: You and collaborators have written about how, especially in the early days of figuring out what a manifold was, you take a high-dimensional space, like lots and lots of neurons, which means lots and lots of dimensions, and then you throw it into a linear dimensionality reduction technique, most famously principal component analysis, which shows you the linear shared variance between the neurons. And then you can map out a manifold in that linear space.
One of the things that I guess you've worried about over the years, or continue to worry about, is whether that's too simplified a picture. Because neurons themselves behave in nonlinear ways. Even dendrites of single neurons can behave in nonlinear ways. Then why would we expect to get this linear relationship in a low-dimensional manifold? And what you suggest is that, well, yeah, it probably is too simple: it turns out manifolds are nonlinear, and there's work to do in mapping out the relation between the complexity of tasks and cognition and the linearity or nonlinearity of manifolds. So can you briefly discuss where your thoughts are on that?
[01:00:21] Speaker A: Yeah, from the beginning we were thinking about this, because it would be so nice, right, if all things were flat for some reason. But neurons, as you were saying, are nonlinear, and if you have many nonlinear parts interacting in complex ways, it's unlikely to be all linear. Right.
But on the other hand, PCA is nice because, as you were saying, principal component analysis is the simplest method, with the most basic assumptions. You're basically asking: what are the patterns within this population that explain the most variance? And you could make the inference that what dominates this population is likeliest to capture its function.
So I'm, you know, very happy with starting with PCA. But then we started looking at more complicated tasks, where we needed lots of these principal components to capture the variance well. And what we found is what you were describing, what you would expect. If you have your cereal bowl, it's a two-dimensional surface, but it lives in a three-dimensional space. So with PCA you would need three dimensions to describe this bowl.
And things get even worse, because our world, unless you are a string theorist, has three spatial dimensions.
But, as you were saying, we now record from hundreds or thousands of neurons. If we go back to the ring manifold that we were talking about before: a ring is clearly a one-dimensional thing. Right. It's like a ring.
But, I wish I had some rope or something here. Oh, I have a rubber band. So, for people on YouTube: this rubber band is a one-dimensional manifold. But if I twist it a lot, it becomes a three-dimensional manifold, if you look at how many linear dimensions you need. And if you have a thousand neurons, it can look like a 250-dimensional manifold, even if it's just a ring.
So there have been a lot of papers written on how things are very nonlinear, on how dimensionality scales with more neurons. But we have to be very mindful of this simple example of the rubber band: you can use a nonlinear dimensionality reduction method and probably find a low-dimensional surface. And I think this matters precisely because of the assumptions we were talking about at the beginning. Right. There are constraints that are intrinsic to the circuit and the task itself. Yeah.
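[Editor's note: the rubber-band point can be simulated in a few lines. This is a sketch with made-up cosine tuning curves standing in for recorded neurons, not real data: an intrinsically one-dimensional ring (a single angle describes every point), passed through nonlinear "neural" responses, needs many principal components to reach 95% variance.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Intrinsically 1-D ring: each point is fully described by one angle.
theta = np.linspace(0, 2 * np.pi, 500, endpoint=False)

# "Twist" the ring into 100 dimensions (a stand-in for 100 recorded neurons)
# by passing the angle through random nonlinear tuning curves.
n_neurons = 100
phases = rng.uniform(0, 2 * np.pi, n_neurons)
freqs = rng.integers(1, 6, n_neurons)        # higher harmonics = more twisting
X = np.cos(np.outer(theta, freqs) + phases)  # (500, 100) "firing rates"

# Linear dimensionality: how many PCA components to reach 95% variance?
Xc = X - X.mean(axis=0)
svals = np.linalg.svd(Xc, compute_uv=False)
var_ratio = svals**2 / np.sum(svals**2)
n_linear = int(np.searchsorted(np.cumsum(var_ratio), 0.95)) + 1

# Many more linear dimensions than the single intrinsic one.
print(f"intrinsic dimension: 1, linear (PCA) dimension: {n_linear}")
```

With only five harmonics the linear dimension is capped at ten here; with a real population and real nonlinearities it can keep growing with neuron count, even though the underlying object is still a ring.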
[01:02:58] Speaker B: Part of Henk de Regt's concept of understanding is that we should be able to make predictions without doing calculations.
If we understand something, someone who is well versed in, let's say, manifolds should be able to look at the shape of a manifold and make a prediction about the outcome of some causal manipulation, or something. I don't know what Henk would actually say; I think he would say that. So in the case where we do this linear dimensionality reduction technique: oh, if you do a little smoothing, you can just see that nice trajectory. But then you do the nonlinear one, which explains more variance and is, epistemically, a better explanation of the data, and all of a sudden it escapes our intuitive ability to think about it and manipulate it.
Where, where do we, where's our satisfaction there? Like where do we cut off?
Where do we say we're satisfied? We've estimated it well enough linearly, let's be satisfied? Or: yeah, it's really nonlinear, but if you look at it this way you can see a little glimpse of it in the linear domain, or something?
[01:04:10] Speaker A: Exactly. I'm with you; I'm more interested in understanding. And for the same tasks that we looked at, the nonlinear manifold looks like a slightly different version of the linear one.
It's just that we need fewer dimensions. So this will be interesting when we manage, as a community, to record from people playing Tetris or, you know, creating music.
Right.
But to me, what you were saying would be more satisfying: we look at something that we can comprehend, and then we can use it to make predictions and test hypotheses. Why I think this is important to bear in mind is to avoid saying manifolds are not real just because I found that the more neurons I record, the more dimensions I need with PCA, with principal component analysis. And this is a bit technical, apologies to the audience, but I think this is important.
[01:05:06] Speaker B: Okay, that's what I was going to ask: what kind of purchase that has on whether we consider manifolds real. I didn't have to say ontologically. Are they still real if they escape our low-dimensional understanding of them, and we just have to sort of trust: well, it's this really high-dimensional nonlinear space that we can't conceive of, but we just have to trust it?
But it's still real.
[01:05:37] Speaker A: I think we should be able to, you know. Even if it's nonlinear, a lot of these methods, like the one we use, Isomap, you can think about like this: you have a very nonlinear surface, and it basically unwraps the surface. Right. Take your cereal bowl again: it will unwrap it. So I think you just have to adjust your thinking, and, you know, there are mathematicians that think in 10D. Everything in science is a compromise. We can go back to Henk de Regt: making predictions that maybe are not 100% accurate, but that I can make just by reasoning about them, say 70% accurate, versus building a huge neural foundation model that gives me 100% prediction but is so big that I cannot make sense of it. So it's the art of modeling and understanding. I think this is why science is a bit like an art, and hopefully we still keep our jobs through this AI revolution.
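[Editor's note: a sketch of that unwrapping using scikit-learn's Isomap on synthetic data (the same kind of twisted ring as the rubber-band example above, not data from any experiment). Isomap builds a neighborhood graph, approximates geodesic, along-the-surface distances, and embeds those distances in a few dimensions.]

```python
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(1)

# Synthetic "population": a 1-D ring twisted into 50 dimensions through
# random nonlinear tuning curves, so linear methods see many dimensions.
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
freqs = rng.integers(1, 6, 50)
phases = rng.uniform(0, 2 * np.pi, 50)
X = np.cos(np.outer(theta, freqs) + phases)   # (400, 50)

# Isomap: neighborhood graph -> geodesic distances -> low-D embedding.
emb = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

print(emb.shape)  # (400, 2): the twisted object laid out in two dimensions
```

Plotting the two embedding columns against each other would show the loop laid out flat, which is the "unwrapping the cereal bowl" picture in code.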
[01:06:40] Speaker B: Well, I mean, Mazviita Chirimuuta makes this point too, and it's an old philosophical point: when we model (you just said "modeling"), it's necessarily an abstraction.
And going back to Alfred North Whitehead, and what he called the fallacy of misplaced concreteness: one of the things that we tend to do is slip into treating the model as the real thing.
[01:07:03] Speaker A: Right.
[01:07:04] Speaker B: And so there's this sort of worry that when we're talking about abstractions and models and manifolds, we can start treating the manifold as the real thing, when maybe we're just modeling it as something, because we have to be able to do that.
So I don't know, there's this trade off always in thinking about these things.
[01:07:22] Speaker A: Yeah. I think this is one thing that used to come up a lot in the early days of talking about manifolds: is this abstraction real?
But I wonder what would have been the case if the history of neuroscience had been different, if we hadn't grown up looking at single neurons. It could have been the opposite. Right. Because, you know, chairs are not real, right? They are collections of atoms.
So maybe if we had seen atoms first, we would have said that chairs are not real.
[01:08:00] Speaker B: Oh, man. Okay.
[01:08:01] Speaker A: Yeah.
[01:08:03] Speaker B: I've asked David Krakauer, like, is there anything that is not an emergent property? And he said yes. I still don't buy it. I think everything that we can talk about is an emergent property of other
[01:08:14] Speaker A: things. I guess, like, the quantum wave function. If you ask Sean Carroll, right, everything is an emergent property of the quantum wave function. And I think I may agree with that, to my poor understanding of quantum mechanics.
Right.
But anyway, like, thinking about the quantum wave function will not let us predict how our conversation is going to go. Right.
Yeah.
[01:08:37] Speaker B: You know, in a sense, though, like I think you said. Did you say a neuron is not real or cell?
[01:08:45] Speaker A: I said that if we could only look at atoms, we might say that neurons are not real. No, no. Neurons are real.
[01:08:52] Speaker B: Yeah, yeah, yeah, yeah. But we could kind of say that, like a neuron's not real because it's only atoms.
[01:08:56] Speaker A: Right.
[01:08:56] Speaker B: It's just where it ends up if you take reductionism to its extreme. I think it's funny that we always use atoms still, even though we know there are quarks, you know, subatomic things, but we still go to atoms as
[01:09:07] Speaker A: the base, like baryonic matter and these things.
[01:09:11] Speaker B: Yeah, yeah.
[01:09:13] Speaker A: But I guess the idea of atoms is the oldest one, right? Like, ancient.
[01:09:15] Speaker B: That's right.
[01:09:16] Speaker A: Yeah, yeah.
[01:09:17] Speaker B: Democritus. Democritus, yeah. I can't remember.
[01:09:20] Speaker A: I think so. I think so. Yeah, yeah.
[01:09:22] Speaker B: But a neuron. So manifolds are in this interesting space too, because a neuron, even though you say it's not real, Juan. I'm just kidding.
But you can see Neurons Aren't Real will be the title of this episode.
But I mean, you can see a neuron, and you can measure the voltage, right? And that's how we get action potentials, by measuring those voltages.
So manifolds, they're like these floating abstract things, right? So there's always this thing where I can't see a manifold unless I plot it in three of its dimensions.
[01:09:59] Speaker A: Right.
[01:09:59] Speaker B: You know, so.
[01:10:01] Speaker A: Yeah, yeah, no, I agree with that. It would be a good thing to say it's obvious, right? But I think we have enough evidence that that is the case. Right.
[01:10:13] Speaker B: Okay. I've heard a lot of people say, when they talk about you, that his only talent is manifolds. That's all he can do or talk about. He just jabbers on about manifolds all day. So I'm going to try to get you to talk about something else today. No, but maybe we can transition into your work with BCIs.
Because I was going to ask you in the very beginning how you got into this racket. What are you doing these days, what are you following? Are your interests the same these days as what got you interested in neuroscience in the beginning?
[01:10:55] Speaker A: I mean, this could go on very long, the journey of my interests. I started doing research a bit in robotics as an engineer, and then I moved into something more translational, what people would now call neural engineering. So my PhD thesis was on building a closed-loop system using surface electrical stimulation of muscles. For people who are old enough to remember those ads for machines to build up your abs while you were eating snacks during a football game.
So we would use sophisticated ways to stimulate forearm and arm muscles to counteract the tremor of people with Parkinson's disease or essential tremor. This was my PhD, but at the same time I got very interested in understanding why people had tremors. So I started to read a lot about clinical neurophysiology, and I went to work in a lab recording from human spinal cord motor neurons in people with tremor. This was my segue into neuroscience. And then, because I had this translational interest, this is why I joined Lee Miller's lab, which, for people who don't know, was doing pioneering work on this. Lee's group was one of the first; my friends Emily Oby and Christian Ethier did the work. As PIs, we don't do anything, right? They showed that in a monkey they had reversibly paralyzed with anesthetics, the same anesthetics that dentists use, so they were paralyzing the nerves and the monkey couldn't close the hand, they could decode from the brain, using a brain-computer interface, how the monkey wanted to activate the muscles. And then they could use the same type of muscle stimulation technology I was using to reanimate the paralyzed muscles.
It let the monkey do interesting things again, like grab things and move them around. So this is the project I joined, and I was working quite a lot on this.
We just never got to an N equals 2 experiment; people doing animal research will understand, with the implants and things.
So I was working a lot on BCIs, and at the same time, again because I was interested in understanding what was going on, this is how I got into manifolds.
[01:13:14] Speaker B: Ah, okay.
[01:13:15] Speaker A: So it was kind of doing both things at the same time.
[01:13:18] Speaker B: So I was thinking, like, I didn't know if this was a sort of a newer direction, but it's like returning home to your original.
[01:13:26] Speaker A: Yeah. And in London, when we started my group, we were doing this manifold work, and we were also doing a lot of mouse experiments that we are presenting at conferences now, more like motor control, motor learning, BCIs.
And we were also working on motor neuron control, because we are a motor control lab at heart, for those who don't know it.
So we mostly work on motor control and motor learning, and on the output. Movement doesn't happen because the hand magically moves, right? Your motor neurons are causing the movement. So we are studying that, and also translational applications. That was a bit of a long answer, but just to put things into context.
[01:14:10] Speaker B: Yeah. So spinal cord injury.
So tell us. Tell us the story about what you have found in patients who have spinal cord injuries and the extent of the damage versus what you can do with it, et cetera.
[01:14:26] Speaker A: Yeah. I'm very excited about this work that we preprinted recently, because it came out of all these efforts toward a basic understanding of how spinal motor neurons work, with a bunch of awesome people working in the lab. Shout out to Vish, Kira, Singchen, Agnese and Laras. This was a team project with my colleague Dario Farina.
So we were basically doing all this basic science work, and then we had the idea: why not try to have even clinically complete spinal cord injury people control their motor units, so their motor neurons, to do interesting things? In other words, can we build a BCI that is just putting sticky electrodes on the arm of a paralyzed person, and what can they do?
So this was.
[01:15:15] Speaker B: You said, you said clinically complete.
[01:15:18] Speaker A: Exactly.
[01:15:19] Speaker B: What is that? Incomplete in another language?
[01:15:23] Speaker A: Yeah, thanks for asking, because this is very important. Clinically complete means that they basically cannot generate overt movements, and if they are sensory complete, they don't have sensation. But an anatomically complete spinal cord injury would mean, you know, you take a chainsaw and cut the spinal cord in half, and people wouldn't survive something like that. So this is very important: even clinically complete people still have some residual input from the brain below the injury.
So this is what we were hoping for: that we could put these sticky electrodes on, use our math tricks to identify motor neurons, and that they would be able to control these motor neurons to do new tasks and play games, basically
[01:16:14] Speaker B: Just before you get into the details of that: what is the range? Clinically complete sounds like one thing, but there's got to be, like, Bill has 30% of the projection neurons still somewhat intact and Nancy has 45%, but they're both clinically complete, or something. So is there a typical range at which there's still some activity going down, or is it just continuous, or what?
[01:16:43] Speaker A: Yeah, I don't know if we have a good quantification of that. I mean, people do imaging, but this is usually done with clinical rating scales. And what we were doing was using ultrasound of the muscle. So we would put an ultrasound probe on the muscle and ask participants to try to contract different muscles.
And in the clinically complete participants, we couldn't see fibers contracting in most muscles.
So that is the extent of it. So there's this.
[01:17:18] Speaker B: So you don't necessarily even have a measurement of how extensive the damage is along the trajectory. Okay.
[01:17:24] Speaker A: Yeah, exactly. We don't. And, you know, there's a lot of interesting translational work by many people, like Marco Capogrosso and Grégoire Courtine and others, where they show that you can do a lot with these spared connections to basically retrain, with neurotechnology, and have people walk again or use their hand again with a neuroprosthetic. But we wanted to be very low-tech. It's like: what if we put on sticky electrodes, and can you just use your computer to communicate, to play games? Can you navigate your wheelchair by doing this, even if you cannot move your hand?
[01:18:02] Speaker B: Right.
[01:18:03] Speaker A: And what we found is that they could. They could control up to three degrees of freedom, is how far we got; we are continuing these experiments. So, for example, one of them couldn't move his own wheelchair. We didn't control the wheelchair directly, this was a virtual wheelchair, because, you know, the engineering time, but.
[01:18:23] Speaker B: But it could be in principle easily used.
[01:18:25] Speaker A: It was basically turn left, turn right, go straight. And he could do this with motor neurons from his paralyzed arm. And for him, it was like, oh, this is great, because I will be able to move around my house on my own.
[01:18:38] Speaker B: Oh God, yeah.
[01:18:41] Speaker A: And, you know, I was also working on BCIs, like you were saying. But some of these participants were open to having implants, others not, because they've already gone through a lot. Right.
So I think having this big range of technological solutions is exciting. So.
[01:19:03] Speaker B: No, I'm sorry, would that not be considered BCI, using the EMG to control? I'm not sure what constitutes the C, the computer part of the BCI, then.
[01:19:17] Speaker A: Yeah, I was actually looking at the definition the Brain-Computer Interface Society released. It is a paragraph long, and maybe this would fall within BCI, because I can't remember if it actually talks about interfacing with the nervous system. But I'm just trying to be conservative. We call it a motor neuron interface,
[01:19:40] Speaker B: but it's because the B is lacking, not the C, in this case.
[01:19:45] Speaker A: Yeah, yeah.
But yeah, I think this is cool, because now we're interested in pushing this forward as a new approach to assistive devices. But we're also thinking about applications for rehabilitation with these approaches.
[01:20:00] Speaker B: And how would that happen? How would the rehabilitation happen? You can train the regeneration of the nerves?
[01:20:10] Speaker A: Maybe. That is the question. Maybe we can actually help, because rehabilitation is basically, you need intensive training, right? It is correlated with having better recovery. And our mutual friend John Krakauer has a lot of work showing that. Right.
So maybe the problem is, if you are fully paralyzed at the beginning, after you have a neurological disease, and we don't have to think only about spinal cord injury, you cannot exercise, right? Because you don't get feedback about what you're doing. But if you are using this technology as a way to close the loop, because you are controlling even just this one motor neuron that we can capture, then you could have people retrain their nervous system. Of course, this is.
[01:20:58] Speaker B: How is that more effective than, or different from, just behavioral rehab, which in principle should be doing the same thing? That's what people are trying to do when they're doing behavioral rehabilitation.
[01:21:09] Speaker A: Right, Exactly. But the problem is when you cannot move at all. Right. You don't get any feedback about what you are trying to do.
Like, you are not getting.
[01:21:18] Speaker B: You don't know if the thing that you're.
[01:21:19] Speaker A: Exactly.
[01:21:20] Speaker B: Sorry.
[01:21:20] Speaker A: Yeah.
So you don't know what you are trying to do because you cannot.
You cannot practice this because you don't see what you are doing in a way.
[01:21:30] Speaker B: You can't correct any errors because you're not making any errors, because you're not. You can't see what you're doing at all.
[01:21:35] Speaker A: Exactly. That is what I was trying to say.
[01:21:37] Speaker B: Okay. Yeah. So. Okay, so you're getting the feedback, and then it's the error corrective mechanism that should in principle reinforce. And then there's some plasticity involved, etc.
[01:21:50] Speaker A: And I think there's also a simpler observation: if you have, you know, stroke participants play video games with exoskeletons for many hours a day, they get better than if they're just doing one hour of physical therapy. So it's about gamified interventions where people are engaged and practice many hours a day. So there's.
[01:22:14] Speaker B: They're willing. They're like, willing to do it because it's, like, more fun. And that's okay.
[01:22:18] Speaker A: Yeah, yeah, exactly. And you are getting this, you know, like, in a way, you are engaging the system and you're also getting them to practice. So I think it's both.
Both of those dimensions.
But I'm not an expert in clinical neurology.
Right. But I think that that is the prediction. That is something that we are very excited to build here in Lisbon; we are working on it actively.
[01:22:47] Speaker B: But you had mentioned, is this part of the big clinical and translational neuroscience push where you are?
[01:22:56] Speaker A: Yeah, yeah, I guess, like, it is. So.
So I just moved to Lisbon to join the Champalimaud Foundation, which I think neuroscientists know because of its very prominent basic, if you want, neuroscience program. Right.
[01:23:17] Speaker B: Congrats.
Congrats on the.
[01:23:20] Speaker A: Thank you.
And now the.
So I joined that, and basically this big expansion into translational neuroscience. So there's this new Center for Restorative Neurotechnology, directed by John Krakauer, whom we've mentioned a lot of times. I guess he's your VIP on the podcast.
[01:23:42] Speaker B: He's a frequent guest, we'll say he's a frequent visitor of the podcast. Friend of the show. Yes, for sure.
[01:23:50] Speaker A: So I joined this effort, and this Center for Restorative Neurotech has three legs. One is the translational neuroscience program that I'm part of, which we call Neuroscience of Disease. Then there's also a clinic where a lot of my colleagues are actually clinicians as well as neuroscientists, so it's a lot of fun, because in meetings I learn a lot about psychiatry and neurology. That's where we will be administering neurotech and AI-based gamified therapies. Well, not me, but my colleagues and people in our group.
And then we have a warehouse, literally a warehouse, that people who came to Cosyne a few years ago have partied at. And we have a lot of groups doing hardware, software, tech, and we are starting to have some startups too. So the idea is to go from basic neuroscience to clinical trials and translation in the same building, by the Tejo river here in Lisbon. It's exciting.
[01:24:53] Speaker B: So where does this fit in your wanting to understand versus wanting to engineer things?
[01:25:03] Speaker A: I think it's kind of a perfect fit, because the lab was working on both, right? On understanding, and on bringing what we're understanding into fixing things.
So in a way, it's the perfect space for us.
[01:25:20] Speaker B: What have we not discussed that, that you're finding exciting in what you're doing these days?
[01:25:28] Speaker A: So many things.
I think we've covered most of the bases.
The other thing we're working on is mouse experiments to try to understand closed-loop motor control, building a bit on those ideas you mentioned in the email about different parts of the brain working together, and there not being a clear one-to-one mapping between function and what a brain area does. So we are doing experiments along those dimensions in running mice.
[01:26:03] Speaker B: Yeah.
[01:26:03] Speaker A: We're also doing some Batista-inspired BCI neural-constraint studies. But I'll let you jump in.
[01:26:10] Speaker B: Batista-inspired. Well, so we were talking earlier about the difficulty, because we're confined to using language, right? The difficulty in thinking about going beyond a hyper-modular, very static view of the functions of different parts of the brain, et cetera. But part of the problem is that we write scientific papers using language.
I mean, how much of the problem is just writing down sentences without trapping yourself in that sort of old view, the static,
I don't know, kind of reductionist, small view of it?
[01:26:55] Speaker A: Yeah, I think part of it is that we should make an effort to integrate different approaches, right? So look at different tasks, for example, and maybe different species, although we also have to be mindful of differences between species. You know, you and I have worked in mice and monkeys. Clearly mice are not small monkeys, and monkeys are not big mice.
[01:27:21] Speaker B: Monkeys have better neurons, by the way. Better neural activity in monkeys for those listening.
[01:27:28] Speaker A: Yeah.
So there's integrating across different tasks, and looking at different methods. Correlational approaches, looking at activity, decoding during interesting task designs, may not be enough. We need to perturb, but we also have to be mindful: are we perturbing transiently and selectively, versus are we lesioning?
And when are we lesioning? Are we lesioning before an animal learns the task, or after?
I think there are these two axes that we wrote a short opinion paper about. This was with Adam Hantman; it was basically led by Jimmy G in the lab and Jason Keller, a postdoc in Hantman's lab. We were talking about the need to look across different tasks and different ways of looking at data. And if you do that, you basically realize that the brain is not amorphous, it's not, you know, a blob.
But there's also no clear isomorphism in motor control. And I would argue that if there's no super clear isomorphism in motor control at the level that we're describing, maybe there won't be one for cognition.
And then the other axis is the need to look at where the animal is with respect to the behavior, or its own life. What we meant by that is that a developing animal is not the same as an adult animal. And in an animal, or a human for what it's worth, that is learning to do something for the first time, the brain is not the same as when you are very skilled at it. If you look at, I don't know, me trying to play a complicated chord on the guitar versus Pat Metheny playing it, I'm sure my motor cortex doesn't look like Pat's.
So yeah, basically looking at all these things, and just trying to stop and think about it, digest the literature, and be mindful of what we may be missing.
[01:29:25] Speaker B: Just D major, C major, G major, Juan, that's all you need, man. That's, that's all that, that's really required. My fingers can find those. Yeah.
[01:29:34] Speaker A: And maybe minors, if they can.
[01:29:37] Speaker B: Well, you're discussing, was it Current Opinion in Neurobiology? The nice review that you're discussing, and the challenge of looking across timescales, relating to everything we've been discussing about how to think about these things and write about these things.
[01:29:56] Speaker A: Yeah, exactly.
[01:29:58] Speaker B: I think in the very beginning we were talking about the legacy of recording single neurons, and a lot of the legacy of neuroscience also. In the early single-neuron days, a lot of it was non-human primates, was monkeys, and that's where I cut my teeth as a neuroscientist, and I just said monkey neurons are better than mouse neurons, right? But it really is like, you can record a single neuron in a monkey and, in real time, listen to the modulation of that neuron and think, man, that is involved in that decision, in that eye movement or whatever.
And then you go and do the same in, for example, a mouse, and it's just not nearly as clean. And you think, oh, I don't know, this single neuron couldn't be doing this. It has to be a manifold, it has to be a population. And the neural responses are so sluggish and slow compared to those in a monkey. It just seems like, how does the brain even do this? It seems so noisy and slow. So I don't know, there are also more and more recordings in rodents and organisms with smaller brains, even thinking down to C. elegans, with very few neurons, and no action potentials as we know them when we typically talk of action potentials.
And yet, I don't know, what about C. elegans? Do they have manifolds?
[01:31:27] Speaker A: I love this question, because there's this paper that I really like that I think doesn't get as much love as it deserves. And I'm going to butcher the authors. The first author is Brennan and the second one is Proekt.
My weirdest face, for people on YouTube, is because the last name is spelled P R O E K T, so I don't know how to pronounce it. Sorry if he's listening to the podcast.
[01:31:51] Speaker B: But it sounds. Sounds European. Quite European.
[01:31:55] Speaker A: Yeah, you would guess. Like I would have been exposed to that.
Yeah. Probably "Proekt."
So they took worms. Again, we've known, they have 302 neurons, I think, and the neurons have names.
And they took the same worms, sorry, genetically identical worms, in the same environment. And they could record from the same neurons in these worms, right? Because they have names, the neurons in the worms.
And what they did, in the first figure of this paper, one of the first, is like: okay, I have these 20 neurons in this worm and I can decode the behavior. You know, it's turning left, right, or, you know, little worm things.
And now I am going to take this decoder, and I will take this genetically identical worm, and what happens? The decoder fails completely.
But within worm, they could always decode from this given set of neurons; they just couldn't across worms. Then they did some clever dimensionality reduction to find a manifold, and guess what? The manifold looks the same across worms: the same topology and the same relationship to behavior. So they color-coded these manifold shapes, which look a bit like a pretzel that you have bent a bit.
And the colors that indicated the behavior were the same across worms.
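The logic of that first figure can be sketched with a toy simulation. To be clear, this is an illustration of the idea only, not the Brennan and Proekt analysis; the latent loop, the neuron count, and the noise level are all invented. Two "worms" express the same low-dimensional loop through different neuron-level wiring, so a decoder trained on one worm's neurons fails on the other, even though the manifold structure is shared.

```python
import numpy as np

rng = np.random.default_rng(0)

# A shared 2-D latent loop traversed over time, plus a behavior variable
# (here just one latent coordinate) that we will try to decode.
T = 400
phase = np.linspace(0, 2 * np.pi, T, endpoint=False)
latent = np.column_stack([np.cos(phase), np.sin(phase)])  # same in every worm
behavior = np.cos(phase)  # toy behavioral readout

def make_worm(rng, n_neurons=20):
    """Each worm expresses the same latent loop through its own random
    neuron-level mixing: same manifold, different wiring."""
    mixing = rng.normal(size=(2, n_neurons))
    return latent @ mixing + 0.05 * rng.normal(size=(T, n_neurons))

worm_a = make_worm(rng)
worm_b = make_worm(rng)

# Fit a linear decoder on worm A's neurons by least squares.
w, *_ = np.linalg.lstsq(worm_a, behavior, rcond=None)

err_within = np.mean((worm_a @ w - behavior) ** 2)  # works on the same worm
err_across = np.mean((worm_b @ w - behavior) ** 2)  # fails on the other worm

def top2_var_fraction(x):
    """Fraction of variance captured by the top two principal components."""
    x = x - x.mean(axis=0)
    s = np.linalg.svd(x, compute_uv=False)
    return (s[:2] ** 2).sum() / (s ** 2).sum()

# Yet in both worms, nearly all variance lives on the same 2-D manifold.
frac_a = top2_var_fraction(worm_a)
frac_b = top2_var_fraction(worm_b)
```

The decoder transfers badly because its weights are tied to one worm's particular mixing, while the dimensionality-reduced loop, like the color-coded pretzel in the paper, is the same object in every worm.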
So this is one of my favorite examples, along with Eve Marder's example that you mentioned: that you can have basically the same emergent function from different lower-level constituents. Because in Eve Marder's work in crabs, they also know it's these three neurons. Right?
[01:33:36] Speaker B: Yeah. Can you actually unpack that? Sorry to ask you to do it, but describe Eve Marder's very popular work.
[01:33:41] Speaker A: Okay, we leave the worm and switch to Eve Marder's work. So they've been studying the digestive system, the stomatogastric system, of lobsters and crabs.
And they have a paper I really love. I can't remember now if it's about lobsters or crabs, but basically they have these three neurons that describe what has long been called a central pattern generator. So you can think about it as an oscillator, like what you were saying, Paul, about a ring attractor: the activity loops around in circles.
And what Eve has shown is, you know, you can find this circuit in any crab or lobster. But if you look at the biophysical properties of these neurons and their connections, they are different across animals. And it's not like you can predict, oh, this is going to be within the range of this one. There seem to be privileged combinations of these properties that give you an oscillator, and others that don't.
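That last point, that some parameter combinations yield an oscillator and others don't, while quite different combinations can yield essentially the same rhythm, can be illustrated with a toy two-neuron excitatory-inhibitory circuit. This is a sketch, not the stomatogastric circuit; all the weights and time constants here are invented for illustration.

```python
import numpy as np

def circuit_eigs(w_ee, w_ei, w_ie, tau_e=1.0, tau_i=1.0, leak=1.0):
    """Linearized two-neuron excitatory-inhibitory circuit:
         tau_e * dE/dt = (w_ee - leak) * E - w_ei * I
         tau_i * dI/dt = w_ie * E - leak * I
    The circuit rings (oscillates) iff the Jacobian below has
    complex eigenvalues; their imaginary part sets the frequency."""
    J = np.array([[(w_ee - leak) / tau_e, -w_ei / tau_e],
                  [w_ie / tau_i,          -leak / tau_i]])
    return np.linalg.eigvals(J)

def oscillates(eigs):
    return bool((np.abs(eigs.imag) > 1e-9).any())

# Two quite different parameter sets that both oscillate, at nearly
# the same frequency (the imaginary part of the eigenvalues)...
eig1 = circuit_eigs(w_ee=1.5, w_ei=2.0, w_ie=2.0)
eig2 = circuit_eigs(w_ee=1.8, w_ei=1.0, w_ie=4.0)
# ...and one where the feedback loop is too weak to ring at all.
eig3 = circuit_eigs(w_ee=1.5, w_ei=0.1, w_ie=0.1)
```

The first two parameter sets differ a lot in their individual weights yet sit in the "privileged" oscillatory region with almost identical frequencies, while the third, with weak E-to-I feedback, has purely real eigenvalues and just decays: degenerate solutions at the level that matters, with non-equivalent parts underneath.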
Okay.
[01:34:41] Speaker B: And that allow them to digest.
[01:34:43] Speaker A: Exactly. So this is another paper where I would like to say that manifolds are ontologically real, have a causal influence, and are meaningful, because the crab doesn't care about the details of the circuit. It only cares about being able to digest its food.
[01:35:01] Speaker B: Oh, right, right. Yeah.
[01:35:02] Speaker A: And again, this manifold is made of these three neurons, but there's nothing they could find in their paper that would say, oh, these neurons are digestive neurons and these aren't. Basically, it's the interactions of these three neurons and their properties that make the digestive manifold, if you want.
So it's a bit the same in the worm. The neurons are the same, probably the connections are not the same, but still, their emergent behavior at the manifold level, which relates to how these worms navigate the world, is the same. This is maybe a good point to end on about manifolds: being ontologically real and having meaning, being meaningful not only to us,
right, as people who look at them and say, oh, this is a cute shape that I could put beautiful little colors on, but also being meaningful to the animal.
[01:35:56] Speaker B: That's the other possible title for this episode: Manifolds, They're Real, Damn It.
[01:36:03] Speaker A: But.
[01:36:03] Speaker B: But I mean, it's interesting to think, from an evolutionary perspective, and just from a computational perspective, from a what-is-possible perspective, that you could have lots and lots of different combinations of things that end up doing the same functional thing. It's crazy to think that the space of possibilities is almost infinite, and the range of what actually works is still really large, apparently.
And so we have to think about these constraints at all levels and what really does whittle it down to what's usable and what's not. And how do we manage to think about that? And just theoretically, how can we predict?
Can you go the other way and say, all right, here are your four neurons, and you have five ion channels and 12 neuromodulators. Tell me what's possible and what's not, to perform, I don't know, digestion, you know, or whatever.
[01:37:08] Speaker A: Yeah. I think this is a fascinating question, right? Because the genome, again, has so little information. It's amazing. You're basically squeezing a human like us into a genome, and then you get people that can have these conversations, and, I don't know, we could play guitar together, right?
And at the same time, while there's so much variability in the substrate, there's still a lot of consistency, right? Because circuit motifs and columns and layers and areas are preserved. So I think this is a fascinating question, and I would love to have a neurodevelopmental friend around here to work on these questions directly. But on what you said, there's a really cool paper by Bing Brunton and John Tuthill, I don't know if you've seen it, where they basically take the entire connectome of the fly. So they have, like, two; I think there are more than two now, maybe four.
And they all have cute names, like FANC and MANC.
And what they do is they look for a central pattern generator in the connectome.
So they basically build a model, and I'll describe it because I think it ties to this idea of manifolds. They have this connectivity, right? This connectome, this connectivity matrix. And they know which connections are excitatory or inhibitory, and more or less they can guesstimate the weights.
So with this, what they did was: okay, let's ping this neural network that I've fitted to the connectome, and look at which neurons would make the downstream motor neurons, which they also have, oscillate.
And with this, they picked their two favorite candidate input neurons to this network.
And then, with this input, they simulated what would happen to this network, which again was fitted on the actual connectome, as they got rid of neurons.
And they would get rid of a neuron and nothing happens, another and nothing happens, until they found just three neurons. Again, maybe that is the magic number.
These three neurons are necessary to make the model oscillate.
But the cool thing is that this is a fly, right? So they could be like, okay, now I'm going to go and stimulate this neuron.
And basically it fitted the prediction of the model.
Behaviorally. I can't remember the details, but it would make the fly move or not move, things like that. So again, you have all this complexity, but maybe by thinking about what the collective should be doing, which is generating an oscillation, you can computationally find the substrate that you will need for that. So when I saw that, it completely blew my mind.
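The ablation loop described here, simulate the connectome-fitted network, delete one neuron at a time, and ask whether the oscillation survives, can be sketched on a toy network. The weights below are hypothetical, not the fly connectome: a three-neuron inhibitory ring is planted inside weak random connectivity, and in-silico deletion recovers exactly those three as the necessary ones.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10

# A toy "connectome": weak random connectivity among 10 units...
W = 0.05 * rng.normal(size=(n, n))

# ...plus a planted 3-neuron inhibitory ring 0 -> 1 -> 2 -> 0 that is
# strong enough to carry an oscillatory mode.
for pre, post in [(0, 1), (1, 2), (2, 0)]:
    W[post, pre] = -4.0  # each ring neuron strongly inhibits the next

def oscillates(keep):
    """Does the linear rate network restricted to the units in `keep`
    still have an oscillatory mode (an eigenvalue with a large
    imaginary part)?"""
    A = (W - np.eye(n))[np.ix_(keep, keep)]  # leaky dynamics dx/dt = A x
    return bool(np.abs(np.linalg.eigvals(A).imag).max() > 1.5)

all_units = list(range(n))
# In-silico ablation: delete one unit at a time and ask whether the
# oscillation survives; a unit is "necessary" if it does not.
necessary = [u for u in all_units
             if not oscillates([v for v in all_units if v != u])]
```

Deleting any unit outside the ring leaves the oscillatory eigenvalues essentially untouched, while deleting any one of the three ring neurons breaks the cycle and kills the mode, which mirrors the paper's finding that only a small subset of neurons is necessary for the model to oscillate.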
[01:40:05] Speaker B: And they're real. Damn it.
[01:40:08] Speaker A: Exactly. I can't believe I said that. Sorry, it's late here still.
[01:40:13] Speaker B: Yeah, well, okay. It is late there and I appreciate you spending time with me.
[01:40:18] Speaker A: So the manifolds loom large. We can continue.
[01:40:22] Speaker B: Yeah, yeah, yeah, no, that'd be great. I mean, the concept of manifolds is mentioned so frequently on this podcast, but it's almost mentioned sometimes in passing. So I really appreciate you going deep, deep into it and covering all things manifold here, as well as getting to know a little bit more about the other facets of your work. And I'm hoping, and I'm demanding, that our paths cross in real life soon, because we do have some overlapping research interests and stuff. So I hope to visit your world soon, and anyway, hope that we can meet in person soon. So anyway, thanks for coming on.
[01:41:01] Speaker A: Yeah, yeah, we should continue this with some beverages.
[01:41:06] Speaker B: Oh, very good. Yeah, when I asked you to come back on to chat for a few minutes or whatever, I mentioned I wanted to ask you: what if there's no manifold? You said, well, this kind of overlaps or touches on some things that you wanted to highlight from that Current Opinions paper. What was it you wanted to highlight?
[01:41:22] Speaker A: Basically what we discussed. Right. Like, sometimes this brain area may not even be necessary for this behavior, or for where you are with respect to this behavior. Right? Like this work by Bence Ölveczky. Like, if a rat learns this kind of dance of tapping on, banging on a lever a few times, after a while they don't need motor cortex.
But then there's a more interesting twist that if they learn this along with a similar task, they will always need motor cortex.
I don't know if you've seen that paper.
[01:41:51] Speaker B: Say it again if they learn it. No, no, no. Yeah. Can you unpack it?
[01:41:55] Speaker A: Yeah. So if they're doing, I think this is a newer version of the task. It's like a piano: the rats have like three keys, and they have to do, I don't know, A, C, B. Uh huh.
And they can either learn it with visual cues, or they have to figure out the sequence and do it over and over to get rewarded.
So they have to figure out this A, C, B, A, C, B. And I think they also trained some rats on a visually guided version of the task.
And the rats learning the visually guided task always needed cortex. And if they learned both, and this may be a different paper, then they needed cortex also for the one that was not visually guided. So it's not only what you are learning and how you are learning it, but with respect to which other things you are learning.
[01:42:49] Speaker B: So which lab is this out of? I forget now.
[01:42:52] Speaker A: Bence Ölveczky.
[01:42:54] Speaker B: This is out of the Ölveczky lab. Okay.
[01:42:56] Speaker A: Yeah, yeah. It's like their series of papers on rats pressing on levers.
[01:43:01] Speaker B: I remember it. I have to look back at it. They publish a lot.
[01:43:07] Speaker A: Yeah, these days. So maybe I've mixed papers or split papers. I know there are a few that show this, and they also fit with the view that you outlined, that the way we think about cortex and basal ganglia there is kind of like a...
Basically a joint controller. I don't know, it's like it all works together. Right.
I don't know. I am much more sympathetic to that view than to the idea that the basal ganglia select actions and then cortex executes.
[01:43:37] Speaker B: Yeah.
Well, it's interesting, though. Reading that Current Opinions paper, I was thinking about this this morning. What you guys talk about, and I'm not going to ask you details of it, because who knows the details of their own work, their own papers that they wrote years ago. What you talk about is: at different timescales, you might need different levels of redundancy with different brain areas. The whole thing is about how our old way of thinking, brain area X is for function Y, is not accurate. And the new way of thinking about it is more this heterarchy way, where there's no privileged top of the hierarchy, but things contribute based on context, et cetera, and timescales, which is what you guys write about in the paper. But the way that you write about it in the paper is timescales in terms of when you're learning an action, or when you're developing from the embryo, or evolutionary time, you know, those sorts of timescales.
But the way that I was thinking about it and I'd have to go back and read the paper again.
It's... I think that it's written in a way where, nevertheless. So we were talking about the concept of mechanism, and I think it's written in a way where, like, you can use mechanism if you zoom in.
Let me rephrase this. It's hard to talk about a brain area without talking about its function in a given context. You're still like, oh, it is a controller, but it's controlling less in this context, and it's not controlling in this other context, or whatever.
But we still ascribe, because it's like we have to ascribe a single concept to a single brain area, and it's really difficult to not think of it that way.
[01:45:29] Speaker A: Exactly.
Yeah, exactly. I think that we would say, maybe, that you can map or assign a function to a brain area within a given timescale. Maybe that's not the best word, but as you said, it's like: you're in development, you've acquired a skill, you're acquiring a skill, you're adapting a skill.
And within, basically, we didn't talk about that, but the experience of the animal, which is not that obvious: what you know how to do, and what you will be learning. Right.
So I think the background of this paper is that we were trying to make sense of our own data in the context of all these papers that seem to suggest different things. Right? Like, it's like, oh, but then now you don't need this brain area to do this.
And this is how it came together. Right. And of course it's hard because again, we can circle back to our conversation of understanding and having compressed representations. And it's much easier if you think like, this area does X, this area does Y, and this area does Z.
But maybe that is not how the brain works. Yeah. So it makes our jobs harder, but in a way, it may also make them easier in the sense that when you look at older papers in the context of your own data or thinking about the brain, you are like, oh, so this is why in this case this happened this way.
You know, like, one concrete example that I think wasn't there: we have this paper, the one you sent me in the email, from Josh Dudman's lab; Junchol Park is the first author. Mice reaching, grasping and pulling on a joystick, with motor cortex and basal ganglia recordings and some manipulations. Right.
And in that paper, we kind of find, through different ways, our interpretation of the data is that cortex and basal ganglia jointly control the movement.
Right. But then there's another paper, from before, that I have nothing to do with, that I love, from Dudman's lab as well, with the same first and last authors.
They saw that if you...
During...
So, in the paper that I was describing. Let's rewind a second. In our paper, we proposed that it's a joint controller. We were manipulating cortex or striatum before the mouse was given the cue.
And we saw that in some trials the mouse was able to, even if we were inhibiting either cortex or striatum, break free of this inactivation and do the movement. And then the movement looked mostly fine.
But if you do this when the mice start reaching, if you inactivate cortex, they seem to have a more limited ability to reach away from their body, like from a straight line. And if you inactivate striatum, they seem to... anyway. Okay, so the function depends on when you are inactivating the same area.
[01:48:31] Speaker B: Yeah.
[01:48:33] Speaker A: So it's complicated. But, you know, I think talking about these things is what led us to say, let's write something together when there's an opportunity to do it.
[01:48:43] Speaker B: Yeah. All right.
Okay. So I can't let you go without asking you the question from my lab. Are you ready for it?
[01:48:51] Speaker A: I don't think so, but yeah, let's go for it.
[01:48:53] Speaker B: You're ready. No, you're definitely ready for it. And this is something I should have actually asked you. So this is going back to.
We didn't discuss it in depth when we were talking, but I mentioned it in passing, and I don't think we ever unpacked it. And we don't really need to unpack it; I can describe it here. So this is your work on manifolds, looking at how manifolds are common between species and between different animals. Like, if you... oh, we did talk about it a little bit. If you take the manifold in one low-dimensional space in one animal, you can do a pretty simple linear transformation and map it onto the manifolds in other animals performing the same behaviors, or other species, et cetera. Okay, so the question is about that, and it's, A, whether you've looked at that in untrained behaviors. But then B, and this is the main question that he wanted to ask: do you think that you could go into...
Okay, so you record the neural activity and map out the manifold in one animal.
Do you think that you could go into, like, a naive animal, go in blindly with the knowledge of the structure of the dynamics and the manifold structure of the representations in the trained animal? Could you go into another animal and, via, like, holographic optogenetic stimulation, could you know how to induce that same behavior just by knowing the manifold structure? Does that make sense?
[01:50:35] Speaker A: Yeah, yeah. It's a very funny question, and it feels a bit... it feels a bit dystopian. So in that paper, we saw, just for clarity, we saw this similarity across individuals of the same species.
I should say that Matt Perich, who is co-senior author on that paper with me, and we've been working together since he was a PhD student and I was a postdoc, he just posted, I think yesterday, on bioRxiv, a paper comparing dynamics and manifolds across mice, monkeys and humans reaching. Okay, okay. So this is a very timely question.
The paper, yeah, I think it's on bioRxiv as of one or two days ago. And the story there is quite interesting, because what they found (I keep saying "we," but I didn't do much, so I'll stick to "they") is that you have the same dynamical rules across all the species, but the geometry is a bit different. And our interpretation of that is that the geometry is what influences the details of the behavior, probably because of, you know, how it's read out downstream. Right.
But to your colleague's question, I think it's a great question, and I want to believe that that would be the case. And I think it would be the case if the animal has the underlying circuitry.
Right.
[01:52:02] Speaker B: So if, but that's the whole point. That's the whole point.
[01:52:04] Speaker A: Right.
[01:52:05] Speaker B: Is that they, they do share the same underlying circuitry.
[01:52:09] Speaker A: Okay. Yeah, I guess I meant, if one mouse were, like, I don't know, the Jimi Hendrix of mouse-land and the other was not very good at using their paws. But I guess in mice this is less of a problem. But I think it would be the case, and we actually have some indirect evidence of that: you can decode across animals based on a decoder trained on another animal. Right.
So this suggests that it is the same patterns. Right? So if you pattern the stimulation, you should be able to generate the same population dynamics, and there's evidence of that from Karl Deisseroth's group. So I don't think we talked about this paper, where they look at...
I'll finish my answer saying that I think it should work, and then I'll explain the paper.
[01:53:01] Speaker B: Wait, but the first thing you said, I just want to make sure I have it right: you can train a decoder in one animal and then transfer that decoder to a different animal.
[01:53:11] Speaker A: Exactly.
[01:53:11] Speaker B: And use it.
[01:53:12] Speaker A: And it works. Yeah. Like, people are doing this, right? Not only us. Now there's a lot of work on neural foundation models that is using that idea.
And I think neural foundation models work because of all the preservation of dynamics that we have been discovering. Right? Otherwise they wouldn't work.
Maybe this is a bit of a tangent, but I was thinking about this recently too: I think neural foundation models work because there's basically similar grammar and vocabulary, if you want, across animals. So probably the dynamical rules and the constraints are preserved across animals, and this is why they work.
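The decoder-transfer idea can be sketched with synthetic data. Everything here is hypothetical: two simulated "animals" share the same two-dimensional latent dynamics but embed them through different random projections into neural space, and a CCA alignment (one common alignment choice; the actual published pipelines differ in detail) lets a decoder fitted on one animal read out the other.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
t = np.linspace(0, 4 * np.pi, T)

# Shared 2-D latent dynamics ("the manifold") and a behavioral readout.
L = np.column_stack([np.sin(t), np.cos(t)])
behavior = L @ np.array([1.0, 0.5])          # e.g. hand velocity

# Two "animals": same latents, different random embeddings into neural space.
X_a = L @ rng.normal(size=(2, 30)) + 0.01 * rng.normal(size=(T, 30))
X_b = L @ rng.normal(size=(2, 30)) + 0.01 * rng.normal(size=(T, 30))

def pca(X, d=2):
    """Project population activity onto its top-d principal components."""
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:d].T

def cca_align(Za, Zb):
    """Canonical correlation: map both latent sets into a shared space."""
    Qa, Ra = np.linalg.qr(Za - Za.mean(0))
    Qb, Rb = np.linalg.qr(Zb - Zb.mean(0))
    U, _, Vt = np.linalg.svd(Qa.T @ Qb)
    return Qa @ U, Qb @ Vt.T

Ua, Ub = cca_align(pca(X_a), pca(X_b))

# Train a linear decoder on animal A's aligned latents...
w, *_ = np.linalg.lstsq(Ua, behavior, rcond=None)
# ...and apply it, untouched, to animal B.
pred = Ub @ w
print(np.corrcoef(pred, behavior)[0, 1])     # close to 1
```

Because the latent trajectories are preserved across the two animals, the decoder transfers almost perfectly once the manifolds are linearly aligned; with genuinely different dynamics it would not.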
[01:53:50] Speaker B: Okay, that's interesting.
[01:53:54] Speaker A: Yeah. So, to go back to this inducing population dynamics: there's this paper from Karl Deisseroth's group with Surya Ganguli where they're doing holographic stimulation. Have you seen that paper?
[01:54:07] Speaker B: I think so.
[01:54:09] Speaker A: It came out during COVID, so I think this is why it's not on most people's radar. I also only came across it by chance. But what they did is, they were doing, you know, fancy stuff. They were recording from a bunch of neurons in mouse V1 while mice were doing a go/no-go task. They were presented with gratings, and, you know, tilted to the left would be go, lick, and to the right would be no-go.
And they found the neurons that... basically, they did PCA and found a manifold where one grating direction would lead to go and the other to no-go. Right? Like the two grating directions.
[01:54:43] Speaker B: This is familiar to me, but I don't. I must have read it.
[01:54:46] Speaker A: But yeah, yeah. So you have the two grating directions and then they found the neurons that were most tuned to each of the gratings and they stimulated, let's say, the neurons most tuned to go.
And then what they found is that the rest of the neurons were doing the same trajectory along the manifold as when the mouse was actually seeing the grating.
And the coolest thing is that the mouse went and did what he or she was supposed to do, which was lick or no-lick.
[01:55:13] Speaker B: So by stimulating the, like, tuned neurons, the rest of the population followed suit. Is that how to think about it?
[01:55:20] Speaker A: Exactly. And the mouse behaved exactly the same. If you look at the behavior, it's the same psychometric curve, so the same accuracy as when they were seeing the stimulus. So they were basically making the mice, I don't know if we should say hallucinate the grating, or at least have the illusion that they had seen the grating. Right? And I think it's super cool. Yeah. Again, I think maybe they were unlucky with the timing.
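The "stimulate a few tuned neurons, the rest follow" result can be illustrated with a toy low-rank network. This is an illustrative sketch under strong assumptions (a single shared activity mode `m`, rank-one recurrence), not the actual V1 analysis: zapping only the five most strongly tuned units drives the whole population along the same PC-1 trajectory as the full "visual" input.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 50, 80
m = rng.normal(size=N)                 # shared mode: tuning and output pattern
g = 0.13 * N / (m @ m)                 # sets the mode's decay to ~0.98 per step

def run(inp):
    """Leaky rate network with rank-one recurrence W = g * m m^T / N."""
    x = np.zeros(N)
    xs = []
    for t in range(T):
        drive = inp if t < 5 else 0.0  # brief input pulse, then free dynamics
        x = 0.85 * x + g * m * (m @ x) / N + drive
        xs.append(x.copy())
    return np.array(xs)

# "Visual" trial: the full tuned input pattern arrives at every neuron.
X_vis = run(0.1 * m)

# "Stimulation" trial: zap only the 5 most strongly tuned neurons.
top = np.argsort(-np.abs(m))[:5]
stim = np.zeros(N)
stim[top] = 0.5 * np.sign(m[top])
X_stim = run(stim)

# Project both trials onto PC1 of the visual-trial manifold.
_, _, Vt = np.linalg.svd(X_vis - X_vis.mean(0), full_matrices=False)
p_vis, p_stim = X_vis @ Vt[0], X_stim @ Vt[0]
print(np.corrcoef(p_vis[10:], p_stim[10:])[0, 1])  # close to 1
```

The recurrence funnels any input with a component along the tuned pattern into the same population mode, which is the toy version of "stimulate the most tuned neurons and the rest of the trajectory follows."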
[01:55:47] Speaker B: That's kind of like a Hopfieldian kind of thing, where, like, you're stimulating the right neurons to induce an attractor state or something.
I guess that's the simple way to say it.
[01:56:03] Speaker A: Exactly. Right. And I hadn't thought about it that way, but if you think that this behavior emerges from the interaction of all these brain areas, it means that just by zapping the right neurons, even if there are only a few, in the right way, when the animal is in the right context, you can do a lot.
[01:56:20] Speaker B: Yeah. So that there are these key nodes that you can sort of push and that transitions the rest of the system maybe into the right full on dynamical regime.
[01:56:32] Speaker A: That's exactly.
Yeah.
[01:56:34] Speaker B: So then you wouldn't have to do like holographic stimulation. Right. You would just find.
You'd have to find the right... But that's weird, because it's like going back to the old neuron doctrine story, right? Where if you stimulate the grandmother neurons, then all of a sudden there's grandma. But the rest of the neurons that we say are important follow suit, and they're also saying it's grandma, or something.
[01:56:57] Speaker A: I guess. Yeah.
I guess the idea is maybe it's not a sparse code, but it's a collective code. I don't know if anyone has said that word, but maybe that is the
[01:57:07] Speaker B: "Distributed" is the word, right? That people have said. "Collective" is another way to say it.
[01:57:13] Speaker A: Yeah.
[01:57:14] Speaker B: Have you done it in untrained animals? That was the other part of the question.
[01:57:17] Speaker A: No, we haven't, because we didn't have any untrained monkeys. And also, I think in untrained monkeys, to be honest, it would... well, no. Maybe. Now they'll think I said maybe.
[01:57:29] Speaker B: Untrained behaviors. Untrained behaviors, not necessarily untrained animals.
[01:57:33] Speaker A: I think we've... I'm pretty sure we've tried it out in monkeys reaching.
But this is untrained, right? Because monkeys reaching for fruit is untrained. So I'm pretty sure I tried it. We have data from monkeys reaching for pieces of fruit, and I think I tried it during the revisions of one of the papers, and it worked.
Yeah. But it would be cool to do it in mice, where perhaps things would be different. I guess the cool question is whether things would look a bit different because of the timescale the animal is at.
So this is how it all comes full circle.
Yeah.
All right.
[01:58:16] Speaker B: All right, well, I'll let. I'll let him know.
What else? Anything else you wanted to chat about? I appreciate you coming back, coming back on with me to do this.
[01:58:25] Speaker A: It's fun. Again, we should do this over a beer, it would be much more fun, or some vinho verde here, I don't know. I think we've covered lots of things, right? And you actually got me thinking. I was flying back yesterday and I was thinking about your question about LFPs, is it phenomenal or fundamental, and these things.
[01:58:45] Speaker B: Oh, cool, cool. That's great. If you're thinking about me in the shower and when you're lying in bed, that's great. That's exactly what I wanted.
Just wait till like we'll be getting beers and I'll be like, oh, hang on, I gotta record this. And I'll put like a microphone in front of your face.
[01:58:58] Speaker A: Have you done that before?
[01:59:00] Speaker B: No, I've never done that before. I'm not. I'm not an asshole.
[01:59:04] Speaker A: I thought it would be fun because I think with enough beers, most people would agree. So it's not like you are forcing anyone. It's like, this is the best idea of our lives. So I guess it depends on the.
[01:59:14] Speaker B: But I mean, the weird thing about the "oh, we should have recorded that" is that, like you said, it's important to think about these things. And often those sorts of speculative ideas, or what people really think, will come out right when I stop recording or something. Someone will say something and they're like, oh, people should have heard that.
So that's why this kind of conversation I think is valuable too.
[01:59:38] Speaker A: Yeah, no, I think it's... yeah. And I think the value of going to conferences is mostly these things. Right? Like, now we can watch talks online, but this kind of conversation, where you are relaxed and, you know, you just basically start speculating, I think a lot of ideas come this way. Right.
And I guess the challenge for all of us, since we are so busy doing, is finding the time to think.
So maybe we should do... let's do it now: more thinking. By the way, I remember this point from the first time I saw John Krakauer give a talk, and it kind of stuck with me, because I think he's right.
[02:00:20] Speaker B: Yeah, I mean there's lots of issues to work out on top of it. But all right, well, anyway, so I'm going to stop recording here and hang out for just a second. Okay. And.
Brain Inspired is powered by the Transmitter, an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives written by journalists and scientists. If you value Brain Inspired, support it through Patreon to access full-length episodes, join our Discord community and even influence who I invite to the podcast. Go to braininspired.co to learn more. The music you hear is a little slow jazzy blues performed by my friend Kyle Donathan. Thank you for your support. See you next time.