BI 148 Gaute Einevoll: Brain Simulations

September 25, 2022 01:28:48
Brain Inspired
Show Notes

Check out my free video series about what's missing in AI and Neuroscience

Support the show to get full episodes and join the Discord community.

Gaute Einevoll is a professor at the University of Oslo and the Norwegian University of Life Sciences. He develops detailed models of brain networks to use as simulations, so neuroscientists can test their various theories and hypotheses about how networks implement various functions. Thus, the models are tools. The goal is to create models that are multi-level, to test questions at various levels of biological detail, and multi-modal, to predict the handful of signals neuroscientists measure from real brains (something Gaute calls "measurement physics"). We also discuss Gaute's thoughts on Carina Curto's "beautiful vs. ugly models," and his reaction to Noah Hutton's In Silico documentary about the Blue Brain and Human Brain projects (Gaute has been funded by the Human Brain Project since its inception).

0:00 - Intro
3:25 - Beautiful and messy models
6:34 - In Silico
9:47 - Goals of the Human Brain Project
15:50 - Brain simulation approach
21:35 - Degeneracy in parameters
26:24 - Abstract principles from simulations
32:58 - Models as tools
35:34 - Predicting brain signals
41:45 - LFPs closer to average
53:57 - Plasticity in simulations
56:53 - How detailed should we model neurons?
59:09 - Lessons from predicting signals
1:06:07 - Scaling up
1:10:54 - Simulation as a tool
1:12:35 - Oscillations
1:16:24 - Manifolds and simulations
1:20:22 - Modeling cortex like Hodgkin and Huxley


Episode Transcript

Speaker 1 00:00:04 I think it's the combination of these detailed models and these simpler models that really gives you insight. The detailed models are needed to make contact with experiments, but they are not sufficient alone to give you that insight. It's not like you can just look at these traces and read off the answer; you have to simulate the whole experiment. You have to do the measurement physics, and that's a separate thing. This measurement physics has been, I think, underdeveloped in computational neuroscience. Actually, when it comes to simulations like our kinds of studies, a huge quantitative difference would make a qualitative difference in terms of what we could explore. Speaker 2 00:00:56 That's the hope anyway, right? Speaker 0 00:00:58 Yeah. Speaker 1 00:00:59 That's the hope. Speaker 0 00:01:05 This is Brain Inspired. Speaker 2 00:01:18 Hello everyone, I'm Paul. On the last episode I had Noah Hutton on to talk about his documentary film In Silico, which chronicles Henry Markram's quest to simulate a human brain under the project names the Blue Brain Project and the Human Brain Project. By coincidence, as you'll hear, today I have Gaute Einevoll on the podcast. Gaute is a professor at the University of Oslo and the Norwegian University of Life Sciences, and he happens to have been part of the Human Brain Project since its inception in 2013. Gaute focuses on what he calls measurement physics in biologically realistic simulations of neural networks.
His goal in the simulations is to faithfully predict or recreate the various types of signals that neuroscientists measure in real brains: signals like spikes, local field potentials, and EEG. As you probably know, one of the grand achievements in neuroscience is the famous work by Hodgkin and Huxley in the 1950s, working out the dynamical equations that govern the activity of single neurons, which has led to tons of productive neuroscience, as Gaute points out. Speaker 2 00:02:36 We still haven't had the same success simulating networks of neurons, and his hope is that by doing so, we can use the simulation models as tools to better understand networks of neurons across multiple scales and levels of biological detail, and to test hypotheses about these networks. So we discuss all of that and more. You can learn more about Gaute and his work in the show notes at braininspired.co/podcast/148, where you can also learn how to support this here podcast to get all the full episodes and join our Discord community, or take my online course, Neuro-AI: The Quest to Explain Intelligence, which is all about the intersection of neuroscience and current AI. Okay, without further ado, here's Gaute. So Gaute, the genesis for our conversation today was an email you sent in response to a conversation I had with Carina Curto about the difference between beautiful and quote-unquote ugly models. And I feel bad for her, because this was a two-page thing she wrote a while ago when she was asked to respond to the question of what neuroscience needs. And she kind of backtracked on the "ugly" part, and in your email you called them messy, messy kinds of models. So, yeah. Speaker 1 00:04:04 First I should say it was a quite friendly email.
Speaker 2 00:04:08 It was a very... Speaker 1 00:04:09 A very friendly email. So I understand what is meant by this, especially the beautiful models, because she had this nice example of the Hopfield network, which is clearly a beautiful model. It's quite easy to write down, it has some beautiful mathematical properties, and you get intuition from it. At one of the first meetings I attended in neuroscience, just after I switched to neuroscience from condensed matter physics, John Hopfield was there; this was in Sweden. So I asked him what role his model would have, and he said it might be a metaphor for how the brain works, or how memory works. And that was a very nice answer; that's of course what it could be. So I think we all agree that it was a very useful model, and it's certainly beautiful also. Then you have this other extreme, where you have these, say, complex, complicated models, or messy models. I guess the most value-neutral thing is to say complicated models. Speaker 2 00:05:23 Ugly has a negative connotation, yeah. Speaker 1 00:05:25 Yeah. But I think we all agree that they are, in some sense, ugly compared to the beautiful models, because they have so many more parameters and are much, much more difficult to understand. In physics, some people call this kind of modeling number crunching, and they sort of look down at theoretical chemists for sometimes doing this kind of what they call number crunching.
But I think these ugly models obviously have an important role to play. Speaker 2 00:06:06 Hmm. I would hope you think that, given your work <laugh>. Speaker 1 00:06:09 Yeah, exactly. And it's not because I have some special interest in them, or find them less ugly than other people do. It's just that I think this is the type of model we need in order to make progress. Speaker 2 00:06:27 Yeah. So the other thing that was kind of fortunate, and we're going to talk about some of your complicated models and the approach, is that on this last episode I interviewed Noah Hutton, who made this documentary In Silico about the Human Brain Project, from which you received at least some of your funding; you've been involved with the Human Brain Project since its inception. So you got to see the film, and we don't need to talk about it for long, but I'm just curious about your reactions to the film, which you were not in, by the way. Speaker 1 00:07:03 No. So I've been in the Human Brain Project, and am still in it, since its start in 2013. I think it was an interesting movie, and certainly Markram himself is a very interesting character. I also heard your podcast, your live podcast with the audience. Speaker 2 00:07:29 Yeah, I wanted to share that with you. Speaker 1 00:07:29 Yeah, that was very nice. And I think Noah came across as a very reasonable person.
So I guess my only criticism of the movie is that it didn't take the opportunity to clear up the difference between the Blue Brain Project and the Human Brain Project, because these are very different projects. He mentioned it a little bit in the movie, but some of these more, let's say, grandiose claims of Markram's have to do with the Blue Brain Project. The Blue Brain Project started, I guess, around 2005, and Markram gave his TED talk in 2009. Then the Human Brain Project started in 2013, and it was not a continuation of the Blue Brain Project. Speaker 1 00:08:31 It had a different goal. It was really about making the infrastructure: making it possible to simulate large-scale networks on computers and making that available to the community. That was one of the goals; there were also other goals of the project. But people have often criticized the Human Brain Project and held it to the goals of the Blue Brain Project, and In Silico didn't really take advantage of the possibility to clear that up, because I think much of the criticism, and the petition that was signed against the Human Brain Project, was based on people mixing it up with the Blue Brain Project. Mm. And, well, now I'm getting into it, because this is a sore point for us in the Human Brain Project: there was also a critique in an article, I think in The Atlantic. Speaker 2 00:09:30 Ed Yong. Speaker 1 00:09:30 Exactly, Ed Yong, who mixed it up in the same way.
And the Human Brain Project is still running. As was also said in the movie, the goal is quite different. One of the goals that we are very busy working on is to contribute to this infrastructure for doing large-scale simulations, so that people can do these kinds of simulations without having access to a supercomputer themselves. It is, in a sense, democratizing simulation science: if you have a laptop somewhere, you should be able to go in, run these large-scale models, and do research on them, using supercomputers at different places in Italy, Spain, Germany, France, and so on. Speaker 1 00:10:30 That is really the goal, and these kinds of tools are now collected under this umbrella, EBRAINS, which we hope and plan will continue beyond 2023. So I really hope this will be successful, both in the sense that it's going to be easy to use, and also that we're going to get users, the people who want to do this kind of large-scale simulation, to find it useful and use it. It's more about the tools than the model, in some sense. Speaker 2 00:11:05 So the Human Brain Project ends in 2023, but you think the funding will stay strong to continue these types of large-scale simulation projects? Speaker 1 00:11:14 I don't really know. We are actually applying now for a much, much smaller project, just to maintain, develop, and smooth the use of these tools that have been developed as part of the Human Brain Project.
So we are literally writing a proposal now to the EU, and hopefully that will work out, but it would just be a small fraction of the funding of the Human Brain Project. Speaker 2 00:11:45 What effect, if any, do you think that documentary could have on the future funding of your flavor of science? Speaker 1 00:11:57 Yeah, I don't know. It would have been <laugh> really helpful if this misunderstanding, mm-hmm <affirmative>, had been cleared up, because this whole reaction to the Human Brain Project has certainly hurt neuroscience, I think. Speaker 2 00:12:20 Yeah. Do you think some of the reaction simply stems from jealousy, perhaps envy? Speaker 1 00:12:29 I don't know. I think it's also that these strong claims that Markram made maybe rubbed people the wrong way. I agreed partially with his vision, but not completely. And in some sense it was a lot of money, right? But it was spread over hundreds of labs. I think I had all of two or three people working in the group because of this, so it was a rather modest amount. But obviously the neuroscience community sees all this money, and it's a big sum, and it goes to another type of neuroscience. Speaker 1 00:13:21 Even though it was an IT project. So there are always some worries, I think, for everybody, especially in neuroscience, where there are so many approaches these days.
Of course, I think the approach I take is very promising, so I should get <laugh> sufficient funding, and I guess everyone in neuroscience feels like this. So I guess there is always also this worry about getting funded for your own approach. Speaker 2 00:13:54 Yeah. Last question about this, and then we can move on to more fruitful topics. Have you personally received pushback from the community, any part of the community, or do you feel supported within the neuroscience community? What's the general feeling you have? Speaker 1 00:14:14 No, there certainly hasn't been any particular pushback. I've also never been in the leadership of the Human Brain Project, so I've been a little bit on the periphery. I worry, however, about some of these more grandiose claims, the impression people maybe got from presentations of the Blue Brain Project: that if you just make a very complicated model, put a bunch of detailed neuron models into a network, and simulate it on a supercomputer, then the spirit comes out of the bottle <laugh>. That is a very naive idea that I don't think anybody believed, not anybody I know. Speaker 1 00:15:03 If you have such a model, it's a starting point for doing research, not the end of it.
And there are all these unknown parameters, which was also pointed out by some of the people interviewed in In Silico, and which I think is a very good point: you don't know the parameters. But the way to find the parameters is to build what I call a skeleton model, and then use that to compare with data and so on. So I think sometimes when you present this, and maybe apply for money, people have this conception that we have a naive vision: that if you just make a very messy model and it looks realistic, then somehow it's going to act realistically. That's not how it works. Speaker 2 00:15:49 Yeah. Okay. Well, in this review paper, The Scientific Case for Brain Simulations, which was a 2019 review paper, I believe, mm-hmm <affirmative>, you talk about this approach and why it's supported, and also, like you alluded to before, that it's going to take a lot of different labs, a lot of different people working on it, spread out, and that we have the capability of doing that these days with supercomputers that you can send your data off to, so it runs on a supercomputer in some other country. So maybe you can just give an overview of what your approach is, and why. It's cool that you <laugh> think it's important for these models to be able to predict not only spiking activity, but local field potentials, EEG signals, fMRI, and these all come with their own challenges, of course. But maybe you can broadly overview what you're doing. Speaker 1 00:16:57 Yeah.
I think the overall approach is very similar to the approach of Hodgkin and Huxley, when they modeled a neuron, or actually a piece of the neuron, the axon. They just looked at it as a physical system, right? They made a model of the axon, and then they used all kinds of clever experiments to design this model and determine the parameters, and eventually ran it on a computer. Speaker 2 00:17:28 They didn't have a computer, right? They had to do these gut-wrenching hand calculations. Speaker 1 00:17:33 Yes, it was extremely impressive along so many different axes. So now, after that, we have a pretty good idea about how to model single neurons, right? Mm-hmm <affirmative>. At least we have the cable equation; you can do what's called multicompartment modeling with reconstructed morphologies, with dendrites and so on. So we can capture the structure, we know the dendrites, and we can make mathematical models; we have the mathematical framework. What is hard, though, is to find the parameters. What ion channel densities should you put in? Is it Ih? Is there more in the apical dendrite than in the basal? There are all these choices, right? So you do as many measurements as you can, in some sense, often with patch electrodes, and now people have also started to use extracellular recordings. Speaker 1 00:18:32 If you have neurons on microelectrode arrays and so on, mm-hmm <affirmative>, then you have all this experimental data, and you fit the model parameters to make the neuron model predict as accurately as possible.
That is, to make the model behave as similarly as possible to what you see in experiments, and there are many types of experiments. And already there you have this problem that there is not a unique solution. It's not like there is one set of parameters that gives a uniquely best fit. One model we used a lot we called the Hay model, because the first author was Hay; it was from 2011, I think from the group of Idan Segev. Speaker 1 00:19:22 They had this beautiful, detailed multicompartment neuron model, and they used one example parameter set throughout the paper, a couple of them, but they also provided 500 other parameter sets which did equally well. So that already illustrates this thing about model solution degeneracy, which is another thing to discuss. But we are essentially trying to do the same thing for a network, because if you look at network studies, especially in cortex, there are no examples that I know of, for a piece of cortex in a particular biological system, where the network model can mimic a bunch of different experiments. Speaker 1 00:20:18 And we are now working on this with collaborators at the Allen Institute, in particular Anton <inaudible> and Christof Koch. As you know, the Allen Institute has focused a lot on the mouse cortex over the last decade, mapped out all kinds of things, and also made a first version of a network model. But then of course: how do you constrain this? How do you determine the parameters of this model?
And that's a huge challenge, but again, you have to use all the experiments, not only spikes but also other experiments. In fitting these complex models, these ugly models, to experiments, you need to take advantage of all the experiments available. And it was the same with Hodgkin and Huxley: they did all kinds of different manipulations, with voltage clamping and space clamping, to get a rich set of experiments to use. So I'd say it's very much the same spirit as Hodgkin and Huxley. Speaker 2 00:21:36 Yeah. Well, maybe we should talk about the degeneracy. I had Eve Marder on, and she's become this famous figure in neuroscience, showing that there are a thousand different ways to skin a cat in the stomatogastric ganglion of lobsters and crabs. And I know that you, rightly so, see the degeneracy as a feature and not a bug of the system, but how much do you worry that it will have an impact on the relevance of the brain simulations that you perform? Speaker 1 00:22:16 This is a major worry, of course. So you have to do this thing where you fit your model to one type of data. For example, when it comes to this mouse visual cortex model that has been developed at the Allen Institute, they have all these beautiful experiments where they show different kinds of images and movies and spots and whatever visual stimuli to the mouse. So you can in some sense fit the model to data from one type of experiment, and then test it on others, and so on, mm-hmm <affirmative>.
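The solution degeneracy being discussed here, where many different parameter sets fit the data equally well, can be sketched with a deliberately tiny toy model. This is a hypothetical illustration, not any of the models from the episode: a "neuron" whose response depends only on the total conductance g1 + g2, so every pair with the same sum fits the data equally well.

```python
# Toy "neuron" whose response depends only on the TOTAL conductance
# g1 + g2, mimicking how different ion-channel densities can trade off
# against each other. (Hypothetical sketch, not a model from the episode.)
def response(g1, g2, stimulus):
    return [(g1 + g2) * s for s in stimulus]

stimulus = [i / 49 for i in range(50)]
target = response(1.2, 0.8, stimulus)  # "experimental" data, total g = 2.0

def loss(g1, g2):
    # Sum-of-squares mismatch between model and "experiment"
    return sum((m - t) ** 2 for m, t in zip(response(g1, g2, stimulus), target))

# Two very different parameter sets with the same total conductance fit
# the data equally well -- a whole line of degenerate solutions.
print(loss(1.2, 0.8))  # ~0 (machine precision)
print(loss(0.1, 1.9))  # ~0 as well
print(loss(1.0, 0.5))  # clearly > 0: wrong total conductance
```

A fitting procedure driven only by this one experiment cannot distinguish (1.2, 0.8) from (0.1, 1.9); extra experiment types that depend on g1 and g2 separately are what break the degeneracy, which is the argument for fitting to all available experiments.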
But I think it's also important to ask what this degeneracy means. Because, I mean, obviously this is a wonderful podcast, by the way; I'm a big fan. I think you're doing a good service to the field. Speaker 2 00:23:05 By the way, we should plug your podcast as well. It's largely in Norwegian, right? Speaker 1 00:23:15 That's right. It's a podcast where most of the episodes are in Norwegian, but some of them, I think four or five, are made in English, with some neuroscientists. It's found under the name Sense and Science. Speaker 2 00:23:34 And you got Terry Sejnowski to laugh a lot more than I did, so nice job. He was generous with the laughter. It was great. Speaker 1 00:23:42 Yeah, that's true. So Terry Sejnowski is there, and Christof Koch is there. And also Sean Carroll; he's not a neuroscientist, but I think he's a wonderful podcaster. I'm a patron subscriber to only two podcasts: yours and his. Speaker 2 00:24:00 Oh man, my heart is aflutter. <laugh> Speaker 1 00:24:04 No, that's true. And I actually tell my students to listen to this podcast, because even reading papers within your own field is quite challenging, as you know, but you can listen to a one- or two-hour podcast where you interview someone, especially on this AI thing, which is a little bit adjacent to what I do. We're going to come back to that, yeah. So this has been very useful for me, just to stay oriented in the field.
But what I was saying is, if you look at these deep networks, say the normal deep convolutional networks that can be used for image recognition: if you take two identical networks to start off with, and then you train them, but maybe with different initial conditions, or maybe with a different order of images or something... Speaker 1 00:25:05 Yeah, then you'll end up with two networks which are probably behaving the same way, performing the same way, and both hopefully successful. But if you go into the detailed parameters and look at the synaptic connections, the weights, they will be different. Yeah. So in some sense it's not the detailed connections that matter; it's some kind of more average property of the connectivity, some other measure of the connections, that matters. And so that's also not a unique solution. I think this thing of looking for a unique solution is something we're so used to, because that's typically how we made our models: you want to make them low-dimensional, so that you have few parameters, and then you try to find the parameter set that gives the most suitable behavior. But that's just a special case. Even with Hodgkin and Huxley: if, instead of fitting ion channel densities, they had started to fit the positions of individual ion channels, then you would end up with the same kind of degeneracy of solutions. So this degeneracy just has to do with the resolution we use when we model. Speaker 2 00:26:25 Well.
And of course, your brain and my brain can both speak English, and your English is probably better than mine because you're multilingual. But of course our brains are not parameterized the exact same way, nor is the structure exactly the same, et cetera. You mentioned convolutional neural networks, and the hope is that you can extract some principle out of how the network is doing what it's doing. In the case of convolutional neural networks, those principles are: you need multiple layers, and you need the convolutions, to form abstractions. So I guess I just wanted to throw that out to you: how do you think about extracting abstract principles from these detailed model simulations, in the face of degeneracy? Speaker 1 00:27:20 Yeah. So obviously, the first step is trying to find this very complicated model. And what we are after we often call a multipurpose model: a model that can explain many things at the same time. Because if you look at some of the earlier works on modeling the visual system, there are these firing-rate models, which show some things, like contrast invariance of responses to visual stimuli. So there are certainly things they explain, but they typically can only explain one thing at a time. They're still useful; they explain that thing. But if you try them on something else, say orientation selectivity or direction selectivity, it doesn't work. Speaker 1 00:28:08 Right, and that's not surprising, because you probably need a complex model.
If you want to have this multipurpose model, say a model of the mouse cortex that should be able to reproduce, within some accuracy, responses to different visual stimuli, and maybe also different brain states, like the mouse being active or resting, that would be a multipurpose model. And then of course there is the question of what you do with the model you have. Actually, the Allen Institute has not only a beautiful model, they also have this beautiful electrophysiological and optophysiological data, from 50 mice. Speaker 1 00:29:03 And when you look at them, even in the visual system, the spike patterns, and also the local field potentials and these other things, are quite different from mouse to mouse. There is all kinds of variability. So you have to think about whether you want to make a model of an average mouse, or of some kind of individual mouse; there are all these other issues. But say you have this multipurpose detailed model that is able to predict all these different experimental observations for a particular mouse. That would certainly not be the end of the project. Some people say such a model is just as complicated as a real mouse, and that's true, but then you have this perfect, I guess what's called white-box, mouse. Speaker 1 00:29:55 They talk about the black box and the white box, right? So this is analogous to a white-box mouse, where you can actually do all kinds of experiments.
So you can then start looking for principles. And that would be a very nice starting point, I think, for making more simplified models at different coarse-grained levels. I mentioned in this paper in Neuron, about the scientific case for brain simulations, that in addition to having this biophysically detailed network model, you can also have maybe a network model of point neurons, like integrate-and-fire type neurons, and also then maybe models in terms of firing rates or population firing rates, mm-hmm <affirmative>. You would like these to be linked together in a systematic way, so that you can, under some approximations, derive the model at a more coarse-grained level from the lower level. Right? So it's the starting point, and also it's not the end point. Speaker 2 00:31:06 So, and then be able to extract some principles from comparing the different-granularity models. Speaker 1 00:31:13 Mm-hmm, and that, of course... yeah, go ahead. I'm thinking. Yeah. Sorry. Because if you think of all the models that have come about as good suggestions for principles for how the brain works, there's predictive coding, and the Thousand Brains theory, and this inside-out perspective of Buzsáki, mm-hmm <affirmative>. They're all interesting, right? But some of them have been around for a long time, and it's just difficult to find out which of those, if any, is correct. Right? So maybe exactly. Speaker 2 00:31:54 Maybe there's a... but anyway, there. Yeah, Speaker 1 00:31:56 Yeah, exactly.
But then, at least, someone who believes in one of these ideas could try to make a model based on it, because we know quite a bit about the structure of the mouse visual cortex, right, in terms of the neurons and how they're connected. We don't know how strong these connections are, so we certainly don't know all the parameters, and obviously we don't know the plasticity rules either. But nevertheless, there are some constraints, right? So if one of these ideas can easily be accommodated within this structure, that adds to the credence of that idea. And if there's another idea which is much more difficult to fit with what we know about the structure and these types of models, then it gets less credence. So that would certainly be a way to hopefully get closer to these principles. Speaker 2 00:32:58 Well, so, but you view these models not as hypotheses, but as tools. Right? I don't know if you want to just comment on that, because I think some of the criticism is based on that notion: like, well, what will these models actually be testing? You know, what's the question, right? But the whole point, I suppose, not to put words in your mouth, is that they're question-agnostic tools. Speaker 1 00:33:23 Mm-hmm <affirmative>. Yeah, exactly. So that's how I think about it. Say you're out there... I like to hike in the mountains, like many Norwegians do, and say I get an idea: oh, I believe this M-channel is very important to get the hippocampal circuit to function. And I do get that idea out hiking.
And then I get back to my little cabin, do some coding to add the M-channel, and then simulate it on some supercomputer or whatever. Right? So that you can test: oh, what are the consequences of that? So that's how I see it: it's a tool for testing hypotheses. It's not the hypothesis in itself. And I think... I mean, my background before I came into neuroscience was in solid-state physics, where I worked with materials mm-hmm <affirmative>. And there we know, in some sense, the fundamental principle for modeling materials. Speaker 1 00:34:16 We just have to put all these atoms into the grid, the lattice structure that we know is there, and then we have to solve the Schrödinger equation, this quantum mechanical equation, for the electrons. And that's a very complicated equation to solve, but nevertheless, we know that if you're able to solve it, then that gives you a close approximation of the truth. And that's very useful, but it's not the end of the story. You don't really understand yet; you need additional simpler theories to understand particular phenomena. Like, for example, superconductivity: some materials can conduct electricity without any resistance, and some materials are semiconductors, others are metals, and so on. So you need these other, more coarse-grained theories as well. And I think it's the combination of these detailed models and the simpler models that really gives you something, because the detailed models are maybe needed to make contact with experiments, but they alone are not sufficient to give you that insight. Speaker 2 00:35:34 Okay.
Well, maybe just coming back to the models that you work with. So one of the goals, and I guess the main goal, is for the model to predict all these various types of signals, like I was mentioning earlier. And I guess you started off wanting to predict local field potentials, because, well, to give it away, you found that a model that predicts the spiking activity of a network doesn't necessarily predict the local field potentials of the network. So talk about that, and why you think it's important to come up with these equations, like you were saying in solid-state physics, to make predictions about the neural signals that we measure. Mm-hmm Speaker 1 00:36:18 <affirmative> I think so. I mean, if you look at systems neuroscience, right, all the insights we have from studying systems have been from measuring spikes, the extracellular signatures of action potentials. Speaker 2 00:36:31 Generally by filtering the local field potentials away. Yeah, Speaker 1 00:36:35 Exactly. And I think, historically, there were two reasons. You couldn't store it all, because you didn't have cheap hard drives and such. Maybe that's not a problem now. But also, we're a little bit blessed in neuroscience in that spikes are so easy to interpret. Right? You know that they come from action potentials of neurons in the neighborhood. And so that's typically what you measure and analyze, right? This Speaker 2 00:37:07 Is, Speaker 1 00:37:07 you know, at the heart of it. Yep. Speaker 2 00:37:08 You've made the point, also, though, that we're still not confident that we understand the quote-unquote code of spikes: whether it matters, the timing between them, the firing rate mm-hmm, <affirmative> the overall firing rate, et cetera.
So we're not that comfortable even with spikes. No, Speaker 1 00:37:23 <laugh> no, but I guess we do believe that if we knew all the spikes in the brain of an animal, then we'd have all we need to understand, in principle, the information flow, if we were just very clever and figured things out. So that, I think, is also true. And if you look at the history of spikes, right, it was receptive fields that were sort of the ubiquitous concept, and they obviously told us a lot about neural representations. And now we have these neural manifolds, whatever they are, which are also obviously a nice way to look at this, which also tells us things. But from the point of view of falsifying models, say you have both spikes and LFPs, local field potentials. Just to make sure we define it: the local field potential in the cortex is measured from the same electrode; it's just the low-frequency part, maybe the part below a few hundred hertz. And then the spikes are actually extracted from the high-frequency part, above a few hundred hertz. So it's the same electrode; it's just a different aspect of the signal. And then, Speaker 2 00:38:41 The local field potential, just to carry on with what you were doing, which I should do more often, describing what we're actually talking about: the local field potential has traditionally been thought to measure the synaptic input to the neurons, in a broader scope than, of course, a spike, which is the output of a neuron. Speaker 1 00:39:02 Mm, exactly. And I think that is still true in many situations. We're actually writing a book now on the electrical signals of the brain. Oh. Which hopefully comes out next year with Cambridge.
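The frequency split described here, with the LFP as the part of the electrode signal below a few hundred hertz and spikes extracted from the part above, can be sketched with a simple FFT-based filter. The 300 Hz cutoff and the synthetic signal are illustrative assumptions, not the exact pipeline used in practice (real spike extraction also involves thresholding and sorting):

```python
import numpy as np

def split_extracellular(signal, fs, cutoff=300.0):
    """Split an extracellular recording into a low-frequency part (LFP)
    and a high-frequency part (spike band) via FFT masking.
    cutoff in Hz is an illustrative choice ("a few hundred hertz")."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.fft.rfft(signal)
    low = spectrum.copy()
    low[freqs > cutoff] = 0.0          # keep only slow components -> LFP
    lfp = np.fft.irfft(low, n=len(signal))
    spike_band = signal - lfp          # the remainder holds the fast components
    return lfp, spike_band

# Synthetic example: a 10 Hz "LFP" oscillation plus a brief 1 kHz "spike" burst
fs = 10_000.0
t = np.arange(0, 1.0, 1.0 / fs)
slow = np.sin(2 * np.pi * 10 * t)
fast = np.where((t > 0.5) & (t < 0.505), np.sin(2 * np.pi * 1000 * t), 0.0)
lfp, spikes = split_extracellular(slow + fast, fs)
```

Both bands come from the same trace, mirroring the point that spikes and LFP are two aspects of one electrode signal.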
So we will go through what I call the measurement physics: the link between what the neurons are doing and what you would measure in terms of spikes and local field potentials and EEG and ECoG and also MEG, and so on. But the thing is that, for the local field potential... well, even with the best electrodes, say the Neuropixels probe that they're using at the Allen Institute, while it's used all over the world now: if you have one of these Neuropixels probes, that's a multi-shank multi-electrode with many contacts, and even though you can go quite deep into the cortex and cover lots of mouse visual cortex, and the mouse brain altogether, you can only measure spikes from about 70, 80 neurons at a time on each of these shanks. Speaker 2 00:40:26 You say only. So Speaker 1 00:40:27 If you want to Speaker 2 00:40:28 Back in the day, when you could only measure one, it's, you know, much more enticing. Speaker 1 00:40:33 That's true. So it's only... so it's "only" compared to if you want to use this data to constrain a network model. So then, yeah, that's sort of the key thing: if you want to constrain a network model, say a model with tens of thousands or a hundred thousand neurons mimicking a piece of visual cortex, the spiking data are certainly important, but they're quite erratic and stochastic. So there are a lot of variations of that model that will be compatible with the experiments.
So you have the spikes, and that's where this comes in: if the same model can predict not only spikes but also other things, like the local field potential, then you put more constraints on it. So that is one of the reasons I'm very interested in the local field potential, and also ECoG and EEG: to be able to predict all these different measures from the same physical model. Speaker 2 00:41:45 Does that have a touchpoint with thinking in terms of one mouse versus the average mouse, in that LFP signals would essentially also get us closer to the average mouse? Speaker 1 00:41:56 Yes, I think that's certainly also true. And I think you should also just look at a single mouse, at one trial versus another trial for the same mouse. Yeah, yeah. Right. Everything is just less variable, because... you said that the LFP sort of reflects the synaptic inputs to neurons, and these synaptic inputs of course come from presynaptic spikes mm-hmm <affirmative>. So in some sense the local field potential is some kind of weighted average of spikes, just a different kind of weighted average. So in some sense it captures a lot of the spiking going on in the network, in an indirect way. And then there's the thing: if you want to compute the local field potential, say the contribution to the local field potential from a neuron that gets a synaptic input at the dendrite, right... Speaker 1 00:42:57 What does it look like? If you want to model that, you cannot use a point-neuron model, because a point-neuron model doesn't generate an extracellular potential.
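The point that a point-neuron model generates no extracellular potential, while a spatially extended current distribution does, follows from the standard volume-conductor formula phi = I / (4·pi·sigma·r), the basis of forward-modeling tools such as LFPy from Einevoll's group. The two-compartment geometry and parameter values below are illustrative assumptions, not any published model:

```python
import numpy as np

def phi_point_sources(currents, source_pos, electrode_pos, sigma=0.3):
    """Extracellular potential (volts) at an electrode from a set of
    transmembrane point currents, using the volume-conductor
    point-source formula phi = I / (4*pi*sigma*r).
    sigma: extracellular conductivity in S/m (0.3 is a common cortex value).
    currents: (n,) amperes; source_pos: (n,3) m; electrode_pos: (3,) m."""
    r = np.linalg.norm(source_pos - electrode_pos, axis=1)
    return np.sum(currents / (4 * np.pi * sigma * r))

# A crude two-compartment "neuron": membrane currents must sum to zero,
# so a sink at the soma is balanced by a source at the dendrite (a dipole).
soma = np.array([0.0, 0.0, 0.0])
dendrite = np.array([0.0, 0.0, 400e-6])   # 400 um above the soma
I = 1e-9                                  # 1 nA
positions = np.stack([soma, dendrite])
currents = np.array([-I, +I])             # sink + source
electrode = np.array([50e-6, 0.0, 0.0])   # 50 um lateral of the soma
phi = phi_point_sources(currents, positions, electrode)
```

Collapsing both currents onto one point makes the canceling currents coincide and the potential vanishes exactly, which is the point-neuron case.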
And it turns out that this contribution to the local field potential depends on the morphology and where the synapses are, and then you need to use these biophysically detailed models. And actually, if you want to do this measurement physics properly, that's not only for the electric measures like spikes, LFP, EEG, ECoG, or the magnetic measures like MEG, but also for the optical measures, like the response in voltage-sensitive dye imaging and so on: for that you need these biophysically detailed models. So to make this proper measurement-physics link, you need the biophysically detailed model. Even if you can maybe get away with simpler models to get an idea about how the information flows in the network, if you want to translate it into the things you measure, you need to go via these biophysically detailed models. Speaker 1 00:44:01 And so that's something we have worked quite a bit on: making practical tricks to make that possible, so that you can compute EEG contributions from a network model even of point neurons, or even firing-rate models. So it's this more mundane thing of doing the measurement physics. I sometimes compare it to the discovery of the Higgs boson at CERN in Switzerland, mm-hmm <affirmative>, because there, if you look at the people at CERN, they had maybe 20, 30 people, I don't know, who worked on why there should be a Higgs boson in the first place. Right? Mm-hmm <affirmative>, that has to do with the Standard Model, quarks, and the standard theory of particle physics. But then, Speaker 1 00:44:47 most of the people were working on: how can you measure this Higgs boson, right?
It's not like you can just look at these traces and say, oh, there's a Higgs boson. You have to simulate the whole experiment. You have to do the measurement physics, and that's like a separate thing. And this measurement physics has been, I think, underdeveloped in computational neuroscience. Most people have worked on this information processing, seeing how spikes generate new spikes, and so on, which I understand, because this is sort of the most interesting thing. But if you don't do the measurement physics correctly, then you'll make incorrect comparisons with experiments. So there are many examples of papers that are not very valuable when they're compared with experiments, because they've forgotten this basic step, or they haven't done the measurement physics. Speaker 1 00:45:43 So that's one of the reasons we, well, do this modeling of potentials. There's also this other thing: people obviously measure local field potentials and ECoG and EEG, and what do they do with them? Well, typically they do statistical analysis of some sort, right, and then try to interpret this data in terms of some kind of underlying neural activity. And how do you test these data-analysis methods? I think a sound approach is, if you can, to make some good benchmarking data where you know the ground truth. And if you have sort of a test circuit, say like a piece of mouse cortex, even if it's not fine-tuned to correspond to one particular mouse, and in that sense not very realistic...
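One analysis method that such ground-truth benchmarking data can validate is current source density (CSD) analysis, classically estimated as the sign-flipped, conductivity-scaled second spatial derivative of the LFP along a laminar probe. A minimal sketch, with the contact spacing, conductivity, and test profile as illustrative assumptions:

```python
import numpy as np

def csd_second_derivative(lfp, h, sigma=0.3):
    """Standard current-source-density estimate along a laminar probe:
    CSD(z) ~ -sigma * d^2(phi)/dz^2, via a second-order finite difference.
    lfp: (n_channels,) potentials at equally spaced depths (V)
    h: contact spacing (m); sigma: extracellular conductivity (S/m).
    Returns CSD at the interior contacts (A/m^3)."""
    d2 = (lfp[2:] - 2 * lfp[1:-1] + lfp[:-2]) / h**2
    return -sigma * d2

# Ground-truth check on a quadratic test profile phi(z) = a*z^2,
# whose second derivative is exactly 2a at every depth.
h = 100e-6                 # 100 um contact spacing (illustrative)
z = np.arange(16) * h
a = 5.0
phi = a * z**2
csd = csd_second_derivative(phi, h)
```

Because the finite difference recovers the known second derivative here, the same recipe run on simulated LFPs with known sources tests the method end to end.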
Speaker 1 00:46:43 It's still probably good enough to test a method, for example what's called current source density analysis, which doesn't depend so much on the model being biologically accurate; it will work for testing even on unrealistic models. So I think that's very important, and we have actually used that to test different methods. We have also developed some new methods for current source density analysis in the past, and they were tested on model-based benchmarking data. And I think this is also important for developing automatic spike sorters. I guess you were sort of, in your... yeah, I don't want to talk about it, yeah, I know. <laugh> Exactly. So, I mean, I'm in Oslo at this center called the Centre for Integrative Neuroplasticity. Speaker 1 00:47:35 So we have modelers and experimentalists sitting side by side. So we also look at these people doing spike sorting, right, and all the issues related to that. So having automated, validated spike-sorting methods you can trust would be very useful for the field. Right? And there you need ground-truth data. You could get it two ways. You could have dual recordings, say measure spikes and maybe optical responses with two-photon calcium imaging or something, but that's quite hard to get by, at least in large quantities. But with model-based data... I think the modeling of these extracellular signals is quite well established. So the modeling formalism is something you can trust. Speaker 2 00:48:29 Yeah. The spike sorting thing.
I'll just make one further comment on that: it has become more important with these high-density multi-electrode recording techniques. Because back in the old days, when I started, with a single electrode, everything drifts while you're recording, so you could kind of chase that neural signal and be confident that you're still recording the same neuron. But you can't do that with these multi-electrode systems. And, you know, there are questions about... you implant into the cortex, right, or put it down into the brain, and then, this is talking a little shop, you wait, you go get a coffee, because you know that it's going to drift over time. And so you want it to be as stable as possible, but it's never completely stable. So there are different ideas about how long to wait, and so on. So anyway, Speaker 1 00:49:25 But you were working with monkeys, right? Speaker 2 00:49:27 Monkeys, yeah. Yeah. But they were head-fixed, you know, so they weren't freely behaving in that sense, but still there's drift. Yeah. So anyway, just an aside. Speaker 1 00:49:39 What I would say about this, the usefulness of modeling signals: I think you had a nice podcast with your old postdoc advisor, Jeffrey Schall. Speaker 2 00:49:48 Oh yeah. Yeah. He was my advisor when I was... yeah, I did a postdoc with him. Speaker 1 00:49:52 So he referred to this work with, uh, Jorge and others. Speaker 2 00:49:57 Yeah, the work Speaker 1 00:49:58 they're Speaker 2 00:49:58 using. Yeah. Speaker 1 00:49:59 Yeah. So I know this. So they are doing some very nice work, actually using some of our tools. So Speaker 2 00:50:05 I was wondering about that. Yeah. Speaker 1 00:50:07 Yeah. So exactly. So Jorge is a great guy, and, well, Jeffrey also, I'm quite sure, I just haven't met him in person.
But, well, he's been listening a few times. So, I mean, the computational neuroscience community is not so big compared to the neuroscience community, right? And within the computational neuroscience community, the people who model signals are a minority of the minority. So, like, <inaudible> is one of them, right? But more and more people are, I think... well, that's what you always say when you want to promote your <laugh> approach. But I think more and more people are... if you want to go beyond this sort of, I would say, anecdotal understanding of networks... Speaker 1 00:50:52 I think there have been some very beautiful model studies, learning about principles for how networks may operate, how we get the dynamics. One particular example is this balanced excitation-inhibition idea, which was beautifully demonstrated like 20, 25 years ago in a more generic network of simple neurons. And we have learned a lot from those. But if you want to go beyond these generic studies and make models for particular systems, particular pieces of cortex, then you have to do this measurement physics more properly. Though, obviously... you mentioned Eve Marder, and the work in her group on the stomatogastric ganglion. Speaker 1 00:51:52 It's certainly very... yeah. <laugh> It has been very important.
And it obviously illustrates many of the same issues that we have to address in computational modeling of cortex. I would say, though, that if we get stronger computers, say if we get EBRAINS, the infrastructure following the Human Brain Project, so that you can run these large network models and run them for a long time, then we can start also exploring plasticity rules. Because, actually coming out of Eve Marder's lab, there is this beautiful work on homeostasis: why does a layer-5 neuron in the cortex know that it's a layer-5 neuron? What happens? And there was also beautiful work, I remember, Speaker 1 00:52:51 by one of her postdocs, Tim O'Leary, I think he's now at Cambridge, where he showed how, instead of fitting channel densities in neuron models, you can essentially just let them tune themselves using plasticity rules based on the intracellular calcium concentration. And then suddenly you have changed the parameter fitting from channel densities to fitting these learning rules, these plasticity rules. So it might get much simpler. And it's the same thing with... there was also this beautiful work from the group of Gerstner, like 10 years ago, where they showed how you could get to this balanced state without having to fine-tune and get the parameters just right: you could have a particular synaptic plasticity rule, I think it was an inhibitory synaptic plasticity, which let the network tune itself. So I think that's closer to what the real brain does, I'm quite sure. And it will make life easier if you're just able to run the models longer, so that we can use these approaches.
Speaker 2 00:53:57 So you make these complicated, high-parameter models, and I know you make them at different levels of abstraction as well, and then you can compare between them. But how long are we talking? How long do you run your simulations? It's not long enough for a sort of plasticity allowance? Speaker 1 00:54:16 No. So typically they are run for a few seconds of biological time, even though we haven't really pushed that. And so far we haven't looked... I think we can add short-term plasticity; that is something we could add to the model, and that could maybe add something to it. But this more homeostatic plasticity, and long-term synaptic plasticity, is not something we can do yet. And hopefully, in the future, when we can actually explore these things in models, that will make our life easier and, I think, simplify the parameter-fitting problem. Speaker 2 00:55:03 But you could potentially do that right now. So a lot of what you do is model multi-compartment neurons, where you're breaking down the neural structure into lots and lots of different sections, but you also test dual-compartment models, right, where you have an apical area and a basal-somatic area. Yes. And, uh, two Speaker 1 00:55:24 Compartment, we call it, yeah. Speaker 2 00:55:25 Two-compartment. Yeah. Mm-hmm <affirmative>. But they're not as computationally costly to run, right? So you could potentially go down that avenue right now with those. Right, Speaker 1 00:55:36 Exactly.
So actually, when it comes to this Allen mouse visual cortex model, they have two versions: one with the biophysically detailed multi-compartment neuron models, and one with point neurons. And we are actually using both. We have a lot of master's students working on this model here, and they typically use the simpler point-neuron version, simply because then you can run it on normal computers and get feedback quickly. And I think it's true, when it comes to synaptic plasticity and its effect on spiking, and sort of the self-tuning of these connection parameters, that exploring point-neuron networks first is probably the right avenue, and people are doing that. I don't follow that field as closely as I should, but certainly there's important work there. And I think that's true: you just have to be practical about it. But at the end of the day, you would like these different approximation schemes to be internally connected, so that you don't just invent something at the coarse-grained level that could never happen for a real neuron. Right? So there has to be some kind of consistency. Speaker 2 00:56:54 But even with the multi-compartment neurons, that's a decision to make: how many compartments? You know, because it's still not biophysically equivalent <laugh> to a real neuron. So how does that decision get made, for instance? Speaker 1 00:57:08 I think typically the standard way is that you divide it up into these small sections, which are compartments mm-hmm <affirmative>. And the key thing is that the membrane potential within each compartment should be the same, meaning...
So there shouldn't be a potential difference within the compartment which is larger than some small fraction. So that's how we set it: the compartment should be equipotential, as we say. Speaker 2 00:57:39 But then how... how do you Speaker 1 00:57:40 decide... Speaker 2 00:57:41 Yeah, go ahead. Sorry. Speaker 1 00:57:43 No, so what was the question? How do we decide the size of the compartments? Speaker 2 00:57:47 Yeah, how do you decide how many compartments? I mean, I know sometimes you just kind of plug in, like with the Allen Institute model; you use other people's models also, and sometimes alter them, changing parameters. But Speaker 1 00:58:02 I think, in NEURON, the simulation tool that is still, I guess, the most used, where you can import these reconstructed morphologies of neurons into the model, it sort of compartmentalizes it itself. And then you have this measure called the electrotonic length: if you have a little piece of cable, the electrotonic length tells you how fast the potential decays from one end of the cable to the other. And that depends on the diameter and the membrane and axial resistivities and so on. And if you say that, well, it shouldn't decay more than, say, maximally 1%... Speaker 1 00:58:49 And that also depends on frequency. So you have some criteria like this that you can use. But again, what you typically do, well, what you could do if you're worried about this: you can change it, make a more stringent criterion, and see if the results you're interested in change. Right. Right. Yeah.
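The equipotential criterion can be made concrete with the cable-theory length constant. NEURON's actual d_lambda rule uses a frequency-dependent length constant (typically evaluated at 100 Hz); the DC version below is a simplified sketch, and the parameter values are illustrative textbook numbers, not from any particular model:

```python
import numpy as np

def dc_length_constant(diam_um, Rm=30_000.0, Ra=150.0):
    """DC electrotonic length constant of a cylindrical cable (in um):
    lambda = sqrt( (Rm * d) / (4 * Ra) ), with
    Rm: specific membrane resistance (ohm*cm^2),
    Ra: axial resistivity (ohm*cm), d: diameter.
    Rm and Ra defaults are illustrative textbook values."""
    d_cm = diam_um * 1e-4
    lam_cm = np.sqrt(Rm * d_cm / (4.0 * Ra))
    return lam_cm * 1e4  # back to micrometers

def n_compartments(length_um, diam_um, max_frac=0.1):
    """Choose enough compartments that each one spans at most
    max_frac of a length constant, so it is nearly equipotential."""
    lam = dc_length_constant(diam_um)
    return max(1, int(np.ceil(length_um / (max_frac * lam))))

lam = dc_length_constant(2.0)        # a 2 um diameter dendrite
n = n_compartments(450.0, 2.0)       # a 450 um long section of it
```

Tightening `max_frac` is the "more stringent criterion" move described above: rerun with a smaller value and check whether the results of interest change.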
So there are approaches like that, to at least do some sanity tests on the modeling itself. Speaker 2 00:59:18 All right. So in terms of what you've been able to accomplish... you've been at this for quite some time, and I mentioned earlier, and I think this is correct, that you started with trying to essentially predict LFPs and spiking, but the LFPs, the local field potentials, were the problem. And I know that you are continuing to move into EEG signals, which have not been as much of a problem as the LFP was originally, if I understand correctly, and other types of signals. So what lessons have we learned, or have you learned, so far about the importance of being able to predict these signals? Speaker 1 00:59:59 Yeah. So I would say... we have worked on these signals for, I guess, 15 years or something, and most of it was what I'd call generic studies. Say you have a population of pyramidal neurons that receives synaptic input: what really determines how strong the local field potential measured inside this population would be? I mean, morphology is important, but what we also found is that it's really the distribution of synaptic inputs. If they're homogeneously distributed, you get a very small LFP, even though the spiking resulting from this might be very high, right? You need this asymmetry or imbalance in the input. And then another thing we found out, or explored systematically... Speaker 1 01:00:51 I think people have sort of known this before, but I think we have taken it to a quantitative level... is the effect of correlations, how correlated the synaptic inputs are.
That really determines a lot. We had one paper that came out, I guess, ten years ago or so, where we looked at how local the local field potential is. If you put down an electrode, you know that if you measure a spike, or spikes, they typically come from within a hundred micrometers of the tip of the electrode. But what about the local field potential? Experiments had produced very different estimates of that, all the way from a few hundred micrometers to centimeters. And what we found by exploring it in a model is that this — what we call the spatial reach — very much depends on how correlated the inputs are: the signals to the neurons that set up this LFP. Speaker 1 01:01:58 It's a little bit like a microphone hanging over a football stadium. If there's just small talk and nothing is happening — it's boring — you don't hear it very well. But if somebody scores a touchdown, you get a correlated cheer, and then you can hear it from outside the stadium. It's a bit the same idea: if you have correlated neurons singing in synchrony, you get a strong LFP. So anyway, these are the kinds of studies we have done. And also, for example: if you measure an LFP, is it necessarily due to the neurons around the electrode, or could it be a very loud neighboring population?
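The stadium analogy can be made quantitative with a toy calculation (purely illustrative unit-variance signals, not a model of any real population): the summed amplitude of N uncorrelated sources grows like sqrt(N), while even weak pairwise correlation c pushes it towards N * sqrt(c), which is why correlated input dominates the signal and extends its reach.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_samples = 500, 20000

def summed_amplitude(c):
    """Std of the sum of n_cells unit-variance traces with pairwise correlation c."""
    shared = rng.standard_normal(n_samples)               # common "cheer" component
    private = rng.standard_normal((n_cells, n_samples))   # independent "small talk"
    traces = np.sqrt(c) * shared + np.sqrt(1.0 - c) * private
    return traces.sum(axis=0).std()

uncorr = summed_amplitude(0.0)  # close to sqrt(500), about 22
corr = summed_amplitude(0.1)    # close to sqrt(500 + 500*499*0.1), about 160
```

With only 10% pairwise correlation, the summed amplitude is roughly seven times larger than in the uncorrelated case — the "touchdown cheer" effect.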
We have explored this quantitatively and, I think, learned a lot about what really determines that contribution. We have also looked at the effects of active conductances — Ih, for example — and how they affect things. Speaker 1 01:03:06 So we have done studies like that. And now we also focus on EEG, because one traditional way of analyzing EEG signals is to try to identify and estimate sources — dipole sources — and that's an ill-posed problem, very hard, though you can constrain it. A lot of work has gone into that. What we have done to add to this is that we can now actually compute the current dipole moment: if you have a particular neural population that gets synaptic input at a certain place, we can compute its current dipole moment. So we can make the connection between the current dipole moment and what's actually going on in the circuit. That's what we have worked a lot on: the so-called forward modeling of electrical signals. Speaker 1 01:04:01 So I would say most of the work we have done has been on finding these principles — what makes a large LFP, when is it large, when is it small — and we also worked on making test data for spike-sorting algorithms: different kinds of applications of forward modeling, testing data-analysis methods, and so on. But now, over the last couple of years, in collaboration with the Allen Institute, we're trying to use this to constrain, and work towards, a multipurpose version of this mouse visual cortex model.
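The forward modeling discussed here rests on volume-conductor theory. In the simplest point-source approximation, the extracellular potential is a conductivity-weighted sum of transmembrane currents over distance. The sketch below is a minimal illustration with invented numbers; real forward-modeling tools (LFPy, for example) handle line sources, frequency dependence, and geometry far more carefully.

```python
import numpy as np

SIGMA = 0.3  # extracellular conductivity in S/m; 0.3 is a common textbook value

def extracellular_potential(elec_pos, source_pos, source_currents):
    """Point-source forward model: phi(r, t) = sum_k I_k(t) / (4*pi*sigma*|r - r_k|).

    elec_pos:        (3,) electrode position, meters
    source_pos:      (K, 3) compartment positions, meters
    source_currents: (K, T) transmembrane currents, amperes (they must sum to
                     ~0 across compartments at each time step, by current
                     conservation)
    Returns a (T,) array of potentials in volts.
    """
    dist = np.linalg.norm(source_pos - elec_pos, axis=1)
    return (source_currents / dist[:, None]).sum(axis=0) / (4.0 * np.pi * SIGMA)

# Toy current dipole: a 1 nA sink and source separated by 100 um.
src_pos = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 100e-6]])
currents = np.array([[-1e-9], [1e-9]])

near = extracellular_potential(np.array([50e-6, 0.0, 0.0]), src_pos, currents)
far = extracellular_potential(np.array([5e-3, 0.0, 0.0]), src_pos, currents)
```

The dipole's potential falls off quickly with distance, which is also why the current dipole moment of a population is the quantity that matters for EEG predictions.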
We have actually organized a couple of workshops together with the Allen Institute, so if people are interested, they can look them up on the net — they're out on YouTube. Speaker 1 01:04:56 So it's this goal of trying to make this multipurpose model. And one important thing — well, there are obviously a lot of parameters in this, and you can joke that with five parameters you can fit an elephant, and with six you can make it blink, or whatever. But that is for statistical models, where you're just fitting a curve to a mathematical function. If you have mechanistic models — physics-type network models — then finding a combination of connection parameters and so on that makes the LFP look right is very hard. It's not easy at all. So it's not that there are many such models: at least at the present stage, the challenge is to find even one model that matches experiments closely. Probably, if you find one such model, you can expand from it and map out the parameter set around it. But it's not the case that it's easy to find a model that fits the experimental data, because it's a mechanistic model, not a statistical model. Speaker 2 01:06:07 Well, so you've tested models with, like, a handful of neurons, and you're scaling up. How much of a problem is that aspect of it — trying to find the right parameters for, let's say, an LFP signal — as you scale up? And then what's your outlook on scaling these simulations up? Speaker 1 01:06:26 Yeah. One thing is that the LFP is a coarse-grained signal.
So it's not like moving one neuron around a little bit, or changing a little, changes it — it's the roar of the crowd, in some sense. Right. Yep. So in the last paper — it was out on bioRxiv and is now actually out in PLoS Computational Biology — we looked at the trick for being able to do this even with a firing-rate model. When you model, say, populations in cortex — say a laterally organized set of populations — and want to convert that into LFPs, we use what we call the kernel trick, so that we're able to compute these signals without brute force, because the brute-force computation really is an additional load. Speaker 1 01:07:41 So if you want to combine this with firing-rate models — like neural field models or neural mass models — then you need some kind of trick to compute, say, the EEG contributions. And when you want a neural field model of the whole brain to predict EEG signals, you need some kind of trick. That is something we have worked on. So one important limitation is that finding, say, a mouse cortex model — tuning the parameters so that it becomes multipurpose — is very hard, and that's going to keep us busy. It's not an either/or thing, right? We're hopefully going to get closer — more and more multipurpose — and we don't really know how fast the progress will be. But that's the hard thing. Speaker 1 01:08:31 If you have the model, then computing the LFP and EEG and so on is very easy.
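The kernel-trick idea can be sketched in a few lines: instead of summing the extracellular contribution of every neuron by brute force, convolve each population's firing rate with a precomputed spatiotemporal kernel. The rate and kernel below are invented placeholders purely to show the mechanics; in the actual work, the kernels are derived from the detailed multicompartment model.

```python
import numpy as np

dt = 1e-4                    # 0.1 ms time step
t = np.arange(0.0, 0.5, dt)

# Toy population firing rate: baseline plus a transient burst at t = 0.25 s.
rate = 5.0 + 40.0 * np.exp(-((t - 0.25) ** 2) / (2.0 * 0.01 ** 2))

# Placeholder causal kernel standing in for the population-averaged LFP
# response to a single spike (shape and scale are invented for illustration).
tau = np.arange(0.0, 0.03, dt)
kernel = 1e-3 * (np.exp(-tau / 0.002) - np.exp(-tau / 0.006))

# Kernel prediction: LFP proxy = (rate * kernel)(t) — one convolution per
# population, summed over populations (only one population here).
lfp = np.convolve(rate, kernel, mode="full")[: t.size] * dt
```

One cheap convolution per population replaces the summation over every neuron's transmembrane currents, which is what makes signal predictions from firing-rate and neural field models tractable.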
The measurement physics is much easier, in that sense — the network physics is the hard thing. The limitation is being able to make larger network models, covering maybe whole cortical areas and eventually whole cortices, with sensible parameters, so that they can actually predict some of what the experiments show. That's the hard part; going from such a model to the EEG signals and the other measures is not so hard. And in going towards the human brain, there is obviously not only the problem of knowing enough about the neurons, and particularly the connections; it's also a matter of scaling up — being able to run large enough network models that the network behavior resembles what you see in experiments. Speaker 1 01:09:41 Of course, in humans you also have much less data, because you essentially just have EEG, MEG, and imaging, which are noninvasive. Yeah. So I think it's a little bit like deep networks — convolutional networks — again: the neuron dynamics that is typically assumed is a rectified linear unit, so the neurons are fixed, and you tune the synaptic weights. And with, say, the mouse visual cortex, where the neurons have been mapped out with automated patch-clamp electrophysiology — an automated way to find quite good neuron models — I think the weakest link now is finding the synaptic connections.
We think we're better off making some kind of automated procedure for getting at least decent neuron models. So in our project, you often start by taking the neuron models for granted, at least as a starting point, and then we have to deal with the synaptic connection weights. Speaker 2 01:10:56 Okay. So I just want to understand and reiterate — and I may be repeating what you've said earlier — but with something like predicting an LFP signal, or figuring out how widespread the activity you're recording in an LFP is, the goal is not necessarily to understand the meaning of the LFP, but just to be able to get the models right, so that when you are testing these against experimental data, you have a well-constrained, well-built model. Right? So it's not specific to the interpretation of the signals you're recording; it is specific to the model as a tool. Speaker 1 01:11:36 That's true. Yeah, exactly. It's a way to predict — given that you have your model, if you believe in your model, if that's your hypothesis — what the consequences are in terms of what the LFP should look like. Because I don't think the local field potential actually feeds back and modifies the dynamics. It could be — it's not quite clear whether that is important in practice — but I look at the LFP the way I heard it described once — I talked about this with Sejnowski, which made me laugh — as the exhaust fumes of the brain. So it's more about what you can learn from the exhaust fumes, if that's the only thing you measure. In that sense it's like a proxy for the spikes. Speaker 2 01:12:35 So — Speaker 1 01:12:36 the way — Speaker 2 01:12:37 Yeah.
So just to stick with LFPs for a moment: neural oscillations kind of wax and wane as a focus of neuroscience — like you mentioned, Buzsáki has studied oscillations a lot. Is recapitulating oscillatory dynamics something of interest to you? Speaker 1 01:12:59 For me, it's one of the features that a successful model should reproduce. Say you had a model of the hippocampal formation, where oscillations are obviously prominent: then a successful biophysics-based model of it should reproduce the oscillations. So for me, whether it's oscillations or something else doesn't really change the modeling. Although — for example, we found that for some of these hippocampal neurons you have this resonance: you get the largest LFP at, I think, theta frequency, something like eight hertz or so. Speaker 1 01:14:00 But anyway, we saw that, in principle, that could be an artifact of how Ih is distributed — the Ih changes the LFP. It just illustrates that you have to be careful: when you see an oscillation in an LFP, it doesn't necessarily mean that the firing rate is oscillating the same way — certainly the weighting won't be right, at least. You have to do the measurement physics.
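The subthreshold resonance mentioned here is often illustrated with a "quasi-active" RLC membrane analogy, in which a slow restorative current such as Ih behaves like a phenomenological inductance. All component values below are invented to put the impedance peak in the theta-to-alpha range; this is a generic illustration of membrane resonance, not a fit to hippocampal data.

```python
import numpy as np

# Passive membrane: resistance R in parallel with capacitance C; the slow
# restorative current (e.g. Ih) appears as a series R_L-L branch.
R = 100e6     # membrane resistance, ohm
C = 100e-12   # membrane capacitance, farad
R_L = 50e6    # series resistance of the inductive branch, ohm
L = 4e6       # phenomenological inductance, henry (not a physical coil)

freqs = np.linspace(0.5, 30.0, 600)
w = 2.0 * np.pi * freqs
Z = 1.0 / (1.0 / R + 1j * w * C + 1.0 / (R_L + 1j * w * L))

f_res = freqs[np.abs(Z).argmax()]  # impedance peaks at a nonzero frequency
```

Because the impedance is band-pass rather than low-pass, broadband input produces the largest voltage (and hence LFP) fluctuations near the resonance frequency — which is exactly why an LFP peak at theta need not mean the firing rates oscillate at theta.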
That's why I think it's really important to have this modeling framework as a way to test your hypotheses — when you have an idea, to test whether it's, say, a firing-rate effect versus something else that sets up the oscillation. I mean, we have worked on cable equations and neuron models and networks for a long time — we have really spent a lot of time on this in our group — and we are still surprised by the predictions of the cable equation. Sometimes you have these intuitions — oh, there should be a bump here — and then you do a simulation, and no, there isn't, and you have to go back and figure out why. It's difficult to have intuition about what this signal should look like. It's what you might call folk physics: people have these rules of thumb, which should be tested, and they get passed down — a little bit like from father to son, or around the campfire — in the neuroscience community. So you have to test these things. Speaker 1 01:15:39 For example: if you measure a strong local field potential in layer 4 of cortex, it doesn't mean that the neurons generating that layer-4 LFP lie in layer 4 — you don't have the locality you have for spikes. You still sometimes see this assumption, so some of these things have to be corrected. I think this can make the field more quantitative and precise, and help us, when we do these measurement-physics exercises, at least not fail on that — because there are so many other things that are inherently difficult.
Speaker 2 01:16:26 You mentioned the recent kind of explosion of focus on low-dimensional manifolds and the dynamics of populations of neurons. Are the simulations you run long enough to connect with that? How do you see that in general, and do you think about it in terms of your own models and simulations? Speaker 1 01:16:49 Yeah — I mean, if you want to make this mouse visual cortex model, you have this model with some hundreds of thousands of neurons, and in animals you can measure receptive fields and selectivity and so on, and you can also measure these neural manifolds in the experimental data. That's one more thing your model can be compared with, right? It's another kind of spike-based data that you can use to constrain your model. So all these different measures you can get — the ones you think are important — are things you can use to test your model. Then you have a kind of benchmarking test suite: the model should predict this; how well does it do; how similar is it to the experimental data? It's a little bit like the so-called Brain-Score they have in the DiCarlo — Speaker 2 01:18:03 lab. Brain-Score, right. Speaker 1 01:18:06 Yeah.
So it's a similar thing: a test — not a single brain score, but some kind of multi-objective thing — and you see how — Speaker 2 01:18:15 High-dimensional test scores <laugh> — you've got to reduce the dimension of those also. Speaker 1 01:18:20 Exactly. But that's why — what I like about this overall project, for one thing, obviously, is that it would be cool to really understand a piece of cortex at the level at which we understand the neuron. That would be a very important thing, with so many applications and ramifications. But I also think it's really a project — a program — where you can make progress, because we can measure success: we compare with different kinds of experimental data, and you can see how well you're doing. At the moment we are far from that — there is certainly room for improvement in our models. These are early days; we have only worked on this for two years, and I hope more people get interested. Speaker 1 01:19:13 I mean, this experimental dataset that the Allen Institute has freely available — electrophysiology and optical physiology from, like, 50 or 60 mice, all the same age, everything about as reproducible as it can be, because it's almost an industry-style lab — is a fantastic opportunity for this kind of neuroscience, one that hasn't existed before. So I hope there are some young, eager brains listening to this — in the US, or Indonesia, or Australia, or northern Finland, or Norway, or wherever.
All this data is available, in some sense, to everyone with a laptop. And if you also get the possibility to run these large-scale simulations from all these places, it's an enormous opportunity for neuroscience. Speaker 2 01:20:19 All right. Gaute — Speaker 1 01:20:21 Just one thing I have to say, because it's the reason — I mean, Hodgkin and Huxley essentially made their neuron model by themselves, and ideally I would like to do the same thing for a piece of cortex, by myself or in our group. The reason we have these large-scale initiatives, like the Human Brain Project and others, is really that collecting all this data and building the infrastructure for simulating things is just not a job for a single group. Even just making what you could call skeleton models — candidate models, some kind of plausible starting point — takes many years. So when you take this Hodgkin-Huxley-style approach to, say, a network in visual cortex, you need a large community, many people. It's not that the collaboration has value in itself — even though it can be fun — that's not really it. Speaker 2 01:21:34 That's something you also make the case for in that 2019 Neuron perspective, so I'll point people to that as well. Let's switch gears — I want to ask you one more question before we move on to some extra time for the Patreon supporters.
And that's just your broad thoughts about — a lot of what we talk about on this podcast is the connection between deep learning and AI and neuroscience and cognition and brains, and I'm wondering how you view your simulation-based approach with respect to a deep learning approach to understanding brains and minds and cognition. Speaker 1 01:22:21 Yeah. Obviously, very concretely, the goal of my approach is really to mimic a piece of the brain — and hopefully then expand beyond just the visual cortex to, like, a whole brain. So it has a very different goal. On the other hand, if you want to tune these parameters — well, everything is allowed in love, war, and optimization. If you can get some clues from AI about a starting point for building these networks — some extra hint so that the parameter space you have to search is smaller — then yes. Especially with all this great brain power and, of course, resources going into AI. Speaker 1 01:23:18 We have a bio-AI group at our university, and I collaborate with some people there — partly because it's fun, but also because I think we need to. You talked on an earlier podcast about how we should be more charitable to each other, and I think that's very important — being charitable is a good thing in general, but also in neuroscience. In this task of trying to understand the brain, we don't have too many success stories, I would say.
I mean, it's not clear. So I think people should be very open to other people's approaches — unless people are doing something that is clearly unethical or factually wrong — because who am I to tell? I pursue this biophysically detailed thing — modeling the brain as a physical system — partly because I think it's promising, but also because it's something I know how to do; that's how I was trained. I'm trained as a physicist. If I figured out that what I really should be doing was monkey experiments — even if that were the most promising approach, it would be hopeless. I could never do that. Speaker 2 01:24:43 I'm sure — you can teach an old dog new tricks, but there's a limit. Speaker 1 01:24:47 Yeah, there are limits. And then, maybe on the other extreme, you have Chris Eliasmith with — what's it called — Spaun, exactly. And we should just root for each other and compare notes and see how we're doing. And AI, too, with the people there trying to make connections to the brain — excellent; what, for example, DiCarlo is doing, and the comparisons there — that's exciting. I think we should spend less time criticizing each other's approaches and just try. Speaker 2 01:25:24 Yeah. But those AI models, right, are still essentially built on neuroscience ideas from very early on — like point-neuron kinds of ideas — and are highly abstracted, so in that sense they're at the opposite end of the spectrum from something like what you're doing, creating these simulations.
And do you hope, and/or think, that the simulation-based approach might actually end up teaching AI something — importing some principles into AI to help improve artificial intelligence? Because it should, in principle, flow both ways, right? Speaker 1 01:26:08 I agree. Yeah, it could be. One thing is that these very successful deep learning applications are what you'd call single-purpose — like category image classification — and the field is struggling with transfer. They're not multipurpose, in the sense that one network cannot serve many very different tasks, while the mouse visual cortex is used as input for dealing with many different tasks. So maybe it's something about the neurons — maybe especially the temporal dynamics of real neurons, which is not captured at all in deep networks — that is crucial for getting to this multipurpose thing. So as we hopefully make more progress with these biological networks and move towards these multipurpose models, I certainly think that could be something for the AI people to look at as well. Speaker 2 01:27:16 Hmm. So Gaute, thank you for the thoughtful email which generated this conversation. I'll pass this on to Carina also — I'm sure she'll get a kick out of me having had you on the podcast. And thanks for being with me today, and for sharing some of your work — much luck moving forward. Speaker 1 01:27:34 Yeah — and thanks a lot for the invitation. I really appreciate it. Speaker 2 01:27:54 I alone produce Brain Inspired.
If you value this podcast, consider supporting it through Patreon to access full versions of all the episodes and to join our Discord community. Or, if you want to learn more about the intersection of neuroscience and AI, consider signing up for my online course, Neuro-AI: The Quest to Explain Intelligence. Go to braininspired.co to learn more. To get in touch with me, email paul@braininspired.co. You're hearing music by The New Year — find them online. Thank you for your support. See you next time.
