Episode Transcript
[00:00:03] Speaker A: Even if you assume we have a whole mouse brain connectome, I would hazard a guess that we would have a problem or a challenge to read out any memories from it. And I think maybe that is the frame. We can think about this for one moment. Like even if you have the whole connectome, can we read out a memory from it?
If you want to decode a non trivial memory, then one of the things you want to do is to do more than just identify that this animal has learned something and that here we can find what it has learned. You want to be able to differentiate between different memories.
[00:00:41] Speaker B: We have models of how these systems work. These are model organisms that have had decades of research on them. And so the very first thing that you're doing is you're saying, here is our model of the hippocampus. Here is our model of how place cells and grid cells interact with each other. Now, given that model which could be wrong, what is the experiment that we could do to try to decode the memory based on that model?
[00:01:18] Speaker C: Okay, I'm going to ask the question that I think needs to be asked, which is: non trivial is doing a lot of work here. What does non trivial mean? And actually, I would love if some folks came up to the microphone and offered their perspective on what non trivial looks like.
[00:01:40] Speaker D: This is Brain Inspired, powered by The Transmitter. Can you look at all the synaptic connections of a brain and tell me one non trivial memory from the organism that has that brain? If so, you shall win the $100,000 prize from the Aspirational Neuroscience group. I was recently invited for a second time to chair a panel of experts to discuss that question and all the issues around that question: how to decode a non trivial memory from a static map of synaptic connectivity. Before I play that recording, let me set the stage a bit more here. Aspirational Neuroscience is a community of neuroscientists run by Kenneth Hayworth, with the goal, from their website, to, quote, balance aspirational thinking with respect to the long term implications of a successful neuroscience with practical realism about our current state of ignorance and knowledge, end quote. One of those aspirations is to decode things, memories, learned behaviors and so on.
Decode those things from static connectomes. Aspirational Neuroscience holds satellite events at the Society for Neuroscience Annual conference and they invite experts in connectomics from academia and from industry to share their thoughts and progress that might advance that goal.
In this panel discussion, we touch on multiple relevant topics. One question is what the right experimental design is, or designs are, that would answer whether we are decoding memory. What is a benchmark in various model organisms and for various theoretical frameworks? We discuss some of the obstacles in the way, both technological and conceptual, like the fact that proofreading the connections in a connectome, manually verifying and editing those connections, is a giant bottleneck. Or like the very definition of memory: what counts as a memory, let alone a non trivial memory? This year there were five panelists, including Mihai Januszewski, who is a research scientist with Google Research and is an expert in automated neural tracing. Sven Dorkenwald, who is a research fellow at the Allen Institute and was very involved in the first full Drosophila connectome paper that happened recently. Helene Schmidt, who is a group leader at the Ernst Strüngmann Institute. She's an electron microscopy expert and works on the hippocampus connectome. Andrew Payne, who is a co-founder of e11Bio, which is a focused research organization.
He is an expert in expansion microscopy and viral tracing. And finally, Randal Koene, who's the founder of the Carbon Copies Foundation. Randal is a computational neuroscientist dedicated to the problem of brain emulation. These people take lots of questions from the audience once I do my best to get the conversation going with a few questions. I do apologize that the audio is not crystal clear in this recording. I did my best to clean it up, and I take full blame for not setting up my audio recorder to capture the best sound. So if you're a listener, I would encourage you to check out the video version, which also has subtitles throughout for when the language isn't clear. And mostly those subtitles are correct, but if you're a keen observer you will note when they are not.
Anyway, this is a fun and smart group of people and I look forward to another one next year, I hope. The last time I did this was episode 180 (BI 180), which I link to in the show notes. It was another panel discussion like the one I'm about to play for you. And before that I had Kenneth Hayworth on, whom I mentioned runs Aspirational Neuroscience. I had Ken on with Randal Koene, who was on the panel this time. They were on back then to talk about the future possibility of uploading minds to computers based on connectomes. So that was episode 103, which I also link to in the show notes. And those show notes are at braininspired.co/podcast/227. All right, I hope you enjoy this panel discussion. As you'll see, there are plenty of issues to resolve and plenty of optimism. I'm the only pessimist there who voiced that opinion, which was kind of interesting. But lots of interesting discussion. Okay, I hope you enjoy it.
[00:06:18] Speaker B: Hi.
[00:06:18] Speaker A: So I'm Randal Koene and I run a nonprofit research foundation called the Carbon Copies Foundation.
I used to work at a company called Voxa, working on the high throughput electron microscopy pipeline, doing the software side of that.
They were contracted to the Allen Institute, working obviously on their big project.
And in a previous life I used to model hippocampal and/or episodic memory systems, and also prefrontal cortex TD learning, reinforcement learning sorts of phenomena that you can find there. And now my focus is on trying to validate, for ground truth, the transition from structure to function. So that mapping: building an architecture, deciding how to estimate your parameters, and how you can tell that what you're ending up with actually maps onto most of what the tissue that you have data from is actually doing and cares about, that it has some sort of meaning. So the memory decoding prize is super interesting to me for that reason.
[00:07:22] Speaker E: Hi everyone, I'm Mihai Januszewski. I'm a research scientist at Google. My formal background is in physics, actually, not neuroscience. But what keeps me busy these days is trying to build software to automate connectome mapping. So the goal is to make the data analysis a non issue. It would be great if you could acquire whatever data set of whatever brain you want and have it basically automatically analyzed, so that you can do science with it rather than spend your time and money on stuff like stitching or segmentation. So that's what we are trying to do, and to do this automatically.
[00:07:56] Speaker A: Hey everyone, my name is Sven Dorkenwald. My work is directly adjacent to Mihai's, trying to reconstruct connectomes and solving problems other than the automated reconstruction.
My work has contributed to the FlyWire project and the MICrONS project that you just heard about.
[00:08:14] Speaker F: Hi, I'm Helene Schmidt and I'm from the Ernst Strüngmann Institute in Frankfurt, Germany.
And my lab is working on the connectomics of mammalian navigation. And our aim is to acquire a large-scale 3D EM data set of the full entorhinal cortex and hippocampus circuit.
[00:08:38] Speaker C: Hi everyone, I'm Andrew Payne, co founder and CEO at e11Bio, which is a nonprofit focused research organization. We are a young org that is building tools for optical connectomics to try to make that field move faster. Now that this technology is ready for extracting molecular information alongside the static connectome.
[00:08:59] Speaker A: Thanks everybody.
My question for you is what would it take to decode a non trivial memory from a static connectome?
Now actually, one way to approach this is to ask each of you, given, you know, your expertise and what you do on a day to day basis: is there a specific advance or obstacle that, if you solved it, you believe would increase the efficiency with which that path to decoding a non trivial memory could be traveled? We can talk about what that even means as well.
Andrew, do you want to hazard an answer?
[00:09:42] Speaker C: Yeah, I mean, I think I'll just start by saying that there are a couple of new, I guess, pieces on the board that everyone should become familiar with. The first is that it is now possible to extract connectomic information using conventional light microscopes. And so everyone saw Tavakoli et al. 2025 on the screen. That preprint, definitely check it out. It is the first, you know, bona fide demonstration of extracting connectivity on a cheap light microscope.
Why is that important? It's not just that it's easier to get the information, but it means you can use multiple colors. And when you have multiple colors, you can start thinking about what other molecular details you can co-detect with your connectome easily, in order to more accessibly decode that memory. For example, there is a whole body of literature on tagging c-Fos cells; that's easy to see if you can get the molecular information in the same assay. And so at e11 we just dropped a preprint last month demonstrating how you can read out 24 different protein molecules in the same sample while getting this kind of morphological reconstruction of the neurons.
This could be a game changer. And I'm going to position those two pieces on the board. Maybe they'll come up again as we keep talking.
[00:11:05] Speaker A: I would also just encourage the panelists to argue amongst yourselves as much as possible.
Well, we can argue about what the best way is to view connectomics. We may do that tonight. I think even if you assume we have a whole mouse brain connectome, I would hazard a guess that we would have a problem or a challenge to read out any memories from it. And I think maybe that is the frame we can think about this for one moment. Like even if you have the whole connectome, can we read out a memory from it?
And I think, from working in the fruit fly, one thing that struck us is that in the fruit fly a lot of connectomics work really benefits from having the sensory modalities so close to the brain, and kind of understanding how they relate to the behavior assays that are being done and how those can be related to the connectome. That is a very, very short path to associate those. And I think that allows people to do interesting neuroscience and potentially read out memories. So I think we ultimately need to understand what it means to read out a memory. And I think that has to be related to behavior or to a function of those neurons. So we will need that information to have a chance to read that out. And I do like the songbird. That's the first system that I really got to work with. I think the songbird song is a wonderful modality and the hypotheses are very, very clear. And I think that is one of the lowest hanging fruits that we have: to read out a memory and see if there is actually a chain of synapses and neurons in HVC.
I would guess there's not. And then we have to answer the question of what we do next and how we kind of go about reading that out.
[00:12:48] Speaker F: Well, I think before we can go and read out the memory, there is a step before that: we need to narrow down the search space. What connections are we looking at, what areas, just to narrow down the space. Right.
And then we can think about experiments.
[00:13:12] Speaker E: I'm going to do a "yes, and" on what Helene and Sven are saying. I think they are both right. I think what you want to do, if you want to read out a memory, is to have a learned function which maps some inputs to some outputs. Ideally, you will have the inputs and outputs quantifiable in information-theoretic terms, so you should be able to compute it, and that makes the experiment scalable. But yes, you want to be close to some sort of sensory input, or you want to be able to map your sensory input in the area that you're scanning.
I don't think you need the whole brain for this. I think we know enough from existing neuroscience projects to be able to target specific brain areas, like maybe the lateral amygdala for fear learning, or HVC in songbirds, and so on. It's going to be expensive because of the data analysis challenges, but that's getting cheaper.
I think it's mostly a matter of organization and maybe some microscopy challenges, which I'm not so clear on, but we can also discuss later.
[00:14:13] Speaker A: So for the songbird itself.
[00:14:15] Speaker B: Right.
[00:14:16] Speaker A: What would be the right experiment?
I would imagine that something that would be more impressive would be to decode different songs from different birds. Would that be the right experiment? What would be the right experiment? Just to respond directly to the question: different birds, different songs, I personally don't find that as interesting, because if you want to decode a non trivial memory, then one of the things you want to do is to do more than just identify that this animal has learned something and that here we can find what it has learned. You want to be able to differentiate between different memories.
This is where you start talking about things like how many bits of memory do we want to be able to return?
And that's where it would be very interesting if you can explore and find multiple songs in one bird, for instance.
[00:15:08] Speaker E: So, you know, I'm not a bird neuroscientist, but if I understand correctly, for these zebra finches, for instance, it is one song per bird, right? Is that good enough or not?
I mean, for what it's worth, to stick to the zebra finch example, I think it actually is a good model. And it would be interesting, even for a particular bird, to be able to say: oh, the song that it learned is composed of four syllables, and the lengths of the syllables are this and that. And maybe you can also get some frequency information about the syllables already. That would be interesting. And it's also quantifiable in terms of.
[00:15:44] Speaker A: Bits or whatever you want to use.
[00:15:45] Speaker E: So I mean, I don't think it's trivial at all.
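To make the "quantifiable in bits" framing concrete, here is a minimal sketch of how one might tally the information content of a decoded song description (syllable count, durations, coarse pitch). All of the discretizations and counts below are invented for illustration; they are not figures from the panel, only the bookkeeping.

```python
import math

# Hypothetical discretization of a decoded zebra finch song description.
# Each field is assumed to take one of `levels` equally likely values;
# specifying it then requires log2(levels) bits.
decoded_fields = {
    "number of syllables (1-8)": 8,
    "duration of each of 4 syllables (10 ms bins, 50-300 ms)": 25 ** 4,
    "coarse pitch class of each of 4 syllables (8 bins)": 8 ** 4,
}

total_bits = 0.0
for name, levels in decoded_fields.items():
    bits = math.log2(levels)
    total_bits += bits
    print(f"{name}: {bits:.1f} bits")

print(f"total: {total_bits:.1f} bits")
# A T-maze left/right choice, by comparison, is a single bit, which is one way
# to argue that such a song readout would clear a "non trivial" threshold.
```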
[00:15:48] Speaker A: This is where I get confused about the difference between a behavior and a memory, because an alternate proposition was: could you accurately decode the last 10 bits of behavior of an organism before sacrificing the organism to fix its brain? So how do you guys think about the difference? Because a song is in some sense a memory, because you have to have learned it, but it's also a behavior.
So are those meaningful differences between a memory and a behavior?
What do you think would be more tractable: the last 10 bits of behavior, or a non trivial memory to decode?
Which of those two are we closer to?
At the risk of saying things about songbirds that I'm not aware of.
So I think when you're talking about a behavior, the behavior of course is composed of a number of different steps, for instance, in that sequence of a song. So then you would say, okay, if I can differentiate between different pitches, different interval lengths, things like that, then you're decoding something significant about the behavior that requires learning. Therefore there is memory of some sort.
[00:17:03] Speaker E: Right?
[00:17:03] Speaker A: Because memory in essence is anything where, you know, future is dependent on past.
And yeah, so to me that would be a memory as well, or it could be a learned memory, if you decompose it into those parts of the behavior.
[00:17:20] Speaker E: There is.
[00:17:21] Speaker A: So I will risk this. There seems to be a rise in attention to the cognitive functions of astrocytes so that it's not just all in the neurons and the synapses.
Do you guys see astrocytes, or other cell types, for example, as being important in these kinds of issues, or is the connectome enough, do you think? We have been a little bit reductionist when we talk about what we mean by a connectome. And I think people may think first of the synapses between the neurons, but I really think it is much more.
It is glial cells. It is all the ultrastructure that goes along with it. It's the neurotransmitters that are expressed at those synapses, and all of that. And we may talk about this as an annotated connectome, but I think most of us don't think about this as a binary connectivity matrix. And I think that's very important.
I think the role of glial cells is becoming more and more clear to be fundamental to how the brain works. There is a reason why we have so many more glial cells in human brains, I believe. So I think, yes, we should not disregard any of that.
[00:18:28] Speaker E: I mean, again, yes-and, in the sense that, yeah, they are clearly important. But I think, you know, at least in some of the theories of memory that we would want to maybe verify, the prediction is that the memories are stored mainly in the network of synapses. So then you should ideally be able to ignore the glia. Well, I guess we'll find out once.
[00:18:45] Speaker A: They time the variables.
[00:18:47] Speaker C: I think I'll bite on the temporal component.
When we think about recent work, like for example from Adam Cohen and colleagues on the EPSILON pulse-chase labeling, we know that we can measure, in a molecular assay, synapse turnover and synapse plasticity. And, just around the corner, fearless prediction: we'll be able to detect that information. What is the t minus one connectome? What is the t zero connectome? And, depending on whether we can develop more HaloTag-like dyes, you know, the t plus one. So we can get a couple of different time points, and that gives you information about, you know, which neurons you could push on if you were simulating the connectome that you have observed. But the question is, you know, what model would you use where you could actually use that information quickly? Maybe Sven has a thought on that.
[00:19:47] Speaker A: Yeah, so the temporal aspect. Right. So the goal is to decode from a static connectome. But as soon as you start talking about temporality, I mean I guess it's still technically slices of static connectome in that sense, but you add that temporal aspect to it, I think it still.
[00:20:06] Speaker C: counts as a static connectome. You're reading it out at a single time point, in a single assay.
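As a toy illustration of what multiple "slices of static connectome" could buy you, here is a hedged sketch that diffs two hypothetical synapse tables taken at t minus one and t zero and flags synapses that appeared, disappeared, or changed size, the kind of candidate set a pulse-chase label might point to. The data structure and all values are invented; this is not any published pipeline.

```python
# Toy synapse tables: (pre_neuron_id, post_neuron_id) -> synapse size (arbitrary units).
# These values are invented for illustration only.
connectome_t_minus_1 = {(1, 2): 0.8, (1, 3): 0.5, (4, 2): 1.1}
connectome_t_0       = {(1, 2): 1.6, (4, 2): 1.1, (5, 2): 0.4}

def diff_connectomes(before, after, size_change_threshold=0.5):
    """Return synapses that were added, removed, or changed size between snapshots."""
    added   = {k: after[k] for k in after.keys() - before.keys()}
    removed = {k: before[k] for k in before.keys() - after.keys()}
    changed = {
        k: (before[k], after[k])
        for k in before.keys() & after.keys()
        if abs(after[k] - before[k]) >= size_change_threshold
    }
    return added, removed, changed

added, removed, changed = diff_connectomes(connectome_t_minus_1, connectome_t_0)
print("added:", added)      # a new synapse onto neuron 2
print("removed:", removed)  # a synapse that was pruned
print("changed:", changed)  # a synapse that grew, a candidate site of plasticity
```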
[00:20:11] Speaker B: So are there other obstacles that you.
[00:20:13] Speaker A: face in your day to day work that you think, would solving those obstacles... Sorry. I think the challenges that we are facing are very clearly that our data is extremely noisy. In the end, I think the work in Drosophila, the first work to really compare connectomes, is very important, and it's showing that there are many sources of variability that we are facing, and one of them is technical noise. And we may only be able to suppress technical noise, due to errors, imaging artifacts and all that, to a certain degree. I think it's a question whether we can push that to a sufficiently low degree. I believe so. But the other source of variability is biological variability between individuals. Ultimately we have to define that first, before I think we can understand what real differences are and what may constitute real differences between memories, between different brains.
And to that also belongs the accuracy of synapse readout. And I think our work may have contributed to that a little bit: maybe you only have to read out whether a synapse is large or small, maybe that is sufficient. I think more work needs to be done, but that would certainly be helpful, knowing what level of accuracy is really needed. But I think other labs have done separate work on that in hippocampus, showing that we may need to read out synapses at a level of 10 or 14 bits of accuracy.
So you think the variability itself... It's an open question whether the variability, the stochasticity, is an obstacle, because it's an inherent feature of the system. Well, do I actually see a difference between those two brains, or is it just happenstance noise that happens naturally, like this one brain was acquired in the morning, the other one in the afternoon? Is that enough to create those differences? I think we have to get a handle on this first. So far connectomics has been a field of n of 1, and we are slowly getting to a place where we have n of 2, 3, 4 in Drosophila. I think we really need that in the mouse too, to understand differences between brains.
[00:22:16] Speaker E: So, a follow-up question to yours.
Let's say we don't have this perfect understanding of variability and by the way, memory is a source of variability, right, between different specimens.
So let's say we don't have the perfect handle on and understanding of everything. But you can make predictions about what the organism actually learned, meaning that you look at the connectome and you find some correlations with the inputs that were, you know, associated with some positive or negative feedback. Would that work in your mind or not?
[00:22:47] Speaker A: Are you saying that, as long as you have a model that you can show reads out a difference in behavior from a system, that is enough?
[00:22:56] Speaker B: Yeah.
[00:22:56] Speaker E: Okay. I'm asking you because you said, you know, we have to understand everything first. You have to understand the sources of variability. I'm saying: but we don't understand it. But you show the correlations, and the correlations, you know, are statistically significant.
[00:23:06] Speaker A: How, how do you show statistical significance in this? I think that is the point, right, that you need that error bar. What is statistically significant here?
[00:23:15] Speaker E: Well, okay, but, but you, okay, so you can, you can, you can approach it from like almost kind of statistical theory, right?
[00:23:20] Speaker A: You, you.
[00:23:21] Speaker E: And we're getting into concrete experiments here. But let's say you have an animal, it has some inputs, maybe it's tones or colors or whatever, and you have each individual learn some behaviors. So maybe the combination of A and C is bad, A and D is bad, A and B is good, B and D is good, whatever. You can encode some number of things in some brain area. You take the connectome and then you look at the structure and you make a prediction. Somebody does the analysis and says: A and B good, A and C bad, and so on. Some of that is going to be correct, some of that is going to be incorrect. If you say, okay, but it's only n of 1, then you do more samples and more animals, and you can keep doing this. At some point you will reach statistical significance, however you want to define it. And you will not necessarily understand all the sources of variability, but you will have a statistically significant correlation. Is that not good enough?
That's an open question, right? How do you define this?
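Here is a minimal sketch of the statistics being proposed here, under the simplifying (and debatable) assumption that each connectome-based prediction of a learned pairing is an independent coin flip under the null. The counts are invented; a real design would have to handle within-animal dependence and pre-register the threshold.

```python
from math import comb

def binomial_p_value(correct, total, chance=0.5):
    """One-sided exact binomial test: P(X >= correct) under the chance level."""
    return sum(comb(total, k) * chance**k * (1 - chance)**(total - k)
               for k in range(correct, total + 1))

# Hypothetical tally: each animal learned 4 pairings (e.g. A+B good, A+C bad, ...),
# and an analyst predicted each pairing's valence from the connectome alone.
# (Predictions within one animal are unlikely to be independent; this sketch ignores that.)
animals = 10
pairings_per_animal = 4
correct_predictions = 31          # invented number for illustration
total_predictions = animals * pairings_per_animal

p = binomial_p_value(correct_predictions, total_predictions)
print(f"{correct_predictions}/{total_predictions} correct, one-sided p = {p:.4g}")
# Adding more animals lets even modest per-prediction accuracy eventually clear
# whatever significance threshold was agreed on in advance.
```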
[00:24:20] Speaker A: I want to pose that to the other panelists.
What do you think? Permitting no response.
[00:24:32] Speaker B: Sure, sure, sure.
[00:24:33] Speaker A: We can take my.
[00:24:34] Speaker B: Sorry, I use my privilege.
So I think, when I think about these things, I think that the very first question is we have models of how these systems work. These are model organisms that have had decades of research on them. And so the very first thing that you're doing is you're saying here is our Model of the hippocampus. Here is our model of how place cells and grid cells interact with each other. Here is our model of how a place cell gets formed in a single instance. Now, given that model, which could be wrong, what is the experiment that we could do to try to decode the memory based on that model?
To show that it's wrong or to show that it's right. In terms of the birdsong, it's very concrete. This is not like, oh yes, we could decode birdsong, I want people to understand that. Right? There is a theory, like you said, Sven, that could be completely wrong: that there is a chain of neurons in HVC that sequentially activate, like a timing board, and that those chains of neural activations are then feeding their axons down to a motor nucleus that gives particular tweets. And the textbook model says that if that theory is correct, then this connectome will show what the bird learned. I think that this is the kind of concrete thing that I'm hoping we can discuss.
We know that there are dozens of model organisms that have decades of research behind them, and we can get connectomes of those and decode memories, or fail to, and show whether those theories are right.
[00:26:45] Speaker F: So, because you were talking about place cells and grid cells.
[00:26:50] Speaker B: Please give us a tutorial.
[00:26:53] Speaker F: No, no, I would just propose an experiment, like teaching a mouse a parkour: first go on the bridge, then to the stairs, then to whatever treadmill, and have several stations like this. And then take, whatever, 10 or 20 mice, and they each have to learn a different sequence.
And then we have a model. Well, we need to figure out first where to look, right, exactly what connections to look at. And then we would try to predict what sequence was learned.
[00:27:31] Speaker A: I think for a start, I would start with a very simple T-maze, go left or right, and see if I can read that out of the connectome. Right. I mean, that's not 10 bits, it is not 10 bits, but I think that would constitute reading out a memory. I think I'm enamored of the hippocampal replay that we see in sleep. I think there's evidence that there's something in the connectivity between those neurons that we should be able to read out. And that is certainly something, I think, that would be worth targeting.
So isn't, I mean, just going back to the structure versus function problem, is it an insurmountable problem that, for example, in artificial networks you can freeze the weights and generate different dynamics even with static, frozen weights within the network?
And so there's this problem of multiple realizability and degeneracy in the dynamics of neural activity. Is that not a fundamental hurdle, or is there a way, do we believe, that we can, in some sense by analyzing the structure, have a measure of some capacity of possible functions, for example? Or is it a fundamental hurdle? I mean, to that point: I think what we're missing is a lot of annotations in the connectome, including the biophysics of those neurons, which I think will lead to multiple possible outcomes when you model such a circuit. So yes, I think we have some of those problems. A connectome alone may not allow us to decode that. However, I think work by Philip Shiu on the Drosophila connectome kind of showed that even with a model that is not perfect, you are able to produce activity that matches measured outcomes. And I think that's very encouraging to see.
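For readers who want a concrete picture of "activity from a static connectome", here is a generic toy sketch, not the Shiu et al. model, of threshold-linear rate dynamics run on a frozen, signed weight matrix. The weights, the inhibitory assignment, and the stimulus are random stand-ins for a real reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "connectome": a static, signed weight matrix for 50 units.
# The sign assignment stands in for neurotransmitter annotations (excitatory/inhibitory).
n = 50
weights = rng.normal(0.0, 0.5 / np.sqrt(n), size=(n, n))
weights[:, : n // 5] = -np.abs(weights[:, : n // 5])  # first 20% of presynaptic units inhibitory

def simulate(weights, external_input, steps=200, dt=0.1, tau=1.0):
    """Threshold-linear rate dynamics: tau * dr/dt = -r + [W r + input]_+ ."""
    rates = np.zeros(weights.shape[0])
    trace = np.zeros((steps, weights.shape[0]))
    for t in range(steps):
        drive = weights @ rates + external_input
        rates = rates + dt / tau * (-rates + np.maximum(drive, 0.0))
        trace[t] = rates
    return trace

# Drive a small "sensory" subset and look at the resulting steady-state pattern.
stimulus = np.zeros(n)
stimulus[-5:] = 1.0
activity = simulate(weights, stimulus)
print("mean steady-state rate:", round(float(activity[-1].mean()), 3))
```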
[00:29:22] Speaker E: I think we are being too cautious in this discussion. We are finding potential problems before we have even started or designed an experiment, right? Like maybe we don't know the sources of variability, maybe we don't have the biomolecular information, maybe there is degeneracy. There's an endless source of potential problems. But why not just try? And maybe we fail, in which case, of course, you have to go back to the drawing board and start thinking about those problems. Or maybe it just works. In Drosophila it actually kind of does work: not the memory reading, but reproducing some stereotypical behavior from a not fully perfect connectome and with a lot of missing information. But it was a strong enough, maybe strongly encoded, behavior that it was relatively easy to reproduce. It's just like that. Why not try?
[00:30:08] Speaker A: Feel free also to come up to the microphone to ask questions, anyone. I can hand the microphone out, or you can come up to this microphone in the front as well. I have a question. Oh, great.
[00:30:20] Speaker B: All right.
[00:30:21] Speaker A: In what year do you expect that, you know, we'll see the first decoding of a non trivial memory from a connectome?
1987.
I think this is a good one. Hazard a guess, please, everyone.
I think this will happen once we can reconstruct the whole HPC region, which I think is possible now. So I would say within the next five years we should be able to see that.
[00:30:50] Speaker E: I'll say the same, within the next couple of years, provided we actually start doing this instead of just talking about it.
[00:30:57] Speaker A: Randal, you have to guess.
[00:31:00] Speaker G: Okay, I'll guess.
[00:31:01] Speaker A: I'm just agreeing. I think that yes, the whole point is to just get started. I thought it was wonderful to just say why not try?
So yeah, two to five years is my guess.
I'm going to be really optimistic.
[00:31:13] Speaker F: I agree with the five. I go with the five years.
[00:31:15] Speaker A: Good God.
[00:31:18] Speaker C: I mean.
Okay, I'm going to ask the question that I think needs to be asked, which is: non trivial is doing a lot of work here. What does non trivial mean? And actually I would love if some folks, you know, came up to the microphone and offered their perspective on what non trivial looks like. For example, you know, we know in, like, the retinotectal system that you can sensitize it to a moving bar, and you can read out the change in the compound synaptic currents from that.
And probably if you had the connectome, you could see that in that system. Have we already decoded that memory or is that too trivial? It probably is. But what is the threshold? Can we get some discussion around that? I'll say five years.
[00:32:00] Speaker E: This, by the way, is why I think we should have this quantified in information-theoretic terms. So, a number of bits, and we can argue about the number of bits. Non trivial then becomes a threshold problem.
[00:32:12] Speaker A: I feel like I could make a lot of money off you guys if we just placed a wager on this, because my guess is I'm much more pessimistic than all of you. I mean, it's awesome, and why not try, and rainbows and all that great stuff. But I mean, I would guess 30 plus years.
Will we even have a whole mouse brain connectome before then?
Yeah, well, I mean, like it was mentioned, non trivial is doing a lot of work. And I think that I don't know, semantically, what non trivial needs to mean.
[00:32:43] Speaker E: I would also say no rainbows needed. Connectome EM is grayscale, no rainbows.
[00:32:52] Speaker F: It's probably just a comment, just trying to think about what a memory is. Is the memory indeed just written in the weights of synapses and the structure, or does memory need to be experienced? Like, do you have to feel, do you have to live through the memory? I'm talking about this thinking about astrocytes. If, you know, astrocytes modulate the synapse, then what is the substrate? If this is true, then your synapse sizes and weights and whatnot are not enough; then you need to have the substrate of the astrocytes and how they influence things. And the question is, are we going to find the substrate of how, or whether, this influence is included in the structure of the astrocyte in any way, whether we can actually see it in the structure.
And is it necessary for the experience of the memory? Yeah. Is a memory experienced, or is it just something written down?
Yeah, that's something to think about.
[00:33:55] Speaker A: Anyone want to comment on the nature of memory?
Yeah, so, I mean, obviously there are a lot of details you could get into about what constitutes memory, what contributes to memory, what modulates memory.
But again, you could probably start by just counting synapses, or looking at the size of dendritic spines, or looking at postsynaptic densities, because that's enough to begin to look at at least some memory and to I guess identify a non trivial set of different memories.
Yeah, so. So take for example, receptive fields. We know that we can identify receptive fields just by looking at synapses without taking astrocytes into account.
Then if you combine receptive fields into feature detectors, you can see that as well, just looking at synapses. So I believe there's enough there that you can decode something, even if you can't decode everything, even if later on you discover there's more you need to know. Does anyone else want to comment before we move on to another question? Okay, so I wanted to come back to understand a little bit more what people want to focus on memory-wise, because there are so many types of memories. We discussed episodic memories, where one can talk to someone and they say, hey, I actually remember walking into this room or walking towards the microphone and saying something, which is incredibly difficult and might be much closer to 30 years away. But there are going to be memories which are going to be interesting, which are fundamentally about how the system is built. And from that perspective, I wanted to ask a very obvious question. So let's say that there was a prize, an element that says: suppose we can predict, for example, the activity. Say we run this right now on MICrONS. And we have the capacity to predict how the neurons are going to respond.
[00:35:45] Speaker B: To.
[00:35:48] Speaker A: stimuli, without looking at how the neurons are responding, just looking at their connectome. And let's put a fraction of explainable variance on it: what would be a fraction of explainable variance which would satisfy the committee? 30%, 10%?
Neurons are quite variable, by the way.
Very concretely, what would the committee say from here? What would be the fraction of explained variance that would quantifiably satisfy it?
[00:36:19] Speaker E: We are not a committee, we're just a bunch of people.
In the end, we don't decide this. Maybe we should do a vote in the audience. Let's see what people think.
[00:36:31] Speaker A: Oh really?
[00:36:31] Speaker E: Because, you know... but here's one thing. When we do the memory readout, ultimately, you know, this is going to be judged not by the five people here, not even by the people in the room, but by the wider audience. Right. And by that I mean both the lay audience and other scientists and so on. And whatever we come up with, it has to be convincing to that group of people.
[00:36:52] Speaker A: Basically.
[00:36:53] Speaker C: Yeah.
[00:36:54] Speaker B: I just want to say that is exactly the right question. Okay. That is exactly the right question. And it does not make it trivial that it's like, oh, you're just trying to explain a certain amount of variance. It's like: we recorded from 10,000 cells with calcium imaging, and we have the connectome, and how much of that calcium imaging could be predicted based upon the connectome? If we are right about the theories about how structure encodes function, then that experiment will yield a high level of explained variance. If we are wrong, which we may very well be, then it won't. It tells us something that other experiments have not told us. And that's the whole key to this non trivial; the word non trivial is about moving forward the neuroscience of how structure gives rise to function.
That's really... I don't know what the criterion, what the level would be. That is something for, like, a committee to decide. And it is something for Ken to say, you know, this is where we are. We're not going to decode a memory of a childhood experience in a human. Okay.
And we've already decoded, to a certain extent, receptive fields of single neurons. This really puts our theories to the test, and that's what non trivial means to me, at least. Although I'm super curious about other people's views.
[00:38:34] Speaker A: Just to let you know, if you fit a relatively simple linear network to MICrONS data, you get slightly below 40% of the explained variance.
So the type of theory we would expect, what would be kind of like the standard model of artificial neural networks, doesn't get you as much as you'd expect, at least in our hands. Maybe somebody could beat that.
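The kind of analysis being referenced, predicting a functional property from connectivity features and reporting held-out explained variance, can be sketched generically like this. The data are synthetic and the ridge regression is just one simple choice; this is not the panelists' actual MICrONS analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: rows are neurons, columns are connectivity features
# (e.g. summed input weights from different presynaptic cell types).
n_neurons, n_features = 500, 20
X = rng.normal(size=(n_neurons, n_features))
true_map = rng.normal(size=n_features)
# "Functional" target, e.g. a tuning property measured with calcium imaging,
# only partly determined by the connectivity features (the rest is noise).
y = X @ true_map + 2.0 * rng.normal(size=n_neurons)

# Simple train/test split and ridge regression via the normal equations.
train, test = np.arange(0, 400), np.arange(400, 500)
lam = 1.0
A = X[train].T @ X[train] + lam * np.eye(n_features)
w = np.linalg.solve(A, X[train].T @ y[train])

pred = X[test] @ w
explained = 1.0 - np.var(y[test] - pred) / np.var(y[test])
print(f"fraction of explained variance on held-out neurons: {explained:.2f}")
```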
[00:38:57] Speaker C: Yeah.
[00:38:58] Speaker B: So I have another question for the committee and for the audience.
[00:39:04] Speaker A: So when I think of a not.
[00:39:05] Speaker B: Trivial memory, I think of, I think of something that I experienced.
[00:39:11] Speaker A: And when you think about how neurons.
[00:39:13] Speaker B: Were responding in my brain.
[00:39:18] Speaker A: You need to think about what the neurons represent.
[00:39:20] Speaker B: And so when I relive that memory, we think those neurons that were active when I was experiencing it are sort of replaying it for me.
[00:39:32] Speaker A: In that.
[00:39:32] Speaker B: case, I'm glad we're talking about MICrONS, because MICrONS is a data set where we have function and structure. And that's, like, really critical to me.
[00:39:42] Speaker A: When I think about memory. Right.
[00:39:45] Speaker B: Because a memory should, a non trivial memory kind of requires that we know what neurons represent.
And so is it cheating then to measure the function first?
[00:39:57] Speaker A: Because for me, I can't really see.
[00:39:59] Speaker B: A way that we could decode from the connectome if we haven't, if we don't have a sense of what each.
[00:40:04] Speaker A: neuron represents in the living animal.
I 100% agree, and I think that is partly my point from earlier, which is that in fly connectomics you can get away without functional experiments, because you can associate neurons much better with behavior.
I think in lieu of that you need functional experiments so you can actually associate those neurons with a memory, with a behavior.
So I think drifting gratings are a readout that is certainly convincing. For me, memory is much more similar to what a hippocampal experience looks like, because we know it is replayed in sleep. I think that, to me, puts it at a point where I believe this is actually something that constitutes much more of a convincing memory. If I told that to my friends, they would kind of see that and think that this constitutes a memory that we can decode. Whereas a neuron's response to a moving bar, that seems not to pass the dinner table test.
Hi, I'm also very interested in the topic of memory versus behavior.
As you probably know, there's the recent critique that LLMs and AI can't be conscious. The argument is that they are just prediction machines, reacting to stimuli of input text and predicting output text. But then what is a human but also a prediction machine that reacts to more complex inputs and produces more complex outputs than text? And in that sense, perhaps memory would just be behavior internalized. And I guess another side of that question is: what do you think would be the role of artificial neural networks in trying to uncover the mechanisms of organic memory, and vice versa, what would be the role of uncovering organic memory in developing better AI?
Be brave and just try. Just try.
[00:41:59] Speaker E: I can comment on the experience question. I have maybe a strong argument, but we can think about it, and then you guys can tell me if this makes sense or not. Imagine the zebra finch example.
So in the extreme, you could imagine a system which is basically what I would call EM to MP3. You input a big EM volume of the whole song pathway, one that's big enough to allow you to predict the sequence of the HVC axons, and deep enough that you see the connections from RA to the muscles, so you can actually map individual axons to frequencies and so on. So you literally input it into a big computer program, it runs on many GPUs, and at the end you get an MP3. You play the MP3, and it's the same as the bird's song.
[00:42:44] Speaker A: Great, it works.
[00:42:45] Speaker E: But there is no bird to experience it. Is that a problem? Probably not, right? We did decode the song.
So I would say, you know, the experience is not actually needed. What is needed is the information we are extracting from the system.
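Here is a deliberately simple, minimal version of the "EM to MP3" thought experiment: assume the tracing has already yielded a syllable table with durations and frequencies (all values invented), and just synthesize the waveform. The hard part, going from an EM volume to that table, is exactly what the panel is debating; this sketch only covers the trivial last step.

```python
import numpy as np

# Hypothetical decoded song: (frequency in Hz, duration in seconds) per syllable,
# standing in for what tracing HVC -> RA -> syringeal muscles might yield.
decoded_syllables = [(2200, 0.12), (3100, 0.08), (1800, 0.15), (2600, 0.10)]
gap = 0.05          # silence between syllables, seconds
sample_rate = 44100

pieces = []
for freq, dur in decoded_syllables:
    t = np.arange(int(dur * sample_rate)) / sample_rate
    tone = 0.5 * np.sin(2 * np.pi * freq * t)
    tone *= np.hanning(len(tone))                 # soften onsets and offsets
    pieces.append(tone)
    pieces.append(np.zeros(int(gap * sample_rate)))

waveform = np.concatenate(pieces)
print(f"synthesized {len(waveform) / sample_rate:.2f} s of 'song'")
# The waveform could then be written to a WAV/MP3 file and compared to the bird's actual song.
```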
And I can try to tackle the AI question, but you have to repeat it because unfortunately.
[00:43:02] Speaker B: Yeah.
[00:43:02] Speaker E: Can you say again briefly, what was the AI question?
[00:43:07] Speaker A: Because AI and brains are both neural networks, and you could make a pretty big leap in assuming that how AI encodes memory is similar to how organic brains do. Hugely similar? I don't know. But could there be a link where uncovering one would help advance the discovery of the other? Something like that?
Hugely, I don't know. But could there be link in uncovering one that would help the events of discovery, the other? Something like that?
[00:43:30] Speaker E: I guess in some sense there is an analogy. It would be through the current theories of memory, right? If you assume that the memories are encoded at the synapses between
axons and dendrites, then both correspond in some sense to the weights in the neural network. So it's not just changing the weights, it's kind of like changing the memory of the network. But that's a very high level analogy. I'm not sure if that's what you're looking for.
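A classic toy model where memories literally sit in a static weight matrix is the Hopfield network; the sketch below stores two random patterns via the outer-product rule and recalls one from a corrupted cue. It is offered only as a concrete version of the weights-as-memory analogy, not as a claim about real synapses.

```python
import numpy as np

rng = np.random.default_rng(2)

# Store two random +/-1 patterns in a Hopfield weight matrix (outer-product rule).
n = 100
patterns = rng.choice([-1, 1], size=(2, n))
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0.0)          # no self-connections

def recall(W, cue, steps=10):
    """Synchronous updates: the static weights pull the cue toward a stored pattern."""
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Corrupt 15% of the first pattern and let the frozen "connectome" clean it up.
cue = patterns[0].copy()
flip = rng.choice(n, size=15, replace=False)
cue[flip] *= -1

recovered = recall(W, cue)
print("overlap with stored memory:", int(np.sum(recovered == patterns[0])), "/", n)
```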
[00:43:55] Speaker B: So I want to try again to put you on the spot, Helene.
And this is coming more from my ignorance.
So whereas I understand, or at least I think I understand, the HVC to RA songbird system, and it's pretty straightforward. And I think I understand the visual system, where I say the receptive field of this cell is built up by the sum of the receptive fields of the cells that come before it, with maybe some lateral connections. What I'd really love to hear is: what if I'm a mouse that has learned a path? Now that's a good solid memory that people can kind of wrap their heads around.
Even better than a bird song.
Do we have a theory of how that occurs? I would assume we have several.
And given that theory, a decoding of a learned path, a location, a T-maze, as you were saying: can you imagine an experiment in the next five years that could decode a path that was learned in the past?
[00:45:23] Speaker F: So I think my approach to this would be.
The thing is, we don't know.
I think this is the answer.
That's why we are going to get the whole connectome, right? To see the connections and so on. There are so many models.
[00:45:38] Speaker B: Obviously you're about to say, I mean, we don't know. But do we have a bunch of models that could be right and could be wrong, or do we just not have models?
[00:45:48] Speaker F: No, there's a bunch of models, for example models that make precise connectivity predictions between grid cells and place cells, whether those connections are fixed and non-random, or random, and where in the pathway the learning, or plasticity, is expected to happen.
But we need the connectome to see if it really is the case.
[00:46:13] Speaker B: So, I mean, could you explain to me, and I think other people are interested in this, naively: how could learning a path, learning the spatial arrangement of this room, be encoded in synapses?
I think as neuroscientists that are really deeply involved in these models, we take for granted that that's obvious. It is anything but obvious to most people, including myself.
Is it too complicated to explain how this memory of a spatial path is encoded in these dots, in these synapses?
[00:46:59] Speaker F: Well, you would expect some sequences, right.
Of neuronal sequences that you would probably see. But to be honest, we don't know exactly which connections to look in.
[00:47:11] Speaker B: CA3 and CA1.
[00:47:13] Speaker F: Well, you.
My guess would be CA1. Yes, but still connectivity is really low in CA1. So.
Yeah, okay.
[00:47:23] Speaker B: Okay.
Hi everyone.
[00:47:28] Speaker A: So I wanted to ask you about something that was requested in the beginning, which is to understand what each of you thinks is the biggest bottleneck in getting to the next level.
And what I was surprised about is I didn't hear any of you mentioning proofreading.
[00:47:51] Speaker E: Right.
[00:47:51] Speaker A: And, like, when I have to spend the money, it's still like a thousand dollars per neuron, the last time I checked.
So I'm curious about different takes on that. Or maybe you all uniformly think that that's a solved problem.
I think I'm the last person you'd find who would say that this is a solved problem.
I think, though, it's interesting to think for a moment: what if we didn't have to think about proofreading, what could we do without it? But I think you're right that manual proofreading to this day is the main bottleneck. We have recorded over 10 million manual edits across many data sets, including the fly brain. The recent MICrONS data set has 1.5 million manual edits. You can just imagine the manual work that was done, and that hasn't even reconstructed the entire data set; we are at about 2% of the entire data set. So this is clearly the main bottleneck. But I think Mihai and I maybe disagree a little bit about how far along we are on this path.
Though I still think that many of us are working towards this goal of reducing the need for proofreading, and I think we're making big progress. The irony is that right now we have been scaling our data sets faster than we have been improving our methods. And the consequence of that is that we need to do more proofreading today than we did in the past, simply because our data sets are larger and require more manual labor.
[00:49:17] Speaker E: I'm actually not going to disagree with this. What Sven said is of course correct, and the problem that Anton brought up is the bottleneck. Proofreading is costly and takes a lot of money.
But what we have to keep in mind, and this is again to not be too pessimistic, is that the data sets that you see published today are data sets that were acquired maybe five years ago and have been processed over time. Better technology exists today. If we do the memory reading or any other experiment today, it is most likely going to be done with this better technology, where the proofreading cost can be two orders of magnitude lower. And that's a big difference.
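Back-of-envelope arithmetic for why "two orders of magnitude" matters, using the roughly $1000-per-neuron figure from the question and an invented neuron count for a targeted experiment:

```python
# Back-of-envelope proofreading budget (all numbers illustrative, not quoted figures,
# except the ~$1000/neuron mentioned from the audience).
cost_per_neuron_today = 1000          # dollars, as cited in the question
neurons_of_interest = 100_000         # e.g. one targeted subregion; invented number

today = cost_per_neuron_today * neurons_of_interest
with_better_tech = today / 100        # "two orders of magnitude lower"

print(f"proofreading today:      ${today:,.0f}")
print(f"with 100x cheaper edits: ${with_better_tech:,.0f}")
```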
[00:49:53] Speaker A: My question is sort of drilling down into what non trivial memory is.
So the, the simplest experiment that I could think of is in a fruit fly. You give the fly some sugar on its legs, it will typically extend its proboscis.
You can punish the fly by giving the fly sugar on its leg and then presenting the fly with a bitter substance, and it will learn to no longer extend its proboscis to it.
This is something like four or five synapses; the sensory-motor circuit is five synapses. There are 20 defined neurons that we have genetic access to. And we could do this experiment, I would guess, in three months, and find the structural basis of this particular behavior. So my question sort of is: would this be a non trivial behavior? And if not, are there any non trivial behaviors in flies at all?
I think that is a great point, of course, but you can argue that any olfactory memory that a fly has built already rises to that level. And we have shown we can read out a lot of that. And so maybe we already achieved that with the work in the mushroom body that has been done.
I'm not sure it rises to "hey, what is a non trivial memory?", which I think is the question. I think it comes back to the point of: is this something that can convince your friends at a dinner table? And I'm not sure that making this argument to them would pass that test. That said, I think you can totally make the point: you can read it out of the connectome, you can see the changes, and that is certainly decoding.
[00:51:47] Speaker E: I have a question kind of about.
[00:51:49] Speaker B: Also about the definition of memory in.
[00:51:51] Speaker A: A sense, but in comparison to what's.
[00:51:54] Speaker E: the difference between a memory and learning. So, you know, memories, as has been pointed out earlier, can be episodic, or memories can also be reflected in learned behavior, like skilled behavior.
[00:52:08] Speaker A: So, you know, just as examples, right: probably everyone remembers their first, you know, conference rejection, or like a
[00:52:19] Speaker E: reviewer 2 comment, specifically, as a vivid memory. But it's not a learned behavior. Whereas everyone, you know, people who know how to ride a bike, for example, they don't think about that as a memory.
[00:52:30] Speaker A: So when you decode these two things.
[00:52:32] Speaker E: From a connectome, would that be fundamentally different? How do you think about memory versus learned skilled behaviors?
[00:52:42] Speaker A: I don't have a good answer to that, but I think it's an excellent point.
Yes, I'm sorry Dennis, I can't give you more.
[00:52:49] Speaker B: Yes, thanks.
[00:52:51] Speaker A: No, but I think, of course, in the simplest way: is there a Jennifer Aniston neuron that we can pull out, that we can point to, and that relates to a memory? I think for us, we can't look into the fly's or the mouse's head to see what it's thinking in that moment when we see activity. I think ultimately that is part of the problem here, that we may not know what the mouse experiences in that moment.
[00:53:19] Speaker F: Sorry, not going to go easy on you, since I have a similar question, and I think a lot of us maybe do. It's sort of a two part question, going back to how you maybe would define a memory, as well as thinking about multiple modalities of memory and how you might explore them. Coming from a cerebellar background, I think about things like the vestibulo-ocular reflex, which is a reflex, but you can sort of train it to adapt, to sort of skew that reflex. Would you consider, on, like, more of a molecular scale, could that be considered a memory of the sort of training that was done?
And then my other question is: how would you maybe think about designing experiments for memories that are not just episodic but of other kinds, procedural, or, like, a Proustian memory in the olfactory system? Would you design experiments differently, or would you think about evaluating the data differently, to decode those different types of memories?
[00:54:11] Speaker A: These are very all very good questions.
Okay, we'll keep that one in mind. You guys crunch on those questions and maybe you'll come to an answer. Go ahead.
[00:54:27] Speaker E: I think I'm last in line here.
[00:54:29] Speaker A: So going back to.
[00:54:30] Speaker E: So there have been several structural features.
We were sort of concentrating on the readout of the non trivial part, but we left the structural part aside for a moment.
So there have been several structural features that have been mentioned, like the synapses, the strength of them, maybe the size, the length of dendrites, other neurites, whatever. And I admire the optimism of: let's just go for it and try.
Maybe reading those structural features is enough. But if it isn't, for mammalian systems,
[00:55:06] Speaker A: what would be your next bet?
[00:55:08] Speaker E: Like, what other structural feature do you think would be the next thing to concentrate on, like a plan B, in case it doesn't work?
[00:55:21] Speaker A: For mammalian system specifically?
[00:55:25] Speaker C: Yeah, I mean, just to throw something out there, I will bring us back to the EPSILON system with the pulse-chase dyes. The reason for that is because you can figure out exactly which synapses participated in a memory formation.
This work from Adam Cohen was published in Nature Neuroscience earlier this year. Definitely check it out.
So there is, I think, a lot of implicit value in knowing, okay, which specific synapses participated. That doesn't decode the memory, but it gives you a way to narrow in much more quickly on which parts of the representation are important.
[00:56:05] Speaker A: I think trying and failing is a very important step, and this is to see if we can do it with the synapses and the connectivity that we have right now. I am personally extremely excited by the prospect of having molecularly annotated connectomes, which methods like these might bring us. And I think having annotations of neurotransmitters especially has proven indispensable in fly connectomes. So I hope that adding these kinds of labels, molecular labels, will bring us closer to being able to better model connectomes in mammalian systems.
Nobody here thinks that neuromodulators are an important factor. We have no idea how to deal with them right now.
[00:56:48] Speaker B: I mean, I think what you're trying to get at is that there wasn't a necessity for the size of a dendritic spine synapse, or the size of the PSD, to be well correlated with its function. Now, people have shown this beautifully, that it is in this particular cell type. But, for instance, in the mushroom body it's not immediately clear, at least to me, that there is a structural signature of a memory. And yet we know that there's memory. But what we do know is that there is some physical mechanism that activates these neurons. And so the next level down, I think your question, and I think, Sven, you were saying this, is the molecules, the neurotransmitters, but also the receptors.
If you're counting, and I think, Andrew, you were saying this, if you're counting the AMPA receptors that have been inserted into a spine, that's got to be the true functional connection, because those are the things that are opening up, that give the current that connects one neuron to another, even if the structural size of that synapse is uncorrelated.
[00:58:13] Speaker A: Another question.
[00:58:15] Speaker B: Yeah.
[00:58:15] Speaker A: So we saw in the Spires-Jones presidential lecture on Alzheimer's last night, 10 seconds ago, that even with 4% synapse loss we were seeing pretty catastrophic issues with memory and maybe near fatal pathologies. 4%, right? So that kind of gives us an accuracy envelope that we need to fit inside of, if we assume that that's causally linked to the memory loss. Right.
So do you think that we're on track now to fit within that accuracy, plus or minus that last 4%, to get maybe catastrophic-memory-loss reconstructions, and then, marching forward, accurate reconstructions where we're maybe conserving those memories in synaptic reconstructions?
[00:59:07] Speaker E: I have a question about the lecture, which I unfortunately did not catch. Was there any information on which synapses the missing 4% were?
[00:59:14] Speaker A: Was it random or specific cells?
Probably not random. I mean, I remember there's like this possibility that memory is holographic. Right. And like kind of diffuse across the brain. And then there's of course this notion that like memory is probably not holographic and it's not like uniformly distributed. Right. And it's probably halfway between.
I think 4% is an extremely low bar, and I would be very much worried that we don't pass that bar right now. And I think one of the problems is that we don't know if we have passed this bar, because connectomics is treated as the ground truth right now, and testing it against itself is hard. So I am very much worried about the 4% bar.
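One crude way to turn the 4% observation into a reconstruction target, under the strong and debatable assumptions that missed synapses act like lost synapses and that errors are random, is sketched below; the synapse count is only an order-of-magnitude ballpark.

```python
# If losing ~4% of synapses is already behaviorally catastrophic, then a
# reconstruction that silently drops synapses needs to miss well under 4% of them
# (assuming, debatably, that missed synapses behave like lost ones and are random).
tolerable_loss = 0.04
synapses_per_neuron = 8000            # rough cortical ballpark, order of magnitude only

required_recall = 1 - tolerable_loss
missed_at_threshold = tolerable_loss * synapses_per_neuron
print(f"per-synapse recall needed: > {required_recall:.0%}")
print(f"i.e. no more than ~{missed_at_threshold:.0f} missed synapses per neuron")
# False merges and spurious synapses would need their own, probably tighter, budgets.
```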
Ed, it looks like you're. Were you.
[01:00:00] Speaker B: Yes.
[01:00:01] Speaker C: What he said.
[01:00:05] Speaker G: Okay. I would like to venture an answer to one of Ken's questions, and I have a question to ask that should be easy for the committee to answer.
But I'll venture my answer first. I'm going to propose a result that I think would count as decoding a place cell memory.
Let's suppose I have an animal and I record a bunch of place cells. Let's say I just record 10, and I know their place fields.
And then I do a connectome of those place cells. I reconstruct all the connectivity, and I think the test would be: if you tell me only the place field of one of those place cells, I can predict the place fields of the other nine. And if those are right, I would say that's a non trivial decoding of, well, a minimally viable spatial memory. Because I can decode the other nine, meaning that from the connectivity I can decode, you know, the room, basically. So that's my two cents on your question.
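A toy version of this test, with an invented structural rule (connection strength decaying with place field distance) and synthetic data, is sketched below. It only illustrates the logic of "one known field plus connectivity predicts the rest"; it is not a claim about real hippocampal wiring.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy world: 10 place cells with 1D place field centers on a 1 m track,
# sorted so that cell 0 is the leftmost (the one whose field we "reveal").
true_centers = np.sort(rng.uniform(0, 1, size=10))

# Invented structural rule: connection weight decays with place field distance.
dist = np.abs(true_centers[:, None] - true_centers[None, :])
weights = np.exp(-dist / 0.15) + 0.02 * rng.normal(size=dist.shape)
np.fill_diagonal(weights, 0.0)

# The proposed test: given the connectivity and only cell 0's place field,
# predict the fields of the other nine by inverting the assumed decay rule.
known_center = true_centers[0]
estimated_dist = -0.15 * np.log(np.clip(weights[0, 1:], 1e-3, None))
predicted = known_center + estimated_dist   # cell 0 is leftmost, so others lie to its right

errors = np.abs(predicted - true_centers[1:])
print("median prediction error (m):", round(float(np.median(errors)), 3))
```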
[01:01:20] Speaker B: Can I ask what regions? This is for the hippocampal connectomics expert: what regions of the hippocampus do you think you would have to have connectivity for to pull that off?
[01:01:36] Speaker G: I would give a minimally viable answer, and I would give a more, I don't know, more complex answer, or whatever. The minimally viable one: I think it's probably some part of CA3 that has enough place cells.
But I actually don't think that's going to be sufficient, because these neurons project to the contralateral hemisphere, and who knows whether the place fields, and the prediction I'm claiming, actually extend to the other side. So it might well need the whole hippocampus. And the hippocampus happens to be really big and, you know, has this weird banana shape that spans pretty much the whole brain.
So I guess we're talking about recording a bunch of place cells and pretty much reconstructing almost half, if not the whole, brain.
[01:02:30] Speaker B: But it could be done.
[01:02:31] Speaker A: Excuse me, it could be done, I think. It's possible.
[01:02:38] Speaker G: Okay, I have a totally unrelated question.
If you had to say which model organism is going to be the first in which a non-trivial memory is decoded, and you can only give one answer, whatever your definition of non-trivial memory, which model organism would you vote for?
[01:02:59] Speaker C: Just to echo Sven, I mean, the mouse is going to pass the dinner table test, and maybe it's just knowing left or right, but that, I think, might qualify as the solution.
[01:03:11] Speaker E: But.
[01:03:12] Speaker F: Well, I would go for the trust come true, but it's hard to do.
[01:03:19] Speaker A: I think the songbird is closest at this point, and I think it has the most well-established example of what would constitute a memory.
[01:03:29] Speaker E: I feel like some controversy is needed here.
[01:03:31] Speaker A: All right, then, just to add something: I'll add zebrafish, because we shouldn't forget about the fish, and it's so nice and easy to ground or validate what you believe you found using calcium imaging. So I will go pterodactyl.
[01:03:55] Speaker B: So I wonder if there was a year in genetics that was kind of like where neuroscience is at now, maybe 1970, where you hadn't had a Human Genome Project yet, but you knew there were base pairs and all that stuff. And so I wonder, the question is: to really understand how the brain works, are we just going to have to decode some human brains and have full connectomes? Because otherwise, I guess my question is, can you really understand how human brains work until you do that? And if so, is it inevitable that we'll have to decode some human brain, so that we have that? How else can you get this full knowledge without doing that?
[01:04:35] Speaker A: I think that, as with the genome, there will at first be a whole brain, maybe multiple ones, that kind of serves as an anchor. And then you start comparing to that, and you will not acquire a whole new one after that. Once you understand which regions are relevant to your question, you may start doing just those regions and comparing, and comparing, and comparing. I think that is what we can learn from genomics. However, while we have parallels between connectomics and genomics, there are huge differences. In the genome, you know somewhat what your entities are. In the brain, what are we even talking about: is it the synapse? Is it the size of the synapse? Does it have two states? Is it the molecular level? What is the fundamental level that you actually have to understand to start comparing? So I think at some point the comparison to genomics will break down for connectomics.
[01:05:41] Speaker G: Yeah.
[01:05:41] Speaker C: And just to echo Ken, if the thing you're interested in is testing a hypothesis about how grid and place cells encode information, we don't need to map the human brain in order to start testing that hypothesis, once we have the connectome of the hippocampus, of the mouse hippocampus.
[01:05:58] Speaker A: Or, another question: does it matter that a memory is a process, not a thing, that memories change? When you actuate a memory, you're actually changing it; by reliving it, you do change it, and the fish gets bigger, etc.
[01:06:14] Speaker E: That's not always the case. For the zebra finch, just to dwell on this, the song is learned once, for life, and doesn't change.
[01:06:23] Speaker A: Sorry, what is learned once?
[01:06:24] Speaker E: The song, the song of the zebra finch.
So I mean it's a specific example, but you know at least one where.
[01:06:31] Speaker A: I mean, would that make it trivial then?
[01:06:33] Speaker E: Perhaps you guys can tell me. But there's the EM-to-MP3 example: is that trivial, if you can literally play back the song? I think it would be very hard, but in principle possible.
[01:06:43] Speaker A: I think that part is trivial, just because, well, you asked about process versus static, and it seems to me that, for the decoding prize at least, the process doesn't necessarily matter. It's very interesting to know how learning establishes memory and how memory changes over time, but for the decoding challenge itself, that shouldn't really be a concern, I think. So maybe we're not decoding memory the way I understand memory, but memory the way a decoder understands memory, or something.
[01:07:15] Speaker B: Kind of a naive question that's maybe a little different from the structural connectome, about functional connectomes. If you have a clear zebrafish and you could do holographic stimulation of every neuron, every pair of neurons, and see what lights up, is that going to be enough to decode memories? And would some combination of structural and functional data like that be enough?
[01:07:17] Speaker A: I think to qualify we need the structure, because ultimately the question is: given the structure, a snapshot of it, can you read out a memory? So I think that is needed. But I think having all the activity of the neurons is certainly something that will help us get at that question. And also, to your point about a process versus a memory: we look at a snapshot of a brain in the end, so we look at a snapshot of the memory in just the same way. The memory might have changed, like your taste for spaghetti might have changed over time, but we ask the question: at the point when your brain was looked at, what did you think about spaghetti? And I think that is what we talk about when we talk about the memory that we want to decode. But then you never actually have a memory, if every memory is an instant in time, right?
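As a toy picture of combining structure and function like that, here is a hypothetical sketch: stimulate each neuron in turn, record everyone's response, assemble the responses into a functional connectivity matrix, and compare it with the structural weights. The stimulate() function is a stand-in for whatever holographic-stimulation and imaging rig one actually has; the weights and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
W_struct = (rng.random((n, n)) < 0.1) * rng.random((n, n))   # toy sparse structural weights

def stimulate(j, noise=0.05):
    """Pretend to photostimulate neuron j and return every neuron's response."""
    return W_struct[:, j] + noise * rng.standard_normal(n)

# Functional connectome: column j is the population response to stimulating j.
W_func = np.column_stack([stimulate(j) for j in range(n)])

# In this toy world, function should largely recapitulate structure.
r = np.corrcoef(W_struct.ravel(), W_func.ravel())[0, 1]
print(f"structure vs. function correlation: {r:.2f}")
```

In a real animal the interesting cases are exactly the ones where the two matrices disagree, which is where the snapshot-versus-process question above starts to bite.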
[01:08:33] Speaker B: Memory is an instant.
[01:08:34] Speaker A: I would argue a memory is an instant.
[01:08:37] Speaker B: Memories can change, right? Sure, yeah, I agree. So I've just been wondering about how much of the problem of decoding a memory is actually going from structure to a simulation.
So, for example, if you think about the song: if you want to go to an MP3, you have to basically simulate how these neurons interact and in what sequence they're active. And then you also have to somehow ground that in how they are connected to the muscles and simulate that. So there's a big question of whether we have a good enough model to infer the function from the structure. For example, if you find strong synaptic connections between certain neurons and you say, okay, that's my memory, it could be that those synapses are unrelated to the memory.
Maybe there are experiments where you can show that these are the synapses that change. But still, does this prove that this is storing the memory? If you ask a human, do you remember this or that, or if you ask an animal, do you remember that, what you do is have the animal perform some motor action: the human responds to you, okay, it was like that, or the mouse turns right, or something like that. So you need this model which infers the behavior from the structure.
And let's say we assume that synaptic strength encodes the memory, and we simulate it, and we find, okay, we didn't get the right answer. Then maybe our model is wrong and the memory is still in the synapses, but we would have to have a really good model to simulate how the activity results from the connections. Is that the actual problem of decoding memory?
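To make that structure-to-simulation step concrete, here is a minimal, purely illustrative sketch: a toy weight matrix with a strong feedforward chain embedded in it, played forward with an assumed threshold-linear update so that the stored sequence can be read back out from the static connectivity alone. The dynamics model and every parameter are assumptions; with a different (wrong) model the same weights would not reproduce the sequence, which is exactly the worry raised above.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
W = 0.1 * np.abs(rng.standard_normal((n, n)))   # weak background connectivity
for i in range(n - 1):
    W[i + 1, i] += 2.0                          # "learned" chain 0 -> 1 -> ... -> 19

def play_forward(W, start, steps):
    """Propagate activity through W with a toy threshold-linear update."""
    r = np.zeros(W.shape[0])
    r[start] = 1.0
    sequence = []
    for _ in range(steps):
        r = np.maximum(W @ r, 0.0)     # next-step activity from connectivity alone
        r = r / (r.sum() + 1e-9)       # keep total activity bounded
        sequence.append(int(np.argmax(r)))
    return sequence

# Reading the "song" back out: the most-active unit should walk down the chain.
print(play_forward(W, start=0, steps=n - 1))
```

Going all the way to an MP3 would additionally need the mapping from this activity sequence to the muscles and the sound, as described above.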
[01:10:27] Speaker A: I agree with you that that's super important, whether it's the memory representation we're talking about right now or any other representation. Like, say, the dots of ink on a page of paper don't mean anything without a decoder, right? The retrieval, how you retrieve it, determines what it means.
But for a decoding experiment like this, those things, while very important, don't need to be in the same experiment. If you have a system for which you already understand to some degree, from other experiments, say functional experiments, how the decoder works, how you can interpret some aspect of what these synapses may mean, then I believe you can use a static snapshot to come up with a decoding experiment without having to establish that for each of those memories as well. At least I hope so.
[01:11:19] Speaker B: A couple of points that caught my attention. One has to do with this idea that once you've got a connectome, you can actually have a place cell map. My understanding is that's not that simple, because place cell maps remap based on context. So any given hippocampus is only a connectome in context.
[01:11:46] Speaker A: So that's part one.
[01:11:48] Speaker B: So you know, that kind of addresses this question of what kind of memory can be addressed or what can be coded without understanding the decoding language to pull out.
This just happened there.
The other question is what year are we in with respect to genetics?
And we're not even close to the segments.
I think there's one really fundamental question that is not answered, which, at an information level, has to do with the informational parallel: we don't have a language to understand the dynamics, if there happens to be a parseable semantics of neural dynamics.
You know, just the way we have a machine code in computer science, or the way genetics has the codon. We don't understand in behavioral neuroscience right now what the codon is, you know, what genetics calls proteins. We're beginning to get the idea that, as with DNA, we understand sequences and firing patterns, but we don't understand the way those are contextualized into separate building blocks for what is actually being represented. And the remapping of the hippocampus is a great example: you can have exactly the same connectivity and end up with something completely different depending on the context. Anyway, so, just a comment.
[01:13:24] Speaker A: If you go.
[01:13:25] Speaker F: Regarding your first question, I absolutely agree. Those experiments would have to be done in context, and probably one would have to...
Well, one would have to record the activity of the place cells, so some functional recordings, not just the static connectome.
[01:13:46] Speaker B: So I'm told.
[01:13:46] Speaker A: We are wrapping up the panel.
Does anyone have any final thoughts? Has anyone changed their optimism or pessimism in any regard over the past hour?
[01:13:59] Speaker D: Okay, well thanks for all the questions.
[01:14:01] Speaker A: And thanks to the panelists for.
[01:14:12] Speaker D: Brain Inspired is powered by the Transmitter, an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives written by journalists and scientists. If you value Brain Inspired, support it through Patreon. To access full length episodes, join our Discord community and even influence who I invite to the podcast. Go to braininspired.co to learn more. The music you hear is a little slow jazzy blues performed by my friend Kyle Donovan. Thank you for your support. See you next time.