Episode Transcript
[00:00:05] Speaker A: This is Brain Inspired. Hello, everyone. My name is Paul, and welcome to another special panel discussion episode. I was recently invited to moderate a panel amongst six people at the annual Aspirational Neuroscience Meetup. Aspirational Neuroscience is a nonprofit community run by Kenneth Hayworth. Ken has been on the podcast before, on episode 103, and you'll be hearing from him shortly. He'll discuss with me what the panel was about, what the meetup was about, and so on. Because there were six panelists from a broad range of backgrounds and interests and approaches, I will let them introduce themselves to you after my chat with Ken. But the goal, in general, was to discuss how current and developing neuroscience technologies might be used to decode a nontrivial memory from a static connectome. It's a mouthful, I know, but the discussion was about what the obstacles are, how to surmount those obstacles, and just in general what approaches should be taken, and so on. Just a heads up: if you're watching, there isn't video of the event, just audio. And because we were all sharing microphones and they were being passed around, you'll hear some microphone-type noises along the way. But I did my best to optimize the audio quality, and it turned out mostly quite listenable. I believe I do link in the show notes to the information for all the panelists, and also to Aspirational Neuroscience and Ken. All right, so here is Ken, followed by the panel discussion. I hope you enjoy it.
Long-term memory encoding and connectome decoding. So, listeners are about to hear the panel that I moderated with six guests. And I thought that, as the Aspirational Neuroscience person, it would be good to have a quick introductory conversation with you, Ken, and talk about the background of how the panel got formed, whether and how it's going to continue into the future, and what Aspirational Neuroscience is in general. So, again, well, thank you for inviting me to moderate the panel; I enjoyed the discussion. Six people on a panel was kind of tough sledding in terms of just ensuring that everyone had their time, but they all got a good amount of time, I thought. Anyway, so thanks for having me do the panel.
[00:02:42] Speaker B: Thank you.
[00:02:43] Speaker A: So what is aspirational neuroscience? What is this panel all about? Why did we even have this panel?
[00:02:49] Speaker B: Yeah, so, aspirational neuroscience is an outreach project.
Essentially, the main goal is to bring neuroscientists together to talk about a particular possibility that is presenting itself in the near future.
I'm involved in connectomics. I usually hang around with people that are involved in electron microscopy connectomics. And there has been just this incredible revolution in terms of being able to map relatively larger pieces of brains than we ever have in the past at the synaptic level. For example, over the last few years, an entire fruit fly brain, actually two entire fruit fly brains, have been mapped at the synaptic level. And the Drosophila community is now poring over that very detailed synaptic-level connectivity map and really tying that in with, for example, the attractor dynamics in the central complex that allow the fly to know its orientation in space when it's walking, and decoding the visual system of the fruit fly, et cetera, et cetera.
This connectomics revolution is really not just about the fly.
There's a cubic millimeter of mouse cortex, visual cortex, that the Allen Institute has painstakingly imaged and that is, again, kind of poised to revolutionize our understanding of the mouse visual cortex. One of the panelists, Anton Arkhipov, is working with the team of people that generated that connectome at the Allen Institute to ground his simulations of the visual cortex.
This is only getting started, right? The NIH just funded two groups. One, a Harvard and Princeton collaboration, will do ten cubic millimeters of the hippocampal areas.
[00:05:16] Speaker A: Is that in mouse again? Sorry, yes.
[00:05:20] Speaker B: Yes. The other: the Allen Institute has been funded to do a ten-cubic-millimeter connectome, ten cubic millimeters of the cortico-basal ganglia-thalamic loop. These are incredibly interesting projects, and they're getting to the volumes that neuroscientists have dreamed about having this information for.
And I should say that there is yet another revolution that's about to take over everything. So, electron microscopy is the reigning king right now for mapping connectivity at the synaptic level. But expansion microscopy, out of several labs now, using pan-staining of proteins, has essentially demonstrated that you can do connectomics using light microscopy. And that has two advantages. It has the advantage of being potentially faster and cheaper, which means that instead of doing ten cubic millimeters, maybe you could scale this up to a whole mouse brain. Maybe you could do multiple animals. A lot of people on the panel were saying that doing multiple animals, and having it in line with a whole bunch of experiments, would be the key to being able to really decode these connectomes.
But the other thing is that this expansion microscopy approach means that all of the, and I really mean all of the, key biomolecules could be mapped as well. The people on the panel, as you're going to hear, were saying, well, gee, I don't know if you could decode things without having ion channels. I don't know if you could decode things without having more detailed information about the synapses.
This is actually on the horizon. There are papers out there that have given connectomic-level information using expansion microscopy and then gone ahead and done antibody staining for the presynaptic and postsynaptic proteins. So that sets the stage. So why are we having this Aspirational Neuroscience meetup at SFN? Why are we bringing these people together? Well, the idea is that these technologies are poised to map the nervous system in a way that has essentially never been dreamt of. The key question is, can we use that to test our theories of how learning and memory occur at an individual level, not just to say what the animal in general does, but to say what the specific animal has learned? Can we read the algorithm? Can we read memories off of these connectomes?
The community has been very divided over this. Some will say, of course, that's what the textbooks say.
We need a lot of information. We need a lot more theory. But of course, others will say, no, this is too complex. This Aspirational Neuroscience outreach is really to bring those people that are doing the connectomics together with the researchers that have the information that would potentially be needed to decode the connectome. People like Tomás Ryan, who's a leader in the memory engram literature; people like Anton Arkhipov, who is doing really detailed biophysical simulations of neurons. It's to bring those people together.
[00:08:55] Speaker A: We don't need to really detail how the conversation went. But do you have any reflections on how you felt the panel went? Presumably, you're going to do it again next year; it's supposed to be an annual thing now. So do you have a clear direction of how you might want it to go in the future, thinking about how it went this past year?
[00:09:19] Speaker B: The plan is, if our donor base remains where it is right now, we will be able to have this conference every year at SFN to bring people together. I should also say that we have something that, I think, brings everything together.
It focuses, it's designed to focus, our thoughts on what is necessary, and that's the idea we're putting forward. The Aspirational Neuroscience outreach project is putting forward a $100,000 prize for the first decoding of a nontrivial memory or learned function from a static connectome. Do I know what nontrivial means in this context? No, I don't. This was one of the things that the panel was discussing, but I feel like this is the thing that could focus people's minds. And as part of that, of course, whatever nontrivial means, it hasn't been done yet; almost by definition, it has not been done yet. But if that is the goal, we want to focus on all of the pieces, the sub-milestones, if you will, that are being achieved on the way to that goal. And so every year, until that goal is reached, the hope is to give four Aspirational Neuroscience awards to different research groups that have really made a significant contribution toward it. The four awardees this year all did fantastic research. Let me just give one as an example.
There were actually two papers out of Tony Zador's lab that demonstrated the decoding of a trivial learned task. They could decode, from essentially a measure of the connectivity in a particular part of the brain, whether each mouse was trained to lick left to a high tone or to lick left to a low tone. And they could decode that training by looking at the synaptic connectivity from the auditory cortex to the striatum and saying, if these synapses are stronger over here, then this individual animal learned one of those tasks, and if they're stronger over there, then the individual animal learned the opposite task. And so as you're listening to this panel discussion, there's this idea of, well, is there any possibility that we can decode things? The idea behind these prizes is to give people a very clear example of where we are today in our theories and our attempts to decode trivial memories, to kind of see where it might be leading, toward decoding a nontrivial memory.
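To make the example above concrete, here is a minimal sketch, on entirely synthetic data, of the kind of decoding Ken describes: a cross-validated linear classifier that predicts each mouse's trained contingency from summary measures of corticostriatal synaptic strength. The feature definitions, sample size, and effect sizes are assumptions for illustration, not the analysis from the Zador-lab papers.

```python
# Hypothetical sketch: decode which lick contingency each mouse was trained on
# from two summary features of auditory cortex -> striatum synaptic strength.
# All numbers are synthetic; nothing here reproduces the published analysis.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_mice = 40
labels = rng.integers(0, 2, n_mice)  # 0 = "lick left to high tone", 1 = "lick left to low tone"

# Assumed features: mean synaptic strength from high-tone-tuned and low-tone-tuned
# auditory cortex neurons onto their striatal targets, shifted slightly by training.
strength_high = 1.0 + 0.4 * (labels == 0) + rng.normal(0, 0.3, n_mice)
strength_low = 1.0 + 0.4 * (labels == 1) + rng.normal(0, 0.3, n_mice)
X = np.column_stack([strength_high, strength_low])

# Cross-validated accuracy of a linear decoder; chance is 0.5 for balanced classes.
accuracy = cross_val_score(LogisticRegression(), X, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {accuracy:.2f}")
```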
[00:12:30] Speaker A: Yeah, very good. Yeah, there was a lot of talk about what is trivial and what is nontrivial, because there were different organisms being represented by the different researchers. There was a lot of talk about what might be trivial or nontrivial in various organisms and what might count. And so I found it to be an interesting and fruitful discussion. I know the discussion continued a little bit after the panel as well.
So, yeah, anyway, I think that serves as a pretty good introduction to what people are going to hear. Any final words or thoughts to guide people into the discussion?
[00:13:07] Speaker B: Well, I will say that I think this is just the beginning of a discussion. So as you listen to the panel discussion, and as I was listening to it, I was thinking, okay, there are a lot of people talking about a lot of different things. And so this has not coalesced, but that's exactly what we expect. And the idea is to put forward this challenge prize to really get this conversation going, and to get it eventually to be at such a detailed level that we're not talking about, well, gee, what does a memory really mean, or could this only be done in C. elegans and not in a fruit fly, or something like that, but to get down to particular experiments that are either going to work and tell us something, or they're not going to work and tell us something.
[00:14:03] Speaker A: Very good. Well, thank you for spending the time to help me introduce this. We'll catch you soon, Ken.
[00:14:08] Speaker B: Thank you.
[00:14:11] Speaker A: We have a lot of panelists, so I'm going to ask each of you to briefly introduce yourselves and why you think you may be here, what you're studying, what your area of expertise is, because there's a lot of different expertise here. And then, just in light of what Ken was saying, maybe what you think would be a convincing, nontrivial memory demonstration that would satisfy Aspirational Neuroscience. Anton, do you want to start?
[00:14:42] Speaker C: All right. Hi, everyone. So, I'm Anton Arkhipov. I'm an investigator at the Allen Institute.
So what I do is modeling, biorealistic modeling of brain circuits.
I guess that's why I'm here, to have some modeling perspective on this question.
I'll say something in terms of what could be an interesting, nontrivial memory that could be reconstructed. There might be better ideas, and there are probably a lot of different possibilities, but I would suggest trying to decode images in the visual system. So we study mouse visual cortex, for example, at the Allen Institute, among other things.
Can we try to decode images that a mouse is familiar with from the connectome?
[00:15:38] Speaker A: And what percentage should the decoding algorithm get right for it to be considered a success?
[00:15:45] Speaker C: Above chance?
That would be a good start.
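For listeners who want "above chance" to be concrete: one common way to quantify it is a label-permutation test on cross-validated decoding accuracy. The sketch below uses entirely synthetic connectome-derived features and image labels; the variable names and the number of permutations are illustrative assumptions, not a criterion endorsed by the panel.

```python
# Hypothetical sketch: is decoding accuracy above chance? Compare the observed
# cross-validated accuracy to a null distribution built by shuffling the labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 20))   # assumed connectome-derived features, one row per animal
y = rng.integers(0, 2, 60)      # assumed labels, e.g. which familiar image was trained
X[y == 1] += 0.3                # inject a weak signal so the example has something to find

def cv_accuracy(features, labels):
    return cross_val_score(LogisticRegression(max_iter=1000), features, labels, cv=5).mean()

observed = cv_accuracy(X, y)
null = np.array([cv_accuracy(X, rng.permutation(y)) for _ in range(200)])
p_value = (np.sum(null >= observed) + 1) / (len(null) + 1)
print(f"accuracy = {observed:.2f}, permutation p = {p_value:.3f}")
```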
[00:15:51] Speaker D: Hello, everyone. I'm Zhihao Zheng. I'm a postdoc at Princeton University.
I think I'm here because I've been working on high-throughput EM imaging for acquiring the kind of data sets needed for connectomics for many years. In my PhD, I imaged a whole fly brain. It sounds small, but I think it was the biggest at the time. In my postdoc, I've continued to increase the imaging efficiency of TEM, transmission electron microscopy, and I'm also acquiring a data set for the hippocampus. So those are brain regions closely related to this topic.
For decoding a memory, I think the minimal bar it has to reach is what I would call going beyond a one-to-one correspondence between physiological changes and the behavioral outcome. Right. So I wanted to draw a contrast with some of the earlier, very successful studies, and you can go all the way back to Eric Kandel. Right. The gill-withdrawal reflex: everything is worked out. It's amazing. You have the molecules, the exact synapses, and you have the reflexes, the habituation, the behavior; everything is worked out. So that's one example. And another example is, for instance, fly T-maze learning and fear conditioning. I think all of these are great successes. But one thing common to them is that there's a one-to-one correspondence between the synaptic changes and what we call memory, which is the behavioral outcome. Right?
You have fear conditioning, your synapses are strengthened. You reduce the strengthening of the synapses, you have less fear response. And I think we need to make the next step forward. I guess the example I would give is that I come into this room and my memory immediately forms: the discussion topic, the people sitting here, people I hadn't known before sitting here, the context in which it happened.
It just seems like you can't explain that with just synaptic potentiation and reduction. So, I mean, I guess it comes down to a representational issue: how the information is encoded in the brain in a systematic manner. And I think the next step forward is to break away from the one-to-one correspondence, so that you can decode more than a binary outcome. Sorry, I went on a bit too long.
[00:18:43] Speaker A: No, that's okay. So, following that, Konrad, what would be nontrivial in C. elegans?
[00:18:48] Speaker E: So I guess I'm here because I'm really interested in simulating a whole C. elegans. If you ask me what's nontrivial, say, learning about aspects of foraging feels nontrivial. But I also want to offer something that would feel trivial to me. Like, I can, for human beings, decode their memories by looking at the muscles that they have, because the muscles that they have tell me what activities they typically do, and I can distinguish a swimmer from a dancer just based on how the body is made. And I think we would all be not happy with that. So for me, it's a bit of a continuum. One way of decoding is, if you give me a simulation, I'll be happy; sure, that's decoding. And if you give me just the muscles, probably I'm not happy. And I don't know how to think about the space in between. So that's why I'm so happy about the discussion we'll be having here.
[00:19:40] Speaker A: Well, can you introduce yourself as well?
[00:19:42] Speaker E: Well, I'm Konrad Kording from UPenn, and I'm quite interested in causality.
[00:19:51] Speaker A: Tomás, you are our resident engram expert. I can lead off by saying that. Thanks for making it, by the way. So, yeah, who are you, and what would be nontrivial in your world?
[00:20:02] Speaker F: Good evening. I'm Tomás Ryan. I'm a behavioral neuroscientist at Trinity College Dublin. I believe I'm here because I work on what we call memory engram ensembles. I participated in this discussion at SFN 2019, before the end of the world, and I think this is a fantastic space because it really allowed us to have a conversation, to think outside the box, and to be critical about many of the assumptions that we work with pragmatically in neuroscience. We recently published a paper, myself and my student, Fiona Sullivan, entitled "If Engrams Are the Answer, What Is the Question?", and we were trying to assess our assumptions about what we think about when we think about memory engrams. And I'm interested in engrams as ensembles that may be embedding information in a wider connectome. And the challenge that we were really putting out there was that what we're currently intervening with or manipulating as memory engrams may not actually be the information itself, but may be the vehicles of the information, the biological vehicles of the information. And the challenge, really, is to understand what aspects of these vehicles are carrying the information, so that we can then design experiments that would allow us to decipher what that information is. So, to answer the question of what kind of experiment I think would be appropriate: I think we need to take inspiration from the only successful empirical science of information that we have, which is genetics. And the way we figured out what was carrying core genetic information was to transform the putative substrate from one bacterium to another, showing eventually that it was DNA that was carrying the information as a vehicle and not the protein. And then we knew where to look. We knew how to then chase down what the information might be. I don't think we should be swapping engrams or ensembles between different organisms to find the substrate in this case. My own personal view is that we should look at the relationship between memory and instinct and how that happens across evolutionary time, with the hope of finding the minimal necessary components of what a latent informational structure might look like in the brain.
[00:22:25] Speaker E: So, my name is Dong Song. I'm a professor at the University of Southern California. So the reason I'm here: now, I don't do anatomy, right, but I do a lot of ephys in behaving animals. And also my group, we do a lot of modeling, modeling of the hippocampus with different approaches. One is the biological approach: we try to build full-scale, very realistic models of the hippocampus. And at the same time, we also build input-output, machine learning kinds of models of the hippocampus, in the context of developing a hippocampal memory prosthesis for enhancing or restoring memory functions. So, in the context of the hippocampus, what's a nontrivial task? First, I must confess, I have been working on trivial tasks for a very long time.
[00:23:12] Speaker A: Right.
[00:23:13] Speaker E: For example, the binary discrimination task. In terms of decoding, in terms of retrieving, we literally train the animal to remember a left location versus a right location and try to use the model to predict the activity, the representation, for each location, and then try to use the prosthetic to restore that memory. So we have been doing that for a very long time. So I don't think it's really trivial.
[00:23:39] Speaker C: Right.
[00:23:39] Speaker E: It's still a good proof of principle. But in the end, we also acknowledge that hippocampal function is not for binary discrimination. Right. It's really for the formation of episodic memory. So, for me, in the hippocampus, a nontrivial task should include some kind of episodic memory. There should be location involved, there should be time, there should be objects, there should be sequence. So it's not just a binary discrimination; it's a memory, a memory trace, of episodic events.
[00:24:14] Speaker A: Thanks. Before you start, I'll just say, when you finish, we're going to have a broad discussion, and I don't need to speak at all. If you guys want to talk amongst yourselves, that'd be wonderful, but I'll lead off with a question, so go ahead, Srini.
[00:24:28] Speaker G: Yeah. Excuse me. I'm Srini Turaga. I'm a group leader at Janelia Research Campus.
I guess I'm here because I've been working on modeling connectomes. Before that, I worked on machine learning methods for mapping connectomes. Fun fact: deep learning algorithms, in their new renaissance, were originally applied to connectomics.
I'll one-up Konrad here and say that I think we can get to simulating an entire fruit fly, now that, through Zhihao's work and Ken's work, we'll have maps of entire connectomes, the entire nervous system, for the fruit fly. We're working on models of that. So far, we've done the fruit fly visual system. We've also built a body model, because you've got to embody these simulations. So I think we have the tools; we can do this. And then, to answer the question, I'm trying to think about this from the perspective of scientists looking from the outside at a simulation or an animal. The first thing that we strive for when Konrad and I build these models is to get the average behavior of an animal right, and that's where we're working right now. But what would be a memory? A memory would be some mechanistic basis for the individual response of an individual animal. So individuality, but in a way that's experience dependent. So I think we can use the ideas that you guys are discussing, but phrase them in this form: if I wanted to read a memory, then it has to be some measurement that I make which constrains something in the model, which then allows me to predict the behavior of the animal at the end of the day in a way that reflects its individual experience.
[00:26:31] Speaker A: Okay, thanks, everyone. That didn't take the whole panel time to introduce everyone, so that's good. So I guess I'll just remind everyone that the central focus of the panel discussion is whether we can decode a memory from a static connectome. And what I want to ask everyone, and you can just jump in when you want to, is this: what in your own research and your own expertise is currently holding you back from doing that? What is the obstacle, or an obstacle, or a challenge?
You look like you're going to jump in.
[00:27:09] Speaker E: I'd love to jump in here. So, from my perspective, causality is a big problem. And I think we heard that earlier in your point as well, Tomás, which is, if we want to interpret something, we want to interpret it in terms of what it does, not just as being a correlate of something. And this "in terms of what it does" is incredibly difficult, because neuroscience, by and large, outside of connectomes and perturbation studies, doesn't really tell us much about the flow of causal information.
[00:27:42] Speaker A: Does anyone want to respond to that?
[00:27:47] Speaker G: I mean, I'll say if you want to get it from the level of a connectome, then we need correlated measurements of connectomes, not just one connectome, but now multiple connectomes, to see what differences there are in the connectomes of two different animals with two different experiences. And we're still at the stage of mapping one connectome. We're now going to have a male fruit fly and a female fruit fly. But as we make this faster and faster, maybe we can have two different fruit flies with two different experiences, and we'll probably need more than just two. But that's what we need.
[00:28:27] Speaker A: But eventually the richness of our memories is a very individual thing. So we would need 7 billion connectomes perhaps, or am I overshooting?
[00:28:38] Speaker G: I mean, that's where models, I think, should be able to help.
[00:28:42] Speaker D: I guess I can help by saying that the goal is that hopefully you have tens or twenty of them, and you have some kind of statistically extracted understanding of the principles behind them, so you could say something about the seven-billionth animal. So I was going to respond to that, but I'm essentially going to say the thing that Srini is saying. As a sort of EM imaging specialist, I think we need to increase the efficiency and the scale of acquiring connectomes. And the first step is acquiring data, and then there are subsequent steps: there's reconstruction and then there's proofreading. And Ken was pointing out these big grants over the next five or ten years, and I think the goal is exactly to increase that. People have realized that we need to be able to image a whole mouse brain, and maybe image many mouse brains, and we need to image many fly brains, right? And we need to do experiments with a fly and then image that fly's brain in EM. And these grants are to advance our technology such that we could do it at will. And I think that's the main thing.
[00:30:02] Speaker E: It depends what we mean by decoding the memory, right? Is it decoding a specific memory from a specific individual, or do we also need to decode the individuality of a specific subject? To me, those are totally different questions.
So for the first possibility, I think it's much easier. I think it's much easier to decode a particular memory for a particular subject. But can you say that memory belongs to this individual person? Right. I don't want to go to the question of consciousness, right, but obviously it's related. So we really need to define what we mean by decoding a memory.
[00:30:47] Speaker F: I'm going to be slightly obtuse, and I remember once Noam Chomsky remarking that filming all of the traffic in Manhattan and then quantifying it and modeling based on it would not be the most efficient way of deriving Newton's laws of motion.
And similarly, I think that with the connectomics approach, which I completely endorse in itself, if we take genomics as an analogy, we didn't start sequencing whole genomes until the 90s, and it became high throughput around 2000. That's not how we worked out the genetic code, and it wouldn't be the best way, I think, of working out the genetic code. I think the same is true for neuroscience and memories. And if I ask myself, what is the thing that's getting in the way of us making progress on that?
I don't think it's the tools, although the tools are always improving. We want better time windows of capture for memory engrams. We want better tools for manipulating, damaging, or reactivating them, and so on.
But we're going to need a coherent and testable theory for what the informational structures are once we have confidence of what the vehicle is. And it's not going to come from decoding an entire connectome. It's going to come from some kind of ideas about what are the topological or topographic submotifs that lie within the connectome.
We don't just want to know what is necessary for learning or what is necessary for recall. We want to know what the engram does, or what we think the engram is doing for the organism.
And it seems to me that we should be doing that ultimately in C. elegans, if we really can, or similarly simple organisms. If we don't understand a mouse until we can build a mouse, then we don't understand an engram until we can build an engram. And surely we'll want to be doing that in very simple organisms.
What is it about simplicity? Is it the 302 neurons, because you have the connectome? Well, it makes the connectome easier to manage, but I don't think it needs to be about the whole connectome. You want to identify particular memories that have particular informational content in a subset of the connectome. Maybe you don't need to go as low as C. elegans. Maybe you don't like all the neuropeptides, so there's another animal one may want to use. But we're always going to be limited when we work with mice and rats, I think.
[00:33:26] Speaker E: So, Srini, I want to push you a little bit. I think you feel that C. elegans and fly are almost at the same level, and I just don't think I buy this at this point in time, and here's why not. So, yes, your work, and a few other people working on it, shows that you can do better than chance at saying things about neurons in the fly by looking at the connectome. But in reality, the connectome just shows a structure, and there are things happening: there are lots of molecules, and we don't know what they are. It's clear that there are some things that are missing. So I don't buy statements like, oh yeah, we have the connectome of the fly, so we're at that same level. The cool thing about C. elegans is that we can do the perturbations, so that we can find out what the functions are that neurons compute based on their inputs. And I don't see us being able to do that in the fly anytime soon.
What do you mean by perturbation? Okay, so in C. elegans you can. Andrew Leifer has been doing that for a bit. You can stimulate every individual neuron, and there's tech development happening in multiple labs I'm currently collaborating with that is aimed at basically being able to stimulate subsets of the neurons. You can say the output of each neuron, if we follow the neuron doctrine, is a function of its inputs. And arguably, we don't know how complicated that function is. At some level, if the function is very simple, then we can get very far with the approach of just looking at the structural connectome. If that function is very complicated, with timescales and nonlinearities and all kinds of channel effects, then we need much more than the structural connectome. Maybe if we have the molecules, we can get there. We could do stimulation of the flies, but the power analysis suggests that it gets very difficult as you increase the number of inputs that each neuron has. So, with C. elegans, it's not only that it has only about 300 neurons; it's also that it looks like it only has about, like, order 30 inputs per neuron. And that means that they live in a much smaller space, and that means that it's much more realistic to currently figure out what the nonlinear aspects are of the functions that they compute.
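One way to see the intuition behind this power-analysis point is a toy parameter count: if each neuron's input-output function is approximated with linear terms plus pairwise interactions among its inputs (a crude stand-in for the nonlinearities mentioned above), the number of parameters to identify per neuron grows roughly quadratically with fan-in. The C. elegans figure comes from the discussion; the fly and mouse fan-in values below are rough assumed orders of magnitude, not measurements.

```python
# Toy parameter count per neuron under an assumed linear-plus-pairwise-interaction model.
def params_per_neuron(fan_in: int) -> int:
    # bias + one weight per input + one term per pair of inputs
    return 1 + fan_in + fan_in * (fan_in - 1) // 2

for organism, fan_in in [
    ("C. elegans (~30 inputs/neuron)", 30),
    ("fly (assumed ~500 inputs/neuron)", 500),
    ("mouse cortex (assumed ~5000 inputs/neuron)", 5000),
]:
    print(f"{organism}: ~{params_per_neuron(fan_in):,} parameters per neuron")
```

Under these assumptions, the worm needs on the order of hundreds of parameters per neuron while the mouse needs millions, which is one sense in which low fan-in makes the identification problem far more tractable.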
[00:35:42] Speaker D: I want to push back a little bit more on you. I think you can agree that the fly does something more interesting that we want to understand, beyond C. elegans. And a connectome is a good starting point. I mean, if there is a possibility to perturb and record every single neuron, that's great; in the fly, that would be great. But I think the connectome is a starting point that raises the water and lifts all boats, for everyone, for all technology to move to the next step.
[00:36:13] Speaker E: But the question is, what does that mean, that it raises the boats?
Will we be able to predict rich behaviors of the fly based on the connectome, of which we only have the structure?
[00:36:28] Speaker D: I think you don't know until you can do it.
[00:36:35] Speaker E: Put it this way. So what you're saying is we won't need C. elegans because the fly project will just work out?
[00:36:42] Speaker D: No, I think C. elegans is equally important for the set of behaviors that it does. But I think the fly is a different organism that is equally important. If somebody is interested in studying the fly, which I believe a large fraction of the neuroscience community is, then the connectome is useful.
[00:37:05] Speaker E: Flies are wonderful, don't get me wrong.
[00:37:10] Speaker A: I guess the question would be, is any C. elegans memory nontrivial? Right?
[00:37:18] Speaker E: So C. elegans has a remarkably rich behavioral repertoire, and C. elegans can learn things. For example, it can learn temporal rhythms; it can learn a circadian rhythm, despite the fact that by default it doesn't have one. It can learn aspects of food sources in its environment. I would call that nontrivial. But the question is kind of, where does nontrivial start and trivial end?
[00:37:48] Speaker C: So I want to mention something slightly different, but related to this discussion. First of all, I think we need 8 billion connectomes, but maybe for a different purpose than what we are discussing here in the future.
All right, so like I said in the beginning, I think a good starting point would be to try to decipher some kind of perceptual memory, not necessarily from a full brain connectome, but it could be just the correlates of that memory in, let's say, a particular brain region.
And I want to suggest something that might be even simpler than that, perhaps on the way to that. And I was inspired by Tomás' point about engrams there.
So, in the last few years, many labs have been pursuing these brain-machine interfaces, right, where they use optical tools to train a mouse, let's say a mouse, it can be done in other animals as well, to perform some kind of simple behavioral task by learning to manipulate the activity of a few of its neurons, right? Just a few neurons, let's say in the motor cortex, that the mouse somehow learns to activate on demand, which leads to a reward. Right? So can we decipher that from the local connectome of that piece of tissue, and do it consistently for a few mice? And that also goes back to the question, is that transferable? What is in common between those animals? They're the same species, but different individuals. What is in common there?
So this is not going to answer all the questions, obviously, but I think that would be a good start.
[00:39:43] Speaker A: What currently is holding you back from doing that in your own work?
[00:39:47] Speaker C: Exactly. So what is holding us back? I don't think anything is actually holding us back. Something like that could be done now. For the existing data sets, they simply didn't do that experiment before the electron microscopy was done. Right. But it could be done.
Mice can be trained; that has been shown. A piece of visual or motor cortex can be taken out and processed, and you probably don't even need a full reconstruction of that connectome. You could focus on some identified neurons.
[00:40:18] Speaker A: So you just need to work faster. Is that the take-home?
[00:40:22] Speaker C: It's just, go ahead and do that experiment. If someone wants to do it, I think it's doable right now.
[00:40:28] Speaker E: I want to add something to it.
When you talk about memory, keep in mind, memory is a behavior, right? It's a very high-level function. A connectome is a very low, no offense, but a very low-level kind of mechanism. There's definitely something missing in between. I would argue that would be ephys. So before we move to the whole connectome, to the whole emulation of the behavior, how about we simulate a single neuron based on the 3D reconstruction of that single neuron and see whether we can replicate its functional properties in terms of ephys: in terms of, when it receives a lot of inputs, what kind of output spikes it will generate.
[00:41:18] Speaker G: I like the spirit of this question, and I think it also relates to the level at which Konrad is sort of asking about the C. elegans connectome as well: let's try to understand, at the microscopic scale, how the components work. While I like that, I think our goal, at least my goal, has always been to try to understand how this network as a whole gives rise to behavior.
And that's a network phenomenon.
And then there's, of course, the body itself and so on. But this collective phenomenon, you can ask the question which of the details of that single neuron really mattered to that? And how important are the details of the single neuron versus the connectivity of the network and the structure of that?
And my bet has been that it's the connectivity that matters more. And we have some recent work that tries to investigate this. When you think about machine learning, there's the architecture of the network, and then there are the component single-neuron properties that you use. And what matters a lot is the architecture; what matters a lot is how you combine them together in a network. And I think we need to sort of go from the microscopic level to the macroscopic network level and the behavior level. And I don't think making more measurements at the microscopic level alone is sufficient to transcend these scales. And so a lot of our work has been about how you bring in constraints or measurements also at the macroscopic scale, not just making more measurements at the microscopic scale: how do you combine these measurements at different scales into a single model and then use that? And that may not give you the information about which neurotransmitter is being used, which molecular mechanism is being used by that circuit to generate the behavior that it does.
But you may have effective parameters that can allow you to bridge these scales. And to me, that's more important.
And this is kind of why I think it doesn't necessarily matter whether we can do single-cell perturbations in the fly connectome.
If we can measure lots of behavior, and if we can measure lots of, you know, network dynamics at some larger scale.
[00:44:11] Speaker D: I have a sort of experiment that I think can support that, and I would use the fly central complex as an example. I mean, Ken mentioned attractor dynamics in the hippocampus, but I think a simple attractor dynamic that we all know is the fly heading direction. In this work by Vivek Jayaraman and Gaby Maimon, you have the dynamics, and we have had the theory for many, many years, and now we have the connectome. But I think one more thing that's needed to make the bridge from single neurons to that connectome is biophysics.
We feel like we haven't totally nailed down the mechanisms of the attractor dynamics in the central complex, because we don't know what actually contributes to the excitation and what contributes to the inhibition, and how excitation and inhibition interplay to generate the attractor and to maintain the attractor. And I think more biophysics could sort of bridge this understanding from single neuron to network. You just measure how much of the input-output is contributed by each component, and that could totally nail down the mechanism that's responsible for this attractor dynamic in a real living system.
[00:45:42] Speaker E: With the advance of machine learning, right, people don't have to do very detailed biophysical modeling. People can just get many, many samples of 3D reconstructed neurons, and also get a lot of ephys data from those neurons, and let machine learning figure out the mapping. Maybe that can be used as a translator, translating single neurons to functional units.
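As a rough illustration of this "translator" idea, the sketch below fits a regression from hypothetical morphometric features of reconstructed neurons to a hypothetical electrophysiological property. The feature set, the target, and the data are placeholders; a real pipeline would extract morphometrics from the 3D reconstructions and ephys features from recordings of the same or matched cells.

```python
# Hypothetical sketch: learn a morphology -> ephys mapping with a generic regressor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_cells = 300
# Assumed morphometric features per cell, e.g. total dendritic length, branch count,
# soma depth, spine density (standardized, synthetic values here).
morphology = rng.normal(size=(n_cells, 4))
# Assumed ephys target (e.g. input resistance), weakly related to morphology plus noise.
ephys_target = morphology @ np.array([0.8, -0.5, 0.2, 0.4]) + rng.normal(0, 1.0, n_cells)

model = RandomForestRegressor(n_estimators=200, random_state=0)
r2 = cross_val_score(model, morphology, ephys_target, cv=5, scoring="r2").mean()
print(f"cross-validated R^2 for the morphology -> ephys mapping: {r2:.2f}")
```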
[00:46:12] Speaker C: But ultimately, if we want to do.
[00:46:13] Speaker E: That, the central thing that we need to know is how complex the thing is that we're trying to approximate with machine learning. That's what determines how much data we need. And I think on the panel, we probably have a lot of disagreement on how complex different animals are. So I think humans are just mind-bogglingly complicated; that might just be because I'm not very good at understanding them.
But still, flies just seem unbelievably complicated, and even already with C. elegans, I'm not sure if I have any chance to understand it. And then the question is kind of, how many parameters do we need? And if there are significantly more parameters than measurements we can make, then there will always be a set of models that are all compatible with what we have. Now, that leads us a little bit away from decoding memories, but if we talk about understanding how a system computes, it leads us down that path.
[00:47:11] Speaker A: But getting back to decoding memory, let's say in C. elegans. So there's a constraint on the types of behaviors that they do perform, and you mentioned a handful of them. Does that give you some hope that, despite not being able to understand C. elegans, whatever that means, you could decode those twelve, what you call nontrivial, behaviors, memories?
[00:47:36] Speaker E: Yeah, with C. elegans, and this is, please don't quote me on it, but I'd imagine that the number of dimensions that C. elegans can learn might be like 100 dimensions or something. I don't know; I might be many orders of magnitude off. But there's no doubt that humans can learn an absolutely mind-boggling number. I mean, people can learn books by heart and things like that. So in that sense, how hard it is to read out behaviors depends on how many parameters kind of go into those behaviors.
[00:48:10] Speaker F: In this forum, we all seem to agree that the connectome is the implementational substrate that we want to be looking at. There are people outside of this room who would argue it may be other levels of biology, but within the connectome world, on the one hand, we're studying synaptic properties and what is necessary for the plasticity that might underlie a specific engram. And then we have this wide map of different extents to which we can model the connectome. And the problem, it seems to me, is that we don't have any good way of defining the questions that we want to answer in between a whole connectome and what's happening at a synaptic level. We don't really know whether whatever the engram is, it is truly distributed and fragmented, or whether it exists in modular components in different parts of the connectome. More than that, we don't really know whether the components of the engram we want to be dealing with are intrinsically storing information somehow, or contain information in a way that is entirely relational and that only has any meaning with reference to the world from the perspective of that organism. And unless we start to get concrete about these different possibilities and how they map onto different alternative possibilities, we won't be able to design testable experiments to make sense of this. And I worry that when we talk about mapping in vivo physiology, synaptic input-output, at whatever level of complexity, onto the connectome, what we're doing is just studying biochemistry in parallel with the phenotypes of individual cells in order to find how latent information is stored there. But that didn't work, again, for genetics. We needed to have more concrete ideas that were testable in a different way.
[00:49:58] Speaker E: So just briefly to push back. You said we all believe that the connectome is, like, the correct description. I believe that if you don't also give me the molecules, you can learn very little from it.
[00:50:10] Speaker F: But could we go without the molecules if we had an artificial connectome that could behave the same way as a biological connectome, but without those molecules? Because the question is whether the molecules are simply there for operational activity in the connectome, plasticity and so on, but that the information is solely embedded at the level of the network structure.
[00:50:32] Speaker E: I absolutely believe that things like channel densities carry information and contain significant parts of the memory.
[00:50:40] Speaker C: Well, so in my opinion, if you want to simulate the brain or part of the brain faithfully, you need more than the connectome, right? You need all the electrophysiological properties of neurons, their connections, ultimately how they are modulated by the mind-boggling complexity of neuropeptides and everything. Right? So in principle, we need to know all of that, and the connectome is probably not enough. It's possible that we might be able to learn associations; say, from the morphological reconstruction in the connectome, I can tell which cell type a neuron belongs to. And if I have characterized those cell types previously across all types of conditions, I might be able to use that in my simulation. But just the connectome by itself, I think we probably all agree, is not enough. However, the particular question we are discussing here is what we can decode from a static connectome that probably can be acquired right now. Yeah, personally, I don't know. Actually, that's an open question in my mind, too. I suspect that a few of the things like what we were discussing are possible to decode, at least above chance.
But it's also quite possible that some of that information is really at a level that is not resolvable with the current technology, like, you know, in the phosphorylation states of proteins or something.
[00:52:10] Speaker A: Let me just interrupt and just remind the audience to ask questions at will, whenever you want to come up to the mic or raise your hand and I'll yell at you.
[00:52:20] Speaker G: Yeah, I want to ask a question.
I'd like to understand this idea that in the genome, we didn't really need to understand and sequence the entire genome to understand how it works.
How do we bring that idea down to memories? Because the way I think about it, a memory doesn't exist in the abstract. There's some representation of the sensory input that's potentially unique from animal to animal, individual to individual, and what a memory is, is the delta between where you were when you started and where you ended up after that experience, similar to maybe SNPs, single point mutations, that might be different between your genome and mine. And so we needed some baseline to figure out where that difference is, and then that comparison helped us. Now, for connectomes, or for brains, if each individual has a different baseline, then we need to figure out how that baseline works. Or if we know how to map an input all the way to the memory module and compute from the connectome of the rest of the brain what that sensory representation might look like, then we may have some way, for that individual, of mapping what that memory of a particular stimulus might look like, and try to decode that. But I just want to get a feel for what I'm missing. I'd love to learn.
[00:54:11] Speaker F: So I would argue that we can learn a lot from the genetics analogy, because we were able to learn that there are modular structures in our genome, which are genes, which are implemented in the vehicle of DNA. But within that structure, within the chemical structure of DNA, what emerged in evolution is an arbitrary informational code that was not determined by anything about the chemistry or biology of that molecule. The RNA sequence, the mRNA sequence, is directly complementary to the DNA sequence. So that has a physical mapping: you go from DNA to RNA because of a structural complementarity. But then something else happens, because the way that amino acids are connected to tRNAs is based on an arbitrary mapping that is not structurally determined by anything to do with the chemistry of the tRNAs or the amino acids. It is ubiquitous. So the chemical connectivity between an amino acid and a tRNA is generic to all of the combinations that the genetic code triplets have with the amino acids. We just keep them consistent for the arbitrary reason that we need to be able to reproduce with one another as organisms; we need to have the same genetic code. But you could rewrite the genetic code any way you want. It's plug and play. So at some point, we had this semiotic informational transformation, where it stopped being just about chemistry and structure, and it started to become about a biology of information. And it's not that all of genetic information is embedded in our genes, because, of course, we have developmental systems, and what happens innately is a product of cells interacting with one another. So we also have dispositional information in our genome. But at a very basic level, we have these units, segments of DNA coding for protein, as latent informational structures that could be read by aliens from another planet, even when we're all extinct. And it is objectifiable information based on a subjective code that evolved for specific reasons in our evolutionary history. Now, in order to take some of this reasoning to how we study the brain, and Michael Gazzaniga has made this point very nicely in his book, The Consciousness Instinct, we need to be able to come up with ways of interpreting, in my opinion, subsets of connectomes in a way that we can make predictions about what they are doing for behavior, and then test those predictions in functional experiments. I completely agree with you: the engram is the delta between your brain before an episode and your brain after the episode. The problem is there's a lot of delta going on in your brain, because you're learning lots of different things, your brain has to do a lot of work to stay alive, you're aging, degenerating, and so on. So the empirical challenge is separating out all of the irrelevant delta, all of the delta for other memories, and all of the delta that's just homeostasis, from the true changes that are, in fact, laying down information. That is the engram of interest.
[00:57:38] Speaker E: I don't understand the RNA analogy. In the case of RNA, the decoding is the same across all our cells. In the case of neurons, the decoding is different for every single cell that we have. Why does the RNA analogy apply?
[00:57:52] Speaker F: The point about the RNA analogy is that we go from structural determinism to a degree of semantic arbitrariness in how the code works. There is nothing physically determined about the relationship of our 20 amino acids with the triplet codes that code for them.
And something like that is going to be happening at many more levels in the brain, by completely different mechanisms. We don't currently have any kind of a theory for how we would even go about doing that, but it doesn't seem to me plausible that the way our engrams are expressed and interpreted, physiologically, not behaviorally and cognitively, is a physically deterministic thing. At some point, we evolved internally restricted rules that became consistent in animals at some point in our evolution. We need a way of understanding that.
[00:58:52] Speaker A: Okay, we have a couple questions. Can you hold on 1 second? Ronald, there's a question right here.
[00:58:56] Speaker E: Right.
[00:58:57] Speaker H: So, I actually wanted to raise two questions.
The first one relates to C. elegans. So how would you define a nontrivial memory? We know that, for example, a behavior like the circadian rhythm that you mentioned could be driven by a very simple chemical reaction, like redox reactions; it has been shown before in red blood cells, where a very simple reaction can do it.
[00:59:27] Speaker G: At the circuit level.
[00:59:28] Speaker H: How would you define a non trivial memory?
[00:59:33] Speaker E: I can't define nontrivial very well. I have this problem also with artificial scotomas, which were like the big thing about decoding relevant information from connectomes when I was a young PhD student. I can't find a criterion that cleanly cuts between that and more interesting things. So, yes, I take it back; I don't want to strongly say it's nontrivial.
[00:59:58] Speaker H: So the second question is related to the deep learning parallelism.
So, from the work of Anthony Zador, for example, the genomic bottleneck and learned behavior, I wanted to hear your opinion about that. So it could be that there are two different learning algorithms: one that optimizes a circuit for a certain motor sequence, and then shuffling between these motor sequences and learning them. How are you going to draw this shuffling from a static connectome?
[01:00:33] Speaker G: Yeah, I think that's a great question. I think it does point out that we need to understand first what the baseline behavior is. We have to correlate the connectome and the behavior to the genome and figure out which things are genetically defined and which aspects are learned through experience. And until we have lots of measurements, we won't be able to decorrelate them. We just need more measurements.
[01:01:04] Speaker H: Right, I've heard some really interesting debates between all of you about what is a nontrivial memory and about having a connectome that isn't too complicated to deal with.
And I'd love to throw out an idea here: how wrong is this, or how right is this? How beneficial or not beneficial would it be if, instead of C. elegans, fruit fly, mouse, let's say you take neurons cultured on top of an electrode array, where you can see everything through a microscope, you can record, you can stimulate, and now you take, for example, your connectome data, and you're trying to deduce what the things are that we see here? Either you could use the correlational approach to a nontrivial memory, or the decoding circuit; you could figure that out as well.
What could you get from that? What could you not get from that? Perhaps you could figure something out about what matters and what doesn't, a sort of scale separation issue. I would love to hear your opinion about that.
[01:02:08] Speaker C: So I think that's a great idea; it would be very interesting and useful.
But ultimately, we want to be doing this with real brains. Right. And I think the big question there is how well it translates to a real brain, because I don't think we know very well how, in such cultures, in organoids, the connectivity is organized. It might be similar to real brains. It might be vastly different.
Personally, I suspect that it's vastly different. And also, you don't have a lot of the systems, neuromodulators, for example, that orchestrate a lot of function, including memory, in the real brain.
I think it would be a very interesting research question, and a program, and ultimately comparing that with real brains would be fantastic, in my opinion. We don't know currently whether it's really going to translate.
[01:03:07] Speaker D: Well.
[01:03:09] Speaker E: To me, I think it's less important whether the cell is in culture or in the body. Right. To me, I think it's more important to find out whether there's a clear relationship between the structure, the 3D structure of the cell, and its functional properties. So I don't think that will change a lot from culture to in vivo. Even if it changed, as long as there's a one-to-one mapping, that solves the problem; you can always translate structure into function. But I really think that's a very necessary step, mainly because I'm doing ephys, right, and I feel very uneasy about going directly from connectome to behavior.
But I want to give this to Ronald.
If we can produce a small set of neurons, say three neurons interacting with one another, then at that size we can probably identify how they interact with one another. And we could probably use a definition of learning where you can say to which level the connectome reflects the learning processes that have happened before, maybe the inputs we give to one neuron. If we could do that, I would be very satisfied with it, personally. My belief is that if you just look at the connectome, you will have much less than if you also give me, say, the densities of various molecules. But indeed, this is something that we could absolutely do on a small circuit. And I, for one, would be very excited about this experiment.
[01:04:38] Speaker H: Thank you.
Okay. I might be naive in saying this, but I think there's some overcomplication, a little bit. So could you take, for instance, C. elegans, which is like this simple model organism, and then look at circadian rhythm as a biological function, and then see how the connectome changes over time for different light effects? Could you get a memory of how, I guess, C. elegans is performing in a specific light situation, and then that would be a source of where you find an engram?
[01:05:16] Speaker E: I'm not sure I understand. We could have animals that have a long circadian rhythm versus those that don't, and we can look at the connectome, and maybe it's different, but where's the extra level of complexity that we can take out of it?
[01:05:30] Speaker H: I guess I'm thinking of the simplest situation, right.
To start to actually have a verified way of saying, this is a memory that we're able to derive, and it's without question related to a difference in a light condition, for instance.
[01:05:52] Speaker F: Right.
[01:05:56] Speaker H: Obviously, the more complex ideas are really good, too, but to start small before getting bigger, because obviously, when you get to a higher level organism like a human, the level of complexity is just enormous.
[01:06:11] Speaker E: So I'm not 100% sure I understand the question. There are two pieces. One is, can we establish that light makes a change in the connectome that we can see? Yes. Versus the second one: is the change in the connectome that we see ultimately what gives rise to the behavior that we see? The first part is much easier, because we can randomize the light condition. The second part is very complicated, because it's a very nonlinear, recurrent system.
[01:06:41] Speaker C: Okay, so a comment on this one, just in general, on C. elegans. There was a paper in Nature a couple of months ago. I think, to my shame, the name of the authors escapes me. What's that?
[01:06:57] Speaker D: Andrew Leifer.
[01:06:59] Speaker C: Yeah, right.
[01:07:00] Speaker D: The first author is Randi.
[01:07:02] Speaker C: Thank you. Perfect. Yeah. So what they showed, not even thinking about behavior but thinking about neural activity, is that their experiments and modeling indicated the connectome is not enough. There was a lot of volume transmission.
[01:07:22] Speaker G: Right.
[01:07:22] Speaker C: So non-synaptic communication between neurons, which, according to that study, really changes how the activity is organized, very substantially.
And so, yeah, according to them, the connectome is not enough; to our earlier point, physiology, molecular diffusion, and things like that are really important, I think. It doesn't mean that memories cannot be decoded from the C. elegans connectome or whatever other connectome. It just means that a lot of things may be much more complicated, but maybe some things can be decoded.
[01:07:56] Speaker E: I'm still thinking about how we define decoding a memory, right, this fundamental problem. So I think sometimes when we say that, we literally mean we decode a memory, which almost implies some kind of discriminative model, and that is much, much easier.
[01:08:12] Speaker D: Right?
[01:08:13] Speaker E: But sometimes we mean something like recreating a memory. I think that's a lot more complicated than decoding a memory.
So for the question, can we decode a memory from a connectome, based on what Ken presented today, I think the answer is already yes; those papers already indicated that some simple memories can be reliably decoded from the connectome. But whether we can recreate it in a real simulation or a real animal, that's a totally different problem.
[01:08:45] Speaker A: Ken, this better be good, or I'm kicking you out.
[01:08:47] Speaker H: Okay, that's fine. So I want to call into question this idea that every molecule is precious and we don't know anything about neurons. I mean, I've seen Anton's work, and I believe you do biophysical modeling of whole neurons. To put this in perspective, there's this great paper that was looking at monkey brains and their responses. Everybody's seen this figure: within 250 milliseconds they can respond to a complex visual stimulus. That means they only have about ten milliseconds for each stage. Essentially, this means that it's electrophysiology. This is not memory. This is not gene expression or diffusion or anything like this. It means it's the electrophysiology of neurons. It means dendritic integration, which might...
Obviously, ion channel densities are incredibly important. But do we know nothing about ion channel densities? Because if you were to say we really have no idea about the ion channel densities of particular cortical neurons, then how the heck do you ever get a visual neuron simulation to give you responses? If I look in the literature, I see people actually decoding trivial memories, like the receptive field properties of a visual V1 neuron, and they're not saying, holy crap, it has nothing to do with the synapses. It's, yeah, it's being built up by the synapses. So I want to ask a concrete question: what are those specific ion channels that we don't know about, that are varying in a way that is unpredictable?
And if we don't have that, then what are the experiments that we can do to get those? Because I think that we can, especially with the expansion microscopy.
[01:10:51] Speaker C: All right, so I'll start. I think we know a lot about ion channels and their densities, although, as far as I know, we don't know enough. There is a lot that we don't know. There is a lot of complexity.
For one thing, ion channels come in all kinds of combinatorial assemblies. Right?
But we know it's very complex. Also, what we know, for example from modeling work, is that you can get the same answer for the activity of a neuron given a stimulus with many possible combinations of parameters. And honestly, that's how it works at this point, right? We are not saying, well, there is this much of this ion channel and that much of that ion channel, and these are their distributions along the dendrites, because we don't know that for hundreds of them.
We assume something like ten different conductances with properties that are roughly known, and then we optimize them, and we can get multiple solutions that all give you the same result.
So we get a working model that probably has some relationship to reality, but it's not necessarily a one-to-one relationship.
But all of that being said, I would totally agree with you.
We know that a lot happens at the synapses. We know, like that paper from Kevin Martin's lab that you showed, which won the prize, it's great, and things like that have been done before. But as that work really sort of nailed it, there is a relationship between how strongly these neurons are connected and what we can learn from the connectome, and not even the full connectome, just a partial electron microscopic volume.
So, personally, I believe there is a lot that we should be able to decode. But there are molecules, right? There might be protein aggregates and assemblies, or maybe even free floating proteins with some kind of phosphorylation states that determine modulation of that synapse, which may very well have an effect. We just probably don't know yet.
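A toy illustration of the point above about multiple parameter solutions (not any panelist's actual model): in a standard single-compartment Hodgkin-Huxley simulation, many different pairs of maximal sodium and potassium conductances reproduce the same spike count for the same input, which is the degeneracy being described. All numbers are textbook defaults or arbitrary choices.

```python
# Sketch of conductance-parameter degeneracy: sample (gNa, gK) pairs and keep
# those that reproduce a reference neuron's spike count for a step current.
# Standard Hodgkin-Huxley equations; units are the usual mS/cm^2, mV, ms.
import numpy as np

def spike_count(g_na, g_k, g_l=0.3, i_amp=10.0, t_max=100.0, dt=0.01):
    """Forward-Euler HH simulation; returns the number of upward 0 mV crossings."""
    e_na, e_k, e_l, c_m = 50.0, -77.0, -54.4, 1.0
    v, m, h, n = -65.0, 0.05, 0.6, 0.32
    spikes, above = 0, False
    for _ in range(int(t_max / dt)):
        # Standard rate functions (voltage in mV)
        am = 0.1 * (v + 40) / (1 - np.exp(-(v + 40) / 10))
        bm = 4.0 * np.exp(-(v + 65) / 18)
        ah = 0.07 * np.exp(-(v + 65) / 20)
        bh = 1.0 / (1 + np.exp(-(v + 35) / 10))
        an = 0.01 * (v + 55) / (1 - np.exp(-(v + 55) / 10))
        bn = 0.125 * np.exp(-(v + 65) / 80)
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        n += dt * (an * (1 - n) - bn * n)
        i_ion = (g_na * m**3 * h * (v - e_na)
                 + g_k * n**4 * (v - e_k)
                 + g_l * (v - e_l))
        v += dt * (i_amp - i_ion) / c_m
        if v > 0 and not above:
            spikes += 1
        above = v > 0
    return spikes

rng = np.random.default_rng(1)
target = spike_count(120.0, 36.0)            # "reference" conductances
matches = [(round(gna, 1), round(gk, 1))
           for gna, gk in rng.uniform([80, 20], [160, 60], size=(80, 2))
           if spike_count(gna, gk) == target]
print(f"{len(matches)} sampled (gNa, gK) pairs reproduce the reference {target} spikes")
```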
[01:12:56] Speaker E: Let me say a few things about channel density. The first one is, traditionally, we haven't had the means to see how channels are distributed over a given cell. Therefore, all the old biophysical modeling assumes that channel densities are constant within a given cell type. That assumption has never been properly supported by data. So that's the first one: we don't know how complex they are. The second one is, I want to make a normative argument. Once neurons figure out how a synapse's strength should change, then what's a channel? It's just like a synapse that's always active. So in that sense, we should strongly expect that channel densities are actually something learned, and there's some evidence that they are. And so, yes, I do believe that channel densities carry significant information.
[01:13:47] Speaker G: It's not that none of these things are important.
It's whether we can build simple, effective models of the network without making all those measurements at the microscopic scale. Can we get macroscopic behavior cheaper?
This is not the first time. People have built multiscale models. Chemistry is full of it. There's quantum mechanical simulations. You need them for certain kinds of things. There's molecular mechanics, models of just atoms.
You don't model the electrons. There's simple models of that. There's models, effective field models of water for hydration. There's all of this. We have these models at different scales. We don't need to measure everything at the finest scale.
[01:14:38] Speaker E: Yeah, let me steelman and strawman that argument. There is a scenario for the brain where interactions between neurons basically stabilize things, so if you get individual neurons wrong, the fact that there are many of them that all interact will basically fix things. There's another set of scenarios where any mistake you make accumulates, because the system is recurrent. I don't think we really know where we are on that continuum at this time.
[01:15:04] Speaker G: I agree with that. But we can make measurements at the macroscopic scale and constrain the macroscopic properties. We don't just need to make more microscopic measurements. We have combined microscopic measurements with macroscopic measurements to make effective models of single neurons and synapses.
[01:15:22] Speaker E: But then the question is, if you get the individual neurons somewhat wrong, how much can you correct that by, say, having models or measurements of behavior? But at the same time, you could say, if your models of individual neurons are too bad, then why do we even need the neurons? We could just look at behavior and fix everything at the behavior modeling level.
I totally agree with that, actually.
For example, you can do your type of mapping at a functional imaging level, right? Functional imaging can also tell you a lot about memory, but that's not what we are trying to do here. We are still trying to go to the connectome, go to the molecular level, look at the computational principles, and try to not only decode but sort of reconstruct this machinery of the brain. It's not simply classification.
It's almost like building a generative model of the brain.
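A toy way to see the distinction between classifying and reconstructing: in the sketch below (a Hopfield-style network invented for illustration, not anything the panel built), the "connectome" is just a weight matrix, and a stored pattern is recovered by running the network's recurrent dynamics from a corrupted cue, which is closer in spirit to the generative reconstruction being described than to a discriminative decoder.

```python
# Illustrative only: reconstructing stored content from a weight matrix by
# running attractor dynamics (Hopfield network with Hebbian weights).
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_patterns = 200, 5
patterns = rng.choice([-1, 1], size=(n_patterns, n_neurons))

# The "connectome" we are allowed to observe: a Hebbian weight matrix.
W = (patterns.T @ patterns) / n_neurons
np.fill_diagonal(W, 0)

# Corrupt the first stored pattern to serve as a retrieval cue.
state = patterns[0].copy()
flipped = rng.choice(n_neurons, size=60, replace=False)
state[flipped] *= -1                          # 30% of units corrupted

# Asynchronous updates descend the network energy until a fixed point.
for _ in range(5):
    for i in rng.permutation(n_neurons):
        state[i] = 1 if W[i] @ state >= 0 else -1

recovered = np.mean(state == patterns[0])
print(f"fraction of the stored pattern recovered from the weights: {recovered:.2f}")
```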
[01:16:20] Speaker A: In the spirit of having a little over ten minutes left, let's move on to another question.
[01:16:25] Speaker H: Hi, I'm Justin. I think that we can simulate the Drosophila connectome for decoding memories. And one thing that encourages me is that the biggest place we see inter-individual differences, in both the larva and the adult, is the mushroom body, which is strongly associated with memory. And now that we have two adult connectomes, we see the largest differences are in the memory center, the mushroom body. However, there's a potentially confounding variable: homeostatic differences can lead to large changes that aren't necessarily related to subjective differences in experience. So I wanted to ask, how do we draw the line between what is a large homeostatic difference caused by different rearing conditions and what is a specific inter-individual or intra-individual difference over time that you would consider a memory?
[01:17:16] Speaker F: I think that is an articulation of the problem that those of us in the functional neuroscience of memory have to either engage with, or else we're just not making progress.
We're usually dealing with multiple levels anyway when we start perturbing a system, and we have the problems of causality that have already been alluded to by Conrad on that topic.
My view is that we're dancing around an elephant in the room that we haven't quite defined: there is an in-between level that we need to be dealing with, where we may find that semantic content. And I don't mean that the in-between level is in vivo physiology and we simply need to map that onto the connectome at any of these levels. What I mean is that at some point, an individual animal is making subjective use of a representation. By representation I mean the internal, informationally rich version of a representation, not a correlation; it is drawing on whatever latent information you have at the relevant level of your connectome. And this is being operated on, not in a deterministic way, behaviorally, which is why simply mapping connectomics onto behavior is going to lead us down, I think, the wrong kinds of corridors. Emerging from that representation, from that information, is a representation that an individual can use subjectively. And in neuroscience, we are used to looking at hypothetical or putative substrates of representations in a third-person format, and we need to be thinking more and more about neuronal representations from the first-person perspective. And there we have to consider multiple realizability: it can be very different between individuals, and it can be very different within an individual at different moments. This point has been made by Romain Brette. It was made very nicely at this Society for Neuroscience meeting by André Fenton in his special lecture. And the point is that looking for something that is always going to best predict the behavior is missing the point, because you can get there in multiple different ways, depending on the state of the organism and depending on the individual.
And of course, we're going to be tied to particular substrates of analysis. But unless we have a way of conceptualizing how that connectome is expressed in a meaningful way as a versatile representation, we won't be able to explain how that informational substrate in the connectome leads to adaptive behavior; we'll only be able to predict typical behavioral outcomes, which is a very different thing.
[01:20:10] Speaker A: Another question.
[01:20:12] Speaker H: So my question is, what is the minimal memory unit that we're looking for, and can we find that in a connectome or not? Meaning, are we looking for an ensemble or sub-ensemble of an engram? Are we looking for a set of specific molecules? Are we looking for a specific pattern of connections? What exactly are we looking for in a nontrivial memory to be able to decode it? And can that be embedded into a specific model?
[01:20:41] Speaker A: I think that what we've been looking for, and you guys can correct me, is the behavioral output, via experiment, of the encoded memory, whatever that minimal unit is that's just above nontrivial, wherever nontrivial is.
[01:20:55] Speaker E: One extreme answer would be a single neuron. Right. In many cases you can decode nontrivial information, like a location, for example, from a single neuron. If the behavioral outcome is just binary, you may get that information from a single neuron.
[01:21:11] Speaker D: I mean, I would say that we have to make the next step forward.
The next step forward is to sort of get away from binary and one-to-one correspondence. And we need to have minimal models.
I think it's a system-level question, right? You need to have a computational model. An example, I would say, perhaps in light of that study, would be, we all know about place cells for a room, right? And if you can decode the location in the room, the representation of the room, from place cells in VR and from the connectome, that would be nontrivial. That doesn't seem too far from the understanding we have now.
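As a toy version of that kind of decoding (a made-up feedforward setup, not the study being referenced or any real dataset): if learning is assumed to have potentiated synapses from place-tuned inputs near a remembered track location onto a readout neuron, the location can be read back from the static weight vector alone.

```python
# Hypothetical sketch: recover a remembered 1-D track location from synaptic
# weights. Input "place cells" tile a 1 m track; weights onto a readout neuron
# are assumed to be potentiated near the remembered spot, plus reconstruction noise.
import numpy as np

rng = np.random.default_rng(3)
n_inputs = 100
centers = np.linspace(0.0, 1.0, n_inputs)      # place-field centers (m)
remembered = 0.62                               # ground-truth location (m), invented

weights = np.exp(-(centers - remembered) ** 2 / (2 * 0.05 ** 2))
weights += 0.02 * rng.standard_normal(n_inputs)
weights = np.clip(weights, 0.0, None)           # synaptic weights are non-negative

decoded = np.sum(weights * centers) / np.sum(weights)  # weight-weighted mean center
print(f"remembered = {remembered:.2f} m, decoded from weights = {decoded:.2f} m")
```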
[01:22:03] Speaker A: One of my questions to you all was going to be, is decoding the right question? Because just because you can decode the location in the room from a given cell, you have to have known that that location was visited before, and decoding it is kind of trivial.
So something nontrivial would be to decode which city someone visited in 1978 from that one cell.
[01:22:24] Speaker E: Right.
[01:22:25] Speaker D: I would just say that those are the next-step questions, after the first step. Okay. And I would say that the step that I'm talking about is already nontrivial, and yours is even more nontrivial, I would say.
[01:22:40] Speaker E: Yeah, I mean, look, if we had a simulator, then it would be very easy. We could just ask the simulation what happened on that day. I mean, provided you still remember it.
[01:22:51] Speaker A: I was not born.
[01:22:52] Speaker E: Okay, that solves it. But I wanted to push the organizers of this prize a little bit. Here's a very easy way to store a lot of information: I go into someone's retina with a very bright laser, I kill a bunch of neurons, and I can draw a whole picture of things in the retina. Isn't that more nontrivial than anything we've been discussing the whole time? And I have no doubt that I could decode which neurons I zapped with my laser from the connectome.
So you don't mean nontrivial information. You mean, like, advancing science?
[01:23:28] Speaker A: Okay, let's end up with one more question from the audience.
[01:23:31] Speaker H: I'm Edmund Rolls. I work on the vertebrate hippocampus, and everything is totally different there. What we need to know is the rules of operation of the system rather than what happens at a particular synapse. So the crucial things we need to know there are connectome-related, but they are, for example, the number of synapses that you have on a single neuron from recurrent collaterals in CA3, because that sets exactly the memory capacity of the system. So that's a bit missing. We need that.
It turns out that we have diluted connectivity in CA3. And one advantage of that, we think, is that it decreases the probability of having multiple synapses between a single pair of neurons. It turns out that if you have a theoretical model of how attractor networks operate, having those kills the memory capacity. So what I'd like to encourage us to think about is things in a slightly more statistical sense: we shouldn't necessarily be worrying too much about an anatomical change at a particular synapse in a single organism. Vertebrate brains work slightly differently to that. You've got very large numbers of neurons, you're interested in the statistics of the thing, and you need to know things like how many connections you have onto particular neurons. So that doesn't mean that the connectome is not important. It's crucial, but we understand how it's crucial in setting something like the memory capacity through theoretical physics approaches. So it's just a slightly different way of looking at some of these questions.
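For reference, the capacity argument being gestured at here is usually written down roughly as the Treves and Rolls estimate for an autoassociative network with diluted recurrent connectivity; the exact prefactor depends on model details, so treat this as the shape of the result rather than a precise value.

```latex
% Approximate number of patterns storable in a CA3-like autoassociative network
% (Treves & Rolls, 1991). C^RC is the number of recurrent-collateral synapses
% received per neuron, a is the sparseness of the stored representations, and
% k is a factor of order 0.2-0.3 that depends only weakly on the details.
p_{\max} \;\approx\; \frac{C^{\mathrm{RC}}}{a \,\ln(1/a)}\; k
```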
[01:25:17] Speaker G: I really like that. And I was thinking a lot about Justin's question, too, which is, you know, how do you distinguish between an animal just growing up preferring a particular thing from birth versus some experience causing it to like a particular substance? One of those we'd call a memory. The other we'd call innate, and it could just be an individual bias; that particular individual just happened to prefer a particular thing. How do we make that distinction? I mean, what matters in this case is, A, the experience in the experiment, but B, the mechanism of forming a memory. It does matter how we're making these memories and how there's this correlation between experience and whatever changes there are in the anatomy or other parts of the nervous system.
So that means that the anatomy by itself is not enough. We need to know something about the formation process in order to distinguish between a memory and a non-memory. And it also means it's going to be nontrivial to be able to answer this question in the first place.
[01:26:51] Speaker A: We have to wrap it up. So both of you make it rather quick.
[01:26:54] Speaker E: I like Dr. Rolls's comment, right? But I think there's a small difference here. What you talk about is more like a model-driven statistical understanding of the connectome; in that sense, it's raw data, right? But what you are trying to replicate is the memory function, and I think what we are trying to talk about is a particular memory. So even if we can replicate that memory function, that doesn't mean we can replicate a particular memory. So I see a subtle difference here.
[01:27:27] Speaker F: I completely endorse what's being said. The rules of connectome interpretation, whatever they are, are not going to be the same in every brain region. The rules are going to be different, and they are going to be based on the innate information that's there, built for the affordances in the world that that brain region evolved to operate in.
And maybe there are nested hierarchies that are based on a fundamental connectome interpretation logic. But this is part of the problem, certainly the problem for those of us who work with rodents. And I think it's one of the reasons why working with C. elegans and other organisms of that size is so important.
[01:28:06] Speaker A: Okay, thank you for the questions from the audience. Thank you to aspirational neuroscience for putting this together and to the panelists for a great discussion. Thanks everyone.
I produce Brain Inspired. If you value this podcast, consider supporting it through Patreon to access full versions of all the episodes and to join our Discord community. Or if you want to learn more about the intersection of neuroscience and AI, consider signing up for my online course, Neuro-AI: The Quest to Explain Intelligence. Go to braininspired.co to learn more. To get in touch with me, email paul@braininspired.co. You're hearing music by The New Year. Find them at thenewyear.net. Thank you. Thank you for your support. See you next time.