BI 091 Carsen Stringer: Understanding 40,000 Neurons

Brain Inspired

Dec 03 2020 | 01:28:19

Show Notes

Carsen and I discuss how she uses two-photon calcium imaging data from over 10,000 neurons to understand how such large neural populations process information. We talk about the tools she makes and uses to analyze the data, and the type of high-dimensional neural activity structure her team found, which seems to allow efficient and robust information processing. We also talk about how these findings may help build better deep learning networks, and Carsen’s thoughts on how to improve diversity, inclusivity, and equality in neuroscience research labs. Guest question from Matt Smith.

Timestamps:

0:00 – Intro
5:51 – Recording > 10k neurons
8:51 – 2-photon calcium imaging
14:56 – Balancing scientific questions and tools
21:16 – Unsupervised learning tools and rastermap
26:14 – Manifolds
32:13 – Matt Smith question
37:06 – Dimensionality of neural activity
58:51 – Future plans
1:00:30 – What can AI learn from this?
1:13:26 – Diversity, inclusivity, equality


Episode Transcript

[00:00:04] Speaker A: I see it as, partly, we're going to make these advances in AI potentially faster, and we already have, because I think neuroscience isn't answering these questions. Regardless, it's going to help to be able to build these circuits in a machine and see if we can learn some principles from them that we can then use in the brain. So right now we're recording around 40,000 neurons simultaneously. And so now the question becomes, what do you do with that data? [00:00:36] Speaker B: Yeah, you're right. What you do is really quite simple. Right? [00:00:45] Speaker A: This is Brain Inspired. [00:00:59] Speaker B: Hey, everyone, it's Paul. So there's been an explosion in the number of neurons that we can record simultaneously in animal brains. As Matt Smith and I discussed a couple episodes ago, Matt records the spiking activity of hundreds of neurons in awake monkeys performing tasks, and he relates the population activity of those neurons to task-related behaviors and cognitive functions. Today I have on Carsen Stringer, a group leader who runs her lab at the Janelia Research Campus at the Howard Hughes Medical Institute. Carsen deals with neurons in the tens of thousands range. It's a little different, though. Whereas Matt records the precise timing of all of the spikes of neurons, Carsen can't tell precisely when spikes occur, but she can tell whether one or more spikes happened within a small window of many milliseconds. She analyzes data collected using a technique called two-photon calcium imaging, where basically you shine a light onto the brain of an awake animal. In Carsen's case, it's a mouse, and that mouse has been genetically engineered so that its neurons, when they're active and the light is shining on them, will emit their own light that can then be seen and recorded through a microscope. So you get a massive amount of data about large populations of neural activity. And Carsen spends her time developing tools to visualize and analyze the data. So we talk about that, and she spends time figuring out what information we can extract from such a large population of neurons and what it means. One thing we discuss is what they found out about two properties you would want in a population of neurons that are responding to all the different things in the world. One thing you would want is an efficient or orthogonal coding scheme, where neurons would collectively respond as differently as possible to different things in the world. For example, you don't want overlap and confusion between a pet dog and an angry rhinoceros. But you also want a robust or smooth coding scheme, where the collective response would smoothly overlap for things in the world that are more similar to each other, like a pet dog and a pet cat. That's an insufficient explanation, of course, but I will leave it at that for now, because we talk more about it, and just say that Carsen's team found a specific relationship between how much the neurons varied in their responses and the things in the world. That too is an insufficient explanation, it's more subtle than that, but we discuss what it means. We talk about how it relates to current AI and how their results may be inspiring improvements in deep neural networks, and multiple other topics, of course, with a guest question today from the aforementioned Matt Smith. I link to stuff in the show notes at braininspired.co/podcast/91. On a side note, I've been getting a ton of emails lately with requests for future guests, which is wonderful.
But I just want to say, if you have suggested someone to me, just know, I promise I've added them to my list of future guests. But that list has grown close in size to the number of neurons that Carsen analyzes at a time. I will say, if you're a Patreon supporter, the people you've suggested are way up at the top of that list or have already been on the podcast, as you know, because you deserve it for your support. So thank you so much. Okay, last thing: Carsen is looking to hire grad students and postdocs, so good luck. I was actually giving a talk at your institution just yesterday. It was a little 15-minute spiel on my background, about careers outside of academia in science communication. So it was just a mere coincidence that you and I had scheduled this for the same time. So anyway, thanks for coming on the show, Carsen. [00:05:13] Speaker A: Yeah, definitely. Thanks for giving a talk at Janelia for science communication. That's awesome. [00:05:18] Speaker B: Yeah, well, yeah, I don't know how it went over. We'll see. It was pretty painful to revisit my past, to be honest, but at least now I have a little presentation on it. So, Carsen, I had Matt Smith on the podcast a couple episodes ago, and he uses Utah arrays to record tens to hundreds of single neurons at once, and you record over 10,000. So I really just want to start off by talking about the fun issues of recording so many neurons. Zooming out to the very broad picture, where do you see we are in terms of how useful that data is to us? [00:06:03] Speaker A: Yeah, so that's a great question. So in terms of what this kind of big data can give you relative to smaller data: first off, neural responses in general are noisy. So you record 10 neurons and there might be some noise, whether it's the behavior of the mouse or it's just noise in the circuit, and you can't really tell what those neurons are doing on a single-trial level. If you start recording 100 to 500 neurons, then you can start to say, what are the correlates between this neural activity and external things like behavior and stimuli? And you can figure out these relationships much more easily and build models much more easily once you get into higher and higher numbers of neurons. [00:06:47] Speaker B: But what I was really wanting to know is, this is a very new thing. Well, not to you, now it probably feels old, but in the field at large, it's a very new thing to have access to so many simultaneously recorded neurons and their activity. And I'm just wondering how you feel about how far along we are. Are we just at the beginning of being able to get meaningful information from that data, or do you feel like we have the tools that we already need? And I know this is kind of a trick question, because this is part of what you do, developing the tools to extract meaningful information from it. So I'm just curious how you feel. Are you at the beginning? Are you in the middle? Are we ready for 86-billion-neuron recordings, or what? [00:07:38] Speaker A: Yes, that's a good question. So right now we're recording around 40,000 neurons simultaneously. So that's sampling a good fraction. It could be anywhere up to, like, 70% of the neurons in a given area, depending on where we're recording. So we're getting a very dense sampling. And so now the question becomes, what do you do with that data?
And what kind of stimuli do you show with that data? And what kind of behaviors do you have the animal do? So I think in terms of the recording techniques, those have been amazingly developed. And we have these amazing proteins we use so that these neurons light up when they fire, and that's how we capture this activity. So all of that protein engineering and the microscopy is at a really great place for us to capture these large recordings. And now the field needs to kind of explore different behavioral paradigms and think about the types of, for instance, visual stimuli you want to show to be able to understand what these patterns of activity mean. [00:08:34] Speaker B: So let's talk about the two-photon calcium imaging for just a second, because that's how you capture this really high neuron-count data. So maybe you can just describe a little bit more what two-photon calcium imaging is, and the kind of data that we actually get from it. Right? [00:08:51] Speaker A: Yeah. So it's a technique that allows you to basically take pictures of the brain across time. And you're taking pictures of these neurons, which are expressing this protein that lights up whenever the neurons fire. So you're taking pictures of neural activity, time point by time point. And from these recordings, you can extract each of these neurons and extract their neural activity across time. And there are advantages over electrophysiology, like you talked to Matt Smith about, which is that you're literally seeing the neurons, so you know their spatial relationships. You can have genetic lines where certain neurons are different colors depending on what kind of cell type they are, like if they're inhibitory. Things like that are easy to do. [00:09:36] Speaker B: You're not seeing their connectivity, though, but you're seeing their spatial relation to each other, right? [00:09:40] Speaker A: Yeah. So you can go even further. So people also have started doing two-photon stimulation. So now you can see these neurons and you can also stimulate them. With lasers, you can focus on specific neurons and stimulate them, and that can get you closer to having a kind of connectivity graph. But it's never going to be the perfect case. Unlike in electrophysiology, you don't have the temporal resolution to say, specifically, that two neurons are connected. [00:10:07] Speaker B: In calcium imaging, you mean? [00:10:09] Speaker A: Yeah, yeah. [00:10:10] Speaker B: So there's always this trade-off, right? The previous, I don't know if it's still the gold standard, of extracellular electrophysiology recordings is that you have this super high temporal resolution and you can get every single spike if your electrode is right next to a neuron. And on the other hand, the worst temporal resolution is with something like fMRI, where it's like seconds go by, and pretty good spatial resolution, but really poor temporal resolution. And calcium imaging is much closer to the temporal resolution of recording with single electrodes. But like you said, you're taking little snapshots. So what is the bin, the window, that you're looking at? [00:10:52] Speaker A: Right, yeah. So to record so many neurons, we go even lower. So often people will take 30-millisecond bins, they'll record at 30 Hz, and we go at around 3 Hz. So roughly every 300 milliseconds we get a sample from a neuron. So we get three samples per second. [00:11:07] Speaker B: Okay.
So you can kind of see whether it spiked within that window, essentially. [00:11:12] Speaker A: Yeah. And even if you record at 30 hertz, you still can't really tell how many times a neuron spikes in calcium imaging. That's another downside. Basically, different neurons have different amplitudes of these calcium spikes that we'll see. And so a single spike in one neuron can correspond to a different size spike in a different neuron. And then even within a single neuron, that one spike can have a very variable size of calcium activation across time. So, yeah, we generally don't claim that we're recording spikes. [00:11:43] Speaker B: Right? Yeah. Okay, but you're recording neural activity. Is that the claim? [00:11:47] Speaker A: Yeah. [00:11:48] Speaker B: Yeah. So one thing that I found frustrating throughout my little academic career is that I went into neuroscience in graduate school and wanted to ask these really big questions. The brain, how does consciousness work? What is a mind? And all this stuff. And what I found was just a series of asking smaller and smaller questions, and more so than that, troubleshooting technical issues and all the details. And so I ended up focusing a lot more on methods and analyses and what they can tell us and which one's better and why. And so you kind of, like, lose the big-picture questions. So I'm wondering, what are some of the challenges that you face when analyzing these sorts of really large recordings, first of all? And do you have that same sort of... I don't know why you got into neuroscience, but presumably you have some... Well, I can let you respond. Do you have that same experience at all? Because a lot of what you seem to do is very technical. [00:12:51] Speaker A: Definitely this happens. You want to answer some big question. Like, I started being interested in kind of brain states and how neural activity changes whether you're, I mean, conscious or unconscious, and anesthesia and stuff like this. And then when you go in to record this data, I think one of the biggest confounds we see in calcium imaging that people don't correct for is actually that the brain moves up and down relative to the imaging plane throughout the recording. Yeah. It's physically moving relative to the microscope. I mean, it's very small movements, in microns. [00:13:23] Speaker B: Right. Well, that's huge when you're looking at the scale of microns. But you should say that you're recording in mice, right? [00:13:30] Speaker A: Yeah. Yeah. Sorry. Yeah, I apologize. [00:13:31] Speaker B: And just to back up, you do the surgery where there's kind of a window onto the brain where you can point the microscope, and that's how you image the brain while the mouse is doing whatever it's doing. [00:13:42] Speaker A: Yeah. And that's another advantage of calcium imaging, is that we put these windows into the mice and they basically live their normal lives. They go live with cage mates. Then day after day, we can come in and record their neurons. And because we're taking these pictures of the brain, we can see: are we taking pictures of the same neurons across days? And so you can do this alignment across days and study learning in these circuits as well, which is something that's also harder to do with electrophysiology. [00:14:07] Speaker B: Yeah. Okay. So what are some of the challenges? I mean, there have to be a million challenges. [00:14:14] Speaker A: Yeah.
So there's a lot of challenges on the microscopy side that I don't deal with. But on the software side, we've created pipelines to basically correct for this: move the microscope when the brain is moving, to make sure that it's always in the same place relative to the cells, in a closed loop. So things like that really improve our data quality. [00:14:36] Speaker B: How long did that take you to do? I'm going to guess. I'll guess a year. [00:14:41] Speaker A: I spent a long time trying to correct the signals without moving the microscope. So, yes, several months. [00:14:46] Speaker B: It was a last resort, huh? To move the microscope. [00:14:49] Speaker A: Yeah, yeah. When we realized we weren't going to be able to correct the signals well enough, then we decided to move the microscope. [00:14:55] Speaker B: Yeah. I mean, so what about just recording... well, we'll get into the analysis of all these things. I had a listener question asking about how you manage the interplay between pursuing your own scientific questions and developing the tools and the software that you've had to develop to analyze the data. So how do you manage that balance? Or is there even a balance? [00:15:20] Speaker A: Yeah, that's a great question. I'll preface this all by saying I'm in a fortunate place. So I'm a group leader at Janelia Research Campus, which is a nonprofit institute, and so I have funding for five years. So I'm basically able to continue to work on these projects as a group leader. Whereas, I mean, the mechanism of funding in the US is not always conducive to working on tools rather than working on science, in terms of getting grants and so forth. So I'll preface it by saying that. Then the choice becomes, how do I balance my time? If that makes sense. Yeah. So then I am able to choose how much time I want to spend doing tools and doing science. And it's probably around maybe 20 to 30% of my time working on tools. [00:16:08] Speaker B: Oh, really? Wow. Okay. [00:16:11] Speaker A: And I think it's very useful for the community to work on these tools. And I enjoy working on this kind of software. So I hope that, rather than everyone having someone in their lab that spends 20 to 30% of their time working on their own tool, we've worked on this tool, other people have contributed to it through pull requests on GitHub, and we can all kind of work towards creating a better tool. We get hundreds of user comments, whereas if an individual is making something in a single lab, they're not going to get this feedback and be able to improve the tool as well. [00:16:45] Speaker B: This particular listener says that you're a pro at both, but is curious about the dynamics of it. Right? So do you usually think of and develop the tool first and then go on to apply the tool, or do you find yourself asking the scientific question and gathering the data and then think, oh, now I have to make a tool or software to analyze this? [00:17:09] Speaker A: Yeah, it's usually driven by the latter: we have a problem that requires a tool and we'll then develop a tool. The most recent tool that I developed was Cellpose, which is for anatomical segmentation. That came about because we were trying to segment calcium imaging data in an anatomical way rather than a functional way. And so we tried some of the out-of-the-box tools that other people have developed. They weren't working as well. So then we tried to develop our own tool.
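For listeners who want to try it, here is a minimal sketch of what running Cellpose from Python looks like, based on the library's documented API; exact call signatures may differ across Cellpose versions, and the input image below is only a placeholder.

```python
import numpy as np
from cellpose import models

# Placeholder image; in practice, load a real 2D image or a list of them.
imgs = [np.random.rand(256, 256)]

# 'cyto' is the generalist cytoplasm model; diameter=None asks
# Cellpose to estimate the typical cell size from the data itself.
model = models.Cellpose(gpu=False, model_type='cyto')
masks, flows, styles, diams = model.eval(imgs, diameter=None, channels=[0, 0])

# 'masks' labels each pixel with a cell ID (0 = background):
# the instance segmentation discussed in the conversation.
```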
[00:17:40] Speaker B: How frustrating is that? I mean, you must find that at every step of the way, like, oh, is there something for this? No, we have to create it. I mean, that must happen over and over. [00:17:49] Speaker A: Yeah, I think the big problem is thinking about these questions on a small scale. And that's the way that science has moved forward in general, that everyone's working in their own lab. There are great cell segmenters for certain types of data. For nuclear data, for instance, there are great pipelines in place, like from Anne Carpenter's lab. But then they might not work for all the different types of cellular data you have. And so thinking about things that are applicable for many people, rather than a single-use tool, is, I think, the difference between where I'm coming from with these problems and where several other people are coming from. [00:18:25] Speaker B: Yeah. When I was in graduate school and beyond, everyone was kind of in their own little individual bubble. You wouldn't even share stuff between lab members, because everyone had their own very idiosyncratic, specific problems that they were then developing things to deal with. And that shift to more open-source and collaborative sharing of these sorts of tools is, I think, obviously only a positive thing for the community. This listener also goes on to ask whether there have been occasions where the tool itself, where building the tool or developing the software, developing the analyses, has actually inspired your science conceptually. And if so, what's an example of that? [00:19:09] Speaker A: Yeah, so this is definitely still work in progress. But working on segmentation, in terms of object segmentation, particularly where we're looking at segmenting cells, has kind of shaped the way I'm trying to think about how the brain does segmentation of images. You can think of images as segmented objects, or as textures which are made of many objects, and think about how the brain conceptualizes those different things, and the different types of architectures, like for deep neural networks, for instance, that work well for segmentation, and how those might be applied in the brain. [00:19:43] Speaker B: And is there insight that you've gained from it, or is it just sort of the way that you approach the question that has changed? [00:19:49] Speaker A: Yeah, so there's this kind of long line of work where people compare neural responses to responses in deep neural networks. And those have been sort of confined to more of these feed-forward models that are trained on image recognition. So it's kind of thinking about expanding this problem: what about a model that's trained on object segmentation? And these models that do best on object segmentation actually have these sorts of skip connections, which you can think of as recurrent connections, which integrate local and global features. So if you want to, for instance, segment an object, you need these kind of global cues to know roughly where the object is, and then you also need this fine local information to say exactly where that edge is. So you could think of combining information from higher-order visual areas with low-level primary visual areas. And how that integration takes place would...
And how that integration takes place would. [00:20:46] Speaker B: Be cool to figure out what is image segmentation. [00:20:50] Speaker A: Oh yeah, sorry. [00:20:51] Speaker B: Just be clear. [00:20:52] Speaker A: Yeah, yeah. So I'm talking about particularly what people often call instance segmentation. So we have an image, for instance, of a bunch of. You could think of cells. We actually have rocks as well. In our cell pose database we have circle. You basically circle every rock in that image and you train the deep neural network to figure out where each of those rocks are. [00:21:15] Speaker B: Okay, very good. So we'll talk just a little bit more broadly about these sorts of data sets here. Before we get into the actual science that you've been producing, Wayne Ansoon has asked if you could compare and contrast the pros and cons of your favorite unsupervised techniques for data analysis. [00:21:36] Speaker A: Yeah, that's an interesting question. I would say we still don't have the best unsupervised algorithm for analyzing large scale neural data. So I'll preface my answer with that. But that being said, I think still kind of the state of the art for unsupervised dimensionality reduction, nonlinear dimensionality reduction would be T sne. [00:21:56] Speaker B: Okay. [00:21:57] Speaker A: And so there's this problem with, with dimensionality. Sorry, sorry, I should explain what T SNE is. [00:22:05] Speaker B: Yeah, what's T sne? Sorry, I mean it's. [00:22:07] Speaker A: Yeah, yeah, yeah, sorry. So you have this T distribution stochastic neighbor embedding algorithm. That's what all those letters stand for. But the idea of, so these unsupervised dimensionality reduction techniques, particularly nonlinear ones, the idea is that you have this really high dimensional space we're recording from. We have like 40,000 neurons and they have many thousands of time points, but we can't visualize this really high dimensional space. And so what these techniques do is they take this high dimensional space and they smush it into maybe a couple dimensions and they smush it in like a nonlinear way to kind of put neurons close to each other and kind of break some of the linear relationships between neurons to try to get a better, to try to get a better picture that we can actually visualize in 2D. [00:22:52] Speaker B: He also asked if you could explain raster map, because this is very related. [00:22:57] Speaker A: Yeah, so that's actually kind of what I was going to. Yeah. So T sne, I think, is still the state of the art in terms of getting kind of the global structure Right. Of the manifold. So you kind of want to globally capture how the activity change, how neurons vary maybe across the recording. And I think T SNE is still kind of the state of the art for getting this global picture. And then in terms of there's always this trade off in these nonlinear dimensionality reduction techniques. Like, however. Well, you capture the global information, you're going to lose information in the local structure. Like if I, if I try to perfectly capture the global information, I can't. There will be neurons that will be moved around that should have been closer to each other, but they just can't be smushed into the map. And so there's, there's always this trade off you're doing with these embedding algorithms that you're, you're going to destroy some of this local structure while trying to get this global picture. And so there's this trade off that you do in these algorithms. 
And Rastermap is an example of an algorithm which tries harder to preserve this local structure. [00:24:00] Speaker B: This is something that you developed. [00:24:02] Speaker A: Yeah, yeah. [00:24:03] Speaker B: So it's just looking at the finer-grained structure of the manifold. [00:24:08] Speaker A: Yeah. So it's also an embedding algorithm, where the idea is: you have neurons in this high-dimensional space, and I'm going to define every neuron as having an XY position, basically, in 2D. So that's what these algorithms do. So Rastermap is another way to get XY positions for neurons. And it has this way of more heavily weighting the high-dimensional components of the neural activity, rather than t-SNE, which is going to get more of this global... [00:24:39] Speaker B: More of the global. Why is it useful to rearrange things like this and create images like this? [00:24:46] Speaker A: Yeah, so that's a great question. So once you've rearranged things, maybe you'll get clusters of neurons out of it, and then you can look at what these clusters of neurons are maybe correlated to in the external world. So I might have a cluster of neurons that comes out in this low-dimensional representation that maybe corresponds to the whisking of the mouse, or maybe it corresponds to the confidence of the mouse in some decision-making task. It could be something more abstract like that. The advantage of having this representation is you're able to average over neurons that are noisy on a single trial, but then look at single-trial information once you've averaged over them. [00:25:30] Speaker B: Because they're collected among other neurons that on average are similar, coding for the same thing. I just said coding, which doesn't mean... yeah, I try to avoid the word coding, but yeah, are similar in their responses to stimuli and cognitive manifestations, I suppose. [00:25:44] Speaker A: Yeah, exactly. And that is really the big problem with neuroscience, and what most of the field has done in the past, is we study averages across many repeats of the same image, for instance. And really, if we want to understand the behavior of the mouse or any species, we want to know what it's thinking moment by moment. And so ways to better characterize this moment-by-moment information are really useful in these cases. [00:26:14] Speaker B: We've used the term, you've used it once, I believe, and I've used it once: manifold. And I've been asked, because manifolds are fairly new in neuroscience as well, I guess they've been around a long time for high-dimensional analyses, but what is a manifold with respect to neural activity? And why is it useful to think conceptually in terms of a manifold? [00:26:37] Speaker A: Good question. So first I'll define what a manifold is mathematically. So a manifold has this property that locally, the points in the manifold resemble a Euclidean space. So let's take that in the case of neural data. So with neural data, we're going to think about every single point in this high-dimensional, 40,000-dimensional space. Every single point is a different response to maybe a visual stimulus, like one could be a cat or a dog or a hawk or these other things. And each of these points in this really high 40,000-dimensional space might create some kind of, like, wavy surface inside of this space. So it's not going to explore this giant 40,000-dimensional space. Like, there's many, many places, right,
that activity could be, but it's generally going to be constrained, we think, to sort of a subset of this space, and we think of this subset as being a manifold, that it is going to be curvy and complex. But then if you zoom in, if it is a manifold, then the activity will change kind of linearly. So maybe we zoom into a part of the manifold that codes for cats, and maybe small changes in the cat nose or the cat ear pointiness change the neural activity in small, linear ways. Like, you have these perturbations that change the neural activity in small ways, rather than moving to a completely different part of the manifold. [00:28:03] Speaker B: So it's like just traveling along the surface of the manifold. [00:28:07] Speaker A: Yeah, we would think of it that way. There will be perturbations in image space that travel along this manifold, that change activity in such a way that it's small relative to the perturbations you're making in image space. [00:28:25] Speaker B: So at heart, is a manifold a way to reduce the super high dimensionality of your data into projections onto lower dimensions, and that defines the manifold? [00:28:37] Speaker A: You can think of a manifold as a low-dimensional representation, but you can also think of the neural activity itself as a manifold. So what I'm trying to say is that we think that the neural activity is constrained to this space, which is generally lower dimensional. And, I guess I wasn't totally clear, right: so there will be small perturbations which will move you along what you think of as your neural manifold, and then there might be perturbations you make that might jump you to other parts of this curvy manifold. And we don't know how curvy or smooth it is. If it's very smooth, then lots of image perturbations will kind of keep you in the same place. Whereas if it's really curvy, then there will be some image perturbations that keep you in that space, like, I change the pointiness of the cat ears and the neural representation doesn't change that much, but maybe I change the length of the nose, and now all of a sudden it's a dog or something and I move a lot. So, yeah, we don't really know how smooth that representation is, and in what dimensions. [00:29:37] Speaker B: Am I wrong to think of a manifold, then, and I'm really pretty green about manifolds, so am I wrong to think of it as akin to an attractor in dynamical systems, but just a surface attractor? [00:29:52] Speaker A: Yeah, you can think of it that way. And it might be the case that if you perturb neural activity, for instance, it will come back to the original manifold, the one that consists of the responses to images when you don't perturb it. [00:30:12] Speaker B: So there are places along the manifold that are more often visited, or you could think of as having lower energy, that it settles in this lower-energy area, and then you perturb the system and it traverses various parts along the manifold. [00:30:27] Speaker A: Yeah, there definitely will be places that it sits more often. But that's also another question. There's also the manifold of neural activity with respect to behaviors. And so this neural activity is also changing in this kind of orthogonal space, in terms of what behavior the mouse is doing, in addition to having this manifold of responses to visual stimuli. [00:30:50] Speaker B: Okay, let's go ahead and jump in then. Let's talk about...
...because we can revisit all these topics as we talk about what you've done. So, like I just mentioned, I had Matt Smith on the show, and we talked about his recent finding, what they call slow drift, which is this global fluctuation in the brain of actual neural activity over the course of minutes, tens of minutes, while an animal is performing a task, which also tracks the internal cognitive state and the behavioral aspects of the animal, like pupil size and various other behavioral measures. And one of the recent things that you found, this is the 2019 Science paper, I suppose, is that even when you're recording in visual cortex, much of the activity can be attributed to the spontaneous behavior of the mouse, in this case. So you guys had a mouse sitting in a dark room. You weren't asking it to do anything, but you were recording its pupil and its facial features, its whiskers. And so you would know when it's whiskering. Is it called whiskering? [00:31:55] Speaker A: Whisking. [00:31:56] Speaker B: Whisking. Geez. Yeah. And dilating, for instance. I know it's called dilating with a pupil. But what you found is there's all sorts of these spontaneous behaviors that the mouse was performing, well, was enacting in that dark room when it wasn't being asked to do anything, and that there was a lot of activity in visual cortex that correlated with these spontaneous behaviors. And I'm going to pause here, then. So Matt has a question for you, and then you can unpack it for us, probably before, or maybe along with, answering it. "Hi, Carsen. Thinking of your really impressive pair of papers last year in Nature and Science, I had a question for you. You find these really rich behavioral signatures embedded in visual cortex, and you argued that having this kind of activity as early as visual cortex might help the brain integrate sensory inputs and motor actions. So what I'm wondering is if you thought about drawbacks of this kind of system, maybe particular actions or perceptual events that might be confounded by having these signals all sort of thrown together, even in the early stages of visual processing." [00:33:04] Speaker A: All right, that's a great question. Yeah, I'll unpack it first with a simple example. So we have these signals for running in mouse visual cortex. So whenever the mouse is running, there are neurons that increase their activity. And these neurons also code for stimuli. So they might also respond to cat stimuli. [00:33:24] Speaker B: And I'll just interrupt you and say, just off the bat, that's fairly surprising, right? Because we think of visual cortex as just processing visual information, sensory information. So it's fairly surprising to see all of this behavioral activity also in the visual cortex. [00:33:41] Speaker A: Right? Yeah. And I guess it's especially more surprising for people working in the primate and human literature. In the mouse literature, actually, this first finding was from 10 years ago now, from Cris Niell. Yeah, I apologize, I'm talking about it like it's pretty standard, but yeah, I guess it's... [00:33:59] Speaker B: 10 years is standard. That makes it standard. [00:34:01] Speaker A: Yeah, but it's definitely less well known how pervasive these signals are in primates and humans. [00:34:08] Speaker B: Right. Well, we're not doing calcium imaging in primates, and that's one of the advantages of doing this in mice, is that they're available for the calcium microscopy, right?
[00:34:18] Speaker A: Yeah. So we're able to get these many neurons, and we're able to get these correlations with these behaviors and create models from behavior to neural activity, and predict the neural activity from these behaviors. So, yeah, the confound he's referring to is this fact that if a neuron is responding when the mouse is running, and it also responds to cats, then its response is going to be different when it sees a cat while it's running versus when it sees a cat while it's not running. And so how does the brain then tell that it saw a cat, if it can't use this neuron to tell it? If I set some threshold on that neuron, then sometimes you'll think you see a cat and sometimes you won't, depending on whether or not, for instance, the mouse is running. But you have the advantage that you have many thousands of neurons in cortex, so you're not just going to be using a single neuron to figure out cat. So say I have several hundred neurons that respond to cats. As long as only some of them are up-modulated by running, and maybe some of them are down-modulated by running, they decrease their activity with running, then on average, if I take the average of those hundred neurons in response to a cat, whether or not I'm running, that average response will be similar. [00:35:36] Speaker B: But I thought what you found was that the majority of the variance in the visual cortex could be explained by the behavior, and therefore the minority was actually explained by incoming sensory stimulation. And I'll just go ahead and say, because you alluded to this earlier when we were talking about manifolds, that what you found was that the variance in these activities was orthogonal between the behavior and the visual information. So maybe you can touch on what that means as well. [00:36:05] Speaker A: So you bring up a good point. So when we talk about a third of neural activity being explained, we're referring to a third of neural activity in the absence of visual stimuli. So when visual stimuli come in, there's this additional information in the neural population, which actually contains around twice as much variance as the behavioral information. [00:36:23] Speaker B: That makes a lot of sense. [00:36:25] Speaker A: Yeah. So then you have this information in there along with the behaviors. And this idea of orthogonality is that the neurons that respond to cats have various responses to different behaviors. They're not all running neurons. There are neurons correlated with running, anti-correlated with running, correlated with whisking, anti-correlated with whisking. So then, on average, their response to cats is going to be the same whatever behavior the mouse is doing. [00:36:53] Speaker B: But you can only tell that by looking at the population. [00:36:55] Speaker A: Yes. So if you're looking at a single neuron, you're not going to be able to tell the image, depending on the neuron. So some neurons will be more or less modulated by behavior. [00:37:06] Speaker B: Gotcha. So that was sort of the first pass at this, when you had mice sitting in a dark room. And like you said, you did start showing them visual stimuli eventually. And then the question is how the images, the visual stimuli that you're showing to the mouse, are processed in the population of visual cortex.
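A toy numerical illustration of the averaging argument above (a hypothetical sketch, not the paper's analysis): give each neuron a fixed stimulus response plus a running modulation whose sign and size vary across neurons, and the population average comes out nearly running-invariant even though single neurons are strongly modulated.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 200, 1000

stim_response = rng.uniform(1.0, 2.0, n_neurons)   # each neuron's "cat" response
run_gain = rng.normal(0.0, 0.5, n_neurons)         # running modulation, + or -

running = rng.integers(0, 2, n_trials)             # 0 = still, 1 = running
noise = rng.normal(0.0, 0.3, (n_trials, n_neurons))

# Trial-by-trial population response to the same "cat" stimulus.
rates = stim_response + np.outer(running, run_gain) + noise

# A single neuron's mean response can differ a lot between states...
print(rates[running == 1, 0].mean(), rates[running == 0, 0].mean())

# ...but the population average barely moves, because the running
# modulation roughly averages out across the population.
print(rates[running == 1].mean(), rates[running == 0].mean())
```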
So I also recently had Chris Eliasmith on the show, and we talked about his cognitive architecture, Spaun, and one of the things that he uses, he developed what he calls a semantic pointer architecture. And this is the way that information gets transformed from one area to another, for instance. The idea is that in visual and in motor areas, sort of the input and output of the brain, there are these really high-dimensional states. And then this semantic pointer processes the information from, let's say, vision in this high-dimensional state and reduces the dimension to whatever cognitive process an animal is doing. And then it has to be, what he calls, dereferenced to bring it back to another high-dimensional state to perform a motor action. And you make the point that visual information, and really other sorts of sensory information as well, is actually coming in in a lower-dimensional state than is possible in the visual cortex, right? So it comes in through the retina, goes through the thalamus, and these are fairly low dimensional in that there are fewer neurons there than in early visual cortex, V1, which they project to. And there are two sort of classic theories, I suppose, about how the brain might usefully process information within a population. I don't know if they're officially called the efficiency versus the robustness theories, but maybe you could just talk a little bit about the two different ways that it's thought information could be processed that would be useful. [00:39:08] Speaker A: Yeah, so that's a great question. So ideally, you have all this information from the visual world and you want to compute as many features of it as possible, and have as many of them as possible in visual cortex and primary areas, so that you can do these complex tasks like object recognition. So the most efficient way to have all this information in your brain is to have each of those features be orthogonal to each other. That's how you're going to store as much information as possible about the visual world. [00:39:34] Speaker B: Completely separate. [00:39:36] Speaker A: Yeah. But then you have this issue: if there's any kind of noise, or there are small changes in the inputs, then your representation might change drastically. Different neurons are going to be representing things, even if you've just added a small amount of noise. And so you're not necessarily going to represent the visual world in a smooth way. You would also be more sensitive to neural noise, or maybe neurons dying, or things like that. And the other far end of things is to be as robust as possible, which is to only represent a few features, but represent those with many hundreds of neurons. And what we found was kind of more the in-between: there are these features that many neurons code for, but then there are also many hundreds of features that only a subset of neurons code for, that capture these finer features. [00:40:30] Speaker B: You found a specific relationship, actually, between the dimension number, which I'll ask you to unpack in a second, and the variance. Maybe you can just explain what you found, I suppose, the power law relationship and what it means, right? [00:40:45] Speaker A: Yeah. So I can explain what we think it means, sure. [00:40:49] Speaker B: No, explain what it actually means.
[00:40:51] Speaker A: So what we found empirically is that there's this decay in variance across dimensions. And so you can think of the first few dimensions as maybe, like, the contrast in the image, maybe whether there are edges somewhere or not. And those are the kinds of dimensions that have the most variance in the neural population, that drive the most neurons. [00:41:12] Speaker B: And when you say dimension, maybe you can describe how you, not calculate, but how you come up with the different dimensions. What we're talking about in terms of the variance, I suppose. [00:41:22] Speaker A: Yeah, yeah, sorry. So when we're referring to dimensions, we're referring to linear dimensions. And so we find these dimensions using principal components analysis. And we have a special way to do it, to ignore the noise that's coming from behaviors and other things. But it's basically principal components analysis to find the directions with the most variance in the neural activity. And the result of that analysis is that there are dimensions with large amounts of variance, but then there are many hundreds of dimensions that have significant variance, and they create this power law decay of variances over dimensions. [00:41:59] Speaker B: So it could be that there are... and the way that you often frame this, again, is between the two hypotheses, between efficiency and robustness. It could be that there are just a few dimensions of variance and then nothing else. And what would that entail? [00:42:17] Speaker A: Right, yeah. So if there were only a few dimensions and the rest were zero, it would mean that you have this low-dimensional code, and that maybe you represent contrast in the image, maybe you represent red versus blue or green versus yellow, and then you only have these few things you represent in the visual population. And it means that you represent them very reliably. You can kill a bunch of neurons and you'd still know what you're seeing with respect to those features. But then complex visual computations require these fine-scale features of the image to be computed. If you want to do object recognition, or object segmentation, for instance, there are much finer features of the image you might have to pay attention to. [00:42:57] Speaker B: And then on the other end of the spectrum, if you had, I suppose, infinite dimensions, if all of the variance was equal in all of the different dimensions, then that would be a very different coding scheme, right? [00:43:11] Speaker A: Yeah. So it would mean that every neuron is independent of every other neuron, and that there aren't these lower dimensions that we saw that are driving the whole population in these kind of global ways. And if every neuron is doing its own thing, you have a small perturbation of the stimulus and a completely different neuron is going to start firing. And so it decreases your ability to be robust to noise. It also decreases your potential to generalize across stimuli. So if you have a small change in your stimulus, say it's a small change in the cat, like I was talking about, like the ears get pointier, you probably still want to think it's a cat. But if there are completely different neurons firing, it's going to be harder for downstream decoders to be able to do this discrimination in an easy way. [00:44:00] Speaker B: Are you a cat person or a dog person?
[00:44:03] Speaker A: I'm kind of neutral on the question, but I'm more of a cat person. [00:44:07] Speaker B: Somewhere in between, but closer to cat. [00:44:09] Speaker A: Okay, well, yeah, I'm allergic to cats, so I would be a cat person completely if I wasn't allergic to cats. [00:44:14] Speaker B: I'm much less allergic to cats these days than I used to be, but I still use that as the reason why I deny my family getting a cat, because I'm more of a dog person. But not shockingly, then, what you guys found is that the quote-unquote answer, at least in visual cortex, is somewhere in between these two ends of the spectrum. And not only that, but there is a mathematical relationship between the dimensionality number and the variance. So I guess that's the first question to ask: what is that power law relationship? And what does it mean? Or, not what does it mean, what might it mean, is the question. [00:44:58] Speaker A: Yeah, so this is a little hard to do without pictures, but... [00:45:02] Speaker B: Yeah, everything on this podcast is hard to do without pictures. It's ridiculous. But sorry, go ahead. [00:45:06] Speaker A: No, no. Yeah, so there's this idea of, how quickly does this power law decay? So a power law is like a straight line on log-log axes. So you can think of a straight line that has a slope of minus one. That's our decay across dimensions; a slope of minus one is what we found in the neural data. But this slope could decay more quickly, say it has a slope of minus two. Then these higher dimensions start to have less variance. So these kind of fine-scale features have less and less variance, or you have fewer fine-scale features that are encoded in the population. And what we think is that this minus one is about as high dimensional as you can be. You have as much coding of these fine-scale features as you can possibly have before you kind of break things. If you decay any more slowly, you're going to have too many of these fine-scale features, and you're no longer going to have this manifold representation. Everything is going to be too high dimensional, and you're no longer going to have this kind of robustness. [00:46:15] Speaker B: At that point, you would stop calling it a manifold, if it becomes too fine grained. I realize I don't know what the threshold is to call something a manifold, because it still has structure even though it's super fine grained, right? [00:46:27] Speaker A: Yeah. So it still has structure, but then the local neighborhoods are no longer equivalent to Euclidean spaces. Basically, these small changes we were talking about are no longer going to result in small changes in neural activity. There are going to be jumps and so forth in these small neighborhoods. That's when things change. [00:46:48] Speaker B: Why would that be bad?
[00:47:12] Speaker A: Yeah. And it. Well, it wouldn't quite. It doesn't necessarily have to be a totally different place, but it still, it would be moving much more than what you would. Than the stimulus is moving. It would be larger jumps. You could still think. So you can think of an example of something that's not a manifold. This is an example my advisor Kenneth Harris likes, which is the coast of England. I don't know if you're familiar with that, but you have all these kind of little divots and you have these kind of fine scale structures. So you do have this. You will have these kind of movements in the local space which will be disjoint, but you still might have an overall global structure that could be preserved even in that case. [00:47:51] Speaker B: Well, I thought you were going to go into self similarity in fractals there for a second, but that is partially what you found also is that it has a fractal type structure. Right. Or is it close to it? [00:48:01] Speaker A: Yeah, so I. Okay, so let's. All right, wait, hold on. There's another thing I want to unpack here because there's so many. So many things going on. [00:48:08] Speaker B: Yeah, yeah. [00:48:09] Speaker A: So backing up. Yeah. So, yeah. So this fractal idea. So the power law, if the power law is minus 1, if it decays at least as fast as minus 1, then it can be a manifold. If it decays more slowly, then it will be a fractal. But we don't know for sure that it's a manifold. So it just suggests. The math suggests that it could be a manifold, that it decays fast enough to be a manifold. But we don't know for sure that it's a manifold. [00:48:33] Speaker B: Do we want it to be a manifold? What do we want it to be? [00:48:36] Speaker A: Yeah, I mean, we think that it would be ideal for it to be a manifold because then you'd have local neighborhoods that kind of have this kind of smoothness property. [00:48:44] Speaker B: So a generalizability and. [00:48:46] Speaker A: Yeah, yeah. That you'd be less like something people often think of in machine learning. It's kind of these equivalents to these adversarial images where you take a picture of let's do an eel instead of a cat and you maybe change the color of the eel's eye. To us, the eel would still be an eel, but then the neural network might completely think it's a different image. Or you could think of even adding there's different types of Gaussian noise you can add to those images and you'll get. And these tiny amounts of noise you add can totally change the perception of the eel. [00:49:20] Speaker B: So that the neural network would then classify it as a cat, for instance, or let's bring it back from cats, a shark, let's say, or something. But yeah. So I actually want to go back to what this tells us about AI and how it might be helpful considering these adversarial examples. But let's talk about the self similarity a little bit more. You were talking about the self similarity and the nature of that. [00:49:45] Speaker A: Yeah. So it could be the case that the idea of self similarity is something like you could think of a tree with branches, and as you zoom into that tree, the branches look similar as they looked when you're further away. And so you could think maybe that the neural manifold has similar structure as you zoom in further and further and that the global structure looks like the local structure. And that is something that we're studying now and we're looking at these components of. 
We're basically doing local image perturbations and seeing how the neural activity changes, and seeing if those dimensions are similar to these global dimensions that we found. And we're not seeing this correspondence, so it suggests that it's not a self-similar structure. [00:50:30] Speaker B: Okay. [00:50:31] Speaker A: And it's not necessarily so surprising. I mean, there are so many different things that you might want to encode locally that might be different from what you might want to code globally, in terms of the types of computations you want to do with local information versus global information. [00:50:46] Speaker B: I mean, this balance... it seems like always in nature and through evolution, it's like walking this fine balance between local and global, and efficiency and smoothness, or robustness, in that sense. It's not a surprise. But I don't know, it seems very impressive. [00:51:06] Speaker A: It is really incredible. I think it's... no, I mean, I shouldn't use words like that in science, but we were really surprised when we got this result of this power law. It is something that we weren't expecting. [00:51:20] Speaker B: What were you expecting? [00:51:22] Speaker A: We didn't really know what to expect, particularly in mice. We wouldn't think it would necessarily even be that high dimensional. We thought maybe we have 10 or 100 dimensions and then we're good: let's fit a simple model and we'll be able to explain the neural responses. [00:51:40] Speaker B: Yeah. So what the power law shows is that it is very high dimensional and has this really broad capacity, then, to code global and local features. This same sort of relationship is also found in other networks, like social networks, and even in images. They have this sort of balance between self-similarity and these kind of global features. And there are examples of these power law relationships throughout lots of different phenomena. So in that respect, it's also not that surprising, I suppose. [00:52:15] Speaker A: Yeah, it's a question of whether it's inherited from the statistics of the natural world that you experience, whether it is advantageous to represent things in a way that you're always going to have more power in these low frequencies versus these high frequencies. I think that's kind of an open question, in terms of the way that they learn these representations. And it could be that they are replicating the statistics to some degree: that these global, low-frequency representations in the images are more frequent than high-frequency things like small leaves and branches, and that that's being inherited by the neural activity and the neural representation. [00:53:01] Speaker B: So you don't think it's necessarily something inherent in the brain itself, but just a capacity of the brain... I mean, that's what you're saying, right? To potentially inherit the statistics of the natural world isn't the way the brain is built per se, but it is allowed by the way that the brain is built. Does that make sense? [00:53:24] Speaker A: Yeah, so that is totally an open question. So I don't know how much of this architecture would have to be learned, or if to some extent it's already pre-wired in a way that allows these global signals to innervate many neurons in a feed-forward way that allows for this power law to exist.
It could be that it's from the architecture itself; it could be that to some extent it is learned. I guess it would be somewhat surprising to me if all of the high-frequency features were there to begin with. I would think that maybe some of those are learned from experience, but we really don't know. [00:54:06] Speaker B: Yeah, I mean, we're always at the edge of what we know and what we don't know, especially with data like this. This is just the newest kind of data. No one's ever seen this kind of thing. I mean, it's been a few years, I suppose, but it's not like single-neuron recordings, which have been around for decades now, right? [00:54:23] Speaker A: Yeah. And it's also about expanding the stimulus space, looking at natural images, and doing these experiments where you have mice that grow up without any visual information. You can raise them in the dark and see whether their representations are different from representations in mice that have visual experience. These are questions that would be really exciting to know the answers to. [00:54:50] Speaker B: Oh, I was going to ask if you guys were doing that. [00:54:53] Speaker A: Yeah, my collaborator is working on some of that at Janelia, but there are people at Carnegie Mellon working on it too. So we'll have some answers soon, hopefully. [00:55:08] Speaker B: That's an interesting question. Another thing, and I'm sure you get asked this: in this setup you have a mouse that is just being shown stimuli, and what you found is this power-law relationship between the dimensionality and the variance. Do you think that it's the same relationship, first of all, in different brain regions? Classically, like I was just saying with Chris Eliasmith, the way that he set up Spaun is to sort of reduce the dimension to a lower dimensional space. One example you could think of is forming a category of an image. The dimensionality of "eel" is very low compared to all of the visual features of an image of an eel being processed in V1. So you might think that the dimensionality law would change as a function of where in the brain it's being processed. Does that make sense, and would you think so? [00:56:18] Speaker A: Yeah, that is actually something. The motor cortex people, I'm just calling them the motor cortex people, but the people who study motor cortex, there's a strong bias in that community to think of those representations as low dimensional. And I have discussed this with them; they think that their power law would decay faster there than in visual areas, basically because the motor output is constrained to only so many dimensions. And so that might mean that the activity in that area is lower dimensional. [00:56:56] Speaker B: What do you think about, I mean, a related question is just cognitive functions, right? [00:57:02] Speaker A: Yeah. [00:57:02] Speaker B: Specific behaviors and specific cognitive functions, eventually we have to think of them as low dimensional. If we call something working memory, for example, that is a low dimensional name given to a function.
So how do you think the dimensionality law would vary with respect to something like image categorization, or processing visual images, versus something highfalutin like working memory or some of these higher cognitive functions? [00:57:35] Speaker A: Yeah, so that's a great question that we don't really have enough data to answer. A lot of the working memory tasks have been done with very few choices. Maybe you have two choices, and then the activity ends up being constrained to the manifolds of those two choices: the responses to those two choices and what you might think of as decision signals telling the mouse to go left or right. There have been slightly higher dimensional decision-making tasks posed to monkeys, like where there are multiple colors versus different motions of dots, and so that becomes a bit of a higher dimensional problem. And there they still see this activity get sent to lower dimensional modes. I don't know if that's always going to be the case. Are there many of these low dimensional modes that you might find in the natural world, when you're going about your day? It's really hard for me to know if we can generalize from the results of those studies to an animal in the real world. I really don't have a good answer for that at all. [00:58:50] Speaker B: It's still so early on. Part of the issue is obviously just how to deal with data of such high dimension. So my next question is completely unfair, which is whether you have plans for, or a vision for, how to start introducing harder tasks for the animals to perform while recording these things, and looking at how that activity changes over time, so the dynamics of it. These are unfair questions, aren't they? [00:59:21] Speaker A: Yeah. I personally run a computational lab, so I won't be doing any of those experiments. But I think the field is moving towards more complex, more naturalistic behaviors. And there are new probes, it's not calcium imaging, but there are new probes called Neuropixels which allow you to record many hundreds of neurons in freely moving mice. That will really open the doors in terms of the kinds of complex behaviors you can look at while you're recording many neurons. [00:59:51] Speaker B: And computationally, do you think that we're ready to meaningfully analyze such data? [00:59:59] Speaker A: That's a great question. I am hoping that the inferences we can make using calcium imaging data can help us analyze these kinds of smaller scale recordings. That's something I'm working on, trying to better understand what these behavioral representations are. If you can figure out this mapping in a large population of neurons, will that help you figure out what the mapping should be when you record maybe only a few tens of neurons in a single area? [01:00:30] Speaker B: In AI, at least in deep learning, there's deep learning theory, right? The thing is, we have access to every single unit at any given moment. In a deep neural network you can actually see the activity of any unit and know exactly what it's connected to and the exact strength of the connections. And it's a big question right now.
It's an ongoing field of research to figure out what information you can extract from that, what it all means, how the network en masse is processing this. And we're still not there yet with brains, obviously, but these recordings of many, many neurons that you guys are doing using calcium imaging get us a little bit closer to that. So it's not like AI knows, or neuroscience knows, really what it all means. But I'm wondering what you think, because you were mentioning the adversarial images: you take an image of, let's say, an eel, and you add some Gaussian noise to it in a very particular way, and the end result, when you and I look at it, still looks exactly like an eel. But when a specific deep learning network looks at it, it might classify it as a cat or whatever. And this relates to the dimensionality finding and the specific power law that you guys have found with respect to how it's coded. So I'm wondering, maybe you can talk a little bit more about that, and then what it means for how we might change deep neural networks moving forward, to maybe be more general, or what it means for AI, if AI can learn anything from these results. [01:02:29] Speaker A: Well, I don't know what the right thing to do is, but I think one potential thing is to change what you're trying to learn, period. Learning class labels is not going to constrain the problem sufficiently to allow you to create these kinds of smooth representations. In the natural world, we're constantly seeing images of cats, and the cats are in different locations in the image. We have to learn where the cats are; we might have to learn how the cats are moving, like optic flow information. There are lots of different things we're extracting from these images, which, evolutionarily speaking, are things that animals earlier in evolution have to do that aren't just object recognition. Mice have to be able to track objects in the world and turn around corners without bumping into things, but they don't necessarily have to say, that's a car versus that's a cat. [01:03:29] Speaker B: I've thought about this a lot lately, and I've really come to the idea that the way we have classically approached even studying brains is kind of backwards. Image categorization is such an unnatural thing to start with, right? To start with, well, there are these categories in the world and that's what our brains do, that's what our intelligence is, figuring these things out. But really, what you perhaps would be better off starting with is the behavior, the ecological functions, the needs of organisms, and building in that direction. I don't know. What do you think about that? [01:04:11] Speaker A: Right. Yeah, I think that's a really crucial thing to try to understand: what we're using vision for. We need to understand what mice are using vision for and what kinds of tasks they can do. And that's really an underexplored space, because in mice, and even in monkeys too, we do these very constrained tasks. For instance, it's very hard for a mouse: I show them two oriented gratings, one horizontal and one at 45 degrees, and they have to say which one is more horizontal. They have a lot of trouble with that task. But their neurons code for those orientations with really high precision. So the information is in their brain, but those aren't the kinds of questions they need to answer on a day-to-day basis.
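A side note on that last point, that the information can be present in the population even when the animal can't report it: it can be illustrated with a toy simulation. The tuning model, parameters, and decoder below are invented for illustration; this is not Carsen's data or analysis. It just shows that a simple population-vector readout recovers orientation from a few hundred noisy, tuned neurons with a precision far finer than the 45-degree discrimination the mouse struggles with.

```python
import numpy as np

rng = np.random.default_rng(2)

# 500 toy neurons with von Mises tuning to orientation. Orientation is
# periodic over 180 degrees, hence the doubled angle inside cos().
n_neurons = 500
preferred = rng.uniform(0, np.pi, n_neurons)   # preferred orientations (rad)
kappa, gain = 2.0, 10.0                        # made-up tuning width and gain

def population_response(theta):
    rates = gain * np.exp(kappa * np.cos(2 * (theta - preferred)))
    return rng.poisson(rates)                  # noisy Poisson spike counts

def decode(counts):
    # Population vector on the doubled angle, then halve it back.
    z = np.sum(counts * np.exp(2j * preferred))
    return (np.angle(z) / 2) % np.pi

errors = []
for _ in range(200):
    theta = rng.uniform(0, np.pi)
    err = decode(population_response(theta)) - theta
    err = (err + np.pi / 2) % np.pi - np.pi / 2   # wrap into (-90, 90] degrees
    errors.append(np.degrees(err))

print(f"median absolute decoding error: {np.median(np.abs(errors)):.2f} deg")
```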
[01:05:02] Speaker B: Yeah, that's kind of crazy to think about, actually, because that's one case where it's at odds with the story that evolution is chugging along and doing the best, most efficient thing and all of that jazz. To have all of the information there and then be unable to use it is an interesting thing. Or to not use it, or to discard it. Who knows if they could use it, I suppose; it's just not relevant, right? [01:05:27] Speaker A: Yeah, I guess it's been refined in a way that you have these many different features, refined in such a way that it is high dimensional. But then it's a question of which dimensions of that space you use to drive behavior. And that's a really exciting question, and it's a really hard question to answer. [01:05:50] Speaker B: Do you think that we are right next to really big advances in AI, or do you think that's a long way away still? Because there was the deep learning revolution and everyone jumped out of their chairs, and now we're all sitting back down in our chairs realizing the many, many limitations, and really how far deep neural networks are from so many of the things that we consider important cognitive functions. [01:06:19] Speaker A: So what do you see as the most important thing missing from AI? [01:06:23] Speaker B: Like with deep learning? [01:06:25] Speaker A: Yeah. [01:06:26] Speaker B: Oh, I would have to say, well, I don't know. It would just be a guess. That's part of the problem, that we don't know what the important missing thing is. But for one thing, like you said, I would say embodiment and interaction with the world is crucial. But there also needs to be something at stake for the AI agent. I don't know about pain or emotion or death, but there needs to be something at stake to force the issue of layering on all of these different cognitive functions within the system. But I don't think we understand the system well enough to begin with to know how to build that in. And that's where you come in, because you're doing this great work looking at this at such a large population level, and I think it's really needed to move our understanding forward, of even the right questions to ask, for instance. [01:07:28] Speaker A: Thanks. I think with AI, it's hard to say. If we add things like embodiment and emotion, it might change what kinds of representations they learn. But still, at least in the case of computer vision, the AI is very good at these tasks whether or not it has these representations. So it will be a question of whether we make them more like the brain. At least in the case of computer vision, we might be getting closer to a place where they're going to be more efficient, have fewer parameters, and maybe be more robust, rather than necessarily solving new problems that they can't solve.
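One more side note, on the adversarial examples that have come up a few times: the basic geometry can be shown with a toy linear classifier, invented here for illustration and nothing like a real deep network. The point is that a tiny perturbation along the classifier's worst-case direction, which is what gradient-based attacks find, flips the decision, while random noise of the same size almost never does. A smoother, more manifold-like representation is the hypothesis discussed above for why such tiny perturbations should matter less.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear "classifier": score > 0 -> "eel", else "not eel".
# (Illustrative only; real networks are nonlinear, but small worst-case
# perturbations exploit the same local, roughly linear geometry.)
d = 10_000                                   # pixels in a fake image
w = rng.standard_normal(d) / np.sqrt(d)      # classifier weights, norm ~ 1
image = rng.standard_normal(d) + 5 * w       # an image scored as "eel"
predict = lambda x: "eel" if x @ w > 0 else "not eel"
print(predict(image))                        # "eel"

# The worst-case direction for a linear model is along w itself:
margin = image @ w
step = 1.1 * margin / np.linalg.norm(w)      # just enough to cross the boundary
adversarial = image - step * w / np.linalg.norm(w)
print(predict(adversarial))                  # flips to "not eel"
print(np.linalg.norm(adversarial - image) / np.linalg.norm(image))  # a few percent

# Random noise of the same norm almost never flips the decision:
noise = rng.standard_normal(d)
noise *= step / np.linalg.norm(noise)
print(predict(image + noise))                # still "eel", with high probability
```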
[01:08:09] Speaker B: Yeah, I mean, this also just goes back to what the goal is in AI. And I'm thinking less and less that the goal is to make human-level AI, because I don't know what human level is. I think that humans are terrible at many things, and that's just not what we would actually want AI to do. Why would we want to replicate our terrible visual category systems, relative to an AI, or something? [01:08:38] Speaker A: Yeah, I think there are instances of generalizability where AI has trouble, like learning many tasks, in terms of capacity. And maybe if we better understand how the brain represents these lower level features, it might help, at least in the case of computer vision, in terms of having a more universal model that can solve all of these tasks. But in terms of a universal model for how to learn a sequence of actions, and these other more complex things that require memory and that sort of thing, I don't think my work is necessarily going to help with that. [01:09:27] Speaker B: Do you think that it's unnecessary to look to the brain at all for building better AI? Because it's an open question, right? What you're saying right now is that what we should maybe do instead is look at the best way to do, for example, memory, or visual categorization, and have these as separate expert components that we can then put together in a system, rather than build them all into the same thing, like a brain is. Right? [01:09:56] Speaker A: I'm not sure. I see it partly as: we're going to make these advances in AI potentially faster, and we already have, because I think neuroscience isn't answering these questions. So I think regardless, it's going to help to be able to build these circuits in a machine and see if we can learn some principles from them that we can then use in the brain. And it might be the case, like you're saying, that having these components separate and learned separately is going to be the best way to do it, rather than learning them in a holistic way, the way that a human learns in the world. [01:10:35] Speaker B: I just have this hunch that in the long run, so neuroscience is super slow, right, and AI is super fast right now, but that in the long run we're going to end up building principles into AI that we discover through the brain, and we just won't be able to make something better necessarily, depending on what we want. Obviously, like you're saying, if we want to categorize cats and eels, then what we have is great, for example, although there are the adversarial examples. I don't really have any principled reason for feeling this way; it's just a bet I have with myself, I suppose, that in the end there will be pretty fundamental contributions that the brain makes to AI. And I'm just really blowing in the wind right now, blowing hot air, right? But I don't know, that's just my feeling. You don't feel that way? [01:11:35] Speaker A: I think it's going to be a lot further down the road, is my concern. [01:11:39] Speaker B: I totally agree with that. [01:11:41] Speaker A: So I am fascinated by the visual system, and it's also a relatively easy place to study. I can show images, I can look at the responses. Someone who's studying, say, the hippocampus has so much more to worry about.
I just have this relatively feedforward system, from the retina to the thalamus and onward. If you're looking at something like memory and integrating these things in the hippocampus, those brain areas are getting inputs from everywhere. And so really teasing apart exactly how they're doing this computation, I think, is going to take longer than it will take the visual field to get a better sense of how visual computations take place. [01:12:27] Speaker B: Yeah, you're right. What you do is really quite simple, right? [01:12:31] Speaker A: Well, relatively speaking, yeah. I think those problems are going to take a lot more experimental work as well, to tease apart how those computations happen. [01:12:41] Speaker B: Well, that is a huge problem, because experiments are so slow. [01:12:44] Speaker A: Yeah. [01:12:44] Speaker B: So, Carsen, what's going on in your lab? Are you still hiring? I know you're a pretty new faculty member, and I know you had been hiring for some time. Are you still looking for people, and what's happening moving forward? What are you guys doing? [01:12:59] Speaker A: Yeah, I'm still looking to hire postdocs and grad students, and we're working on these visual computations. We're also looking at these behavioral representations in visual cortex and other areas. I have a few people in my lab now who are really great, working on some of these questions. In particular, we're working on invariant texture recognition in mouse visual cortex. [01:13:25] Speaker B: So let's switch gears in the last few minutes here. I have another listener question, and this is completely orthogonal to our previous conversation. Gabriela asks, well, she said she knows that you're very pro diversity, equality, and inclusivity, and she's wondering, and this is a very big question: how can we become a more diverse, equal, and inclusive community? Do you have ideas about this? [01:13:58] Speaker A: Yeah, that's a really hard problem to solve. [01:14:03] Speaker B: Can we start with how much of a problem it is? Because I think it's often just assumed these days. It's assumed, oh yeah, that's a big problem, but it's never asked whether it is an actual problem. And that's where I get in trouble, just asking the simple question of whether it actually is a problem, and if so, then, yeah, how do we move forward? [01:14:23] Speaker A: Yeah. So I can use the example of Janelia. I think we're around 25 to 30% women group leaders. And we have one Black group leader out of maybe 30 to 40. So we're definitely not equivalent to the population distributions in terms of people in positions of power in science. That's one way to gauge how diverse the community is. And then it's another question of how inclusive the community is. If the institute is more diverse, then it may have a more inclusive community, but that's not necessarily the case either. [01:15:13] Speaker B: But most likely they probably do covary, I would imagine. [01:15:17] Speaker A: Yeah. And to ask how inclusive it is,
you have to do more of these kinds of climate surveys. These are done in the field, and they consistently show that people from underrepresented groups and women in science don't feel as welcome, that they experience microaggressions from members of the dominant race and gender, and these sorts of things. [01:15:44] Speaker B: Is that how inclusivity is measured, through surveys? [01:15:48] Speaker A: That's one common way that I've heard of it being done, but I don't know if there's a better way to measure the atmosphere of an environment. [01:16:00] Speaker B: Yeah, I don't know if there would be any other way. These are hard questions, because they're also dependent on people's perceptions. And this is a realm that I'm really not familiar with, or should really even be speaking my opinion on, I understand, because I'm part of the privileged white male group that is a current target of all these things, right? And so it's hard for me to know how to think about these things. So I should just stop talking, basically? [01:16:30] Speaker A: Well, I wouldn't say that you're a target. It's more that we as white people in science have to acknowledge that we've had privilege going through our education, and that we're willing to step up and get the training to be good mentors and support trainees from many different backgrounds, and acknowledge that we have biases in the way that we hire and how we've been hired, and that we've benefited from these systems. It's getting to that point of understanding. [01:17:05] Speaker B: So, going back to how to improve this and become more diverse: do privileged people need to go against their biases? I guess what you're saying is they need to be trained better to be aware of their biases, so that they can overcome them in making decisions like hiring and such. [01:17:25] Speaker A: Yeah, and in the way that they act in the community and how they interact with people. One common thing is that people like to hire people they get along with, and it's often people in your in-group that you get along with better. So if you're a white male, maybe you're more likely to hire postdocs that are white male. [01:17:43] Speaker B: Right. [01:17:44] Speaker A: And you have to acknowledge that that's not going to be beneficial for your lab, that you need many perspectives in your lab, and that just hiring someone you perceive as someone you get along with is not necessarily even the best fit for your lab. [01:18:00] Speaker B: Sure, of course. But do there need to be institutional rules that ensure some sort of equal outcome, or do there just need to be policies that ensure equal opportunity? [01:18:13] Speaker A: I think both. To get equal outcomes, you need to make sure that everyone has equal access to mentoring once they're in the program, and that they're all given the same kind of treatment, and then equal opportunity as well, in terms of getting grants and that sort of thing. A recent study from Kenneth Gibbs was about this: there are many people from underrepresented groups getting PhDs now; as a field, we've done better to some extent on that front. But then many of them leave before they go on to a postdoc, or they leave after they finish a postdoc.
And so it's a question of how to better advise students. What they found in the study as well was that people from underrepresented groups were often less confident in their abilities. And that comes down to an issue of mentoring, that the mentors were not giving adequate support to these trainees, and not giving them the advice they would need to succeed in science. [01:19:25] Speaker B: I mean, these are really hard things, I think, moving forward. [01:19:27] Speaker A: But it's a hard thing, because the big problem is that none of us get training in how to be good mentors. We become group leaders and lab heads, and we aren't necessarily in a position to be the best advisors that we can be. So I think that is a step that every institute can make. [01:19:57] Speaker B: But is there an objectively good mentor? I mean, isn't there some subjectivity in that as well? [01:20:05] Speaker A: Yeah, there are definitely different mentoring styles, but it also comes down to being able to read the kind of student you have and changing your mentoring style depending on the student you're working with. And I think if someone has training, maybe they'll recognize those sorts of things, and recognize how to better support the students they have. Not that everyone has to be the same kind of mentor, obviously. [01:20:31] Speaker B: Do you think, just personally, and maybe you do know the data on this, do you think that it is a few bad actors that are repeatedly, I don't know, performing these microaggressions and making people feel excluded? Or do you think it really is a much more systemic, low-level thing that the vast majority of us do without knowing? [01:20:57] Speaker A: So there are two aspects to that. If there are bad actors, then there are departments, and lots of people in those departments, that are enabling them and not changing their behavior. So that already suggests there's an acceptance there in some sense. But on the second point, I think unless we confront the fact that we have these unconscious biases, all of us might potentially be treating people differently without realizing it. Particularly in the US, where we often grow up in relatively segregated societies, depending on what city you grow up in, it can often be the case that you mostly interact with people of your own race. So you don't have the experience and the perspective to act as an inclusive person, and you might not realize it. [01:21:50] Speaker B: How far do you think we have to go? [01:21:52] Speaker A: I mean, until everyone feels like science is an inclusive environment. [01:22:00] Speaker B: Is that even possible, for that to be fully manifest? Is that a possible future, even? [01:22:05] Speaker A: I think that people will always feel left out, but then it's a question of whether there's a skew or a bias, whether people from various groups feel more left out than other groups. That's a big problem. If we can start to correct that, then we'll be in a better place. But we also need data on it. If institutes have done these surveys, they need to keep checking in on whether the policies and changes they're making are actually working and making a difference.
And from studies that people do on these kinds of initiatives, they always say that having numbers and keeping track of them keeps departments accountable. [01:22:51] Speaker B: Right, yeah. I mean, there are initiatives like BiasWatchNeuro, I think it's called, from Yael Niv, and there are some other metrics like that. And I think that sort of external, viewable data probably goes a long way toward correcting a wrong, I suppose. [01:23:09] Speaker A: Yeah. And I think that's something that lots of places are starting to change, too. For tenure, there are certain institutes that require that a faculty member get recommendation letters from their graduate students. So that raises the bar: they have to be good mentors to be able to progress in their careers, too. There are lots of ways to make sure that people are accountable, but they're not necessarily used in many institutions, and they can often be neglected even in some of our top institutions in the US. [01:23:40] Speaker B: Okay. So, Gabriela, I hope that helps. Do you have anything else to add, Carsen? This is not something that I talk about or ask about much on the podcast, knowing that it's a sensitive subject and also way outside of my wheelhouse, any sort of domain of expertise. So, is there something else to add before we move on? [01:24:01] Speaker A: Well, I should say one thing: I don't consider myself an expert on this either. I've read about it, but I don't know the best solutions. I just suggested some things that seem like low-hanging fruit that all departments can do. And then the lowest-hanging fruit, I would say, is that all of us can share our code and our data from our experiments. That allows, for instance, labs with fewer resources to use our data and do analyses, and say, okay, I don't actually have to do another experiment for this paper; I can take data from someone else; I don't need the resources to do that. People then hopefully start to see science as a collective pursuit rather than a competition where we're pitting ourselves against each other. [01:24:45] Speaker B: Do you think we're on our way toward that, toward these larger across-lab collaborative efforts? I mean, that's just going to continue to happen, don't you think? [01:24:56] Speaker A: Yeah, I hope so. And I think things will have to change in terms of the incentive structure to some extent too, in terms of what kinds of papers are accepted and how grant money is allocated. So there are steps that still need to be taken to help this problem of competition, but I think there are steps in the right direction. [01:25:21] Speaker B: Yeah. I mean, even when I was in graduate school, I was invited to give a talk at Yale, and I had collected all this experimental data, and these were computational modelers that I was sitting around talking with. They were saying they didn't understand why experimentalists wouldn't just readily give their data to them, so that they could then build models with it. And to me, it was extremely clear, because I'm trying to build my career, and if I give you my data... This is a classic problem, right? It's decades old. Maybe it was in the 90s that someone wrote this famous paper.
I can't remember what it was about, but it's a shame that I felt that way, that if I gave my data to you, it would vastly decrease my chances at a good career, because this is my data that I could build a model on. Of course, I never did, but there is that feeling. And that's the kind of thing that hopefully is just going to disappear more and more. [01:26:14] Speaker A: Yeah. And as long as journals and people hold people accountable for sharing data, I think the labs that have the most resources are the ones that should be doing it. If you get a paper in a big journal, then you should be sharing your data. You have the resources to share the data and to support your trainees, and they're already more likely to get a job because they got that big paper out. [01:26:41] Speaker B: They crossed that off the list. Yeah, that's true. All right, so Carsen, thanks, and continued success in your young but blossoming career. It's been great talking to you. [01:26:52] Speaker A: Thank you so much. Thanks for asking really interesting questions. You've clearly done your homework in terms of the questions that I'm working on, so it's really fun to have an engaging conversation about them. [01:27:17] Speaker B: Brain Inspired is a production of me and you. I don't do advertisements. You can support the show through Patreon for a trifling amount and get access to the full versions of all the episodes, plus bonus episodes that focus more on the cultural side but still have science. Go to braininspired.co and find the red Patreon button there. To get in touch with me, email [email protected]. The music you hear is by thenewyear. Find [email protected]. Thank you for your support. See you next time.
