BI 226 Tatiana Engel: The High and Low Dimensional Brain
Brain Inspired
Dec 03 2025 | 01:36:18

Show Notes

Support the show to get full episodes, full archive, and join the Discord community.

The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.

Read more about our partnership.

Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.

To explore more neuroscience news and perspectives, visit thetransmitter.org.

Tatiana Engel runs the Engel Lab at Princeton University, in the Princeton Neuroscience Institute. She's also part of the International Brain Laboratory, a massive across-lab, across-world collaboration you'll hear more about. My main impetus for inviting Tatiana was to talk about two projects she's been working on. One of those is connecting the functional dynamics of cognition with the connectivity of the underlying neural networks on which those dynamics unfold. We know the brain is high-dimensional, with lots of interacting connections, and we know the activity of those networks can often be described by lower-dimensional entities called manifolds. Tatiana and her lab work to connect those two levels of description with something they call latent circuits. You'll hear about that, and you'll also hear about how the timescales of neurons across the brain are different but the same, why that is cool and surprising, and we discuss many topics around those main ones.

0:00 - Intro
3:03 - No central executive
5:01 - International Brain Lab
15:57 - Tatiana's background
24:49 - Dynamical systems
27:48 - Manifolds
33:10 - Latent task circuits
47:01 - Mixed selectivity
1:00:21 - Internal and external dynamics
1:03:47 - Modern vs classical modeling
1:14:30 - Intrinsic timescales
1:26:05 - Single trial dynamics
1:29:59 - Future of manifolds

Episode Transcript

[00:00:04] Speaker A: Often we have situations where single cells are very complex, but activity is also very, very high dimensional. Then we can always find low dimensional projections of this activity which will show some lawful structure, but it will be a very reduced picture of what actually happens in the entire circuit. Our thinking was, well, timescale seems to be this very simple metric, but such an incredible marker of organization, functional organization of cortical areas. Can we use this simple metric to understand the logic of how temporal information processing is organized on the scale of the entire brain? The technology was just being kind of. It was very new. It was just being created at the time when IBL was formed. [00:00:58] Speaker B: When was this? [00:01:00] Speaker A: I think they started in 2018. [00:01:02] Speaker B: Okay. All right. [00:01:03] Speaker A: Yeah. [00:01:04] Speaker B: Everything is changing so fast. [00:01:05] Speaker A: So fast. [00:01:14] Speaker B: This is Brain Inspired, powered by The Transmitter. Hey everybody, I am Paul. Welcome to Brain Inspired. Tatiana Engel runs the Engel Lab at Princeton University in the Princeton Neuroscience Institute. She's also part of the International Brain Laboratory, which is a massive across-lab, across-world collaboration which you will hear more about in a moment. My main impetus for inviting Tatiana on today was to talk about two projects that she's been working on. [00:01:48] Speaker A: One. [00:01:48] Speaker B: One of those is connecting the functional dynamics of cognition with the connectivity of the underlying neural networks on which those dynamics unfold. So we know the brain is high dimensional. It has lots of interacting connections. We know the activity of those networks can often be described by lower dimensional abstract entities called manifolds, for example. And Tatiana and her lab work to connect those two processes with something they call latent circuits. So you'll hear more about that. You'll also hear about how the timescales of neurons, their intrinsic fluctuating timescales, are distributed across the brain and how those timescales are different but the same, and why that is cool and surprising. And we discuss many topics around those main topics. Thanks for listening. You can learn more about Tatiana and her work via the show notes at braininspired.co/podcast/226. Thank you to my Patreon supporters and to The Transmitter for all of your support for this podcast. Enjoy. Tatiana, I'm going to ask you a question that you've been asked probably 20, 25 times. How was SfN? [00:03:11] Speaker A: SfN was great. I didn't go for the whole meeting. It felt less busy than usual. [00:03:17] Speaker B: Yeah, why was that? [00:03:19] Speaker A: Well, I guess because of funding difficulties. That's my guess. But this year we had way fewer attendees than usual. [00:03:27] Speaker B: There was still like 20 something thousand. [00:03:29] Speaker A: 20,000. But in the past it reached up to 35,000 attendees. Yeah, but I wouldn't say it was a negative thing. It felt like, for SfN, it's more enjoyable with fewer people attending. [00:03:42] Speaker B: Oh, so you like the smaller. I like the smaller conferences, personally. I mean, nothing against SfN. I love SfN. But. Yeah, but your talk was great. The whole session was great. You were in a really good session. So how do you feel it went? [00:03:58] Speaker A: No, that was a super exciting session to be in.
It was organized by Chandramouli Chandrasekaran and Paul Cisek. Oh, sorry, no. And Chris Fetsch. Now I misspoke. [00:04:10] Speaker B: Paul was there. [00:04:11] Speaker A: Paul was there, as a speaker. But I guess Chris and Chand were the organizers. And I really like how it came together. The overall topic was that there is no central executive. And the idea was to look into multi-area dynamics which underlie decision making. And I like how it covered all the different angles from which you can approach this problem. Thinking about designing very clever tasks which will reveal differences between different areas contributing to decision making, while if you look in a simple task, they all may generate very similar looking types of activity, but also looking into the modeling of inter-area dynamics. So how models can help us to disentangle contributions of individual areas. And of course what I spoke about was work by the International Brain Lab, or IBL, which was a very large scale collaboration of 22 labs across the globe which joined forces with the idea to record from every single part of the mouse brain. [00:05:19] Speaker B: How did that get started? [00:05:20] Speaker A: So honestly I cannot tell you, because I didn't join IBL from the very beginning. [00:05:24] Speaker B: Okay. [00:05:25] Speaker A: So I think IBL started in 2018 and there were a few people whose idea it was primarily. Now I may be not super accurate, but to my knowledge it's Zach Mainen and Alex Pouget who kind of had this vision and then they started to put people together. I joined only in 2021. Yeah. Because they had. [00:05:45] Speaker B: So you have to like sign a paper saying I'm part of it. Like how does that work? [00:05:50] Speaker A: So I guess they realized they could add one more lab to the collaboration. [00:05:53] Speaker B: Oh. [00:05:54] Speaker A: Because one of the original labs left the collaboration, so there was space. So they had a search for a new theory lab to join IBL and I guess they interviewed a couple of people and I was super excited to be invited to the interview. [00:06:08] Speaker B: So they have like a whole. Okay. I didn't realize it was that formal. I thought you were just like, hey, I want to be in. So. So International Brain Lab is, like, I don't know how many labs, you might know, but 22. Okay. And it's. What is the mission of IBL? Do you. Can you articulate that? [00:06:27] Speaker A: Yeah. So I also want to clarify. There was the original IBL and now we have IBL 2.0. So maybe first let me speak about the original IBL. So the original group of people, to my knowledge, they all just found each other, pretty much, like thinking about who would be a good person to join this collaborative group, who would share the vision of doing large scale collaborative neuroscience. And I feel like my lab and Nick Steinmetz's lab were just two labs which were kind of younger labs compared to other labs in IBL who have been around longer. So at that time I was an assistant professor, had just recently started my own lab. So it was a great opportunity. So the mission of the original IBL was really to take on for the very first time this incredible task to record from the whole mouse brain, with the idea that usually we focus on very particular brain regions which we carefully select based on what we already know about the brain, based on brain anatomy, based on prior knowledge. And we refine our hypothesis and keep asking more and more detailed questions, but zoom in on just a very tiny fraction of the brain. [00:07:41] Speaker B: Yeah.
[00:07:42] Speaker A: And Neuropixels technology was just being kind of. It was very new. It was just being created at the time when IBL was formed. [00:07:51] Speaker B: When was this? [00:07:53] Speaker A: I think they started in 2018. [00:07:56] Speaker B: Okay. [00:07:56] Speaker A: All right. [00:07:57] Speaker B: Everything is changing so fast. [00:07:59] Speaker A: So fast. Yeah, yeah. [00:08:01] Speaker B: Okay. [00:08:02] Speaker A: Right. So not many labs actually could use Neuropixels or knew how. You know, you can collect data at a very large scale. And also I think they realized back then, if they're going to collect data of this scale, it will be a big effort, even beyond data collection, because the whole data processing pipeline, data management pipeline, needs to change. Because IBL didn't record, obviously, from all these brain regions in a single mouse. Right. That would be impossible. You would just like pierce the brain of this mouse. There would be no more brain. So the data had to be pooled from many animals. Which raises the question, okay, everything needs to be standardized. Behavior needs to be standardized across different labs. How do we do that? Recording needs to be standardized. Data pre-processing quality needs to be very high. [00:08:57] Speaker B: That's a lot of constraints. It's a lot of like, I get itchy when I hear that because it means I'm going to spend all my time formatting my data and ensuring my submission to the pool of data meets the proper standard. It seems like it would just take all my time and I would not have any more time to generate new ideas, for example. [00:09:21] Speaker A: So IBL actually thought exactly about this issue. So besides postdoctoral researchers and graduate students, IBL also has a core of staff software engineers. So they are very skilled in software engineering and they don't necessarily come from a neuroscience background, although some of them do. And this was exactly their task: to engineer data architecture, data analysis pipelines and help researchers, you know, to get to their questions sooner. So IBL Core is a super amazing resource. They automated so many things. Not just kind of getting the data out of the rig in a standardized format, but also all the histology pipelines, because every recording was reconstructed to know exactly where it actually went in the brain compared to where it was targeted. In addition, IBL Core also works on behavioral analysis, video processing of animal behavior. [00:10:27] Speaker B: So how does. Okay, so I'm very curious. I'm sorry, like, as an outsider to the IBL, I see, when I see on a publication, International Brain Laboratory, and I see, oh, that's a lot of power is what I see. Like, oh, I'm part of this large organization. We are going to run over the rest of neuroscience. I don't mean that in a negative way. It's just like, oh, how can I compete with such a large institution? So in my mind, it's sort of angel on one side, devil on the other. There seem to be pros and cons of it, and I wonder whether I would be interested in joining. At the same time, I myself am thinking like, oh, how can I get my hands on some of their data? And should I. Is it easy to get it? Is it hard to get it? Will I spend all my. Do I need to hire a crew to get the data and get it to me in the right format, et cetera? [00:11:19] Speaker A: Okay, so this is actually the core idea of IBL. IBL is not there to compete with everybody else. Right?
It's not this evil force joining together to overrun everybody. It's quite the opposite. It's this huge force to create and share with the entire community, all of it. The IBL data architecture is publicly shared. The spike sorting pipelines are all publicly available. All IBL data is publicly available. And it is part of the IBL policy. Right. The data will be. Moreover, data according to IBL policy must be released, now I forgot, six months or one year after data collection, even if it's prior to publication. So there is a very strong commitment to share everything, resources, tools and data, with the entire community. But the advantage is that IBL Core put a lot of effort into making this data very accessible. They also do a lot of outreach activity, running hackathons at Cosyne and other occasions to actually teach people how to download, how to start using IBL data. So in my experience as theorists, we used to work a lot with data from collaborators or just sourced from the Internet. And very often you spend a lot of time working with the labs, working on data pre-processing. [00:12:43] Speaker B: There was a graduate student who used to sit to the left of me in the lab and she got her PhD. She left. She left data to be analyzed. She's doing something in industry and now I'm sorting through her code. She's a very lovely person and I have to read her code, I have to figure out her code. It's a lot of work. She had some coding principles that I don't subscribe to these days. People are different, and that has taken a lot of time. But if I went to IBL, I could do it in a day, let's say. How long would it take me to get a data set and start using it? [00:13:20] Speaker A: I think a day would be enough now, because it's also very well documented. One of the challenges, often there is a paper published and a very cool data set, and you download the data set and now you start the guessing game. Oh yeah, what is this field in this MATLAB file named random number sequence? And there is no documentation. That's not the case for IBL. I guess all this effort went explicitly to make sure that IBL data will be used by the community. [00:13:51] Speaker B: So what was your good idea? [00:13:53] Speaker A: I didn't submit. I'm actually on the IBL 2.0, what is it called? Advisory board. I don't know what's the proper name, but there is a set of original IBL PIs who continue to serve on the IBL board. So I will be involved in selection of the new partners for the IBL and also making sure things run smoothly. So I think I have a conflict of interest. [00:14:20] Speaker B: Yeah, you definitely do. All right, well, let's. So we've talked a lot about the International Brain Laboratory and I don't remember how we got onto that. And that was a burning question that I had that I was going to ask you later. So I'm glad that we have already gotten to it. So where does that fit into the arc of. I mean, I have a lot of things I could ask you, but maybe just segueing from IBL. I mean, I guess you're a young. Are you a young investigator? It seems like you've been around a long time, unfortunately. [00:14:50] Speaker A: Right. Like I like to think about myself as a young investigator, but I feel at some point. Yeah, we are not that young anymore. [00:15:00] Speaker B: So. Okay, well, so at some point. Maybe you can frame it. Like, what is the arc of your career thus far? Like, at some point you joined the IBL. Right. How did that come about? And where were you when you did that? Like, really broad strokes. And where are you now?
[00:15:15] Speaker A: IBL is a very different activity from what my lab does usually. Right. It was really an opportunity which brings the theorists out of the usual comfort zone, and that's when I like it. So what we do usually is far away from IBL. We do collaborate. So my lab is computational and theoretical, so we don't do any experiments. But what we work on is developing methods for neural data analysis and developing computational models which can help us understand how large neural populations wire to generate dynamics, which can ultimately support interesting behavior. [00:15:58] Speaker B: Which did you begin most interested in? The neural activity or the cognition, behavior? Like, what drew you in, and has that changed? [00:16:08] Speaker A: Right. So I got into computational neuroscience almost maybe by a coincidence, because in my undergrad I studied physics. I never took a biology class. I never actually came across any neuroscience books. So it was not the story that I was interested in the brain from childhood. I was kind of disinterested in biology more generally. [00:16:33] Speaker B: I feel like stamp collecting, but. Oh, you were about to say why? Sorry, you were about to say why? [00:16:38] Speaker A: Maybe like this physicist chauvinism. Right. Like, you know. [00:16:41] Speaker B: Yeah, okay, there is total chauvinism from physicists. Why is that? But that's a separate issue. [00:16:47] Speaker A: But I am over it now. No, but honestly, I just, I guess maybe never came across the right books. Like, I don't know, just was not exposed. Um, but I was doing my PhD in Germany, in Berlin, and it was around the time, towards the end of my PhD. The German government had an initiative called the Bernstein Network for Computational Neuroscience. [00:17:10] Speaker B: Yeah. [00:17:11] Speaker A: So the government decided to establish computational neuroscience as a strong field in Germany. So they started to, at the time, give like, grants for physics professors to switch over. No, to go talk to biology professors. [00:17:29] Speaker B: And establish college and tell them how to do science. [00:17:31] Speaker A: Is that the thing? No, just to get interacting. So that was kind of my first serious exposure to what neuroscience is actually about. So I was doing a PhD in physics, but I met Andreas Herz, who was a professor in Berlin at the time. And I started to work with Susanne Schreiber, who was his postdoc at the time. So she's now also a professor in Berlin. And we were kind of thinking about modeling spiking activity of single cells. And that's all I knew about the brain. I didn't even know about cognition. So they just developed some models to predict spikes of single neurons. Which sounded cool because. [00:18:16] Speaker B: Wait, when was this? Because there are already like there are Hodgkin, Huxley, there's integrate-and-fire neurons. There are. [00:18:25] Speaker A: Yeah, sorry, yeah. Right. So the topic of my PhD was first passage times. How was it called? I don't remember exactly. But non-Markovian dynamics of single neurons as a first passage time problem. So the idea is that if you initialize a neural membrane and let it go and it's driven by random noise, at some point it will get to threshold and fire a spike just by chance. So there is some chance involved because there is ion channel noise. So there are these internal fluctuations in the membrane dynamics, but there is also a threshold, and as soon as you reach it for the very first time, there is a spike and everything gets reset.
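A minimal sketch of the first-passage-time picture described here: a leaky membrane driven by Gaussian noise drifts until it first crosses a threshold, and the collection of crossing times plays the role of an interspike-interval distribution. All parameter values below are illustrative and are not taken from the work discussed.

```python
import numpy as np

rng = np.random.default_rng(0)

def first_passage_times(n_trials=1000, tau=20.0, v_thresh=1.0,
                        drive=0.03, noise=0.15, dt=0.1, t_max=1000.0):
    """Time (ms) at which a noisy leaky membrane first crosses threshold, per trial."""
    n_steps = int(t_max / dt)
    v = np.zeros(n_trials)                 # membrane variable, starts at rest (0)
    fpt = np.full(n_trials, np.nan)        # first-passage time for each trial
    for step in range(1, n_steps + 1):
        # leaky integration + constant drive + Gaussian "channel" noise
        v += dt * (-v / tau + drive) + noise * np.sqrt(dt) * rng.standard_normal(n_trials)
        crossed = (v >= v_thresh) & np.isnan(fpt)
        fpt[crossed] = step * dt           # first threshold crossing = spike time
        v[crossed] = 0.0                   # reset after the spike
    return fpt[~np.isnan(fpt)]

times = first_passage_times()
print(f"mean first-passage time: {times.mean():.1f} ms, CV: {times.std() / times.mean():.2f}")
```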
So the first passage time problem is a mathematical problem which has many applications, not just in neuroscience. Right. You can think about financial markets using the same mathematical tools. Right. If you have a stock and the stock price evolves, maybe you should sell it when it reaches some threshold value for the first time, et cetera. So I was working on this type of first passage time problems, and then our collaboration with Andreas and Susanne went on to apply them to explain the distribution of interspike intervals in neurons which have interesting subthreshold dynamics. So where there is not just chance involved, not just noise, but also we were looking at cells which have subthreshold oscillations. And these cells can generate very complex interspike distributions, because you can imagine this oscillation will modulate over time how likely you are to fire a spike. Right. And my work was to derive analytical results for the shape of this multi-peak distribution. And then we applied it to explain firing of these single cells, actually in entorhinal cortex. I had no idea about the bigger picture. Right. So for example, at the time. At the time. Right. So I feel it's. It was like shortly after, maybe just like a couple of years after grid cells were discovered. I had no idea it was entorhinal cortex, you know, because they handed me this data. I didn't even know what cortex really is, to be honest. So. But I kind of found the purpose, because before that my work was doing these analytical calculations, but I never really understood why. [00:21:11] Speaker B: So you saw it just as a purely mathematical endeavor or what? [00:21:16] Speaker A: This is how my PhD started. Right. So okay, so we have this interesting math problem. Can we look for analytical solutions to it? But then I always struggled because it felt. And if we solve it, then what? Like, right. What is the question? So. And that's when I got exposed to the neuroscience application. I felt, oh, that's cool. It seems like this field has a lot of questions, and this is what I was actually looking for. [00:21:45] Speaker B: Yeah. Join a field that has no answers and all questions. [00:21:50] Speaker A: So that was good. But also I feel it was also super good because I was so naive. So after that I decided to do a postdoc in computational neuroscience and I joined Xiao-Jing Wang's lab at the time. And I remember like those first years being in a neuroscience department, it was amazing. You go to any seminar and you're amazed. [00:22:15] Speaker B: You learn something totally new. Especially when you know nothing. [00:22:18] Speaker A: When you know nothing, it's just mind blowing. So I remember still, it was how many years ago? I don't know. But. But I remember Earl Miller came to. [00:22:27] Speaker B: Give a talk back before he was oscillations crazy. Sorry, I'll edit that out. [00:22:34] Speaker A: That's fine. So and, and, and he was presenting this work. It was already many years ago. They were talking about how they record from IT cortex and they teach animals to categorize these morphs of cats and dogs. And neurons developed this tuning for the learned category. To me it was just like mind blowing. It's so cool. Especially when you don't know anything. [00:22:58] Speaker B: No, I mean it's cool that you have that moment and you can like you remember that because it was so influential. So from.
I grew up in an experimental background, and for a lot of us, like in the non-human primate world and others, like recording single neurons, everyone remembers like the first time they're in the lab and a monkey or an animal is doing a task and you're recording neurons and you hear the neurons and you hear the pops of the neurons and you can hear them being modulated while they're doing the task. Everyone has that like that moment like, oh, that was the first time. So this sounds like that was the first time for you, but in a theoretical way. [00:23:33] Speaker A: Well, I guess it's slightly different. Right. Because what you describe is kind of your first moment of discovery, I guess. [00:23:41] Speaker B: Well, it doesn't even have to be. You don't have to be doing the experiment. But you know, you're visiting a lab and you go in and they're showing you around and stuff. And it's like going to a seminar kind of. And you're, you know, you're saying, oh, it's modulated based on the decision, you know, so it's okay. [00:23:55] Speaker A: I get. Yeah, yeah, yeah. When you just discover that there are so many interesting questions you didn't even think about before. [00:24:05] Speaker B: Wow. So this is so cool. Your mind is like being blown, but it can be blown in so many different directions. Like, so. All right, so I interrupted you, but no, that's fine. So you had this experience, like you're going to these seminars. You're in Xiao-Jing Wang's lab. Xiao-Jing was just on this podcast a couple months ago. I guess I was so glad to have him because he's kind of a legend in the field. You got lucky being in his lab. Lucky and skilled. I'm sure you deserved it. This is a great story because you're so naive. And everyone has their naivete as they're coming up. Despite what you might think by the cohorts that you're surrounded by. Everyone's naive at some point. [00:24:45] Speaker A: So. [00:24:46] Speaker B: So it's interesting. So you're, You're. So now you're learning about the bigger picture. [00:24:51] Speaker A: Yeah, no, that was cool. So at the time, it was also like relatively shortly after Xiao-Jing published his super influential work developing these spiking networks for decision making and. And working memory. And then together with Kong-Fatt Wong, they also derived this mean-field equation for the decision making model. And they used dynamical systems language to explain the mechanism of the decision computation in that model. Which I guess nowadays is like bread and butter. Right? [00:25:26] Speaker B: Right. [00:25:26] Speaker A: A lot of computational neuroscience. So, yeah, so it was a good time to be in the lab and. [00:25:32] Speaker B: But that. But with your physics background, that must have been like, really attractive to you because that's all physics. The dynamical systems comes from statistical mechanics in the physics world or whatever. So did that just make a lot of sense to you or like, did something click there? [00:25:51] Speaker A: But it was different from what I did in my PhD. [00:25:53] Speaker B: Sure, yeah, yeah, of course. [00:25:54] Speaker A: No, but it was a very appealing language. [00:25:58] Speaker B: Yeah, that's what I mean. When people study biology. Like an undergraduate, like I had a molecular biology. I ended up with a molecular biology degree. And so like dynamical systems language, anything physics based was. And I really loved physics, but I didn't go that deep into it. But it was almost like, why are you doing that? It's all molecules, it's all DNA.
It's all, you know, like, look at the wet stuff. That's a. That's a poor way of saying it, but. But I didn't feel at home in that kind of language. So it sounds like you felt at home, I see. [00:26:29] Speaker A: Yeah, I guess my experience very different because I'm so ignorant in terms of molecules. I have. As I told you, I never took proper classes in biology. I don't know. I feel like nowadays is so deeply intertwined with systems neuroscience research. Right. Like the tools move in that direction, like looking at the molecular mechanisms which give rise to all the systems level phenomena. So I don't think like any background is wasted nowadays in neuroscience. [00:26:59] Speaker B: Yeah, I think that it's still open to any background especially. And everyone really is naive. I mean, it's still like, gosh, how long will it be a wonderful field to join? I mean, I guess it depends on what you're interested in, but it seems like you can come from almost any background and there's a space for you in neuroscience. [00:27:21] Speaker A: I agree, I agree. And also what is fun is to interact with people who come from all these different backgrounds. Because clearly it's impossible for a single person to have knowledge and expertise across all these areas. But that is very enriching. Right. It's very satisfying to collaborate with somebody who doesn't know what you know, but they know something I don't know. Right. And then something bigger comes out of it. So that's always fun. [00:27:49] Speaker B: So when did you. I guess you probably knew all about manifolds before they became all the rage in neuroscience. [00:27:57] Speaker A: Yeah. [00:27:59] Speaker B: Someone asked me to ask you, what is a manifold? [00:28:02] Speaker A: What is a manifold? I guess the way it's used. Well, there is a mathematical definition of a manifold, but the way it's used in neuroscience is, I feel it's more loose way to say that the neural activity during behavior will not occupy the entire state space. There is a limited set of states which neural activity will explore. And loosely we call this neural manifold. [00:28:28] Speaker B: So that means in terms of spiking, if you have like two neurons and each one of them can, can spike at a rate of 0 to 100 spikes per second, they're never going to explore all the different combinations within, within that, that they could explore from 0 to 100. Both of you know, each of them, they're always going to explore a smaller space of combinations. And that smaller space is the manifold. [00:28:54] Speaker A: Right. So like if you talk about this range from 0 to 100, we get the entire square in this two dimensional plane. Right. So we will not get points everywhere inside that square. [00:29:07] Speaker B: Right. [00:29:07] Speaker A: We will get maybe like for example, like if system is oscillating, right. We will maybe trace out a circle, a manifold like one dimensional circle which will be placed inside this plane of two neurons coupled to each other. [00:29:24] Speaker B: How is that different than the mathematical definition? I'm naive. [00:29:29] Speaker A: Well, because mathematical definition would say that you have. Sorry now like I. Because like it requires that you have space which is locally Euclidean. [00:29:43] Speaker B: Oh, oh, okay. Oh, it's. Oh, it's dependent. Oh, okay. [00:29:47] Speaker A: So okay, so maybe like we cut this out because like I also like I'm not expert in differential geometry, like by any means. [00:29:53] Speaker B: I didn't know that. 
So it assumes a Euclidean space, like. [00:29:57] Speaker A: Local, locally Euclidean. Locally Euclidean manifolds can be curved, right? So like it can kind of like bend, but like at any particular point. [00:30:05] Speaker B: On the manifold it has to be linear around. [00:30:08] Speaker A: It has to be linear within some vicinity of every particular point. [00:30:12] Speaker B: That makes sense. [00:30:13] Speaker A: Okay, yeah. [00:30:14] Speaker B: All right. Any. All right, so but you already knew about manifolds when you, when you were getting into. Because you study manifolds. I should say partially, we study them now. [00:30:25] Speaker A: We study them now, but we didn't use that language. Now we can say, oh, in Xiao-Jing's lab back in the day they also studied manifolds, but we didn't call them manifolds. [00:30:35] Speaker B: That's right. What did they call them? [00:30:37] Speaker A: What did they call them? [00:30:39] Speaker B: Didn't call them anything. [00:30:41] Speaker A: Right. Like, but it's an even older idea. Right. Like for example, the ring model was proposed for the head direction circuit. Right. Like to suggest how the brain can keep track of the direction in which you are facing. How can you have this persistent activity, which was then also repurposed to model this working memory activity in a very standard task when you need to remember a location around the ring. So a modern way to say, or kind of to describe, what happens in this type of models is that activity in population state space resides on a ring-shaped manifold. [00:31:24] Speaker B: Yeah, you just tack on manifold to it because before it was a ring, now it's a ring manifold. [00:31:30] Speaker A: Right. But like these notions, they existed already 30, 40 years ago, but people just didn't necessarily use the same language to describe them. And they existed more often in situations which were more simple, like for example the ring bump attractor model. Right. But nowadays they got extended to very high dimensional spaces with high dimensional activity. For example, people study manifolds on which activity organizes in visual cortex in response to thousands of images. So these manifolds, like how data points organize in this high dimensional space, they get way more complex. So I guess the advantage of having this language is that it kind of extends the area of applicability of these concepts beyond like the simple handcrafted examples towards more difficult scenarios. [00:32:33] Speaker B: It is like really convenient. I mean, so I'm thinking of work that I'm not going to name authors because there are so many, but you can be on manifold and then if like trying to learn a task. And if you push the neural activity, quote unquote, off manifold, it's harder to learn like an updated rule or something. Whereas if you push it somewhere on manifold, it's easier to learn the new rules. So it's like so. Yeah, so it's a very convenient language to talk about neural activity in a population. Are manifolds real? [00:33:12] Speaker A: Oh, this is what I think. Right. Like a manifold is not an imaginary object coming out of a vacuum. Right? So this is how we like to think about it. A manifold is still generated by the connectivity structure in this recurrent circuit. Right? So in that sense they're real. Right. Like often we identify manifolds without reference to this underlying circuit structure, just thinking about latent variables, which in a statistical sense describe the structure of population activity.
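A toy version of the two-neuron picture from a few exchanges back: each rate could in principle range anywhere from 0 to 100 Hz, but the joint activity only traces a one-dimensional ring inside that square, which is the loose sense of "neural manifold" used here. The numbers are made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two neurons whose firing rates could in principle fill the whole 0-100 Hz square,
# but whose joint activity stays near a 1-D ring: a toy "neural manifold".
t = np.linspace(0, 4 * np.pi, 1000)
rate1 = 50 + 30 * np.cos(t) + rng.normal(0, 2, t.size)   # Hz
rate2 = 50 + 30 * np.sin(t) + rng.normal(0, 2, t.size)   # Hz

# How much of the full rate space does the trajectory actually visit?
# Count occupied cells on a coarse 10 Hz x 10 Hz grid (100 cells total).
occupied = {(int(r1 // 10), int(r2 // 10)) for r1, r2 in zip(rate1, rate2)}
print(f"trajectory occupies {len(occupied)} of 100 coarse bins in the 0-100 Hz square")
```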
But I believe that it's super important to never forget that manifolds don't emerge, you know, statistically from cloud of points. They are really generated by recurrent circuit dynamics. [00:34:02] Speaker B: But the circuit is real. [00:34:04] Speaker A: Like the circuit triggers, Circuit is real, circuitry is real. Right. And the circuitry will control what states are accessible or not accessible to the system. Right. So in that sense, manifolds are very real because they're generated by recurrent circuitry. [00:34:22] Speaker B: But if you have a circuit of 10 neurons and you have a circuit of 11 neurons, they're going to each have a different manifold. [00:34:31] Speaker A: It depends how they are connected, Right? [00:34:34] Speaker B: Yeah, I know that's what you're saying, but this goes back to the. Are manifolds real? Like, because if you just add in another neuron, it changes the manifold. So how is that? [00:34:43] Speaker A: Well, well, depends. It doesn't have to. [00:34:45] Speaker B: It doesn't have to, but it can, Right? Right. [00:34:47] Speaker A: For example, like let's think about this very canonical decision making model where you have just two pools of neurons competing to represent the choice. And they are homogeneous pools, each has five neurons and they compete. Now we add one more neurons to the second pool and we rescale the weights. That all kind of adds up to the same dynamics we added a neuron. Manifold will not change. We will trace out the same trajectory through the state space. Right, because we didn't change dynamics. [00:35:18] Speaker B: But you introduced a task, right? So you have the circuitry which is real. You're saying the manifold is real and the manifold, or, sorry, the circuitry dictates the manifold, but so does the task. [00:35:31] Speaker A: Right? What is task? Task means you get some set of inputs, potentially also produce some outputs. Right. This is a task, Right? So you can have a circuit and it can have structure in its connectivity, and then input will Drive activity along specific kind of wires or connections in that connectivity space. Each particular task doesn't necessarily need to explore all connectivity modes which exist in the same network. Right. So you can see maybe just part of the full dynamical regime which this network is able to generate. But ultimately, yeah, like brain is real. There is activity going through buyers to generate any of those response patterns. [00:36:21] Speaker B: Okay, so maybe this is a good time to talk about what a latent circuit model is, because we've been talking about the physical connections and we've been talking about the manifold which is like the neural activity in a state space which are constrained also by tasks. What is a latent circuit then? [00:36:42] Speaker A: Yeah, so I guess one of the observations which was very influential and common in neuroscience recently was that during these tasks, which is very constrained behavior, right. You see just very limited set of inputs and you produce a very limited set of outputs in behavior. That activity during this task is relatively low dimensional. So you can record many neurons, but their collective dynamics are restricted. We already talked about the fact that they will not explore the entire full dimensional space. It will be restricted to some manifold. And the observation was that this manifold is often spans just a few dimensions. 
[00:37:23] Speaker B: And I guess to be explicit, we should say like a manifold by definition is lower dimensional than the full dimensionality capacity of the system. [00:37:34] Speaker A: Of the full system. Yeah, yeah, but I guess like this is also where it gets tricky, because now we start to get into issues with the notion of dimensionality. Do we talk about. But maybe we set it aside. [00:37:45] Speaker B: Oh, let's come back to it though. [00:37:46] Speaker A: So but okay, let's come back to it. So all right, so these states are low dimensional. And very often the way we kind of look in this population activity is by using tools which are encoding or decoding models, like maybe a decoder, or regression as an encoding model, these are some of the common tools, where we take external task variables, right. And we regress neural activity, or we try to decode, we try to find a direction in neural state space so that it separates neural activity according to the task variable which we are studying. Like one particular example, like what we studied in our latent circuit paper with Chris Langdon, was this task which was introduced by Valerio Mante and Bill Newsome. So they asked the monkey to discriminate the color of the stimulus, whether it was red or green, or the motion of the stimulus, whether the dots move left or right. But the stimulus. [00:38:45] Speaker B: Yeah, so I'll just describe in language. So the random dots task is like a super well known task in neuroscience where it's like a field of randomly moving dots, but they have some coherence in the direction of motion that they're moving. And the organism's job is to say what, which direction that they're moving. So we can say random dots task. And in this case there's the motion coherence aspect of the task. Sometimes you have to say which direction the dots are moving, but sometimes you have to say which color is dominant. Is that right? [00:39:19] Speaker A: Exactly. Because each dot is also colored red or green. And the same way you can have more green or more red in the pattern. So the stimulus has these two features, and depending on the context, which is cued to the monkey, like whether it's the color context or the motion context, you need to respond according to the relevant feature and ignore the information provided by the irrelevant feature. So in this task, for example, you can ask, okay, how is motion represented in my neural population? Using a simple decoding approach, can I find the decoder which, across all trials, will discriminate the strength of motion coherence, the motion of the moving dots. So, and if you think about this way to find manifolds, it's very different from how we think about circuit function, right? So going back to Xiao-Jing's lab, what we were doing, we were at that time very often constructing circuits by hand to perform specific tasks. But what a circuit does, it gets inputs, but then it needs to generate outputs. So on the inside of the circuit, something needs to happen. This incoming motion information needs to be transformed and maybe combined with context information so that you can, by this recurrent circuit dynamics, select which feature is relevant and ignore the irrelevant feature. So if you think from this perspective, then task variables and output variables are not the key variables for us to look for manifolds, because they represent just the inputs and output of the circuit, but not this recurrent computation which will generate potentially new variables, right?
Which will be complex mixtures maybe of what is coming in and coming out. Right? Okay. So then we thought, okay, can we search for manifolds? But in this way, informed by this idea of recurrent circuitry, that we need to have a model which will still look for a subspace, low dimensional subspace of neural activity, but within this subspace, we want to have a model of dynamics, which is like a low dimensional circuit which receives task inputs, but is also required actually to perform the task to produce the correct task outputs. And the circuit dynamics will generate these internal variables which are necessary to accomplish the computation. [00:41:49] Speaker B: So the circuit is not the physical connections of the actual network of the brain. You're talking about a latent circuit, right? [00:41:58] Speaker A: This is how we started, this is how we started. This is how we started. Then we thought, okay, we can have like the same regression problem, but instead of using these external variables which are set by experimentalists, we let the model also learn these internally generated dynamic variables which are necessary to solve the task. And we will regress neural activity against these variables generated by the circuit. And when we feed the model, we simultaneously try to search for the subspace. So we both fit the subspace, but also we fit connectivity of this latent circuit so that it can both capture neural activity and, but also reproduce task behavior, right? So we kind of find these both parts in a single optimization run. But then it got interesting. So firstly we found that, well, if you just train recurrent neural networks to perform this task, and network is large, you can have this model fit its responses relatively well. But then Chris thought about it more and he developed some theory. And actually what his theory tells that if you can get a very good fit of this type of model, it would also predict then within this large recurrent neural network there will be a pattern of connections, a low rank pattern of connections which generates this recurrent circuit dynamics. [00:43:22] Speaker B: What does that mean? The low rank connections generates the dynamics. [00:43:28] Speaker A: I realize now, yeah, it's so. All right, so think about it this way. That let's say we learn a low dimensional circuit, right? It has just few nodes and we have a small connectivity matrix. [00:43:46] Speaker B: And the nodes here are task features, right? We're talking about the latent circuit. [00:43:50] Speaker A: Latent circuit. But they are not necessarily just external motion, right? Or external color, because they receive task inputs, but they also interact with each other. As a result of this interaction, each node will have some activity profile. It will vary in time and it will respond differently across task conditions. [00:44:11] Speaker B: They have their own dynamics, but they. [00:44:13] Speaker A: Have their own dynamics which by construction of the model is matching something in the neural activity. Want some mode of neural activity, say in a large network. But what the theory showed that even if your network is large, and we already talked about it, that big networks can have connectivity structure for many things, right? Maybe when you do task one, you engage one mode of your connectivity. When you do task two, you engage a different mode, right? And maybe they are designed in such a way that they don't interfere. 
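A minimal sketch of the kind of fitting problem being described, not the published latent circuit model itself (the actual implementation by Langdon and Engel differs): a small latent circuit with its own connectivity receives the task inputs, is embedded into the recorded neural space through a learned subspace, and is optimized to both reconstruct the population activity and produce the correct task output. The tensors below are random stand-ins for data, and the variable names and dimensions are assumptions made for illustration.

```python
import torch

torch.manual_seed(0)

# Toy sizes: N recorded neurons, K latent circuit nodes, T time steps, C conditions/trials.
N, K, T, C = 50, 4, 60, 32
y = torch.randn(C, T, N)       # stand-in for recorded (e.g. trial-averaged) activity
u = torch.randn(C, T, 3)       # stand-in for task inputs (e.g. motion, color, context)
target = torch.randn(C, 1)     # stand-in for the required task output (e.g. choice)

w_rec = torch.nn.Parameter(0.1 * torch.randn(K, K))   # latent circuit connectivity
w_in = torch.nn.Parameter(0.1 * torch.randn(3, K))    # task inputs into the latent nodes
w_out = torch.nn.Parameter(0.1 * torch.randn(K, 1))   # linear readout of the task output
Q = torch.nn.Parameter(torch.randn(N, K) / N ** 0.5)  # embedding of latent nodes into neural space

opt = torch.optim.Adam([w_rec, w_in, w_out, Q], lr=1e-2)
for step in range(300):
    x = torch.zeros(C, K)
    traj = []
    for t in range(T):                                  # low-dimensional recurrent dynamics
        x = torch.tanh(x @ w_rec + u[:, t] @ w_in)
        traj.append(x)
    X = torch.stack(traj, dim=1)                        # (C, T, K) latent trajectories
    recon = ((X @ Q.T - y) ** 2).mean()                 # explain the recorded population activity...
    task = ((X[:, -1] @ w_out - target) ** 2).mean()    # ...while also producing the task output
    loss = recon + task
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final combined loss: {loss.item():.3f}")
```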
So the input for one particular task will not necessarily reveal to you the entire structure of the whole high dimensional connectivity in the full network. [00:45:02] Speaker B: In the full latent. [00:45:04] Speaker A: No, in full large. [00:45:05] Speaker B: In the full learner. Okay, right. [00:45:07] Speaker A: So because activity is low dimensional, I feel it's very intuitive. Right. Imagine you have this network which has N neurons, so it has N squared connections. So it is very high dimensional connectivity space. But now imagine that during the task activity is low dimensional, right? So it's very intuitive that by seeing this low dimensional activity, you don't have sufficient constraint to reconstruct every single connection in this large connectivity space. So what we show that you can reconstruct a part of this connectivity which is actually truly latent connectivity within the bigger connectivity matrix. So the idea is that there is some structure which generates task dynamics in a large network. But in addition to this structure, there could be other connectivity patterns. [00:46:01] Speaker B: So when you say latent circuit, you mean a circuit within the circuit sort of does that. [00:46:07] Speaker A: I think that's fair, right? Because if you just took, let's say you train your network on a task, you took its connectivity matrix and you see, and it's like a lot of stuff happening there. Some is task relevant, some is task irrelevant. Right? So latent means that you don't see it immediately. Right. If you just took, if I handed you out this connectivity matrix, you look at it, the structure is not obvious. But by modeling dynamics during the task, we can recover this exact part of the connectivity which is responsible for generating task behavior. Right. In that sense, like it really exists in the network. [00:46:47] Speaker B: Right. [00:46:48] Speaker A: But there can be also other modes of connectivity simultaneously present. And unless you do something to partition them. Right. It all looks just like a big mess. [00:47:01] Speaker B: So is the partition probabilistic? So the way that I'm imagining it is like you have this big circuit, a thousand neurons or whatever, and part of that circuitry is sort of functionally used to dynamically solve a task or something. But then can't a neuron outside of that circuit sometimes influence the circuit? Right. Is the boundary super clean? [00:47:31] Speaker A: I'm not sure where you try to get into, but I feel it's super difficult. Problem. Are you talking about unknown inputs? Does this sound. [00:47:41] Speaker B: No, let's say like a neuron is like, who's not normally part of the circuit is like stochastically firing along and then decides like, hey, I want to be part of this, or whatever, you know, like, can it be influenced? Like how, how definitive is the boundary of the latent circuit? Right. Is it like fuzzy and probabilistic or is it you? But when I first asked that, you looked at me and you're like, oh, poor naive person, let me. [00:48:06] Speaker A: Oh, no, it's not. How. Not at all. Not at all. [00:48:08] Speaker B: No, no. I wanted you to educate me, but. [00:48:11] Speaker A: No, the Way I looked at you, like, oh my God, Paul, you're bringing up this super hard problem. Oh, but I didn't, because I kind of inferred something else. Ah, but I guess what you are like, if I understand you, I told. [00:48:22] Speaker B: You I would ask the dumb questions, not the super smart ones. There you go. 
[00:48:25] Speaker A: No, I don't think it was a dumb question. But if I understand your question correctly, now what you're talking about is like, oh, like there is a big network, but maybe five neurons in this network do the task and the rest don't participate. And what if, like, these other neurons suddenly start to influence the five which do the task? So this doesn't have to be the case. Right. These circuits could be very distributed, like in a large. If it actually was the case that we have a large system and only five neurons participate in the task, it would be super easy to identify those neurons just by looking at the activity, because they will be mostly task modulated and the rest will not be. So I guess the challenge here comes, and it's another very profound finding now across many, many studies, that activity, especially in cortical areas, has these properties which we often call distributed mixed selectivity. So let's say, like, your task involves some set of variables. So you ask, okay, what variables is one neuron in my recording sensitive to? By what variables in this task is this neuron modulated? So going back to this example of context dependent decision making, you can ask, is this neuron modulated by the motion feature, the color feature, choice and context? Right. Like there are four variables. And what you typically would find is that individual neurons respond to mixtures of these variables. Right. So you don't have just, let's say, five neurons responsive for motion, five for color, et cetera, and the rest of the network is silent. [00:50:09] Speaker B: It's not, isn't it crazy that I don't know if this is actually correct, but like the way that neuroscience used to think of it, like with the single neuron doctrine, that you would get non, like mixed selectivity was like a surprise. But isn't that a little crazy when you think about the complexity of the brain and the messiness and all of it. [00:50:33] Speaker A: I'm not sure. Right. Because being in Xiao-Jing Wang's lab, like going back to the day, which, what we were doing, kind of building on their previous work, building circuits by hand for simple tasks, what we were trying to do was to build circuits for more complicated tasks, and then you quickly realize that to solve any of those tasks, you cannot get away with pure selectivity. Right. Like you cannot just have neurons sensitive to motion and color and not mix them with context in any way and still be able to correctly read out the task output. [00:51:13] Speaker B: With those handmade models, you couldn't make models that would have mixed selectivity and still perform the task. [00:51:18] Speaker A: Well, they do have mixed selectivity to perform the task. Because I guess this is a point also made very prominently by Mattia Rigotti and Stefano Fusi, like many years ago. Right. So if you think about like some of those tasks which require nonlinear computation. Right. It means like you cannot just have a linear readout from the task inputs to correctly produce the output. Like one prominent problem is the XOR, exclusive-or, problem. Right. You cannot read out in a linear way from the input space to solve this problem. And the way, one way how you can solve these problems, is by generating nonlinear mixed selectivity, which will elevate the dimensionality of neural responses. And then by a simple linear readout, which we often use, or actually all the time use, in these recurrent circuits. Right.
Like we always have like some kind of readout from the network and it's always linear, it's always linear, which makes sense. Right. [00:52:23] Speaker B: Eventually it has to be linear. And in our Euclidean, real world space? [00:52:27] Speaker A: Eventually it's some pattern of synapses driving the motor neuron. So it kind of makes sense. So. Right. So kind of from that perspective, like, mixed selectivity and even nonlinear mixed selectivity is like fundamentally required to solve some task behaviors. [00:52:47] Speaker B: I mean, the XOR. Go ahead, please. I was going to say the XOR problem is what brought about the first AI winter, basically. Right. Because. [00:52:56] Speaker A: But then came back propagation. [00:52:58] Speaker B: Yeah, yeah, yeah. That's a whole different story. But I had to throw that in. We were just reading. So I run this complexity discussion group and we're on Minsky's 1961 paper, Steps Toward Artificial Intelligence. And so it made me think of Minsky when you mentioned XOR, because he and Papert wrote this Perceptrons book actually, which led to the downfall of AI for a time, because he said that you can't. I think it was XOR. Right. You can't solve XOR with like a, with perceptrons. Perceptrons, Rosenblatt's perceptrons. But he didn't. Because he didn't think that back propagation. He didn't think that you could train multi-layer networks. Anyway, that's an aside. [00:53:39] Speaker A: Sorry. [00:53:39] Speaker B: But it made me think of that because you were talking XOR. [00:53:42] Speaker A: Yeah, exactly. So from this perspective, I guess maybe it's not surprising that there is mixed selectivity, right? Right. Yeah, I guess it is surprising why it has to be mixed, sorry, distributed. Because you can say, okay, maybe I require cells which are now in this task. Let's go back to the task. So I require cells which are tuned to the conjunction of motion and context. It's a nonlinear mixed selectivity. So I just create a cluster of cells which will have this complex tuning, and then I have another cluster of cells which are tuned to the conjunction of color and context, et cetera. And by the way, that was exactly the way how we used to design circuits by hand, like in the old days, like as theorists, like hypothesize how a problem can be solved. So like very often you would like wire these circuits to have clusters of cells with complex selectivity which can solve the task, but it's not exactly what we see in neural recordings. So like it's very debated. I guess I feel there is evidence for and against it, like whether there are functional clusters of functional cell types, groups of cells which have identical response properties, maybe just up to scaling. I feel it's debated because what we often see is kind of a little bit more of a mess, right? So it's distributed across the entire population. So like this cell maybe has just a little bit of color and a lot of context and maybe also a slight mixture of choice, you know, but on. [00:55:14] Speaker B: One trial, but then on another trial it could have a little bit more. I mean it's very messy, no, I guess across trials. [00:55:19] Speaker A: But then another cell, right, like would have a slightly different mixture, and another cell yet another kind of mixture. So they don't necessarily form these clear functional clusters. So that is the reason these representations are very distributed.
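A small demonstration of the XOR point attributed to Rigotti and Fusi above: a linear readout of the two raw task inputs cannot solve exclusive-or, but a random nonlinear expansion, a crude stand-in for nonlinear mixed selectivity, makes the same problem linearly separable. The expansion size and random seed are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: four input conditions, target is the exclusive-or of the two binary features.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 1.0, 1.0, 0.0])

def linear_readout_accuracy(features, y):
    """Fit a least-squares linear readout (plus bias) and score it at a 0.5 threshold."""
    A = np.hstack([features, np.ones((features.shape[0], 1))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = (A @ w) > 0.5
    return (pred == y.astype(bool)).mean()

# 1) Linear readout of the raw, "purely selective" inputs: stuck at chance on XOR.
print("raw inputs:           ", linear_readout_accuracy(X, y))

# 2) Random nonlinear expansion: each unit nonlinearly mixes both inputs.
W = rng.normal(size=(2, 20))
b = rng.normal(size=20)
H = np.tanh(X @ W + b)     # nonlinear mixed selectivity lifts the dimensionality
print("mixed-selective units:", linear_readout_accuracy(H, y))
```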
And going back to your original question, you ask, could it be that you have this latent circuit and then other neurons in the network kind of influence My dumb question. No, it's not a dumb question. So the interesting fact is this latent circuit is actually very distributed, right? So it can involve all neurons in. [00:55:57] Speaker B: The population with a mixed selectivity of involvement. So each neuron can be partially involved. [00:56:05] Speaker A: Exactly right. So the way to think about is similar to cluster circuits. But now instead of thinking about this cluster population, you think about distributed activity patterns, right? So there is one distributed activity pattern which can influence another distributed activity pattern. So if you have a circuit with two nodes, right, it's very easy to imagine I have a connection from node number one to node number two. And when node number one gets active, it will drive activity node number two. [00:56:38] Speaker B: These nodes are populations of Neurons. This is the latest. [00:56:41] Speaker A: Yeah, no, this is kind of just simple idea. If you just think about these clustered networks, that's very easy to comprehend, right? You can have this simple connectivity structure from one node to the next and kind of this structure allows you to understand how activity will flow across this toy network. What happens in distributed network is directly analogous. But now we think not about these concentrated nodes, but we think about distributed activity patterns. So you think that there is one pattern of activity which can provide input which will drive or generate next pattern of activity which is also distributed, maybe engaging all neurons across the entire population. And the way how it can happen, one mechanism which is now very well understood in the field is by using low rank structure in the connectivity, which I guess I still like didn't explain what it is, but please try. [00:57:44] Speaker B: I mean, layman's terms. Yeah, don't. [00:57:46] Speaker A: Yeah, right. So if you imagine. Yeah, so if you imagine like you have one pattern and another pattern and you build a connectivity matrix which is just an outer product of these two patterns, you create a feed forward flow. I feel it's easy a little bit to explain this whiteboard equations and just hand waving. [00:58:05] Speaker B: Yeah, all right, that's okay. We can. [00:58:06] Speaker A: But I guess the conceptual picture here is that the same way one node can drive another node in a classical circuit, the same way kind of as we understand it now in distributed networks you can have one activity pattern drive another activity pattern. And now you can start thinking about how you can build these very distributed circuits which have principles of how they function very similar to classical simple networks or toy models. Right. Like something is driving something and something else is also driving. [00:58:39] Speaker B: But they're doing it in a fundamentally different way is that they're doing it. [00:58:43] Speaker A: Just in very distributed way. [00:58:44] Speaker B: Yeah, and that's hard to think about. I mean, it's hard to visualize. I mean, you know, I mean, maybe it's easy to visualize. Yeah. [00:58:56] Speaker A: I guess like it's easy to think about it in linear systems, right? So in linear dynamical systems, that's very easy to see because what you can do with linear system, right? Let's say I have just two neurons and they form a linear analogous system, right? 
And maybe it's a very simple linear system where neuron number one just drives neuron number two, as we discussed. But in linear system I can just rotate the coordinate system, right? And I still get an equivalent linear dynamical system just written down in different set of coordinates. So now if I rotate the coordinate system now I have a pattern, right? So for Example, I rotate the coordinate system by 45 degrees, then I have a sum mode and a difference mode, and they're interacting with each other. Right, but you're describing the same kind of dynamics in different coordinate systems. So it's a little bit more tricky in non linear system. But I guess linear system can already give us like a feel for how distributed patterns can interact. [01:00:03] Speaker B: Yeah. Okay. Linear systems are always. Estimates are always. Oh, what's the word? Simplifications of what's really going on. I suppose nothing is ever really linear. Oh, no. You're going to disagree? [01:00:17] Speaker A: No, I don't know. I feel like I just don't know. [01:00:21] Speaker B: Is there? Okay, so this is a little bit of a left turn. I do want to ask, like, what the question you thought I was asking. That was the harder question. What was that? Actually, let's do that. And then I want to ask. Then we'll kind of move on, but. [01:00:36] Speaker A: Sounds good, because I saw that you're asking, oh, let's say you recorded from this brain area your hundred neurons, right. And you describe their dynamics in some way. But what if there is a neuron in another part of the brain actually delivering a lot of very structured input to those neurons from which you are recording? And because your models don't. [01:01:00] Speaker B: What, as opposed to them generating the dynamics internally, what if they're driven externally? Is that what you're. [01:01:05] Speaker A: Yeah. This is a big problem because very often where we model neural dynamics, let's say with recurrent neural networks or any other framework, very often we assume some particular structure of inputs to these circuits, like, for example, going back to the task, again, when motion comes on the screen, we say there will be a step function of input to the circuit, and the height of this step will indicate how strong is the motion coherence. Right, right, right, right. Like, first of all, we don't know whether this is the exact input which this brain area receives. Maybe the motion, by the end we get to prefrontal cortex, the motion input is some kind of ramping activity rather than step input. [01:01:50] Speaker B: But what does it matter? What does it matter? [01:01:52] Speaker A: Oh, it turns out it matters a lot. [01:01:54] Speaker B: Oh, okay. [01:01:57] Speaker A: But even more, you can say, well, you can see there's this set of neurons and maybe you make guesses about task related inputs. But what if this brain area receives inputs from neurons in another brain area? And this input is very structured. Right. So let's say if we record it from prefrontal cortex and we model this as a recurrent circuit, right. Where neurons influence each other through recurrent connectivity. We attribute computation to interactions between those neurons. But like going to the extreme, the situation, maybe, maybe all neurons in prefrontal cortex, they don't even, they are not even connected. They don't even talk to each other. [01:02:36] Speaker B: Oh, but they're all being driven externally. Yeah. 
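A minimal sketch of the low-rank picture described a couple of exchanges above, assuming an illustrative population size and random patterns: a rank-one connectivity matrix built as the outer product of two distributed patterns funnels whatever overlap the current state has with one pattern into activity along the other, which is the distributed analogue of node one driving node two.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                   # illustrative number of neurons

# Two distributed patterns over the same population.
u = rng.standard_normal(n)                # "input-selection" pattern
u /= np.linalg.norm(u)
v = rng.standard_normal(n)                # "output" pattern
v /= np.linalg.norm(v)

# Rank-one connectivity: the outer product of the two patterns.
W = np.outer(v, u)

r = rng.standard_normal(n)                # an arbitrary population state
recurrent_input = W @ r                   # what this state feeds back into the network

# The recurrent input is just pattern v, scaled by how much r overlaps with u.
print(np.allclose(recurrent_input, v * (u @ r)))  # True
```

Rotating the coordinate system, as in the sum and difference modes mentioned above, only changes the basis in which the same dynamics are written down; the pattern-to-pattern flow is unchanged.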
[01:02:38] Speaker A: Instead there is some complex external input from somewhere else which gives them their response profile. [01:02:44] Speaker B: Like a Maxwell's demon. [01:02:45] Speaker A: Almost exactly. And it's a super hard problem. I feel like many people work on this, but it's a very ill-posed problem, input inference. [01:03:00] Speaker B: Is that a hard problem? Does the difficulty of that problem vanish if you know the physical connectivity, like the causes? [01:03:07] Speaker A: Yeah, yeah, yeah. Oh yeah. Oh, that would be great. Because then you can just infer inputs if you know connectivity. But usually what we do, we just have neural activity and we try to infer connectivity. But what we realized is that connectivity inference becomes very ill-posed when you don't know the inputs. [01:03:25] Speaker B: Yeah, right. [01:03:25] Speaker A: And you think you can make simple guesses, but even if you do simple. [01:03:29] Speaker B: Guesses, you can never get rid of that possibility. [01:03:33] Speaker A: The connectivity you infer can be completely off compared to ground truth. Let's say when we do it in situations where we know what's the ground truth. [01:03:42] Speaker B: Yeah, okay. [01:03:44] Speaker A: That's why I said it's like a super hard problem. [01:03:47] Speaker B: All right, okay. So maybe before we move on also, I mean, this is kind of a left turn, but you came up creating models tailored to solve tasks. And that's like the old days of neuroscience. We don't do that anymore. We just, we generate a big recurrent neural network and we give it the task and let the network handle it. Right. And then we look at the solutions that the network comes up with, which is very different than like, well, I'm going to build a node to do working memory, I'm going to build a node to do long-term memory, and this is how they're going to work. Now we just throw it into these big networks and let them solve it. And then we study that. Right. Is there, looking back on your early days. Right. Is there something that you think modern neuroscience is missing from that kind of approach? Or is it. Should we look at that as sort of a mistaken approach to begin with? How should we think about those earlier modeling approaches? [01:04:43] Speaker A: I don't know. I think it depends who you ask. To me, this type of work we did in the olden days is a huge source of intuition and inspiration. [01:04:53] Speaker B: You're like what, 25 years old and you're talking about the olden days. Yeah, okay, sorry. [01:04:58] Speaker A: Well, we said, like in 2018, nobody knew how to use Neuropixels. Right. Yeah. [01:05:03] Speaker B: I walk away for five minutes and I come back and everything's different in neuroscience. It's crazy. [01:05:10] Speaker A: Exactly. Right. So I don't know, at least to me, it gives this notion of grounding. Right. It demystifies a little bit the bespoke models, the classical models. Right. It gives you this grounding and understanding that, first of all, manifolds are not magical. Right. That they arise from recurrent dynamics in networks. [01:05:37] Speaker B: Wait, the classic models give you that? [01:05:39] Speaker A: Yeah. At least in my mind, because if you practice wiring circuits by hand to perform computation, it kind of demystifies a little bit, or gives you this mindset if you get a set of neural recordings not to think that somehow magically they form these beautiful manifolds. It forces you to go and think.
Or can we identify structure in the network which actually generated these manifolds? But also in terms of interpreting mechanisms. Right. Like. Yeah, a lot of work we do was kind of motivated by this old work, like going down to the circuit level and trying to sort out the connections, not stopping. Because I feel like there is definitely a tendency, not the entire field, but some part of the field would like to stop at a little bit higher level of abstraction. Right. And I know you had some guests on your podcast in the past arguing that manifold is all we need. Right. Like, I know who you're talking about. Yeah, yeah. [01:06:55] Speaker B: And I'm going to have Juan Gallego on pretty soon, and he gave a talk recently at a cybernetics conference about. He has what he calls the Manifold Manifesto. So we'll be talking about that more as well. [01:07:06] Speaker A: All right. Sounds good. Yeah, yeah. So, yeah, it was John Krakauer together with David Barack. They wrote this very, like. [01:07:14] Speaker B: Oh, that's who you're saying is manifolds is all you need? Oh, okay. Okay. [01:07:18] Speaker A: Oh, I don't know who you talked about. [01:07:19] Speaker B: Well, there's a lot of people, but manifolds are, like, the solution to a lot of things for a lot of people. That's why it's, like, so interesting. Yeah, go ahead. Okay. So, yeah, their Sherringtonian versus. Exactly. [01:07:31] Speaker A: Their Hopfieldian view. And I guess the point they are arguing in that perspective is that you cannot understand cognition fundamentally on the level of circuits. Like, manifold will be ultimately the level of abstraction where we'll have to stay if we need to understand cognitive processes. [01:07:51] Speaker B: Right. Yeah. [01:07:52] Speaker A: So I feel like at that time we strongly disagreed with that point. So we wrote in response our own perspective article, which we call a unifying perspective on neural manifolds, that was a response to the. That was really inspired by Krakauer. Because when we read this, I feel we so strongly disagreed with their view. And why do I mention it? Because you asked how this old work we did by wiring neural circuits by hand influences how we think. [01:08:21] Speaker B: About that was not that long ago. Like, I'm surprised to hear that you strongly disagreed with it, because I feel like it was just restating what has been going on for years. I was surprised that that paper, and I know both John and David and enjoy them. In fact, David is here in Pittsburgh for a while in the philosophy department. But I was surprised, to my reading, they sort of restated and gave new names to things that had been sort of accepted for some time. But what did you strongly disagree with? [01:08:58] Speaker A: Oh, because I feel they actually put forward, in my view, at least in my reading of that perspective, the view that we should not be even trying to understand cognition on the level of circuitry. It's impossible and unnecessary. [01:09:15] Speaker B: Okay, and you disagree with. I still disagree with that. But have you changed your mind? [01:09:22] Speaker A: You disagree with. [01:09:24] Speaker B: I disagree. I think that it is a worthy pursuit to understand across levels. And what they're saying is like, well, don't even look across levels, just manifolds. This is all you need. [01:09:36] Speaker A: Exactly. Yeah. So then you are, I feel, more on the side of our perspective, which we wrote in response. Right.
Where we try to highlight how, going down to the. [01:09:46] Speaker B: You're forcing me onto sides here, Tatiana, aren't you? But yeah, okay. [01:09:50] Speaker A: Perspective articles, they're meant to be a little bit more inflammatory, I guess, intentionally. [01:09:55] Speaker B: Right, Right. [01:09:55] Speaker A: So I don't think it's a bad thing. It prompted, by the way, we also had together with David and John, we had, what is it called? A generative adversarial collaboration. [01:10:10] Speaker B: So I saw that, and I saw David tell you that he thought that latent circuits aren't real. Like, he was kind of like, oh, you and your. It's just something. It's like a convenience or something like that. So I wanted to ask you about that, but what were you going to say about it? [01:10:24] Speaker A: Right. Okay, so there are a few answers. Like, first of all, understanding this circuit on the level of connectivity gives us huge causal predictive power. Right. The connectivity. [01:10:40] Speaker B: Sorry, do you mean the physical network connectivity or the latent circuit connectivity? Because they're two different. [01:10:45] Speaker A: That's a big question. Right. Like, it's something which we would like to understand better in the future. But let's say in the published work, what we did is to show that if we have a distributed network which we train on a task and then we infer the latent circuit structure from responses of that network, the circuit structure which we infer will predict what type of perturbations of the large network connectivity we can do to alter its task behavior in very particular ways. So it will alter both the manifold, obviously, and how the network responds in the task. Right. So I feel like it's difficult to do an experiment. Optogenetics is now getting very impressive, so we can perturb neural activity patterns. So it's something which we are actively working on now, kind of trying to extend this type of causal testing to the experimental setting. But so far we have not done it. We only did it in artificial models. But nevertheless, I feel the circuit grounding helps you to gain some additional causal power. But also I feel there is another aspect of it, that there was this kind of another view that maybe single cells are completely not interpretable, but manifolds are magically interpretable. They're very low dimensional and interpretable. And that's not often the case. So often we have situations where single cells are very complex, but activity is also very, very high dimensional. Right. Then we can always find low dimensional projections of this activity which will show some lawful structure, but it will be a very reduced picture of what actually happens in the entire circuit. For example, during a context-dependent decision-making task, if you look at the frontal cortex responses and you just ask how many linear dimensions do they span, it will be a relatively large number, like close to 50. Obviously you can project this activity just on a few axes, maybe four dimensions, and find interpretable trajectories. But is that all the circuit does during the task? No, you just made the problem simpler. You interpreted part of the computation. And that's why it gives kind of this feeling that we got more interpretability than on the level of single cells. By that metric, I can say, well, these neurons are truly heterogeneous, but I can cluster them in rough clusters. I will lose some level of resolution.
I just group them and interpret the average response profiles of these clusters and get some intuitions about how the task is solved. Will I be making a mistake? Yes, because I make an approximation. But it's also important to realize that if we just project neural activity to a few dimensions, we are also making an approximation. [01:13:48] Speaker B: Yeah, right. [01:13:49] Speaker A: Yeah. [01:13:51] Speaker B: Thinking about that little exchange that you and David had on, there was like a panel at the generative adversarial thing. Did you guys just. Have you come to. Are you still in complete disagreement on this? Like where are, where are you now on the scale of. I mean, are you at a. Just a standoff and agree to disagree sort of. [01:14:12] Speaker A: I don't know. I talked with John after that. I feel like we got more aligned. But I guess maybe you should ask off the record. [01:14:23] Speaker B: You can get aligned, but not on the record. [01:14:25] Speaker A: Yeah, no, no, I think, I think we found some common ground. [01:14:29] Speaker B: Yeah. Okay, so I want to make sure that we move on because. So we've talked a little bit about the latent circuit via manifolds, your latent circuit modeling work. Another thing that I wanted to discuss was your time scales, your intrinsic time scales work. And this is directly from, I guess, the International Brain Laboratory, or enabled by it. What got you interested in studying timescales of neural network firing and why should I care? I know why I should care. I'm directly interested in it for reasons based on what I'm studying. So. Yeah. How did you get interested in timescales and what does it mean? [01:15:10] Speaker A: Well, I feel like it's a longer story how we got interested in timescales. [01:15:16] Speaker B: This goes back to your studying single neuron dynamics almost, right? [01:15:20] Speaker A: No, no, no, it's more recent. More recent than that. [01:15:23] Speaker B: No, I know, but the interest. Yeah, so I was just making the connection to the old days as you call them. [01:15:30] Speaker A: So let's maybe not go all the way to the old days. So let's just maybe let me just focus on this very recent study which we did, which was really truly enabled by the IBL brainwide recordings. Why people, like in general, there are many reasons for why people may be interested in timescales. But one of the reasons was this finding by John Murray and collaborators back in 2014 that if you just measure timescales of resting activity, a timescale just quantifies, it's just a number to tell you whether activity of this neuron fluctuates fast or slow. Right. It's a very simple metric. And then it turned out that the timescales were increasing very systematically across the cortical hierarchy. This is very cool because hierarchy is defined based on the anatomy of the cortex, on the laminar termination patterns of feedforward and feedback neural connections. So there is this anatomical notion of hierarchy which aligns very well with the functional information processing hierarchy: you have your sensory areas which kind of need to quickly respond to stimuli, and then you have higher level cognitive cortical areas which need to integrate information over longer periods of time. And then it turned out that even if you don't look during the task, you look just at resting state activity and measure this simple quantity, the timescale of activity fluctuations, it correlates with or predicts this information processing hierarchy very well. So that was fascinating.
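A minimal sketch of how an intrinsic timescale of this kind can be measured, in the spirit of the resting-activity metric described above (the exact estimator used in the work discussed isn't specified here; the bin size, lag range, and the AR(1) surrogate data are assumptions): bin the activity, compute its autocorrelation across lags, and fit an exponential decay whose time constant is the timescale.

```python
import numpy as np
from scipy.optimize import curve_fit

def intrinsic_timescale(counts, bin_ms=50, max_lag_bins=10):
    """Estimate an intrinsic timescale (ms) from binned activity (trials x bins)
    by fitting an exponential decay to its autocorrelation. Illustrative only."""
    x = counts - counts.mean(axis=0, keepdims=True)          # remove the mean of each time bin
    lags = np.arange(1, max_lag_bins + 1)
    ac = np.array([np.mean(x[:, :-k] * x[:, k:]) / np.mean(x * x) for k in lags])
    decay = lambda lag, A, tau: A * np.exp(-lag * bin_ms / tau)
    (A, tau), _ = curve_fit(decay, lags, ac, p0=(0.5, 100.0), maxfev=10000)
    return tau

# Synthetic check: Gaussian AR(1) surrogate activity with a ~150 ms timescale.
rng = np.random.default_rng(1)
n_trials, n_bins, bin_ms, tau_true = 200, 40, 50, 150.0
alpha = np.exp(-bin_ms / tau_true)                            # AR(1) coefficient matching tau_true
x = np.zeros((n_trials, n_bins))
x[:, 0] = rng.standard_normal(n_trials)
for t in range(1, n_bins):
    x[:, t] = alpha * x[:, t - 1] + np.sqrt(1 - alpha**2) * rng.standard_normal(n_trials)
print(round(intrinsic_timescale(x, bin_ms=bin_ms), 1))        # roughly 150
```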
After that, an entire field of timescale research emerged. And I guess we should not go into all details of the field, but. [01:17:17] Speaker B: But this is like. This is like Xiao-Jing Wang's. Yeah, he must love this stuff. And also Uri Hasson studies this, like with stories, people listening to stories. So this has been shown in kind of different ways. But yeah, go ahead. [01:17:31] Speaker A: It's. It's. Anyhow, I feel like we don't really have time to dive in, because this field expanded. There's now an entire field of studying timescales in all different contexts. Right. Like related to disease, and in humans, and in different species. Like, let's set it aside for a moment. [01:17:48] Speaker B: Okay. [01:17:49] Speaker A: But what was cool is that this simple metric can tell you something about the organization of different brain areas. Right? It can give you a clue of how different brain areas are organized. However, in the past, all these studies were focused on cortex almost exclusively. There were only just a few studies looking at thalamus. But kind of this timescale hierarchy and the notion of hierarchy is mostly only applicable to the cortex. But what we learned from working with the IBL is the brain is much bigger than cortex. It is interesting. [01:18:24] Speaker B: I don't know where we are in that. At least for a period, you would think if you studied the brain that all you have is cortex. But are we in a state now where people are appreciating more the systems aspect and the subcortical stuff? Or have we even moved further toward cortex? I can't tell where it is as a field. [01:18:48] Speaker A: I don't know where the field is. To me, it feels like we are expanding our view of brain computation to the whole brain scale. [01:18:57] Speaker B: I hope so. I hope so. But your work on the timescales is contributing to that. So that's wonderful. [01:19:02] Speaker A: But that was a motivation. So our thinking was, well, timescale seems to be this very simple metric, but such an incredible marker of organization, functional organization of cortical areas. Can we use this simple metric to understand the logic of how temporal information processing is organized on the scale of the entire brain? And this was really enabled by the IBL, because the IBL recorded from almost every single brain region. So that was very natural. So this work was done by Yanliang, who is an IBL postdoc in my lab. And the way the IBL is organized, there are obviously these platform projects, like very big projects where everybody contributes. But also every postdoc has their own personal project where they can leverage the IBL data set to pursue a question of their interest. [01:19:53] Speaker B: So there are. So the postdoc gets hired. I'm sorry, this is just. I'm just being selfish, like out of curiosity. So you said that they're an IBL postdoc. Like, so they're hired by IBL, but they work under you? [01:20:06] Speaker A: Oh, no, I don't know. I just call it IBL postdoc because Yanliang is a postdoc in my lab who works on IBL related projects. [01:20:15] Speaker B: Okay, all right, got it. [01:20:16] Speaker A: Yeah, sorry. Yeah. All right. So this work we also did in collaboration with Roxana Zeraati and Anna Levina in Tübingen in Germany, and we collaborated with them on timescales for a long time. So. But our idea here was relatively simple. It's actually a very simple idea. Can we use timescale as this biomarker to delineate the organization of dynamical processing on the scale of the whole brain?
So we did something super simple. We just went and measured timescales in all these thousands of neurons which the IBL recorded from everywhere during spontaneous activity. And luckily, in the IBL design of the experimental protocol, they didn't only have part of the session dedicated to the task, they also had part of the session dedicated to 10 minutes of recording spontaneous activity. So it's a very long time. Like, it's actually the longest duration of spontaneous activity, I feel. [01:21:14] Speaker B: 10 minutes. I mean, that's a lifetime at those timescales. It really is. [01:21:19] Speaker A: Right, because very often these timescales, for example, they would be measured during the fixation period of a task. Right? Like where, for example, the animal is fixating, waiting for the trial to start. But the trial didn't start yet. So the animal is not doing the task yet. [01:21:32] Speaker B: But right then that. That's up to like maybe a few seconds, barely. [01:21:37] Speaker A: I feel like a second, like, would be very. [01:21:39] Speaker B: It's hard for anyone to fixate for longer than a second or more, you know, like to fixate on. But it can go. It's in the seconds range. Not minutes, not minutes, in the millisecond range generally. [01:21:50] Speaker A: But yeah, yeah, but kind of the opportunity to have these recordings done for a very extended period of time also allowed us to measure very long timescales, which would not be possible if you just have these snippets of activity which are at most one second. So we had both a very long window, and we also had the scale of the whole brain. There were, I feel, maybe two main interesting results which came out of it. The initial result was very surprising. What we found is that if you just look at the median timescale in each brain region, those median timescales were overwhelmingly, overwhelmingly longer in subcortical structures than in cortex. [01:22:37] Speaker B: That's surprising. That surprised me. [01:22:39] Speaker A: That was very surprising to us, because by this logic of the cortical information processing hierarchy, right, you think like frontal cortex, all association areas, these are the epicenters for you to think and deliberate on very long timescales. [01:22:55] Speaker B: Cognition is so slow, abstraction is slow, and everything underneath is like really fast and dictating things. Right? [01:23:02] Speaker A: Exactly. It just controls your immediate responses to threats or fast escaping, the reptilian brain which doesn't know how to plan. But it turned out that's not the case. And in hindsight, well, maybe it kind of even makes sense, because a lot of these structures are involved in controlling your internal states, which evolve on much slower timescales than any of these cognitive tasks. [01:23:28] Speaker B: You used the word control there. And I think of it also, and I'm curious what you think of the term like constraint, right? So it's almost like shaping things, which is like a slow control kind of process. But in a sense you have cortex acting fast up against this slow constraint. Right? So that does make sense. [01:23:51] Speaker A: Like a modulator. [01:23:52] Speaker B: Yeah, yeah, yeah, you can say modulator. [01:23:54] Speaker A: Yeah, yeah, no, I agree with that. So that was one interesting result. That was super interesting that we see big differences in timescale across brain regions. But across the board, midbrain and hindbrain and cerebellum had timescales up to fivefold longer than in cortex and thalamus. That was unexpected.
But then the second surprising finding we had, just by the sheer amount of neurons which we had in this data set. We also noticed that within every brain region, we see a huge variability in timescale across individual neurons. So if I tell you the prefrontal cortex maybe has a timescale of about 150 milliseconds, it doesn't mean every single neuron there has that same timescale. In fact, even in regions which have a relatively fast median timescale, we would find neurons which would have excessively long timescales, up to a second or even longer. So this got us curious. So we asked, okay, what's the distribution of timescales across neurons? And we looked at it, and it turned out to be the same heavy-tailed power law distribution with exponent close to 2 everywhere in the brain. [01:25:10] Speaker B: Despite the march toward longer timescales on average. [01:25:14] Speaker A: Despite this huge variation in median timescales across areas, if you look at the distribution of timescales across neurons, it was the same universal distribution all across the brain. [01:25:25] Speaker B: Scale free. Scale free. That's the rule. [01:25:29] Speaker A: Yeah. No, that was super surprising. So we wondered, okay, how can we reconcile it? And we used some mathematical modeling trying to make sense of it. And this model really suggested that this type of behavior in timescales could arise from systems which operate in a similar dynamical regime all across the brain. Although the physical properties of those neurons may vary from area to area, driving differences in median timescales, this universal scaling behavior could arise from a shared dynamical regime. [01:26:04] Speaker B: Are you continuing on the timescales line of work? Do you want to talk about what you're doing now? [01:26:11] Speaker A: No, I feel like timescales is not necessarily the main focus of what we do. Right, yeah. So what are we excited about now? In general, we're very interested in single trial neural dynamics. And we worked on it in the past, because that's another kind of leverage which the manifold gives you. So the classical approach allows you to study neural responses in reference to external variables, but the manifold view allows you to study neural responses with respect to each other. Right. You can reconstruct those manifolds just by looking at how the activity of cells is coordinated with each other. So, to put it another way, we can use unsupervised models to reconstruct the manifold structure. And we did it in decision making. And now what we are super excited about, we are kind of going towards understanding dynamics in the visual system, like feedforward and feedback dynamics across areas which resolve the ambiguity in sensory input. So that's super exciting to us, but it's a super new project. So we are also, I already mentioned it to you, but super excited about these new tools, like causal perturbation tools which are available in experiments. But on the other hand, right, you also need good modeling to get some knowledge extracted from the application of those tools. Right. Because, for example, with holographic optogenetics, you can stimulate a small ensemble of neurons. But what neurons should you drive, really? Right. And what do you hope to learn from this? Or if you want to control neural activity, how do you know which pattern to apply to drive your neural activity towards the desired states?
So that's another direction we are super excited about, kind of extending these models towards these causal perturbation effects. [01:28:17] Speaker B: And what's the answer? [01:28:22] Speaker A: Right. So, so far we actually had not been able to do this in an actual experimental collaboration. So we did this in synthetic models and also analyzed some existing data sets where patterns were selected a priori. So they were not designed to do anything specific to the circuit. But it's, I guess, something we are actively thinking about. [01:28:47] Speaker B: And what do you think? Like what. So here's a different way to ask that is like, what is. What's the obstacle to better understanding in that regard? Is, is there something like, clearly in your way that you know, if you had more of or better equipment, better models, et cetera? Like what, what is in your. Is there something clearly in your way? [01:29:10] Speaker A: I think we are getting a reasonably good handle on models. At least what we do works in artificial models. And I feel like we were just looking for collaborators. But it looks like this technology is also taking off, because what we want to do is to collaborate with experimentalists who perform these manipulations in animals during task behavior. So. But it looks like, just like with Neuropixels, this technology obviously is very complex. Right. But it seems like now we are kind of on the verge of this technology becoming used across multiple labs. So I hope, like, in the near future it. [01:29:49] Speaker B: Will be. So more IBL. You're saying more IBL is what we need? [01:29:54] Speaker A: No, I didn't say that. Right. I just said, like, we just need one collaborator. We don't need many. [01:30:00] Speaker B: So 30 years from now, are we going to be talking about manifolds? Manifolds popped up and now they are real and they exist. Right. Even though people were working with manifolds back in the day, and now everything's a manifold. In 30 years, I mean, is manifold the solution? What are we going to think about manifolds 30 years from now in neuroscience? [01:30:19] Speaker A: Like, I don't see why they should go. Right. Maybe there will be a new level of theories which supersede manifolds. But we still think about simple and complex cells in primary visual cortex, although we also have a more complex theory of coding in the visual system. Right? Yeah, I don't think, like. I don't see obvious reasons for why this type of theory approach which we currently use is necessarily wrong, so that it needs to be dismissed and replaced. Right. Maybe it has limitations, and maybe soon we will discover new ways to think about neural computation which will not require manifolds and kind of overcome some of those boundaries, or the glass ceiling we hit now, who knows? [01:31:07] Speaker B: Okay, but so things do semantically drift, right? So the notion of a manifold might change over time to accommodate new findings and new abstractions. But you mentioned simple and complex cells, and those are defined by their response properties. And there's an advantage to naming something. Manifold doesn't have that same thing, where someone came along in anesthetized cats and was like, manifold. [01:31:32] Speaker A: Right. [01:31:32] Speaker B: They were like, simple cells, complex cells. They gave them names by their response properties, and those names have stuck. But I don't know that they're that useful. Are they still as useful as they used to be?
We still call them simple and complex cells. I'm harping on that example because that's the example that you gave. Whereas manifold might just. Our notion of what a manifold is and does might change over time. I don't know. Does that ring true to you? [01:31:55] Speaker A: So let me try to make sure I understand your question correctly. So you say 30 years from now, if you go to visual cortex, you can use the same definition to still identify simple and complex cells. And they will be still the same. [01:32:10] Speaker B: Concepts, even if they're not that useful, though. I don't know if they're that useful of a concept, but I don't. [01:32:15] Speaker A: Want to argue about that. Right. But at least you say the definition is clear and stable. But you say it's maybe not the same with manifold, with what we call manifold. [01:32:25] Speaker B: Maybe not. But that's also visual cortex. You just said we name brain areas. Like I study motor cortex, like as if there's a little homunculus in there. I mean, there is. That's where the homunculus is in the brain. Right. But when you give a name to something, visual cortex. Oh, it does vision. But we know that there's lots of other things going on in there. Yes, it partially does vision. Right. But by giving it that name, now we understand it as a visual area, and it's. We can't get rid of that. [01:32:52] Speaker A: Right. [01:32:53] Speaker B: In the way that we conceive of it. And maybe manifold is. The notion is still, at least in my mind, still sort of forming, like what the hell a manifold actually is, you know. That's why I asked you, are they real? So maybe that's part of the process is. Is figuring out what we actually mean when we say these things. And so as we're building the notion, the concept, even though it's mathematically well defined, as you said, maybe our understanding of it will just be shaped differently as time moves on. I don't know. [01:33:25] Speaker A: Yeah, I don't see why. I feel like at this point we got to the point where, in some of those recurrent neural network models, which clearly are not exactly like the brain, right, we can have a very clear definition of what manifold will be explored by neural activity, given the knowledge of the connectivity structure in this network. And it's all very well defined, right. We can derive equations for latent variables, and these latent variables have a very explicit meaning and can be explicitly related to the activity and connectivity of the neurons in the full network. So I don't see how these definitions which we currently use are wrong. So maybe they are not all-encompassing and apply just to the particular kinds of network architectures which we study now. And maybe new architectures will come about where these notions will not be useful anymore. Like the same way you say, are simple cells and complex cells still useful? So maybe manifolds will not be useful anymore to reason about neural computation, and they will be replaced by new concepts. [01:34:34] Speaker B: But is that what you think though? What's your bet? [01:34:36] Speaker A: Well, no, I don't know. But simple cells and complex cells are not wrong. [01:34:40] Speaker B: They're not wrong, but they're defined by us. Right? I mean. [01:34:43] Speaker A: Right, but the same way manifolds are defined by us. And it's a useful concept to make progress now. Like, I don't know. I don't know. It's hard to anticipate what will come. [01:34:55] Speaker B: Sure. Okay. Well, you're helping invent the future.
Anything that we didn't discuss that you want to highlight or talk about, anything I did not ask you about that you want to discuss? [01:35:05] Speaker A: I think we touched on so many things. [01:35:07] Speaker B: So many things. [01:35:08] Speaker A: Yeah. [01:35:09] Speaker B: All right. This was fun. Thank you, Tatiana. And continue the good work. [01:35:12] Speaker A: Yeah. Thank you, Paul, for hosting me. It was a pleasure. [01:35:22] Speaker B: Brain Inspired is powered by the Transmitter, an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives written by journalists and scientists. If you value Brain Inspired, support it through Patreon to access full-length episodes, join our Discord community, and even influence who I invite to the podcast. Go to BrainInspired Co to learn more. The music you hear is a little slow jazzy blues performed by my friend Kyle Donovan. Thank you for your support. See you next time.
