BI 196 Cristina Savin and Tim Vogels with Gaute Einevoll and Mikkel Lepperød

Brain Inspired

Oct 11 2024 | 01:19:40
Show Notes

Support the show to get full episodes and join the Discord community.

The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. 

This is the second conversation I had while teamed up with Gaute Einevoll at a workshop on NeuroAI in Norway. In this episode, Gaute and I are joined by Cristina Savin and Tim Vogels. Cristina shares how her lab uses recurrent neural networks to study learning, while Tim talks about his long-standing research on synaptic plasticity and how AI tools are now helping to explore the vast space of possible plasticity rules.

We touch on how deep learning has changed the landscape, enhancing our research but also creating challenges with the "fashion-driven" nature of science today. We also reflect on how these new tools have changed the way we think about brain function without fundamentally altering the structure of our questions.

Be sure to check out Gaute's Theoretical Neuroscience podcast as well!

Read the transcript, provided by The Transmitter.


Episode Transcript

[00:00:05] Speaker A: This is Brain Inspired, powered by The Transmitter. Hey, it's me again. I'm on dry land. I hope that you are somewhere. So this is the second conversation I had while teamed up with Gaute Einevoll at a workshop on NeuroAI in Norway called Validating models: How would success in NeuroAI look like? Gaute creates his own podcast called Theoretical Neuroscience. Go listen to it. I link to it in the show notes, along with a handful of other relevant links to the good people you're about to hear. That's at braininspired.co/podcast/196. Thank you for supporting Brain Inspired, and thank you to The Transmitter. And we are back. We're still on a boat. I'm here with Gaute Einevoll again. We still have our, what is it, sea legs? Is that what they're called?
[00:01:00] Speaker B: Absolutely. I think, after we got off the boat, we all were a little bit, it's interesting, because what you call the vestibular system, the balance system, is a little bit out of whack. Yeah. Even back on land, we still have some oscillations.
[00:01:19] Speaker A: Yeah. That last day, we had another couple sessions, or one last session, in a conference room off the boat, and I was chair of one of the panels and moderated some things. And I was standing up there sort of swaying still, you know, so that was fun. Okay, so in the last episode, you heard from Andreas Tolias and Ken Harris. In this episode, Gaute and I had a conversation with Cristina Savin and Tim Vogels. And at the end of our conversation, we'll sort of wrap things up with Mikkel again, who helped organize; he and Konrad Kording helped organize this workshop. So I'll just start by saying we're not going to give huge introductions here, but Cristina gave a talk about, and researches, more the theoretical side. So she's right up your alley, Gaute, for your Theoretical Neuroscience podcast. But she uses recurrent neural networks to study how learning works in a very theoretically driven way. And do you want to say something about Tim?
[00:02:21] Speaker B: Yeah, Tim has been working on synaptic plasticity for many years. I know some of his work back from when he worked with Wolfram Gerstner, who educated many of the people working on synaptic plasticity in Europe. There he did some really interesting work, I think, on how networks get into this balanced state by self-tuning and by inhibitory plasticity. And now he's using these AI tools, or at least optimization tools, to actually explore the whole space of possible synaptic plasticity rules. His group in Austria is doing really excellent work, I think, when it comes to exploring synaptic plasticity in its many.
[00:03:18] Speaker A: Facets. I think, even more so than the first one, the audio quality in this is sort of in and out. There's a lot more creaking and noises. This was in the night, and the sea was angry that evening, it seems.
[00:03:32] Speaker B: It's actually, when I talked to the people on the boat about it, they said, oh, this is because we passed a stretch of ocean that's well known for often getting rough. And he said, oh, it's only going to last an hour or two, they said.
[00:03:50] Speaker A: And then, well, thankfully our discussion didn't get rough.
[00:03:54] Speaker B: No, it didn't. It was a little bit.
It was also late in the night, so that was maybe another reason for bumpiness. Yeah.
[00:04:05] Speaker A: All right. Anyway, final thoughts. I mean, this was just an excellent workshop, an excellent group of people. Excellent people. It was just a lot of fun, and I learned a lot, actually.
[00:04:14] Speaker B: And I'd like to add a final comment there, because I asked Mikkel about it: I think what people really liked was that there was a really wide variety of people there. They had really different backgrounds. And one of Mikkel's worries, as he mentions, I think, was that people were not going to be able to communicate, to have some common ground to discuss, but that was not the case at all. And interestingly, he also said that some of the people he invited he had actually heard on your Brain Inspired podcast. So maybe you can also take some credit for the excellent selection of researchers who were invited.
[00:04:56] Speaker A: Thankfully, I can take credit instead of being at fault, because it worked out well.
[00:05:00] Speaker B: That's true. Absolutely.
[00:05:02] Speaker A: Okay, enjoy our second discussion here.
[00:05:09] Speaker B: We're going to ask you some general questions, or many questions, about the relationship between neuroscience and AI. And we want to start by being a bit personal. It's a bit late in the evening, and the boat is maybe rocking even more than when we did the other recording. But anyway: in what sense has neuro AI changed the way you ask questions or do science, Cristina?
[00:05:40] Speaker A: You want to start?
[00:05:40] Speaker C: I can go first. I would say that we were doing conceptually what we're now doing with neuro AI many years before the term was invented. There are different flavors of neuro AI; we kept talking about it throughout the week. There's the AI-to-neuro direction, in data science terms and in theory terms, and there's the other way around, and we do a little bit of everything. But before there was AI, there was machine learning, and we did exactly the same things with a different set of tools. So from my perspective, the fundamental structure of our approach hasn't really changed, but the tools have gotten better.
[00:06:25] Speaker A: But how about for you in particular, you personally, on a personal level?
[00:06:31] Speaker C: No, this is specifically about the research.
[00:06:33] Speaker B: Maybe you should just briefly mention what the research in your group is.
[00:06:38] Speaker C: Yes, sure, why not? So my group is fundamentally interested in understanding principles of computation in the context of adaptive behavior, and we're interested in normative mechanistic interpretations. So we make circuit-level models of learning, of memory, of task-dependent adaptation, attention, things like that.
[00:07:05] Speaker B: And by normative, you mean that you ask: how is this helpful for the animal? Is that it?
[00:07:09] Speaker C: Exactly. By normative, we mean that we think these are fundamental computations for the animal, and so through evolution, development, et cetera, they've been optimized to do them well. So then looking at the machine learning optimal solution to the same kind of problems should give us an indication of the essence of the computation that the brain has to do. And I think it's really important that those computations be very important for the animal. This wouldn't happen for everything, but that's kind of the approach.
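To make the normative recipe concrete, here is a minimal sketch, assuming a toy setup in which the animal must estimate a latent state from one noisy cue with a Gaussian prior and Gaussian noise (all values are illustrative, not from the episode): first derive the Bayes-optimal solution, then use it as the benchmark a circuit model would be compared against.

    # Toy normative model: estimate a latent state z from one noisy cue x.
    # Prior: z ~ N(mu0, s0^2). Likelihood: x | z ~ N(z, sx^2).
    mu0, s0 = 0.0, 2.0   # prior belief about the latent state
    sx = 1.0             # assumed sensory noise
    x = 1.5              # the observed cue

    # The Bayes-optimal posterior is Gaussian; its mean is the
    # precision-weighted average of prior and observation. This is the
    # normative benchmark for the circuit-level model of the computation.
    post_var = 1.0 / (1.0 / s0**2 + 1.0 / sx**2)
    post_mean = post_var * (mu0 / s0**2 + x / sx**2)
    print(f"optimal estimate {post_mean:.3f}, posterior variance {post_var:.3f}")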
[00:07:41] Speaker A: But deep learning itself added nothing? You're saying the machine learning tools that existed before deep learning were sufficient for what you do?
[00:07:49] Speaker C: I'm saying that the structure of the approach, how we ask this kind of question, hasn't changed. Before deep learning, we would formalize our normative descriptions of the task in probabilistic terms, and you would use tools from Bayesian machine learning to say what the optimal solution looks like, and sort of try to make the map to circuit function. Now we have a richer set of.
[00:08:15] Speaker B: Tools. Bayesian machine learning, that's like using machine learning techniques to help find these probabilistic functions?
[00:08:26] Speaker C: Yeah. So, as I said, we have more tools and we have more powerful tools, but the way we approach the questions hasn't really fundamentally changed.
[00:08:38] Speaker D: I think I would agree with that.
[00:08:40] Speaker A: Wait a second.
[00:08:41] Speaker C: This is for what we do in my group. I'm not saying that this is a general statement.
[00:08:45] Speaker B: What about you, Tim?
[00:08:47] Speaker A: Based on your talk, I was thinking that you would have a different answer.
[00:08:51] Speaker B: Maybe, for the listeners, we should say what the talk was about.
[00:08:55] Speaker A: That's what the introduction is for.
[00:08:58] Speaker B: Okay.
[00:08:59] Speaker A: No, no, do you want to just give a brief roundup of that?
[00:09:02] Speaker D: So my lab is similarly interested in circuit dynamics and the interaction between network-level activity and plasticity rules. How do plastic synapses change the dynamics, and how do the dynamics change the synapses? I spent a lot of my time as a PhD student and as a postdoc tuning spiking networks, at times for months. And now I don't have to do that anymore, in part because I'm not the one doing the programming, but also in part because the tuning part is being taken over by machine learning methods. So how does AI change the way I approach my questions? I don't think about how painful the tuning is going to be anymore.
[00:09:54] Speaker B: Parameter fitting.
[00:09:55] Speaker D: Yeah.
[00:09:55] Speaker A: Looking back, does it feel silly that you spent that much time, or was that valuable?
[00:10:00] Speaker D: I think it was valuable. I actually had a blast doing it. It was frustrating, but it was also rewarding, and I don't regret having tuned for six months.
[00:10:10] Speaker B: But also, I remember one of the papers that you wrote with Henning Sprekeler, also in the group of Wolfram Gerstner, was this really cool thing where inhibitory plasticity does the tuning for you.
[00:10:24] Speaker D: So that came out of this.
[00:10:25] Speaker B: Yeah, but if you had had good AI tools, then maybe you wouldn't have thought about this.
[00:10:32] Speaker D: No, I think I still would have thought about it.
[00:10:34] Speaker B: Okay.
[00:10:35] Speaker D: But we may not have hand-tuned the rule.
[00:10:37] Speaker B: Okay.
[00:10:38] Speaker D: Yeah.
[00:10:42] Speaker B: So the absence of AI didn't really hold you back.
[00:10:46] Speaker D: I don't think so. I think there are other ways. You asked the other question, how AI has negatively affected things.
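The rule referenced above, in which inhibitory plasticity "does the tuning for you," works roughly like this: an inhibitory weight grows when presynaptic activity coincides with postsynaptic firing above a target rate, and shrinks below it, so the circuit tunes itself toward a balanced state. A minimal rate-based sketch in that spirit (parameter values are made up for illustration, and this is a simplification of the published spiking model):

    eta = 0.05     # learning rate
    rho0 = 5.0     # target postsynaptic rate
    exc = 20.0     # fixed excitatory drive
    pre_inh = 1.0  # presynaptic inhibitory activity
    w_inh = 0.0    # inhibitory weight, initially untuned

    for _ in range(200):
        post = max(exc - w_inh * pre_inh, 0.0)   # toy rate neuron
        w_inh += eta * pre_inh * (post - rho0)   # inhibitory plasticity update

    # The weight self-tunes until the postsynaptic rate sits at the target,
    # which is the "tuning for free" being discussed above.
    print(f"final rate: {post:.2f} (target {rho0})")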
[00:10:51] Speaker A: I was about to just go ahead and bring that up. Yeah, I thought, go ahead, you're going to answer it anyway.
[00:10:56] Speaker D: So I think there is a push, some pressure, to use ML tools as a scientist, and if you don't, you're not considered interesting.
[00:11:09] Speaker A: But is that ML tools, or is that specifically deep learning?
[00:11:14] Speaker D: I don't think it has to be deep learning. I think it has to be some flavor of large amounts of compute. So if you don't. You don't agree? I think if you don't put somewhere in your CV that you've used 600,000 hours of CPU time, you're not viable.
[00:11:32] Speaker C: I don't put that on my CV. That's bad.
[00:11:36] Speaker B: So.
[00:11:39] Speaker D: And I'm saying this facetiously, but there's truth to it.
[00:11:42] Speaker B: So now I know why I haven't gotten the grants lately. This has not been in my CV.
[00:11:47] Speaker C: I do think that the community has gotten more machine-like, in the sense that it's driven by fashions. Particular brands of deep learning have become fashionable, and it's easy to publish certain things and very difficult to publish things with other tools. But for me, the consequence of this fashion-driven research enterprise has been a reduction in the entropy of our approaches.
[00:12:13] Speaker D: I agree. And there are some people, sorry to interrupt, there are some people in our community who are very clearly very deep thinkers and very theoretically minded scientists, who are our seniors, who wouldn't get a job today because they're not using.
[00:12:34] Speaker B: ML. So a further reduction in entropy, meaning that there are more ways of doing things.
[00:12:41] Speaker C: Basically, I don't think there is one way of doing research that solves all of the questions. I think there is strength in a diversity of approaches in the community, because for different kinds of questions, different approaches are better or make more sense. And also, longer term, we want to preserve knowledge about lots of different ways of doing things, because they might become relevant again. One of the things that I learned when I was an undergrad from one of my professors, I did computer science as an undergraduate, was about these really old ways of doing memory storage.
[00:13:16] Speaker A: They made you learn this?
[00:13:18] Speaker C: Yeah, we had to learn this in school. And it's like, okay, why are we learning this?
[00:13:23] Speaker D: Grad school.
[00:13:23] Speaker C: Grad school. Undergrad, sorry.
[00:13:26] Speaker B: But she studied in Romania, so she's hardcore.
[00:13:31] Speaker A: Hardcore.
[00:13:31] Speaker D: What we had to learn is in.
[00:13:32] Speaker C: Kindergarten. We were learning in the computer science classes in undergrad about the history of different operating systems and how they handled memory, things like that. And it was like, okay, but we have better computers, we know how to do this better, why are we learning these things? And the moment mobile phones came along, which had very, very different resource constraints, all of these old tricks that had been completely irrelevant for a number of years became, all of a sudden, super relevant, super important again.
And I think this is the kind of thing that we also want as a community of scientists. We want to explore sufficiently many different things to be able to do long-term selection, this cross-breeding of ideas. If it's an echo chamber and everybody thinks exactly the same way, we have lost something really fundamental about the process of doing science.
[00:14:31] Speaker D: I think there used to be a period, about ten years ago or so, when there was maybe a bit of snobbery towards people who were purely numerical. That has flipped, and now there is a snobbery towards people who are not purely numerical, the idea being that just theory will simply not get you there.
[00:14:53] Speaker A: Well, neither of those is healthy.
[00:14:55] Speaker C: No, clearly. That's exactly the point. Entropy is the answer.
[00:15:00] Speaker B: Sufficiently many approaches. Yeah. Actually, in physics too, it used to be that people who did too much numerics were seen as number crunchers and not theorists.
[00:15:09] Speaker D: Right, exactly. Are you a theoretical physicist or not? Are you even.
[00:15:14] Speaker B: Exactly.
[00:15:16] Speaker A: But one thing, I mean, in your talk, you have used machine learning as a tool to explore the space of possible parameters that could tune the plasticity, right? And you were alluding to that earlier as well. So for you, it's really just changed the way you approach things, but as a tool.
[00:15:35] Speaker D: Well, my students can do things that I would have never been able to do. They maneuver vast landscapes of parameters that I could only dream of, and they have the means to not just travel through them, but actually find meaningful combinations. And that's just.
[00:15:56] Speaker A: Well, that's the whole point, right? Because you got to explore such a large landscape of possible combinations of parameters, you found the ones that actually work. And the backstory of this, of course, is that there's a, I'm just going to repeat the term, I know it's said over and over, there's a zoo of quote-unquote plasticity rules, in terms of the timing between pre and post spikes that then leads to strengthening or weakening of synapses. And it used to be that the Bliss and Lømo rule, sorry, that's a technical, very specific thing, that was the rule. But since then, lots of rules have been found. And what you used machine learning for was to explore the capacity, essentially the possibility, of the rule space.
[00:16:35] Speaker D: We added another 10,000, basically.
[00:16:37] Speaker A: Yeah.
[00:16:38] Speaker B: It's sort of interesting. It's analogous to when we used to do biophysically detailed neural modeling. There was only a handful of models that people used, like the Mainen and Sejnowski cells. There were a few cells that everybody used, essentially. And suddenly you got this automated way to make these neuron models, right? The Blue Brain Project produced a lot of neuron models, and also the Allen Institute. So suddenly you go from a handful of neuron models to a whole suite.
[00:17:21] Speaker D: Yeah.
[00:17:21] Speaker B: So has it made life more complicated?
[00:17:24] Speaker D: For morphologically plausible modeling, certainly. I think choosing what model you want to use has not gotten easier.
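One common way to set up the kind of rule-space search being described above, sketched hypothetically (the polynomial parameterization, the toy objective, and the random search are assumptions for illustration, not the lab's actual pipeline): write a candidate rule as a polynomial in presynaptic activity, postsynaptic activity, and the current weight, then score candidate coefficient sets against some desired network behavior.

    import numpy as np

    rng = np.random.default_rng(0)

    def apply_rule(theta, pre, post, w, eta=1e-3):
        # One weight update under a polynomial plasticity rule:
        # dw = sum_ijk theta[i,j,k] * pre^i * post^j * w^k
        dw = 0.0
        for i in range(2):
            for j in range(2):
                for k in range(2):
                    dw += theta[i, j, k] * pre**i * post**j * w**k
        return float(np.clip(w + eta * dw, 0.0, 5.0))

    def score(theta, steps=500):
        # Toy objective: prefer rules whose weight settles near a target
        # value under random activity (a stand-in for a real functional
        # criterion such as stability or memory performance).
        w = 1.0
        for _ in range(steps):
            w = apply_rule(theta, rng.random(), rng.random(), w)
        return -(w - 1.0) ** 2

    # Crude random search over the rule space: sample coefficient sets,
    # keep the one whose induced dynamics best satisfy the objective.
    candidates = [rng.normal(size=(2, 2, 2)) for _ in range(200)]
    best = max(candidates, key=score)

Each surviving coefficient set is a candidate plasticity rule; with a richer objective and gradient-based or evolutionary search, the same scheme scales to the thousands of rules mentioned above.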
[00:17:33] Speaker B: But also for you, with these synaptic plasticity rules, right, the zoo you showed us.
[00:17:39] Speaker D: No, I think for us it's gotten a little easier. I mean, we have different questions, but they're certainly more satisfying, because when you found a single rule that worked, it was almost certain that you were wrong, and the experiments were incredibly arduous to do. And now, I mean, now we're still wrong, but.
[00:18:06] Speaker C: You have a space of hypotheses.
[00:18:08] Speaker B: Yes.
[00:18:10] Speaker C: I think that's actually one of the good ways of using these more powerful tools that we've inherited from the deep learning revolution: to explore options that you wouldn't have thought of otherwise.
[00:18:21] Speaker A: They're not so expensive to explore as well.
[00:18:24] Speaker B: Right.
[00:18:24] Speaker A: Time-wise.
[00:18:24] Speaker C: Yeah. And it's practical for a PhD student to do that and get the PhD in a reasonable amount of time.
[00:18:31] Speaker D: So.
[00:18:31] Speaker A: Sorry. So what I'm hearing from you both, and I think that everyone in neuroscience would agree, is that the new brand of deep learning, machine learning on steroids, and AI are great as tools. However, it seems like you both agree that there's something lost in terms of the knowledge of the other spaces of possible solutions and approaches.
[00:18:53] Speaker D: I don't know if it's lost for us, because we already have jobs.
[00:18:57] Speaker C: We're old enough to know things. But I do worry about the incoming PhD students who have been trained to, you know, train a convolutional neural net or whatever; they know how to run some deep learning autodiff, and they come to their PhD and expect that to be the essence of it.
[00:19:16] Speaker A: You do worry about that?
[00:19:17] Speaker C: I do worry about that. So, we're talking about negative impacts, I think.
[00:19:24] Speaker D: I don't know if we are just old.
[00:19:25] Speaker B: Right?
[00:19:25] Speaker D: We just sound old.
[00:19:32] Speaker C: To finish the thought: really powerful tools are only as good as your uses of them. They have the potential to make things substantially better, but in the wrong hands they could also make things much worse. So my worry is about the ability of our students to think critically and to use these tools in a reasonable way. The focus now, if you're thinking about a junior person trying to get into this field, is not how to use the tools; you can go on the Internet and learn how to do that in a week. That's not the educational component. The educational component is how to think hard about the problems and about the use of these tools in a meaningful way. And that's hard, because the objects are increasingly complicated, so reasoning about them is hard.
[00:20:30] Speaker B: Because I remember reading about Linus Pauling. He was the guy who found the structure of proteins, right? He said that when he did the X-ray work, he had to do so much manually, and so he had so much time to think about things. So he was worried about the new tools, where you get the spectra automatically.
[00:20:52] Speaker A: But wasn't it Socrates who worried that writing would make us dumb?
[00:20:56] Speaker B: Yeah, I was just thinking about this paper that you mentioned, where you tuned this inhibitory plasticity to get to the balanced state. Right. So in this process of tuning by hand, you thought about it a lot.
[00:21:11] Speaker C: Yeah.
[00:21:11] Speaker D: But for the ISP paper, we didn't tune. That was the beautiful thing. That was the inhibitory synaptic plasticity paper. We tuned for the balance paper, two years before that.
[00:21:21] Speaker B: Yeah. Okay, so then you tuned that. But didn't you learn a lot about it? Didn't you get the dynamics of the network under your skin?
[00:21:27] Speaker D: I did, and I still do, in fact.
[00:21:31] Speaker B: But do you get that also with the machine learning techniques?
[00:21:34] Speaker D: My students. I think my students do, yeah.
[00:21:37] Speaker B: They sort of have an intimate.
[00:21:40] Speaker D: Yeah, I think so.
[00:21:41] Speaker B: Yeah.
[00:21:42] Speaker C: So I keep thinking about that Arthur Clarke quote, the "any sufficiently advanced technology is indistinguishable from magic" kind of comment. Sometimes I feel that at least transformers and some of these really big LLMs feel like magic to me. And I wonder how many of the neuroscience users of that technology treat it a little bit like magic, where you're so detached from how it does things. What are its limit cases? What are the ways it could possibly go wrong? You kind of take for granted that whatever that thing spits out is the truth. So I think that's another thing we're losing with all of this complexity: our ability to sanity-check the process becomes very, very hard.
[00:22:39] Speaker B: So I guess you both say that it has improved the methods for doing what you were already interested in, right? So that's an improvement of tools. And I guess these tools will hopefully lead to new discoveries in the future. But do you already have some examples of where it has already changed the way you think about the brain and cognition?
[00:23:05] Speaker A: Well, we've been talking about the tools, and the other side of that is this continually growing excitement about using deep learning models as a proxy to model brain areas, to understand brain areas and/or cognitive functions better. And that's a different facet of neuro AI. So that's where I was thinking: has it changed the way that you think about how brains function, not from the tool front, but from using them as models of brain function?
[00:23:39] Speaker C: So, when I said normative, we are actually doing that to some extent. And there is one example. I started my faculty job saying that we're going to do interesting, mathematically tractable things, and this is not going to be a deep learning lab. Forcefully. And then students came into the lab, and they really, really, really wanted to do it, and then I had to.
[00:24:03] Speaker B: To do deep learning.
[00:24:04] Speaker C: To do deep learning methodology.
And we tried to think really hard about how to do that in a way that's not stupid and trivial. So I couldn't stand by my original statement, but I was pleasantly surprised on a couple of occasions.
[00:24:19] Speaker B: Anyway, they were doing deep learning after you had left for home or something.
[00:24:24] Speaker A: But that must be, these days, doesn't every student want to come in and do deep learning?
[00:24:28] Speaker C: This is the problem.
[00:24:29] Speaker D: Partly not to my lab, I have to say. But I do spiking networks.
[00:24:35] Speaker A: I know, but I just, I don't know. Everyone wants to apply it to everything, so I just couldn't imagine.
[00:24:40] Speaker C: Yeah, it's going to happen sooner or later. But there is one example where there's a problem that we'd been thinking about for a very, very long time, and using tools from deep learning to ask those questions gave us qualitatively different solutions. And that's trying to think about how the brain infers, from noisy observations, latent states in the world that are important to drive behavior. We and others have a whole cottage industry, there's an entire niche in computational neuroscience, engineering probabilistic representations: how would neurons go about encoding beliefs about the state of the world?
[00:25:22] Speaker A: You're on that paper with Ralf Haefner.
[00:25:24] Speaker C: I'm on it, for my sins. It took us I don't know how many years to write, yes. But those are very constructive: me, as a theorist, I go about and say, what do I think are the things that are important? I know math; these, I think, are the things that are important according to the math, and this is how maybe I could map them onto neurons and neural activity. And in a recent project, we're trying to understand the behavior of animals making inferences about changes in the context.
[00:26:02] Speaker D: You have to take your hands off this. It's going to keep creaking.
[00:26:05] Speaker C: Sorry about that.
[00:26:06] Speaker D: Yeah, that's me.
[00:26:08] Speaker C: So, we went a different route, because the task was sufficiently complicated that it wasn't obvious how to apply the traditional approach. We tried to train some deep reinforcement learning agents to do the task, and at the level of behavior, when we analyze this recurrent neural network the same way as we do the animals, their behavior looks close to optimal, probabilistically. But if you open the box and look at exactly what they're doing, it's nothing like any of the solutions that we had imagined. So this taught us a big lesson: that this perfect mathematical elegance in the map to the neural activity was probably a futile endeavor altogether, and that there are these not-so-obvious ways to achieve functionally the same thing. And we would never have come up with those kinds of solutions on our own, without the use of these technologies. But I think those examples are still rare.
[00:27:15] Speaker D: Are you thinking, how does a neuron implement backprop? Some people think that, right?
[00:27:23] Speaker A: I don't care at all.
[00:27:24] Speaker C: Yeah.
[00:27:25] Speaker A: Don't care at all.
[00:27:26] Speaker B: No. In terms of finding parameters.
[00:27:28] Speaker D: Yeah, no, I'm like, meh. I don't know.
[00:27:33] Speaker C: Well, if you care about learning at the circuit level, I think you do need to care about how task-relevant information shapes synaptic plasticity in a way that drives behavior towards good states.
[00:27:51] Speaker D: Sure.
[00:27:51] Speaker C: And backprop, or backprop through time in this case, are mathematical tools to formalize that problem in precise ways.
[00:28:01] Speaker B: I mean, in your project you were just interested in getting at good solutions. As I say: in war, love, and optimization, everything is okay.
[00:28:11] Speaker C: Yeah.
[00:28:12] Speaker D: And not everything is global, right? There's a lot of local learning that is not backprop.
[00:28:17] Speaker B: Absolutely. When you talk about biology, right.
[00:28:19] Speaker D: Carsen Stringer just published a preprint that says at least 50% of what they see in terms of learning effects can be explained with local changes and local rules.
[00:28:32] Speaker B: But that's a question of what happens in real brains.
[00:28:35] Speaker D: Yeah, exactly. So "how is backprop implemented" is not that interesting a question for me.
[00:28:42] Speaker C: Yeah, Tony Zador would have totally liked this answer. I still believe that there's a sufficiently large amount of goal-directed learning happening that we need to figure that out.
[00:28:55] Speaker A: Yeah, well, Tony's going to be on my podcast again. He's been on it before, but I don't know if it'll come out before this chat or after. It depends on how long.
[00:29:03] Speaker C: They keep inviting us to the same neuro AI meetings, and they keep putting his talk before mine. So he gives an entire amazing talk about how the brain does very little plasticity, and it's all evolution, inductive biases. And then I come on and say, I'm gonna give you a talk about plasticity.
[00:29:19] Speaker A: Are you going to that this year? Is it at the end of this month? I was asked to go, and I was like, I don't know.
[00:29:24] Speaker C: No, we've already done a UCL one this year.
[00:29:27] Speaker B: What meeting is this?
[00:29:28] Speaker A: It's the Cold Spring Harbor one.
[00:29:31] Speaker D: Spiking networks. Ironically called spiking networks.
[00:29:33] Speaker A: No, this is a neuro AI one. There's a neuro AI something subtitle.
[00:29:39] Speaker D: Yeah, subtitled spiking networks, maybe.
[00:29:41] Speaker A: So, yeah, there you go.
[00:29:45] Speaker D: Ironically, no spiking networks.
[00:29:46] Speaker A: So it sounds like you don't give a damn about, no, I'm just kidding. It doesn't sound like this deep learning revolution has changed the way that you think about brain function.
[00:30:00] Speaker D: No, I don't know that it has. I'm still interested in the same things. I'm interested in local changes in plasticity rules, and in neuromodulated changes to activity that will then produce local changes in plasticity.
[00:30:16] Speaker C: I wouldn't say the same for us. What's changed is the scale of our ambitions. With this set of tools, we attempt to understand adaptive behavior of much higher complexity than we would have without it.
[00:30:32] Speaker A: But that doesn't speak to the way that you, internally, think about how brains function.
[00:30:38] Speaker C: No, no.
This is why I said that our methodological approach has only gotten richer; structurally, it hasn't changed.
[00:30:48] Speaker B: But, I mean, you are sort of, what you're doing.
[00:30:53] Speaker A: He's pointing at Tim, folks.
[00:30:55] Speaker B: What? Tim?
[00:30:56] Speaker A: Saying you.
[00:30:57] Speaker B: Yeah, Tim. Tim, you have worked a lot with spiking networks of the integrate-and-fire type, which is not multi-compartmental modeling, but it's still quite biophysical, right? I mean, there are real things in there. So do you think this focus on AI will actually reduce, suppress, that kind of activity?
[00:31:28] Speaker D: No, I don't think so. Look, I mean, that's part of the appeal of the questions that we're asking in my lab: we're not competing with the big companies, we're not competing with DeepMind, et cetera. They don't give a shit about spiking networks, because so far they haven't been proven to be computationally viable. And one of the questions after my talk was, can't you find a function or a task that is actually computationally interesting? All the neuro tasks, where you memorize something, are totally boring for someone who's doing.
[00:32:05] Speaker B: AI. But you still have this potential for providing, well, requiring much less energy. Isn't that the whole idea?
[00:32:13] Speaker D: That's an argument that people make for spiking. I don't know. That's making spiking networks the water carrier for big AI, and that's not my interest. I want to understand how the brain works.
[00:32:28] Speaker B: Sure. Yeah. Cool. So, these are exciting times in AI. How do you see the relationship between AI and neuroscience in the long term?
[00:32:46] Speaker A: Sorry, by the way, Cristina, you almost got the quote verbatim. It's Arthur C. Clarke: any sufficiently advanced technology is indistinguishable from magic. Sounds like you got it verbatim.
[00:32:59] Speaker D: Yeah.
[00:33:00] Speaker C: Yes, I think about that a lot.
[00:33:03] Speaker A: Sorry to interrupt.
[00:33:05] Speaker C: Thank you. So, the future of the interaction between AI and neuroscience, I think that's actually where things are maximally unclear at the moment, and one reason for that is entropy.
[00:33:19] Speaker B: Is that a word you like?
[00:33:20] Speaker C: Yeah, we do write papers about maximum entropy models, but this is an accidental happening. So, on one side, AI is in this phase of young enthusiasm and exuberance. You blink, and the entire set of fanciest architectures, fanciest tricks, has already changed. You can't keep up with the literature; these things change so fast. So it's hard to say what AI will be about in six months' time, not to mention ten years. That's a big source of uncertainty, because we don't know where that's going. Presumably, as this thing matures, you're going to see the same kind of things that you see in other maturing fields.
So it's going to be less about changing our minds about how we want to approach this every few months, and more about converging to a set of good solutions and trying to build a foundational understanding of why they work. We're not there yet, but.
[00:34:25] Speaker A: So you think that this recent exuberance will die down in the next couple of years?
[00:34:30] Speaker D: I think we're just going to switch course. I think general artificial intelligence is not far off.
[00:34:35] Speaker A: Oh, geez. Hot take.
[00:34:38] Speaker C: I'm not going to say that.
[00:34:41] Speaker D: At least in parts we're gonna have it. I mean, what do you define as general, right? Is creativity in it? I don't know that we're gonna be using that as humans for a while.
[00:34:54] Speaker A: Because creativity is not part of your definition of general.
[00:34:57] Speaker D: Yeah, I would say transferable skills, fine; creativity.
[00:35:02] Speaker C: And, to be sure, I think LLMs do something good, but it's not general intelligence.
[00:35:09] Speaker D: But we're closer to it than I think we can project, like we can project two years down the road. And so what the relationship between AI and neuroscience is, for the next two years, is going to be entirely dependent on how quickly tools from AI become ever more powerful for understanding relationships in neural recordings that we can't even fathom yet.
[00:35:39] Speaker C: Yeah, I mean, we're going to become users of these technologies, that's for sure.
[00:35:44] Speaker A: But we already are.
[00:35:45] Speaker D: Yeah.
[00:35:46] Speaker C: To an even larger degree. It's going to be kind of the bread and butter that you need to know how to do these things.
[00:35:51] Speaker D: But LLMs aren't spiking.
[00:35:54] Speaker C: I do wonder what purpose basic research in neuroscience serves if you have a functional model of general artificial intelligence. A lot of the original motivation for why I got into this field was: oh, this is the most intelligent system that we know. So if you want to understand principles of intelligence, looking at the brain is a good idea. But if we have an artificial model of that that we're satisfied with to some degree, and it's not at all clear exactly how to assess that, but assuming that were the case, then I think the community, the computational neuroscience community, would have to do some really serious soul searching about what our questions are, what purpose we serve now. And, like Ken and Andreas were saying earlier, maybe this becomes more about circuit-level mechanism, molecular, going more low-level. So systems neuroscience would not have that much of a purpose in basic research terms, but you're going to have to go down to that level to get to the clinical applications part of the process. That might be one way this would play out. It would make me very miserable, and I might have to change fields, but who knows?
[00:37:15] Speaker B: But one thing that we have seen, for example, talking about Andreas, is that he's making these foundation models based on deep networks, which are extremely good at predicting neural activity, neural responses to visual input, but are difficult to interpret.
So there's this thing of gaining predictability and losing interpretability. The older models were not as good at predicting things, but you could think about things like receptive fields and Gabor functions. So are you comfortable with having less interpretable models that predict more?
[00:37:57] Speaker C: I think interpretability is a concern. What the modern deep learning tools are providing us, which kind of goes back to the general intelligence discussion, are extremely powerful statistical descriptors of large quantities of data. And there's a very big difference between a very good statistical description of a data set and a process model that describes the causal relationships that generated that data; those could be completely different solution classes. Ultimately, we want to understand how things work, and we need process models. So it can't be the end of the story that something summarizes a large amount of data exceptionally well.
[00:38:47] Speaker D: But it also won't work to replicate what computer science has done, which is to create systems that we then don't really understand. Our goal is to understand the brain, so it won't help to simply replicate the brain in silico.
[00:39:01] Speaker C: There's another famous quote, "what I cannot create, I do not understand." But the fact that I can create something doesn't necessarily mean that I understand it.
[00:39:12] Speaker D: Right. Feynman is wrong.
[00:39:13] Speaker C: The converse is not true.
[00:39:15] Speaker B: But also, I think, if you have a biophysical network model, which of course is just as complicated as a deep network in some sense, but at least if you're able to make this model predict experiments, then you can start. It's like a white box. You can start playing with it, turning parameters on and off. So in that sense, it's like a beautiful research animal.
[00:39:41] Speaker C: But a one-to-one map. That's a beautiful research animal, I agree with that.
[00:39:44] Speaker D: A one-to-one map of the world is gonna be as.
[00:39:49] Speaker C: The best model of a cat is a cat, preferably the same cat.
[00:39:52] Speaker D: I don't know.
[00:39:53] Speaker A: Is it Schrodinger? No.
[00:39:57] Speaker C: This is a quote from the very first summer school ever on computational neuroscience. I keep collecting these.
[00:40:05] Speaker D: Yeah.
[00:40:06] Speaker A: Someone else. I don't have any good quotes.
[00:40:08] Speaker D: I like the other one better, with a cat, which goes: science is like looking for a black cat in a dark room, and it's not even a cat. And I think that's what we're asking, right? What's the relationship between AI and neuroscience? It's not even a cat.
[00:40:24] Speaker C: But going back to the original question, what's the future of the relationship between AI and neuroscience? I don't know, but I'm kind of excited to find out. I think this is gonna be fun, whatever it is.
[00:40:34] Speaker B: I agree with you.
[00:40:35] Speaker A: Do you have an AGI take? Tim gave his, Cristina gave hers.
[00:40:40] Speaker B: No, no, I don't have anything wild to say about this.
[00:40:43] Speaker A: Okay. So I think it's telling that we can all three.
First of all, I think that we have a misconception of what intelligence is, and we all have different definitions, and we'd have to operationalize it, et cetera, et cetera. But what I think of as AGI is way far away, and we can all three disagree, and none of us knows anything. And that is exciting.
[00:41:07] Speaker B: I should mention that I used to do condensed matter physics back in the day. I took a PhD and was even a postdoc doing it, so I switched at a late stage. And this was at a time when, I would say, we had solved the Schrodinger equation, and there was no mystery at the end of the rainbow. Then you come to neuroscience, and we understand so little, but it's fantastic.
[00:41:33] Speaker D: There's no rainbow.
[00:41:36] Speaker A: You're trying to find the end of the rainbow.
[00:41:37] Speaker B: Yeah, I think it's fantastic. At the end of it, somewhere far out there, there's consciousness, right?
[00:41:44] Speaker A: Yeah, sure enough. Yeah, there's no rainbow, but there are leprechauns everywhere.
[00:41:47] Speaker D: Yeah, I think so.
[00:41:48] Speaker A: With the stars.
[00:41:49] Speaker B: That's fun, though. I agree with Cristina. I mean, it's a big privilege to really be at the frontier of this big unknown.
[00:41:58] Speaker A: So we already kind of talked about what they believe neuroscience can learn from AI, and that's really in the form of tools, right? I mean, that's just mostly a tool.
[00:42:09] Speaker C: Yeah. I wish there was more the other way around.
[00:42:14] Speaker A: What AI could learn from neuroscience.
[00:42:15] Speaker C: Yes. Ways in which knowledge from neuroscience informs architectural choices, for instance, in deep learning, or other algorithmic quirks. I think there is a transfer the other way around, but it's subtle, and it's not one thing that made a humongous impact; in subtle ways, it affects a lot of things that are happening. Neurons and attention are the immediate, obvious examples. But also, the way they approach interpretability of their trained recurrent neural networks is essentially by treating them like a brain and doing neuroscience experiments: they do ablation experiments, they do in silico mapping of receptive fields, things like that. I remember in earlier days, five years back, when deep learning first started to be really, really successful, I was looking at what they were doing with the networks, trying to understand their properties. Like, ha ha, you're using experimentalists' tools to try to understand something really complicated, and failing in exactly the same way as the rest of the neuroscience community.
[00:43:25] Speaker A: That hurts on the inside.
[00:43:28] Speaker C: Yeah, but this is kind of the point: it's not that there is no flow of information, it's just that it's hard to pinpoint one thing that completely made the difference. In subtle ways, we're influencing how the process works in a lot of ways.
[00:43:45] Speaker A: Well, it's not subtle how AI is influencing neuroscience. I mean, someone had to argue today that the brain is not a transformer. They actually spent a whole talk arguing that we're not transformers.
[00:43:57] Speaker C: The way he stated it, I don't think that was a difficult position to defend.
[00:44:03] Speaker A: No, but the fact that it was defended for that long.
[00:44:06] Speaker B: But there are also neuroscientists who make models of how transformers could be implemented, both at the neuron-circuit level and at the neural-area level.
[00:44:14] Speaker C: Yeah, we do have models of contextual modulation of visual processing, for instance, that are circuit-level models. They're not transformers in the details, but they're transformers in spirit, in the sense that there is top-down information that decides what kind of things from my input stream are task-relevant and which bits are not, and I preferentially transfer the bits that I care about. The dynamics of that is what we have circuit models for, related to actual neural data. So, again, what counts as similar enough? Is similar in spirit an inspiration, or is similar in spirit but not in the details something to be ignored? I don't know.
[00:45:05] Speaker B: Changing topic a little bit: you, Cristina, just got tenure. Congratulations. Very good. Which is obviously a milestone if you want to stay in academia. And you, Tim, have had a permanent job, but only for a few years.
[00:45:23] Speaker D: Four years.
[00:45:23] Speaker B: Exactly. So you're rather newbies in terms of tenure, right? And that raises this question: what does it mean to be productive in science? From an operational point of view, if you want to stay in academia, being able to qualify for tenure is a good thing, and you have to show off with papers and grants and stuff to get there. But now, even if it's only been a few weeks, you can do different things with your career. You can be involved in many projects and spread yourself thin, or you can.
[00:45:58] Speaker C: We were always doing that.
[00:46:01] Speaker B: Yeah, exactly.
[00:46:02] Speaker D: Yeah. Actually, the time before tenure for me was probably much more diverse in what I was doing.
[00:46:11] Speaker A: Is that because you were trying to find the thing?
[00:46:13] Speaker D: No. Also, I was doing things like the Imbizo, you know, the summer school in South Africa that I started to co-direct, or World Wide Neuro. These things were not necessarily directly career-relevant, but they were fun, and they served as an outlet, because otherwise computational neuroscience doesn't have a direct impact on many things.
[00:46:43] Speaker A: We'll cut that. We'll cut that.
[00:46:49] Speaker D: And those additional hobbies, quote unquote, serve that purpose.
[00:46:54] Speaker C: If you're an American professor on the tenure track, you don't have time to have hobbies. My entire career has been about doing too many things at the same time, and that's a bad strategy. They tell you not to do it, but it just happened to me. It's kind of how my brain works: I have lots of spread-around interests. I don't care about one thing, but about a range of things. And we had lots of collaborators, so that mushroomed into even more projects. So it just happened.
[00:47:23] Speaker B: But do you think you did your best work in that way, or do you think it's better now that you can focus on fewer things?
[00:47:33] Speaker D: I find myself being swallowed up by administrative chores since I got tenure. It is now said that the time of the juniors has to be protected, and so for us the administrative load has increased. So what I used to spend on the Imbizo, which I have just retired from, or World Wide Neuro, I now spend on hiring committees and grant committees, all kinds of less immediately relevant things.
[00:48:02] Speaker A: Yeah, okay, wait, were you going to ask about productivity?
[00:48:08] Speaker C: To answer the question: I was putting together my documents for tenure not so long ago, and I had to think retrospectively about what exactly it is that we've achieved, where this is going, things like that. There was a lot of soul searching involved, and I came to the conclusion that doing fewer things better is something that I would like to try in the coming years. There are practical constraints, so you can't trim down as much as you might want.
[00:48:40] Speaker A: But I just wanted to ask, and then we can come back to this: what advice would you give to people coming in now, reflecting on your own path? Are you going to say, don't do what I did?
[00:48:55] Speaker C: No. I do tell people that on a regular basis, but this is not what I'm going to say today. I think taking the time to find a good question before you frantically jump into doing things is something that I try to encourage in my starting PhD students.
[00:49:19] Speaker A: How do you know what a good question is when you don't know anything?
[00:49:22] Speaker C: Yes, so this is the second immediate piece of advice that I give to those people: read the literature broadly. Maybe this is the old grumps talking, but I do feel that in graduate school I spent a good fraction of my time reading papers. And there used to be this "ten rules" advice, ten rules for blah blah blah, and there were ten rules for becoming a great writer, and one of those was: read ten papers a day. Okay, nobody can read ten papers a day. What are you talking about? These papers are getting so complicated, it takes a week to read one. But I think a milder version of that advice is very good: you need to know what the field is about. People tend to read very narrowly, exactly in the niche of what their project is about, and miss a lot of really important connections because they just don't have the breadth.
[00:50:22] Speaker B: Of course, now you have two excellent podcasts in the field: Brain Inspired, and mine, Theoretical Neuroscience.
[00:50:32] Speaker A: And ChatGPT can summarize.
[00:50:34] Speaker D: That's not a podcast.
[00:50:37] Speaker C: That's kind of a thing, like Tim's new app to summarize.
[00:50:42] Speaker B: What is the new app? Yeah, tell us about it.
[00:50:48] Speaker C: No, I just want to say that the challenge there is that the field has mushroomed, there's a lot more production, so basically there are a lot more papers to read in general.
But that's maybe one of the places where deep learning might help, because if you have summarization tools, you can get at least a superficial breadth. And tweetprints.
[00:51:09] Speaker D: Tweetprints, actually, I think are a hugely interesting thing. A tweetprint, like the preprint summarized on Twitter, you know, or now on.
[00:51:20] Speaker B: The.
[00:51:21] Speaker A: A summary.
[00:51:22] Speaker D: Yeah. Quick summaries on Twitter. I really enjoy those.
[00:51:25] Speaker B: Yeah. Okay. I should maybe start doing those.
[00:51:29] Speaker A: What would be your advice?
[00:51:31] Speaker D: To drink, I think. Drink from the fire hose. I don't know. I really don't know. I find the term work-life balance really problematic, especially for young PIs, but also for grad students and postdocs. I think the only work in your life is that you have to sleep enough. That's the only work time. Everything else is sort of life.
[00:52:06] Speaker A: Wait a second, I'm trying to understand this. The work is the sleeping?
[00:52:12] Speaker D: I find sleeping is my worst chore of the day. I set an alarm clock in the evening so that I go to bed at twelve.
[00:52:17] Speaker A: Because otherwise you're going to keep working.
[00:52:19] Speaker D: No, I just keep staying up. I mean, that's really the only thing that I violently dislike in my day, going to bed.
[00:52:28] Speaker A: Have you ever tried putting the cocaine away?
[00:52:32] Speaker D: No. But I think, what would be my advice? Do what you're interested in. Run as hard as you can. Don't take prisoners. Drink from the fire hose. I don't know.
[00:52:41] Speaker A: Do you think work-life balance has gotten out of control?
[00:52:45] Speaker D: I think I'm going to get slaughtered if I say that. Seriously.
[00:52:49] Speaker A: But, you know, things can.
[00:52:51] Speaker D: I think what is considered work has been a little bit corrupted, in a way. You can't be at the same time a student and a worker, in my opinion. So you have to decide whether you want to be a graduate student and take what you get as the privilege of being taught something, or you decide that everything you do after you reach your lab is work, and then it's a nine-to-five job. But you can't have your cake and eat it too, I think. And so you decide whether you're actually a graduate student, or a scholar of some capacity, to whom society gives relatively large amounts of money for very little productivity compared to a baker or a builder or various other jobs, and that goes for graduate students, postdocs, and PIs as well.
[00:53:54] Speaker A: Relatively large amounts of money?
[00:53:55] Speaker D: I would say so.
[00:53:56] Speaker A: Okay. We're through.
[00:53:57] Speaker D: Yeah.
[00:53:57] Speaker A: All right.
[00:53:58] Speaker D: I mean, I certainly got a lot less money than what grad students are getting today.
[00:54:04] Speaker B: As a grad student.
[00:54:05] Speaker D: As a grad student, yeah.
[00:54:06] Speaker B: And you were in the US, and now you're working in Austria, so it's partially a European-US thing, right?
[00:54:12] Speaker D: Yeah, I don't know. I think it's a life choice you make at the end of the day. You are going into a field that is very competitive, but also very privileged, in that, you know, we're sitting on a ship talking about science.
[00:54:30] Speaker A: It's the first time for me that I've been on a boat at a conference. [00:54:34] Speaker D: Yeah, me too. Same. But a friend of mine, Guillaume Lajoie, always says, I fucking love science. And he fucking loved science, always, even when he was being paid very moderately. And to keep that in mind, to call that back into your own memory, into your own every day, that you're not, in fact, doing something because you're being forced to, but that you're doing it by choice, is an incredible privilege. [00:55:05] Speaker A: Christina, you look like you're chomping at the bit. [00:55:07] Speaker C: Yeah, no, I think I agree with this. I think that being in graduate school is very intense and very hard, and if your heart is not really in it and you're doing it as a job, then probably there are better ways of getting the same amount of money, with better benefits and a better life. So I think sometimes people get into graduate school either out of inertia. They were doing, oh, that's. [00:55:35] Speaker A: Like maybe 70% of people, I would guess. [00:55:38] Speaker C: Or because of the sort of social pressures, that they want the title, but they don't really enjoy the process. [00:55:45] Speaker A: And those are overlapping populations. [00:55:48] Speaker C: There might be, but I kind of feel sad about those people. [00:55:52] Speaker B: You are both clearly extremely motivated, and you have sort of made it, also in the sense that you got excellent academic jobs. But I mean, there are also people who, I mean, are not at full health, they have some health limitations, or maybe some family obligations. So are you saying that, or maybe they're just lazy? Yeah, lazy. [00:56:14] Speaker A: I can't point a finger at them, really. [00:56:18] Speaker B: That's impossible to do. [00:56:19] Speaker D: That's an affliction. [00:56:21] Speaker B: So, I mean, there are sort of, if you cannot put in, like. [00:56:26] Speaker C: I'm not saying that you need to work 12 hours a day. I'm just saying that the hours a day that you work, you need to be a hundred percent in it. Those need to count. [00:56:35] Speaker B: Exactly. [00:56:36] Speaker A: Whereas what's a job that that would not apply to? [00:56:41] Speaker B: Most jobs? I mean, most jobs where you sort of work from eight to whatever, nine to five, and then, like, if you work in a shop or, like, maybe, like, I'm sure that they're able. [00:56:54] Speaker C: Maybe this is not charitable, but basically there are plenty of things that you can do at 80% and get away with it. And I don't think you can do science at 80% and get away with it as a career. You might get the PhD that way, but you're not going to be very successful. [00:57:13] Speaker D: And to be fair, at the postdoc level, I think there are a lot of people who are putting in 100, 120%, and they're not getting jobs, and they're not getting invited, and their interviews go poorly. Because the bottleneck is getting a PI position, and that's the fire hose. [00:57:30] Speaker A: That won't turn on. [00:57:31] Speaker D: Yeah, yeah. Or that is, you know, passing them. [00:57:35] Speaker A: Right, right. [00:57:36] Speaker B: I say to people who ask me about taking a PhD: you learn how to program, code, whatever. That's a safe investment regardless.
But going on to a postdoc, maybe that's a little bit riskier. I mean, if you're aiming for academia, that's sort of the bottleneck. There are many more postdocs than PI jobs, permanent jobs. Right. [00:57:55] Speaker D: I was ignorant of the cliff that I was standing on when I started my postdoc. And until I had kids, I actually had no risk management plan. I was just like. [00:58:10] Speaker A: But you didn't really need to. [00:58:11] Speaker D: No, I didn't need to. [00:58:12] Speaker B: I was young and immortal and could always do something. [00:58:14] Speaker D: Exactly. [00:58:16] Speaker A: Go ahead. [00:58:16] Speaker C: Sorry. Because I did my PhD in Germany, and the German system gave me a lot more sort of insight into this, because it's so difficult to get a permanent academic position. In Germany, people start a PhD largely with the expectation that they will go to work in industry, and the transferable skills, what you're learning being used for lots of different things, is kind of part of the MO. It's not an afterthought or plan B. It's the default, and if anything else happens, that's good, but it's not really expected. In the US, we're still sort of selling the academic path as the default, and everything else is plan B, although the numbers are really not substantiating that in any way. [00:59:12] Speaker A: Yeah, I'm slightly afraid to ask your productivity question, given the last answer. [00:59:17] Speaker B: No, but I mean, yeah, the slow productivity thing. [00:59:20] Speaker A: Yeah. [00:59:21] Speaker B: That I talked about. I mentioned that I read this book about slow productivity. [00:59:24] Speaker A: We have just a few more minutes, if you guys are good for just a couple more minutes, and then we'll go. [00:59:28] Speaker B: Yeah. So I read this book on slow productivity, and the basic idea is that it's easy to measure productivity if you're a farmer or producing industrial things, but knowledge work is not so easy to measure. Right. And then you get these proxies: oh, many papers, or that you look visible and you work a lot. And then you think about when you read, for example, about the lifestyle of Darwin, who had, I would say, a quite productive scientific life. He didn't work that many hours. Right. He worked in the morning, and then he did some other things. He was sort of focusing on a few things and doing high-quality work. So is it something? But I guess you feel pressure to get in grants and have students. Success is often measured now in how many students you have and how many grants you get in and how many papers you. [01:00:34] Speaker C: Yeah, I need to get enough money for my students to have jobs, to be able to graduate. So there are external pressures. So it's not like you can take your time. I wonder, with these historical examples, whether it's fair to make those comparisons. I do think it's the better way of doing science, but the sociology doesn't work. And the reason for that is a lot of the very successful scientists in that period were independently rich, and they did this as a hobby, for fun. So they did it whenever they wanted. They had the time and the leisure to do that.
And it wasn't sort of, I need to desperately get some stuff done. We're in a much more externally driven mode. [01:01:17] Speaker A: So how do you think about productivity personally, then? [01:01:21] Speaker C: I wish. I think Darwin might be an extreme version, but I do think you actually make better science that way. But. [01:01:29] Speaker A: But screw it, we're not in that system anymore. [01:01:31] Speaker C: So the sociology doesn't work, and until, as a culture, as a field, we decide that we are going to change the incentive structure in a way that makes that a feasible MO, we're going to have to do what the external pressures force us to do. I can't stop doing certain things. It's just not going to work. [01:01:59] Speaker D: So my postdoc advisor, Wolfram Gerstner, when I joined his lab, told me three rules for his lab. Or there were four, actually, but three big ones: show up in the lab once a day; one paper per year with your name and my name on it, doesn't matter in what positions; and if you want to go to a conference, you have to present your own work. And then on top of it, we couldn't speak anything but English in front of open doors. That was rule number four. But that was it. And I try to propagate this, and I think that's a great rule for postdocs. One paper per year is doable, and not necessarily as a first author, because sometimes they support grad students. But if you're a postdoc for four years and you got four papers out of it, that's really productive. [01:02:47] Speaker A: Yeah, but if that's the rule, then it says nothing about the quality of the work. [01:02:51] Speaker D: No, no, but it does, because one paper per year is not a lot if it doesn't have to be a first-author paper, but it takes a lot. [01:02:57] Speaker A: Just to write the paper and shape the paper and get the paper out the door. So that in itself is a lot of work. I'm trying to, like. So if you're working to get a paper out, that's different than working to answer a question, right? I mean, presumably you would want to be doing both. [01:03:14] Speaker D: Yeah. No, but I think you front-load with work, and then you end up with papers. And so maybe I'm speaking because of confirmation bias, because I was in Wolfram's lab for four years and I have four papers. But, you know, this plus or minus one seems to work okay. Also in my lab, the postdocs seem to, you know, they're not slacking off. They're working. And at the end of the day, they're ending up with about a paper per year. [01:03:40] Speaker B: And you were doing monkey physiology, which is the hardest thing. [01:03:45] Speaker D: And this was a theory lab, right? This was a theory lab. Experimental labs are probably easy. [01:03:50] Speaker A: What you do is. Exactly. [01:03:52] Speaker D: It's not real work. [01:03:54] Speaker A: I have to clean monkey cages. [01:03:56] Speaker B: Hard work. [01:03:57] Speaker A: Scrap that. Never mind. All right, so actually, I have one more just kind of fun question. [01:04:05] Speaker B: Yeah, please, go ahead. [01:04:06] Speaker A: I asked this of them earlier, and that is: how do you know when, or if, you have a good idea scientifically, without doing any work to vet it? You just, you know, you're in the shower, you're daydreaming, whatever, and you have this idea. How do you know if it's any. [01:04:26] Speaker D: Good? If it's coming back. [01:04:28] Speaker A: If it's coming back, like, the next day? Yeah, if you think of it again.
[01:04:31] Speaker D: Yeah. [01:04:32] Speaker A: So you don't write it down? [01:04:33] Speaker D: No, I write them down. Well, I used to write them down religiously. [01:04:37] Speaker A: But then you can just read it and it comes back. So every idea is good? Every idea you write down? [01:04:41] Speaker D: No, no. Now what I usually do is I text them to a student of mine or a postdoc, and then they're like, you're insane, or, more likely, you're stupid. Or they just don't respond, because you're not even wrong. [01:04:57] Speaker A: They're busy writing that paper they have to write. [01:04:59] Speaker B: Exactly. [01:05:01] Speaker D: But if it comes back. Usually, I think, a good idea will avail itself a few times. Because if it's a good idea, you think about it for quite a while. [01:05:11] Speaker C: And I sometimes discover, when I look at old notes from, like, a year before and stuff like that, it's like, oh, I had this idea before. I completely forgot. [01:05:20] Speaker A: You must write better notes than I do. [01:05:22] Speaker C: I write terrible notes. Otherwise I would have done it the first time around, and I wouldn't wait for a year. But for me, the ideas that I find good, and they don't have to be right, they have to be good, are ideas where I'm itching to find the answers, that I keep thinking about. And I really want to do the numerics right now. I want to do the math right now. I want to talk my student into doing it right now. So I get this sort of, like. [01:05:47] Speaker D: Vibe. Wouldn't it be cool if we could do this? Yeah. [01:05:52] Speaker C: And it's not that all of the things that we do are like that, but I think what keeps me going is the things that feel like that. [01:05:59] Speaker A: Good. All right, guys, keep going. Thank you for your time. Yeah, thanks a lot. [01:06:04] Speaker D: Perfect. [01:06:04] Speaker A: It's been fun on the boat, hasn't it? [01:06:06] Speaker D: Thank you. [01:06:06] Speaker C: It's been more wiggly than. [01:06:09] Speaker A: It's super wiggly right now. [01:06:10] Speaker D: Yeah, because we're outside. Right. [01:06:17] Speaker A: So we're here with Mikkel again, the organizer, the brains behind the conference. I know you had help, and Tona and everyone was a great help in putting it together, but. All right, so now you've had this thing. What was it? A success? So the title, right: Validating models. What would success look like in NeuroAI? And the last thing that I brought up, the very last thing that we did, was this panel discussion about what would success look like in NeuroAI. And there was a wide variety of responses, actually. But we've had a lot of great talks and great discussions throughout the trip. And Gaute, you can chime in here too, but I just wanted to get your sort of reflection on how you think it went. So, boat this year. Gaute thinks spaceship next year, outer space. [01:07:06] Speaker B: Yeah, you sort of have to top yourself. You have to, sort of, like, space station. [01:07:10] Speaker D: Yeah, exactly. [01:07:11] Speaker E: There's no arguing against that. [01:07:14] Speaker B: Next year. [01:07:15] Speaker A: We all get our own speedboats, and we all have, like, headsets, you know, racing and giving talks. Yeah. So what do you think? Was it a success? Was the workshop a success? How are you feeling that it went? [01:07:29] Speaker E: I think it was a great success.
I think everyone really enjoyed the conference, or the workshop, in terms of the scientific material, but also in terms of the kind of social aspect, and just the trip has been really great. But in terms of the science, I think there were two big worries. One would be high winds and waves, bad weather, so everyone got seasick. That would be terrible, right? But the weather was great, so that didn't happen. So, yes, that's a success. The second thing I was worried about would be that, you know, all the talks were off target, or that no one would discuss or talk, and, you know, it would be just like another science conference where everyone just gives all their data and it's impossible to respond to it because there are so many details. [01:08:30] Speaker B: I think this format, where everybody had a 40-minute slot, 20 minutes for presentation and then 20 minutes for discussion, that was very successful. [01:08:42] Speaker A: Yeah, that was good. [01:08:43] Speaker B: Yeah, that was the right format. At least for this meeting, it was perfect. [01:08:47] Speaker A: Did you learn anything? Is it going to change the way that you approach anything in your own work? Because I'll start off by saying I made connections and had conversations that gave me new avenues of thinking about my own work. So that's been super valuable to me. [01:09:02] Speaker E: I mean, one thing, just rethinking about the workshop topic: I think continuing to probe the community, and the people doing the science, on how we should do it. [01:09:21] Speaker B: Right. [01:09:22] Speaker E: I think that has been a major insight, that this really is an important thing to do. Ask these critical questions: if you take a step back, what would it look like if your model were actually doing something like the brain is doing, and how would you measure that? Or, you know, what are your satisfaction criteria? [01:09:49] Speaker B: I was surprised, when you were leading the panel debate at the end, Paul, asking how many of you feel that you sort of know what success would look like. [01:09:59] Speaker A: There were about eight hands that went up. So the question. [01:10:02] Speaker B: Yeah, just repeat the question. [01:10:04] Speaker A: Yeah, the question was: on a scale of one to ten, if you're an eight or higher in terms of feeling that you know what success would look like, then raise your hand. [01:10:17] Speaker B: Yeah. And what fraction of the participants? [01:10:19] Speaker A: Well, there are about 30 here, so almost a third, 27%. [01:10:24] Speaker B: I would expect that number to be higher, and that it would rather be that people had different opinions of what this success would look like. [01:10:34] Speaker A: Well, everyone who did raise their hand did seem to have a different opinion, but I would expect that if people were honest, it would be about that number. [01:10:43] Speaker B: Really? Yeah. [01:10:43] Speaker A: I didn't raise my hand. No, I can't articulate it. [01:10:47] Speaker B: No. [01:10:47] Speaker A: And that's a problem. I know that's a problem. And so this is a good venue to explore that. [01:10:53] Speaker D: That's true. [01:10:53] Speaker B: So that was a little bit surprising for me. Yeah, that was.
[01:10:59] Speaker A: Maybe you raised your hand, Gaute, maybe. [01:11:01] Speaker B: Because I'm a little bit of an outlier here among the participants, in the sense that I come from the physics side of modeling, and I do physics-type modeling of the brain as a physical system. Not that I'm not interested in what the functions are and the other models. And there, success is a little bit clearer, in the sense that you try to mimic physiological data. [01:11:25] Speaker A: That's why you like Stefano's. Sorry, Andreas's work so much. [01:11:29] Speaker B: Yeah, exactly. So maybe when you come from physics, the idea of what success is, whether it's a good idea or not, is more clear. It's more imprinted in us. [01:11:41] Speaker A: Well, and that's why, when I drew that awful diagram, it had a lot of different little lobes of success, and how those lobes could maybe attach to the different goals, from the way people use AI as tools or models and stuff. But, Mikkel, you also raised your hand, didn't you, when I asked that question? You don't have to articulate it, but don't I remember you raising it? Because the question was, do you feel like you can articulate it. [01:12:09] Speaker E: So you. [01:12:09] Speaker A: You're an eight or above? [01:12:10] Speaker E: Yeah, yeah. And, I mean, that has to do with, you know, what I was doing and thinking about before I started this workshop, because I was thinking a lot about it before the workshop. So if you would add, like, a confidence score as well, that. [01:12:33] Speaker A: That is the confidence score. That's why it's a one to ten. [01:12:35] Speaker D: No, no. [01:12:36] Speaker E: It's a confidence that you know what it would look like, but it's not a confidence in how sure you are that it is actually achievable. [01:12:47] Speaker B: That is a different thing. Yeah, that's a different thing. Right. I mean, I know what success would look like. I'm not quite sure if it's possible. Right. [01:12:54] Speaker E: I mean, because you could say success would look like us building a perfect brain or something like that. I mean, no one would argue that that's not success, but. [01:13:07] Speaker A: No, no, but if you build a perfect brain, then what you're left with is a brain, not necessarily the understanding of it, of its functioning, how it works. [01:13:15] Speaker B: Sure. [01:13:15] Speaker E: But you could argue you wouldn't need to understand it if you were able to build a brain. I mean, I'm not talking about growing it from, like, putting some genes together, that's not it. But if you can, like, build a robotic brain or whatever, or even make. [01:13:33] Speaker B: A very detailed model. And it's true that that would be very hard to understand, but then you could have that as a starting point for probing. And it would be, like, the perfect test animal, as we discussed. [01:13:43] Speaker A: But the model has to behave correctly. [01:13:44] Speaker B: Absolutely. So it has to fit all the experiments. [01:13:47] Speaker E: But if it would be a perfect brain, that would be a success, I think, if you build it. But you wouldn't be, like. I mean, that's. [01:13:56] Speaker B: Some people had this idea that it really.
Success would mean that you could make a model of an individual brain, right? Like Konrad's brain. Right. And I don't think that's very realistic. I'm thinking more in terms of some kind of average brain, some general properties, and maybe what the difference is between a healthy brain and maybe, like, a psychotic brain, or different kinds of brain states, more like the average kind of thing. Because actually mimicking a particular brain would mean that you need to rewind the whole, or I mean, replay the whole history, probably with environmental inputs. [01:14:42] Speaker A: Shouldn't we start with a below-average brain like mine, though? Isn't that more feasible? [01:14:49] Speaker B: I don't know. You know that Tolstoy saying, that he never wrote about happy families, because a happy family is only happy in the same way. There are so many ways to be unhappy. So maybe that's the same thing with unhealthy brains. There are so many ways to be below average. [01:15:08] Speaker A: Yeah, yeah. Below average. Yeah. Any parting thoughts? Speaking of which, we have to leave happy. So, you know, this conference, this is happy. [01:15:15] Speaker B: How do you want to. This has really been a great success. I talked to people, even when you were not there, and they also agree. Right. So. And they are extremely happy, and learning. Everything was perfect. So do you have any plans for the next one? You don't have to decide today, now, while we are still in that knowledge room. I don't know. [01:15:39] Speaker A: The thing is, you have to keep it kind of small for it to be useful. [01:15:43] Speaker E: I mean, I think that was one of the success criteria, actually, keeping it small enough that we could become, like, a small group of friends, basically. [01:15:54] Speaker A: But I know what you're gonna say. You're gonna say, because you said it during the panel, that you'd like to have more people from the computer science side. [01:16:01] Speaker E: I would, yeah. Yeah. I think that would be really interesting to see, but I don't know what that would look like, though. It would have to be people from the other side, computer scientists, that are genuinely interested in the topic. So it couldn't be, like, a bunch of people that were just, you know, hoping maybe they can get some cool ideas from neuroscience and just take them and build something; it would preferably be someone that wanted to be in the community. That would be really cool. [01:16:32] Speaker B: That's always a challenge, I think. On one hand, you want a broad set of perspectives. On the other hand, you want people to have some interface, so they actually can communicate. Right. If it's too broad, then half of the audience doesn't understand what the other half of the audience is talking about, and then it's difficult. [01:16:55] Speaker E: One thing that I think is really important, and I think has also been a really big part of this success, is to bring people with different backgrounds together. Yeah. And it's important that they have some, you know, common focus, or else we'll just talk past each other.
But so if you have, like, a focus that will, you know, make people think in some parallel or the same direction, that's great. [01:17:23] Speaker B: So. [01:17:24] Speaker A: Yeah, and you did that, because there are people working on synapses and spiking. There are people working on, you know, neuromorphic computing, essentially. And so what you called, you set it up as, like, the implementation level. There are people at the representation level, the algorithmic level, the computational level. So we had just a wide variety. Like I said before, I'm repeating myself now, but I think that you achieved that already. [01:17:47] Speaker E: Yeah, yeah. And so that's definitely, like, one of the coolest things with scientific, you know, interaction or conversation, or, like, this sociology of bringing together people from different, you know, mindsets and having them talking together. I think, like, there's some magic that can happen there. [01:18:08] Speaker B: Yeah, great. Thanks a lot, Mikkel. Yeah, thanks again to the participants, on behalf of the field, to be a little bit pompous. Yeah, exactly. [01:18:16] Speaker A: Pompous. Pompous Gaute. [01:18:18] Speaker B: That'll be your. [01:18:19] Speaker E: That's the new one. [01:18:20] Speaker A: All right. Thanks, Mikkel. [01:18:22] Speaker B: Thank you. [01:18:27] Speaker A: Brain Inspired is powered by The Transmitter, an online publication that aims to deliver useful information, insights, and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. If you value Brain Inspired, support it through Patreon to access full-length episodes, join our Discord community, and even influence who I invite to the podcast. Go to braininspired.co to learn more. The music you're hearing is Little Wing, performed by Kyle Donovan. Thank you for your support. See you next time.
