Episode Transcript
[00:00:04] Speaker A: This is Brain Inspired, powered by The Transmitter. Hey, everyone, it's Paul. This is the first of two less usual episodes. I was recently in Norway at a neuro AI workshop called "Validating: How would success in neuro AI look like?" and what follows are a few recordings I made with my friend Gaute Einevoll. Gaute has been on this podcast before, but more importantly, he started his own podcast a while back called Theoretical Neuroscience, which you should definitely check out. So Gaute and I will introduce these conversations we had with a few of the invited speakers at the workshop and one of the main organizers. I link to everyone's information in the show notes at braininspired.co/podcast/195. I hope you enjoy these conversations, which we recorded on a rather large boat that you'll hear about in a second. Enjoy.
Two podcasters on a boat in Norway.
[00:01:07] Speaker B: How can that go wrong?
Yeah.
[00:01:11] Speaker A: Hi, Gaute.
[00:01:12] Speaker B: Hi. Hi, Paul.
[00:01:13] Speaker A: Why are we doing this together? What's happening?
[00:01:15] Speaker B: Yeah, this is. I mean, you have been making brain inspired for a. Yeah, for how many years?
[00:01:22] Speaker A: I think five, maybe six.
[00:01:24] Speaker B: Five, six years, exactly. So you've been podcasting for five, six years, but this new Theoretical Neuroscience podcast of mine started in October. I remember talking to Konrad Kording about it, and, well, Brain Inspired was an inspiration for me to make what I call an academic podcast.
It's actually for people in the community; it's not really a popular science podcast. Those are of course really important too, but it's a different thing. So when I talked to Konrad Kording about this, he said, oh, so your podcast is Brain Inspired inspired. That is actually true. So anyway, we are in some sense sister podcasts, wouldn't you say? Yours is more about neuro AI, and mine goes a little more into other aspects of theoretical neuroscience, maybe more in the physics tradition, because that's where I come from. But anyway, we were both going to this very nice workshop on neuro AI up on the coast of Norway, and of course we have met before.
You visited me in Oslo once. So then we decided, why don't we pool our resources and make a joint podcast?
[00:02:52] Speaker A: Yeah. So what came out of it is a few discussions that we had with some of the invited speakers, and also with one of the organizers, Mikkel, to sort of frame the workshop. So you'll hear from Mikkel a little bit about how this workshop came about. This will be two episodes, and at the end of the second episode, after our second discussion, Mikkel kind of summarizes and wraps up as well.
[00:03:19] Speaker B: Yeah. So the workshop, just to introduce it a little bit more: it was a neuro AI workshop, and it was Mikkel Lepperød in Oslo and, I think, Konrad Kording who were most central in working out the program. But there were also other organizers who supported it and secured the funding, people in Oslo like Anders Malthe-Sørenssen, Marianne Fyhn, and Thuneskiramsta. I like to give credit to people, of course.
[00:03:51] Speaker A: They did an awesome job. It was an awesome workshop.
[00:03:53] Speaker B: It was an awesome workshop, in both respects. And it was quite unique; I've never taken a trip like it. It was on a coastal liner along the most beautiful part of the Norwegian coast, from Tromsø down to Trondheim. It took about three days, or two and a half, the weather was good, and it was really, really excellent in all ways.
As we mentioned, the first clip we present here is with Mikkel, the main organizer, where we discussed with him why he made that podcast and what he wanted to get out of it. It was done in the...
[00:04:34] Speaker A: You said podcast. You mean workshop.
[00:04:36] Speaker B: Workshop, exactly.
What they call a Freudian slip, that's not intentional.
Yeah, so I meant workshop. And this was done on the last day of the workshop, in the luggage room of a quite fancy hotel in Trondheim, the Britannia Hotel. I used to study in Trondheim, and this is the kind of place you don't go when you're on a student budget, to be sure. But anyway, it's very fancy, and we were in the luggage room, so the sound is actually better. But people also kept walking into the luggage room, like our friend John Krakauer, for example, who attended the workshop.
[00:05:19] Speaker A: Yep. He makes a boisterous entrance, but it's brief. But he was the main interrupter or whatever. So we say hi to him and you'll hear other things in the background and other people kind of coming in and out.
[00:05:32] Speaker B: Yeah. And we link to the homepage of the workshop.
[00:05:38] Speaker A: So this is divided into two episodes. In the episode that's going to come out in just a couple of days, you'll hear Cristina Savin, and I'm not sure exactly how to pronounce her last name, and Tim Vogels, and we'll talk a little bit more about them in the next episode. But in this first conversation we have Andreas Tolias and Ken Harris. Gaute, do you want to talk about Andreas?
[00:06:00] Speaker B: Yeah. So Andreas Tolias, he has just moved to Stanford, by the way, and he has done many things; he does monkey physiology and mouse physiology.
Some of the work I'm particularly interested in is these foundation models he has made for the visual cortex of mice, where he has trained deep networks to
essentially predict calcium responses when the mouse is shown different kinds of visual stimuli. And I think in terms of predictability, this is really the state of the art when it comes to making these kinds of models that predict things. In terms of interpretability, which we also discuss a little bit in the podcast, it's a bit different; these models are harder to interpret. And then there was Ken, Ken Harris.
[00:07:03] Speaker A: Yeah. So, Ken, among other things, is interested in how large populations of neurons sense the world and convert that sensation into action. But one of the things he's been known for in the past few years is just the immense recording capacity.
So in the past few years, our data has skyrocketed, the computational power has skyrocketed, and, speaking of cutting edge, he was one of the first to use super high-density recording electrodes to record thousands of neurons at the same time, which had not been done before. So we're still grappling with what to do with all these neurons. But we don't actually talk so much about his
[00:07:46] Speaker B: research in these conversations, because at the workshop, not the podcast, exactly,
he had been challenged to talk about what it actually means to understand something, more on the philosophy of science side. That's right, yeah.
[00:08:09] Speaker A: Okay. So I think that's a fair enough introduction. All right, enjoy our discussion, and see you in a few days. Mikkel, you had this idea to put together this workshop that we're now at the end of, but we're going to go back to the beginning and ask you how this workshop came about and what it was supposed to be about.
[00:08:36] Speaker C: Yeah, so, I mean, I have training in both computational and experimental neuroscience, and I've been working the past, I don't know, four years on neuro AI models.
[00:08:53] Speaker A: It's okay.
[00:08:54] Speaker C: Yeah, it's just.
[00:08:56] Speaker A: John Krakauer! This is the 15th time you'll appear on the podcast.
[00:09:05] Speaker B: He has a characteristic laughter, which we
[00:09:08] Speaker C: Have enjoyed all through the workshop.
[00:09:11] Speaker D: I hope you meet Ed.
[00:09:12] Speaker B: Yeah, no, I do.
[00:09:13] Speaker A: Hey, are you leaving? Are you out?
[00:09:14] Speaker D: I'm going to check in.
[00:09:16] Speaker A: Okay, cool.
[00:09:17] Speaker B: Sorry. No worries.
[00:09:18] Speaker A: No, you're good. So, you've been studying computational neuroscience and.
[00:09:22] Speaker C: Yeah, I mean, I got really excited when I learned that we could... I mean, during my PhD and training, I was working on grid cells and spatial representations in the brains of foraging rats.
[00:09:39] Speaker A: It's a very norwegian thing to do.
[00:09:41] Speaker C: It is a very norwegian thing to do, I guess.
But I've always been interested in the why question.
Why do we have these representations? What are these cells used for?
And my training from mathematics always drove me into comparing experiments with models. That is how I. How I envisioned science would be for me.
And I tried to do that in some of the experimental work I did. I tried to perturb neurons with optogenetics; I perturbed the medial septum area while recording in the medial entorhinal cortex, trying to see if I could say something about validating the models of that time, the classic computational models of grid cells. So I think this notion of validating models has always been a core part of my vision of how science should be done.
[00:10:42] Speaker A: Well, how did the neuro AI thing come in?
[00:10:45] Speaker C: I mean, I started working on neuro AI models of grid cells, if you will, or navigation, when there was a model coming out from DeepMind. And in parallel there was a paper from Cueva and Wei that made a parallel kind of discovery: if you train recurrent neural networks on path integration tasks, where you give them velocity input and train them to output a position based on that, then in some models you can see a similar pattern to what you see in real brains. This seemed to be, and I think it still is, a very interesting way of modeling these phenomena, because you don't put that many assumptions in. In the classical mechanistic models, you kind of build in what you're seeing.
This is a slightly different approach. My hope with these models was that, without putting in too many assumptions about what the system does, you could have this pattern emerge or become part of the computation that the RNN is doing, and then you could probe it afterwards and ask why questions. What are these cells doing?
How are they interacting? And so on and so forth, because that's something you can't do in experiments, whereas in a model it's very easy. But working on these models, I realized it's not that easy. You can train a model and get a lot of different results. There was even a big discussion about whether or not this grid pattern would occur consistently, and it doesn't.
[00:12:38] Speaker A: Over different training regimes and different architectures and stuff.
[00:12:41] Speaker C: Exactly, yeah. Or even just the initialization, at least in the early models.
[00:12:46] Speaker A: Yeah, yeah.
[00:12:48] Speaker C: Looking at that, I started worrying about, how are we supposed to relate these models back to neuroscience? That was my motivation, at least, and I think it's the motivation for many that you can use these models to say something about how the brain works.
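To make the kind of setup Mikkel is describing concrete, here is a minimal, illustrative sketch, not the DeepMind or Cueva and Wei code: a recurrent network gets 2-D velocity input and is trained to read out the integrated position. All sizes, names, and training details below are placeholders, and whether grid-like units emerge depends on choices (nonlinearity, regularization, readout, initialization) that are deliberately left out.

```python
# Toy path-integration setup: velocity in, position out.
# Everything here (sizes, optimizer, data) is illustrative only.
import torch
import torch.nn as nn

class PathIntegrator(nn.Module):
    def __init__(self, hidden_size=128):
        super().__init__()
        self.rnn = nn.RNN(input_size=2, hidden_size=hidden_size, batch_first=True)
        self.readout = nn.Linear(hidden_size, 2)   # decode (x, y) position

    def forward(self, velocity):
        states, _ = self.rnn(velocity)              # (batch, time, hidden)
        return self.readout(states), states         # predicted positions, hidden activity

def make_batch(batch=64, steps=100):
    """Random-walk velocities and their integrated positions (arbitrary units)."""
    vel = 0.1 * torch.randn(batch, steps, 2)
    pos = torch.cumsum(vel, dim=1)
    return vel, pos

model = PathIntegrator()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):                             # toy training loop
    vel, pos = make_batch()
    pred, hidden = model(vel)
    loss = ((pred - pos) ** 2).mean()
    optim.zero_grad()
    loss.backward()
    optim.step()
# After training, `hidden` is what one would inspect for spatially tuned,
# possibly grid-like, units as a function of the decoded position.
```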
[00:13:07] Speaker A: So, for the workshop, did you think, oh, I need to make some new friends here and have some people to.
[00:13:14] Speaker C: One thing is that this work sparked the notion in me that we really need to do this in the right way in order to say something with scientific rigor. We can't just generate lots of models and say that every model that has a sufficiently grid-like pattern, or whatever pattern you're looking for, is a good model. So that sparked the scientific question. And in terms of the workshop, I think many of these problems would occur in many other systems as well. Looking at the other types of neuro AI models being used, which are somewhat similar, where you can train a vision model to recognize images, you can compare its activations with real neurons by linear probing, or just do this similarity analysis.
[00:14:20] Speaker A: Yeah, lots of different ways to compare the models with the neural activity. Exactly.
[00:14:24] Speaker C: So I think in many of those cases, similar problems can occur, where you can generate lots of different models, and even though they look similar to what you see in the brain, and even look similar between the models, that doesn't necessarily mean that they're doing the same thing.
[00:14:45] Speaker A: Well, even if the output is the same, the way that they're doing it might not be the same.
[00:14:49] Speaker C: Exactly.
[00:14:50] Speaker A: Yeah.
[00:14:51] Speaker C: It's kind of this algorithmic level question of whether or not they're implementing the same algorithm.
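To make "linear probing" and "similarity analysis" concrete, here is a minimal sketch of both comparisons, assuming you already have model activations and recorded responses to the same stimuli. The arrays, shapes, and random placeholder data are purely illustrative, and a real analysis would score held-out stimuli rather than the in-sample fit shown here.

```python
# Two common ways to compare a model's activations with neural recordings.
import numpy as np

rng = np.random.default_rng(0)
model_feats = rng.standard_normal((200, 512))    # placeholder: 200 stimuli x 512 model units
neural_resp = rng.standard_normal((200, 80))     # placeholder: same 200 stimuli x 80 neurons

# 1) Linear probing: ridge-regress neural responses onto model features.
lam = 1.0
X, Y = model_feats, neural_resp
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
pred = X @ W
per_neuron_r = [np.corrcoef(pred[:, i], Y[:, i])[0, 1] for i in range(Y.shape[1])]
print("mean in-sample predictivity:", np.mean(per_neuron_r))

# 2) Representational similarity: correlate the stimulus-by-stimulus
#    dissimilarity matrices (RDMs) of the two systems.
def rdm(acts):
    return 1.0 - np.corrcoef(acts)               # (stimuli x stimuli) dissimilarity

iu = np.triu_indices(model_feats.shape[0], k=1)  # upper triangle, excluding the diagonal
rsa_score = np.corrcoef(rdm(model_feats)[iu], rdm(neural_resp)[iu])[0, 1]
print("RDM correlation:", rsa_score)
```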
[00:14:57] Speaker A: So you gathered together a wide variety of people from different fields, using different types of models to study different brain systems, but also using AI as a tool to analyze their data.
There's even a philosopher here. One. One philosopher. That's enough. That's enough for any conference, right?
Yeah. So it was a nice group of people.
[00:15:21] Speaker B: So it's a fantastic venue you chose.
[00:15:25] Speaker A: Oh, my God.
[00:15:26] Speaker B: Yeah. This coastline. The coolest thing about Norway, from the nature point of view, is the fjords. Even in The Hitchhiker's Guide to the Galaxy, there's this planetary designer who got an award for the fjords of Norway.
So I think that was a well-deserved award. And now, with this trip, going from the north down to the middle of Norway, to Trondheim, on the boat, was really showing off the best of the nature. And I guess that was probably intentional.
[00:15:57] Speaker A: I also heard multiple times, this is like the coolest conference workshop that I've ever been to. I'm sure you did, too. Multiple times.
[00:16:07] Speaker C: I think most of the people I talked to, they really enjoyed themselves.
[00:16:12] Speaker A: You've just set the bar very high to clear when we go to other workshops, right?
[00:16:17] Speaker B: Yeah. So, I mean, this is going to be at the start of the podcast, and now we're going to have the main material. In this episode, we're going to have one pair of participants, and then in the next episode, we're going to have the other pair of participants.
And after that we want to come back to you, at the end of the second episode, to maybe think a little bit ahead.
[00:16:51] Speaker A: Yeah. So through the magic of podcasting, let's get to the first conversation, and then we'll revisit Mikkel.
[00:16:57] Speaker B: Yeah, exactly.
[00:17:03] Speaker A: So we're on a boat in Norway, and Gaute and I have stolen you away to answer a few very broad, maybe unfair, questions.
[00:17:13] Speaker B: Really cool questions.
No, I don't think they're unfair. But maybe they're things you haven't thought so much about.
I don't know how I would answer these questions without having thought too much about them.
Well, anyway. But you are better at this than I am, so that's why.
[00:17:31] Speaker A: Yeah, so, I mean, these are kind of broad questions. One of the questions that we had for you both was how, and whether, neuro AI, and we can talk about what neuro AI is,
has changed the way that you ask questions or approach your scientific questions.
[00:17:52] Speaker D: Yeah, that's a really good question, but let's try and maybe first define.
[00:17:56] Speaker B: So this is Andreas speaking. If you hear the slightly Greek-accented English, that's Andreas.
[00:18:02] Speaker A: Yeah.
[00:18:03] Speaker D: So for me, neuro AI is defined in two ways. One is using the modern version of AI, which is deep learning with large data and large compute, to build models of the brain. And in terms of the way I think it's impacting the research that we do, one way is that it has essentially changed our thinking toward embracing high-entropy data and then using the tools that have been developed to fit models.
[00:18:41] Speaker B: What is high entropy data?
[00:18:42] Speaker D: Meaning, like, you sample, let's say, natural images without being very hypothesis driven, or you do naturalistic behaviors and you record without trying to control the behavior a lot. And the field has built a lot of tools, a lot of them engineering tools, GPU libraries, PyTorch and stuff like that, that enable us to fit these large-scale data and basically extract the statistical structure from the data. So that's one way it has been impacting us. The second way is that this field has been developing tools so that once you have a neural network that predicts something, you can try to get some interpretability; there's a whole field called mechanistic interpretability, and we're incorporating that. So those are the directions going from AI to neuro. On the other hand, of course, as neuroscientists we're always interested in building intelligent systems, and that's a much harder thing. But it has also helped us think about what types of tasks may be important, and what the advantages of brains versus AI are, for example this talk about generalization, robustness, adversarial robustness. So I think it has been a fruitful interaction between AI and neuroscience.
[00:20:18] Speaker A: But how can you contrast that with the way that you used to do science?
[00:20:22] Speaker D: Yeah, it's very different, because the way we used to do science, we were always limited by data. If you wanted to record from neurons in the brain, even if you could record from many neurons, you had, let's say, an hour or two hours to run an experiment, and you were developing some hypotheses that you were testing, whereas now it allows us to do more non-hypothesis-driven, more data-driven science. So it has changed from, I would say, hypothesis-driven science to data-driven science.
[00:20:57] Speaker A: Yeah, well, maybe, Ken, let's just ask you the same question if you have. Because you have a different perspective on this, I think.
[00:21:03] Speaker E: Yeah. So I'm a bit more old school, perhaps.
We certainly use AI technology, for example to do video processing with the DeepLabCut software, and other things like that, as a tool.
[00:21:20] Speaker A: As a tool?
[00:21:21] Speaker E: As a tool that lets you do science, by doing video processing that wouldn't have been possible a few years ago, and that sort of thing. In terms of AI for informing scientific questions and conclusions, I'm less up on the most recent work.
We certainly read stuff like what Andreas publishes using the most recent techniques. But for me, the really valuable concepts at the moment are still those from a few years ago, such as kernel machines and variational Bayesian inference, things like these that we use very fundamentally in the way we think about things.
[00:22:05] Speaker B: So why do you stick with them? Because you understand them better?
[00:22:10] Speaker E: Exactly, because these are the things that you can understand. With a kernel machine, you know exactly what it is doing, you know how it works. A deep network has found a solution, but I don't really understand how it works. Maybe the point is that it's the same with the brain. Maybe we're never going to understand how the brain works, in the same way that we don't understand how a deep network works.
[00:22:36] Speaker A: By the way you guys interrupt each other and argue.
[00:22:40] Speaker D: I agree that there are pros and cons here. Right now we're basically putting a lot of emphasis on building a very accurate model of the brain in silico, one that's differentiable and so on, that we hope we can then analyze to understand the mechanism. The other approach, and of course it has other advantages, is that you start with a model that already builds in interpretability, so when you fit the data, the understanding falls out of it. Whereas with the deep learning approach, which is more data driven, you basically emphasize predictability first, and then you hope that by looking inside... The other thing is that it's very good to compare the two, because if, for example, an interpretable model only explains part of what the deep learning model does, it also says that there's really a
[00:23:38] Speaker B: lot that we don't understand. Because I remember reading some of your papers where you train these deep networks to predict these two-photon calcium responses to all these visual stimuli in mice, and when you compare with the previous approaches, with Gabor filters and whatever, in terms of predictability nothing compares to the predictability of these trained models. But that maybe comes at the expense of not being quite sure how it works, of the interpretability. So what's the interpretability of these models? And what about these kernels?
[00:24:18] Speaker E: Well, okay, so the kernel is just a measure of how similarly a population of neurons responds to any pair of stimuli, right? So it's just a number, for every pair of stimuli, that tells you how similar the population responses to those two stimuli are. And there's a whole field of machine learning theory, which went out of fashion about a decade ago, that did a lot of very useful work in understanding what sort of representations those give you. And you can use all of that to understand what's going on in the neural code of the brain.
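A minimal sketch of the kernel Ken is describing, assuming a response matrix with one row per stimulus and one column per neuron; the shapes, the random placeholder data, and the centering choice are illustrative rather than any particular lab's pipeline.

```python
# The kernel: one similarity number for every pair of stimuli, computed
# from how the whole neural population responds to each stimulus.
import numpy as np

rng = np.random.default_rng(1)
responses = rng.standard_normal((100, 500))      # placeholder: 100 stimuli x 500 neurons

centered = responses - responses.mean(axis=0)    # optional mean subtraction per neuron
K = centered @ centered.T / centered.shape[1]    # (stimuli x stimuli) kernel matrix

# K[i, j] is large when stimuli i and j evoke similar population activity.
# Its eigenspectrum is one handle on what kind of representation the code supports.
eigvals = np.sort(np.linalg.eigvalsh(K))[::-1]
print("top eigenvalues:", eigvals[:5])
```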
[00:24:58] Speaker A: You made the point early on that it's changed your science in terms of making it more data driven. I hear a lot that what we need is more theory, along with big compute, big models as tools, and big data. So how do you think of that? I mean, we're kind of in this weird space, right, where you have to explore in order to then generate theory, maybe. I don't know how you think about it.
[00:25:25] Speaker D: No, this is something I think about a lot, and it bothers me in a way. We are at this stage where the way we are doing science and engineering with deep learning is a little bit non-classically scientific, in that we figured out a hacky way to build models that can drive cars around, but we don't really understand, in a classical scientific way, how they do it. We understand it to some extent; we understand the loss function, we understand something about the architecture of the network, but we don't have that algorithmic understanding in a more classical way.
And that is an issue, that's a problem.
Now what I hope, and it is a hope, is that by building very highly predictive models that can generalize very well, and generalization is key here, then the fact that they are differentiable, the fact that you have a model on which you can do any experiment you want, and that is actually a neural network, which is sort of like the brain, even if it's not implemented in the same way, but it still has synapses, synaptic weights, activities, then we just have to develop tools. And the AI community is already doing that, because they care about this too; it's one of the issues around safety and robustness and generalization. These are key things in AI, so we can leverage what they're developing to try and gain some more understanding. I think once we have that understanding, then it may be possible to think about more interpretable models that are simpler, that we then fit to the data, and that brings it back to what Ken's talking about.
[00:27:15] Speaker A: Do you believe that, Ken?
[00:27:17] Speaker E: So. Well, I don't know. I mean, if you look at the history of science, there are some cases which give a lot of cause for optimism. For example, if you were an astronomer in the days of Tycho, but before Kepler, you might have thought there's no way all of this data is ever going to end up being simple. A few hundred years later, you have Newton's laws. You've got one equation that can explain everything. Same if you're a chemist.
Before the periodic table, you would never have guessed it was going to be that simple.
On the other hand, if you were a biologist before the genome, you were then confronted with the fact that there's 20,000 genes and they're all different, and you're never going to know what they all are. So we don't know for neuroscience, but there's a difference.
[00:28:06] Speaker A: But those weren't complex systems.
The DNA code readout is not a complex system, whereas with brains, we're dealing with a complex system. Does that difference make a difference, do you think?
[00:28:18] Speaker E: Well, the solar system is a pretty complex system.
[00:28:22] Speaker A: Complicated, but it's not a complex system. Well, I guess three body problem.
Well, complicated just means hard, lots of parts. Complex means interacting parts where there's emergent.
[00:28:33] Speaker B: Properties. But it's stable; I mean, you can predict the orbit of Venus 500 years from now.
[00:28:38] Speaker E: Oh, you mean chaotic.
[00:28:40] Speaker B: Chaotic, yeah. I think complex systems are often chaotic. Or not always, that's true; not all complex systems are chaotic. Anyway. Yeah, maybe that's...
[00:28:51] Speaker A: But your point was that things have looked impossible over and over in the history of science, and maybe this is one of those things.
[00:28:58] Speaker E: Things have looked impossible over and over.
Sometimes they weren't.
Sometimes, so far, they still appear to be.
[00:29:07] Speaker A: And we're in that regime.
[00:29:08] Speaker E: Well, we don't know.
[00:29:09] Speaker B: If I can come back to what you said, Andreas. You're doing this training of deep networks that are difficult to understand, and you hope that forms a basis for more interpretability, and then of course we'd have better tools for understanding how the whole brain works. But at the moment, given what you have at the present stage, has it told you anything new about the brain or cognition?
[00:29:40] Speaker D: Yeah, no, I think that's a really good question. I think it's still early to know if this is going to be the way of the future, if this is going to become the standard. But there are cases. For example, take surround contextual modulation in the visual cortex: people have been studying it with gratings, and they found a specific relationship for the center-surround interactions. And then when we did this image synthesis using these deep learning methods, we got something that was different. And the nice thing here is that you can then verify it back in the brain. You can run this closed-
[00:30:24] Speaker B: loop experiment. The inception loops, right, that you've been doing.
[00:30:28] Speaker D: So that's one example where it's not a circuit-level mechanistic model, but in the description of how the center and surround interact, we gained some new understanding. The other thing is that once we do that, we can design experiments in a more classical way, because now you develop the hypothesis, test it, and then do exactly what Ken has been talking about. In fact, we did build a model like that: a more Bayesian model that tries to explain center-surround interactions based on natural image statistics and priors.
[00:31:03] Speaker B: In the primary visual cortex.
[00:31:05] Speaker D: Yeah, in the primary visual cortex. So I think that's one example where you start with a data-driven model with no interpretability; it's just sort of an engineering task. Then you analyze it, you derive a principle or some level of understanding, you test it back empirically to see if it's correct, and then you build a simpler model based on more classical stuff. In this case it was hierarchical Bayesian inference: can a model of hierarchical Bayesian inference, trained on natural images, predict, at least qualitatively, the same type of effect?
So I think that is an example.
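A rough sketch of the image-synthesis step behind these inception-loop style experiments, under the assumption that you already have a trained model that predicts neural responses from images: gradient ascent on the pixels to maximize one model neuron's predicted response, after which the synthesized image can be shown back to the animal. The tiny untrained network below is only a stand-in for such a trained predictor.

```python
# Synthesize a stimulus that maximally drives one model neuron.
import torch
import torch.nn as nn

predictor = nn.Sequential(                       # stand-in for a trained response model
    nn.Conv2d(1, 8, kernel_size=9), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 32),                            # 32 "model neurons"
)

image = torch.zeros(1, 1, 64, 64, requires_grad=True)   # start from a gray image
optim = torch.optim.Adam([image], lr=0.05)
target_neuron = 7
for step in range(200):
    optim.zero_grad()
    response = predictor(image)[0, target_neuron]
    (-response).backward()                       # gradient ascent on the predicted response
    optim.step()
    with torch.no_grad():
        image.clamp_(-1.0, 1.0)                  # keep pixels in a plausible range
# In a closed-loop experiment, `image` would then be presented to the animal
# to test whether the real neuron responds as the model predicts.
```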
[00:31:40] Speaker A: You started your talk today with the clip of Hubel and Wiesel serendipitously discovering the edge response. They were trying to test the neural response to dots, and they happened to move the transparency, or whatever it was, off the screen, and where that transparency ended there was an edge, right? And that's how they discovered edge-selective cells. That's a data-driven approach, or an exploratory approach, but it was also serendipitous. And it made me think, when you were showing that: are we past that stage? How does serendipity play a role these days?
[00:32:22] Speaker E: Ken, you're shaking your head all the time. Almost everything we do is serendipity. You get your data.
Normally you have a hypothesis in mind, and when you get your data, you realize, oh, actually, wait, that was never going to work anyway.
But then you notice something a bit odd in the data and it's probably a bug, but you chase it out and, well, it doesn't seem to be a bug, and then you follow it up a bit more and then there's something you don't understand and you don't understand why you would see this, and then you try and figure out why you're seeing it.
[00:32:53] Speaker A: So it's the analogue of... who was it, what's the famous quote? That science progresses by someone saying, oh, that's funny.
[00:33:00] Speaker E: Exactly. Exactly.
[00:33:03] Speaker A: Does anyone remember who said that? I'll have to look it up later.
[00:33:09] Speaker B: So is there any way that, like, AI has set neuroscience back or put some, or many or all neuroscientists on the wrong track?
[00:33:22] Speaker E: It's probably led to some blind alleys. But the problem is we don't know which ones they are yet.
We will in the future.
[00:33:29] Speaker A: Do you think so? I worry that a lot of people, especially young researchers, might have confused the map with the territory, in terms of people thinking of the brain as a transformer, sort of substituting the model for the real thing. And then that conceptual framework frames their research questions. And I wonder if that has had a deleterious effect at all.
[00:33:56] Speaker D: Yeah, I mean, it's still early to know how this is going to progress. But because there's this two-way interaction, some people are using tools from AI, and others are using it as a model of the brain. I do think there is some danger here of just using it as an engineering tool that is sort of the end game, versus it being just the beginning.
So I do think it's important to see it as a tool right now and not take it as: okay, I just took a model, I built it, and it works great, so I'm done and I should move on to something else. I think there is a little bit of that.
[00:34:45] Speaker B: There's a lower threshold for starting to do it, right? I mean, if you have tools like PyTorch or TensorFlow, you can quite easily train networks to do something. If you do traditional statistical analysis, it takes more effort to get into it, maybe, and certainly compared to physics-type modeling, where the threshold for getting into it is higher than just the basic tools. But I guess a low threshold is both good and bad, right? It means it's easier to test things, but it's maybe also easier to do things which are not that high quality.
[00:35:22] Speaker A: Is there a danger that it introduces less critical thinking from the beginning?
[00:35:28] Speaker D: There is definitely that danger from AI in general. Right. Like, I mean, it's sort of one of the, you know, like there are people that are trying to build, let's say, autonomous AI scientists to analyze the data for you. Right. I mean, I don't think it's gonna happen around the corner, but there is.
[00:35:47] Speaker A: It might replace me. It probably won't replace me.
[00:35:50] Speaker D: No, no. But this is an issue, right? It's easier to... But on the other hand, that's what people were worried about historically, when calculators came around. So I don't know, I think it's important to just use it as a tool, but make sure that people are educated and are doing the critical thinking.
[00:36:11] Speaker E: Every new technology introduces new ways of doing things wrong.
[00:36:14] Speaker A: And you don't think this is different at all in that respect?
[00:36:16] Speaker E: No, not really.
[00:36:17] Speaker B: No.
[00:36:19] Speaker A: More is different, you know?
[00:36:21] Speaker E: Well, it could... It's easier to fool yourself when the thing trying to fool you can speak English fluently. Yeah, right.
[00:36:34] Speaker B: That's true.
[00:36:35] Speaker A: By the way, just as an aside, I have to commend you, because you worked on a talk, I think somewhat against your will, and in the talk that you gave you really put thought into it and tried to address the question.
[00:36:45] Speaker B: So what was the talk on?
[00:36:47] Speaker E: The talk was on what it would mean to say that a system such as a deep network is a good model for the brain.
[00:37:00] Speaker A: And did that drive you crazy, creating that talk?
[00:37:03] Speaker E: Well, at first I thought I was going to say it's impossible, and after thinking about it I ended up thinking that it could be possible. I don't think it's been done yet, but it's not actually impossible. But what it would need is an interventional experiment, where you have a mapping between the artificial system and the brain that not only shows they have a similar representation of information, but also that, if you perturb the network,
that causes a change in the rest of the activity in the network in a way that maps onto the way a perturbation of brain activity would. Then you can say there is a mechanistic similarity in how they're computing information, not just a similar representation.
[00:37:49] Speaker A: But you also said that a model is something that shows what is not possible.
[00:37:54] Speaker E: Yeah, yeah.
[00:37:55] Speaker A: How does that fit in?
[00:37:57] Speaker E: Oh, I think that's just what science is.
[00:38:01] Speaker A: Well, let's falsify.
[00:38:03] Speaker E: Exactly.
[00:38:05] Speaker B: I was trained as a physicist, and there it was looked down upon a little bit by active physicists to talk about philosophy. It was all this idea of shut up and calculate, right?
[00:38:22] Speaker E: Yeah, they do it instinctively.
[00:38:23] Speaker B: Yeah. But are we talking too little about these kinds of questions? What does it mean to believe an explanation? In some sense this has been our job for the last decades, right, to do science, which is about this. And still we don't really formulate it to ourselves too often. Right.
[00:38:45] Speaker A: Yeah. Well, that's why I was asking Andreas, too, whether we're in a weird space, because you have to explore with all these new tools.
[00:38:52] Speaker D: I think we are. I mean, to me, we are in a weirder space than I've ever experienced. And it feels different, something collectively different than before.
[00:39:06] Speaker E: Why do you say that?
[00:39:07] Speaker D: Because I think we are capable of building systems and models that seem quite intelligent, very intelligent, that have capabilities. You know, I bought a Tesla and I drive it, and I'm really impressed; you can drive for hours in a very complex environment, and it's really impressive.
But then at the same time, it's a neural network that nobody really understands in a classical scientific way. And I do think it's a technological advancement of these two things. We have data,
and I'm talking not just about neuroscience, but generally in biology, in medicine, in the way you have cameras everywhere collecting data, the Internet. Basically, we've created a society where it's easy to collect a lot of data, and then we have computers that became very fast, and we put these things together with neural networks, and we build all these very complex systems that are capable of doing complicated things, but yet we don't understand them. So I do think there's some danger here, right?
[00:40:25] Speaker E: So you're not talking about science specifically, you're talking about the state of the world.
[00:40:28] Speaker D: I'm just talking about the state of, yeah, basically the whole thing. If you look at, let's say, AI in general, I mean, neuro AI is just neuroscience using AI. You can think of the people that predict the stock market using AI, or the people trying to build autonomous driving, and they were doing it before neural networks too. All these areas, even in physics, they're using AI.
[00:40:52] Speaker C: Yeah.
[00:40:52] Speaker B: Because, I mean, Feynman and other physicists said that if I can't build it, I don't understand it, or, if I understand it, I can build it. He didn't mean physically build it, but make a physical model, a physics-type model, that essentially captures it. And now that doesn't quite hold anymore, because now we put in these learning models, and so we build things we don't understand.
[00:41:14] Speaker D: Yes, exactly.
[00:41:15] Speaker B: Which is sort of a new thing, right? And it's very fascinating. We have made these fantastic large language models; just think about what they can achieve, and we don't understand them. Now it's like a research project; we just do research on them, just like we do on a test animal, right? What does it really mean? It's really fascinating: we built something, it's man-made, but we don't understand it.
[00:41:35] Speaker E: Yeah.
[00:41:36] Speaker D: And my impression is we went through that before, somewhat, during the industrial revolution. The first steam engines were built without people really understanding, let's say, thermodynamics. And, in fact, they were not safe; they used to blow up all the time.
It was only after people started doing measurements, especially after discovering the laws of thermodynamics, temperature and pressure, that things became safe. So maybe we're going through that phase.
[00:42:04] Speaker A: Well, at least we know that AI is safe and doesn't affect society. Exactly.
[00:42:08] Speaker B: Nobody worries about that. Yeah, that's really interesting, because with electromagnetism the revolution came after Faraday and Maxwell and these people. But it's true, thermodynamics came after the steam engine. That's when they made things like the term entropy, and energy conservation and stuff.
[00:42:29] Speaker A: So if we are in a weird space right now, just scientifically, then one of the questions that we were going to ask you was: how long are we going to be in this space? Will AI still have a role in 50 years, or will we have used it to solve lots of brain problems? I'm not going to say solve the brain or solve intelligence, but will it have been a key factor, or is it a fad? Will it go away?
[00:42:56] Speaker D: I'll tell you, I think about this a lot. I've never been in a situation in my life where I'm this curious about what's going to happen in five or ten years. Are we really going to have AGI that's going to solve not just very difficult questions about the brain and neuroscience, but, let's say, health, I don't know, climate change? Or is it going to be like the iPhone? When the first iPhone came, it was amazing. Okay, now we have, I don't know, what is it, model 13?
It kind of looks similar, right? There are better apps. Maybe large language models, all these things, are just fads, as you said, and in ten years we're going to be like, yeah, you know. And in fact, in the last two years, I wouldn't say it's impressively, amazingly getting much better. If you look at ChatGPT 3.5 versus 4, yeah, it's a little bit better, it does some things better, or there are new applications, you can do video now, but it's not a collectively new type of thing. We're getting incremental changes now. That doesn't mean it will remain like that, but it's very interesting, because some days I wake up and I feel like, oh my God, this is it, and other days it's like, yeah, not much is going to change. One thing is that this stuff relied a lot on the scaling laws, and I do believe there's a big problem that these big companies are facing now: they're just running out of data to train their models.
[00:44:29] Speaker B: Scaling in terms of, like, compute or data? Both?
[00:44:32] Speaker D: The scaling law is both: larger computers, basically more energy, and more data. And data, you know, they are synthesizing data, but it's a
[00:44:43] Speaker B: little bit incestuous.
[00:44:44] Speaker D: Yeah, exactly. A lot of the stuff they are doing now is data selection, because they've shown that if you select the type of data you train on, for, let's say, a large language model or video, it may make a big difference. But it does feel to me like there could be things, let's say, in robotics, with people wearing cameras and collecting their body movements for imitation learning. But it's possible that we are just going to have incremental improvements, or maybe there will be a new revolution, I don't know. And I think that's going to impact us too.
[00:45:18] Speaker E: You know, so I think you're right. Even if there isn't another step change, though, just the one we've already had.
[00:45:26] Speaker A: We don't even know how to deal with it yet.
[00:45:28] Speaker E: It is creating a lot of changes to society in ways that are going to be unpredictable, like the industrial revolution. So, for example, a friend of mine works in classical music. She has a lot of friends who are composers who used to write for film scores, and now they're out of work, because that's something that AI can do pretty well: write classical music for film scores.
And, you know, who would have predicted that, you know, and society is going to change in a lot of ways?
There was. What was it called? There was something that had an AI girlfriend app. I think that's gone away.
[00:46:05] Speaker A: Japanese thing. I think that's popular.
[00:46:07] Speaker E: Yeah. But there's all sorts of ways that society may really change that, that we just can't predict, even if there isn't another revolution.
[00:46:16] Speaker B: So, to go back to the science: I'm doing physics-type modeling, sort of an extension of Hodgkin-Huxley, multi-compartment models and networks and so on. And computational neuroscience has always been a small subfield of neuroscience. Now I also feel that if you want to go into computational neuroscience, maybe many people go into it using these AI tools. So, in some sense, what happens to the physics-style, traditional computational neuroscience? I have this hope, maybe this idea, that when you push these learning routes far enough, at some point, to understand more, we have to get back to the biophysics of neurons and stuff.
[00:47:04] Speaker E: It'll be important in the end. Yeah, sure.
[00:47:06] Speaker B: Just not.
[00:47:09] Speaker A: How do you.
[00:47:10] Speaker E: That's an important point. What physical brain neurons do is so much more complicated than simple threshold ReLU units, right? And that's surely important.
[00:47:22] Speaker A: Important for what, though? Maybe not to explain cognition. For what question is it important?
[00:47:28] Speaker E: Okay. If you took your brain and you replaced every neuron with a ReLU unit, I would predict your cognition would be severely impacted.
[00:47:38] Speaker A: Do I still get plasticity, though?
[00:47:40] Speaker E: Yeah, you just don't get any voltage-gated ion channels, no intrinsic oscillations.
[00:47:46] Speaker A: My brain, again, might not be that different.
[00:47:50] Speaker E: Yeah.
So, yeah, I think that's an important point, that the physical brain, for whatever reason, has all these different cell types, and the cells are very complicated. Maybe it didn't need to be like that to have an intelligent system, we don't know, but it is like that. And for neuroscience, whose goal is to understand our brains rather than just to come up with an intelligent system, that stuff is going to matter.
[00:48:15] Speaker A: I mean, isn't there a limit to the biophysical detail that you would need to implement? And can't you just figure out if there are a thousand different types of neurons in the brain, give them a thousand different activation functions and other algorithmic profiles, for example.
[00:48:29] Speaker E: So you're saying you don't need to simulate every single channel.
[00:48:32] Speaker A: What's the bottom? Where is the bottom layer that matters?
[00:48:35] Speaker D: I agree. I mean, I think this is a very interesting question, right? And I don't know the answer, but one possibility is the following. To really understand cognition and behavior, maybe you don't need to go down to the ion channels and the nonlinearities in the dendrites, at that level, because it's just a bridge too far, it's too complicated. Maybe understanding it at the representational level and manipulating it at that level, as you said, is enough. So you have, let's say, an AI system that represents information, and you have the brain, and you study them at this representational level, you manipulate them at this representational level, and you try to understand them. Maybe that's enough to understand the science of intelligence, or the laws of intelligence, and to build systems that are intelligent, and maybe then computational neuroscience in the traditional way is in trouble. But there is another area that is very important, which is diseases, where we need to intervene. And maybe that's what Ken was saying: in our brains, let's say psychiatric diseases, neurological diseases, they're not ReLUs. They have ion channels, they have molecular pathways, they have dendrites. If you look at autism, the spines are different. And that is where maybe the people doing more, let's call it physics-based or more classical biophysical computational neuroscience, should maybe pivot. That field probably needs computational people. And that's also, if you look at it even from a practical point of view, where the funding is: what, let's say, NIH cares about is curing diseases more than really understanding the algorithms of perception, right? So I do think there is...
[00:50:28] Speaker B: I'm going to quote you in my next application. Yeah, the great Doctor Tolias.
[00:50:32] Speaker A: But even those details... I think that's a good point about diseases and how those lower-level details, small molecules, dendritic shapes and spine sizes, etcetera, matter at that scale. But then, just like with an artificial unit, if you can abstract what's important about how that affects communication, that might just solve it.
[00:50:52] Speaker B: Right?
[00:50:52] Speaker E: Yeah. You won't need to know every single potassium channel, but I think, to make an accurate model of how the human brain works, you will need to incorporate things like dendritic nonlinearities and oscillations in some simplified form. That's what I would predict.
[00:51:10] Speaker B: But also, I mean, we still haven't really figured out what mind is, or consciousness, this feeling of mind, which is different. We are not only looking at statistical relationships when we infer things, right? We have this first-person perspective. So there's more to our intelligence than just the things that are picked up by AI, I would think. So maybe that's where you need these kinds of things.
[00:51:40] Speaker E: Well, it's funny, I asked a large language model the same question, and it said the same thing.
[00:51:45] Speaker B: Really?
Did it say that it had a mind, or that it didn't have a mind and we need a better brain model or something?
[00:51:57] Speaker E: No, I'm just saying that the fact that you just said all of that, any agent could say that.
[00:52:04] Speaker B: Yeah, but do you believe that I'm conscious? Don't you?
[00:52:07] Speaker E: I mean, I don't know what that word means.
[00:52:10] Speaker A: Oh, no, let's not do it.
[00:52:11] Speaker D: Let's do it.
[00:52:13] Speaker B: Okay.
[00:52:14] Speaker E: One sentence on the topic of consciousness. The moment you actually define the word, it becomes quite a boring question.
[00:52:21] Speaker B: But I think it feels like something to be Ken. I think it feels like something to be me, and Andreas, and Paul. I'm not quite sure if it feels like something to be a large language model.
[00:52:30] Speaker A: I don't think it feels like anything to be Ken. Just kidding, Ken.
[00:52:32] Speaker E: No, that's correct.
[00:52:35] Speaker B: Anyway, we went off on the consciousness tangent.
[00:52:41] Speaker A: Yeah. And so we're at, like, 35 minutes.
We don't want to take up everyone's time and everything, so...
[00:52:48] Speaker B: Oh, they have plenty of time.
[00:52:51] Speaker D: Don't worry.
[00:52:52] Speaker B: They are on the boat. Where should they go?
[00:52:57] Speaker A: Okay, go for it.
[00:52:58] Speaker B: Okay. So, we have one more question. I just read this book called Slow Productivity. It's almost the kind of book that you pick up at an airport, but it was interesting in the sense of asking what it means to be an efficient knowledge worker, which is different from being an efficient farmer, because there you can measure the output. So what does it really mean? The book claimed that since it's not easy to define productivity for knowledge workers, you make these proxies, like looking busy. You work for a company and you just look busy when the boss is coming, right? And maybe some of these things apply when you look at how we survive in science, like having many papers. That has sort of become a proxy for productivity, and not only the numbers, but also the quality.
[00:53:57] Speaker A: Let me just add, also, that especially in the neuro AI space, one way to look really productive is just to throw a neural network at whatever problem you have, without considering the theoretical framework or questions or hypotheses. So I just wanted to preface it with that.
[00:54:13] Speaker B: I just want to add, and I haven't thought too much about it, but what does it mean to be productive in science, besides papers?
Beyond these mundane things of surviving, funding, grants, and getting a job.
[00:54:31] Speaker D: I think it's a problem. I mean, this is the way science is, right? The way I like to justify it is that we humans didn't really evolve to do science; we evolved to do other stuff, and science is something we started doing in the last few hundred years. And it's kind of remarkable how much we've advanced science as humanity. So whatever we've been doing, even if at any given point in time, for any one of us, it looks very incremental and like we're not doing much, as a species we've made tremendous advances. Does that mean it doesn't need improvement? I think it does. And what you are saying is maybe that the reward system of the classical academic world needs to be reformatted to some extent. Some of these questions may require more teams working on them, where credit is not just about publishing the next paper, but more about working on a project longer, and allowing people to work as a team on a project longer, and having a way to reward them, maybe allowing some more risk.
[00:55:56] Speaker A: But do you mean, you know, personally productive?
[00:55:59] Speaker B: Yeah, I know. I mean, I understand people who don't have permanent jobs and want a permanent job; that's a different thing. But I'm a professor with a permanent job, so I can, in principle, do anything I want. But still, as a professor, you easily get involved in too many projects, because, well, it feels good sometimes to be doing this as well.
[00:56:25] Speaker A: You get roped into podcast conversations.
[00:56:28] Speaker B: Exactly. But I mean, it's sort of this thing.
[00:56:31] Speaker D: There's a bit of a danger in this.
[00:56:32] Speaker B: Yeah. I think it's just something about the psychology of people; it makes us very stressed sometimes. And often, when you have several PhD students and you have some feeling of responsibility, you maybe have to spend the most time on the project that is working the least, because you have to try to salvage it. Sometimes I think I would much rather work on the thing that this other person is doing, because that's going great. So it's just something about me, something about the psychology of how we make choices.
[00:57:06] Speaker E: The best thing for my productivity was when I formed a team lab with Matteo Carandini. There's just something about that: when you've got two people, I might have an idea and think it's great, but then he says, wait, think about X.
And then you save so much time just by that. I mean, you have a lot of collaborations as well.
Similar thing.
[00:57:33] Speaker A: But do you also feel the responsibility to be productive for the other person? I mean, does having a collaboration make you more likely to do the work you need to do?
[00:57:44] Speaker E: Yeah.
[00:57:45] Speaker A: Okay.
[00:57:45] Speaker E: But there's someone to remind you, hey.
[00:57:48] Speaker A: Yeah.
[00:57:48] Speaker E: You know, didn't you say you were gonna. Oh, yeah.
[00:57:52] Speaker A: So put your self-worth in someone else's hands.
[00:57:54] Speaker B: But that. But doesn't that mean that you sort of sometimes are just getting stressed and not able to think?
[00:58:02] Speaker E: Yeah.
When was life not like that?
[00:58:06] Speaker B: No, exactly. But is it optimal? That's all that.
[00:58:09] Speaker A: Well, there's the Yerkes-Dodson curve.
[00:58:11] Speaker B: Is that right? But I heard, for example, that Francis Crick, after he had sorted out the DNA and whatever it was with the proteins, and made his name, wanted to stay under-busy. I think he just wanted to have so few projects that he could always jump on something which was really exciting. Right. Of course, then he's in a position where you don't have any social constraints or practical constraints. But it's this thing that.
Yeah, no, I don't know. I've been thinking about that lately. Sort of what is. What is sort of external pressure and what is just internalized. And we make silly choices because we are just internalized. Some kind of behavior.
[00:58:57] Speaker E: We surely do. But the moment you try and come up with how to change the system, I mean, the things you're saying about these changes in incentives, it's all good, but these are minor. These are all fairly minor changes. Right. The system isn't perfectly efficient, but when you try and think how to make it better, it's quite hard to come up with anything radically different.
[00:59:18] Speaker B: Absolutely. But it's not so much the system I'm thinking about. I'm more thinking that even people who have, like, a Nobel Prize could maybe, for example, try to be.
[00:59:29] Speaker A: That. I said that to you.
[00:59:31] Speaker B: Really?
[00:59:31] Speaker E: Yeah.
[00:59:31] Speaker A: I said, well, for the Crick example, he's already famous. He can do whatever he wants.
[00:59:34] Speaker B: That's true. But I mean, I can do what I want too. Also, I think that many people at.
[00:59:40] Speaker E: Some stage in the career, there's loads of examples. Maxwell, the physicist.
[00:59:44] Speaker A: James Clerk.
[00:59:45] Speaker E: Yeah, James Clerk spent something like 20 years trying to find a particular explanation of electromagnetism.
[00:59:51] Speaker B: Yeah, right.
[00:59:53] Speaker E: No, you know, there's so many cases. But, you know, you don't know what's a blind alley.
[00:59:59] Speaker B: But what would happen if you decided: now I'm going to just focus on one particular mathematical problem related to neuroscience analysis?
You wouldn't be fired, right?
Would you? I don't know.
[01:00:16] Speaker E: I'd have to have a conversation with my boss at some point.
[01:00:20] Speaker B: Yeah.
[01:00:20] Speaker E: She'd kind of say, you know, you used to get a lot of grants.
[01:00:25] Speaker B: Exactly. What about you, Andreas? You have actually changed jobs. You just moved to Stanford, so you have to try.
Try to make a good impression.
[01:00:35] Speaker D: We can do whatever we want. But, I mean, especially in the US, the system is very competitive. It's true everywhere, but it is a little bit like running a little startup, especially in a medical school, so you have to bring in resources. But I do also think what you said about Crick is interesting. Because I think we're still at the stage where, obviously, the most important thing is to choose the right question, but it's not clear that this, right here, is the question. There's always a lot of possibilities.
So you have to remain focused, but also remain open-minded enough to see what new things may happen.
[01:01:25] Speaker A: Here's a different way to ask the question. What makes you feel unproductive?
[01:01:32] Speaker E: Web surfing.
[01:01:34] Speaker A: Web surfing. Okay. I mean in science, you know, not like, in general. And you cannot answer "this conversation."
[01:01:41] Speaker D: Yeah, no, I think sometimes you can be unproductive, but it's hard to know. Sometimes you feel you're unproductive because you're daydreaming, but what counts as unproductive?
[01:02:01] Speaker A: A year goes by and you feel either it was a productive or an unproductive year. What do you think made it productive or unproductive?
[01:02:09] Speaker D: I think it's complicated. Right. There could be years where you may not be publishing as much, but you really are very productive, because you're doing your experiments, you have new research, you're laying the groundwork. So I wouldn't say there is a single metric.
[01:02:28] Speaker E: You know, I've spent a lot of time over the last few years on statistical methods. A few of them have been written up as preprints. Not one of them has even been submitted as a peer-reviewed paper.
[01:02:40] Speaker B: Okay.
[01:02:40] Speaker E: But I do think it was worth it. And maybe I'm at that stage now. I can do what I want. So I've spent a lot of time on these questions of, like, statistical analysis.
[01:02:50] Speaker B: Then it seems like you actually educated yourself also.
[01:02:53] Speaker E: Oh, yeah, absolutely.
[01:02:55] Speaker B: I think for me, an unproductive project would be something I just joined even though I'm not particularly interested and it's not particularly challenging, but it has some reward in terms of maybe getting on a publication. So you spend your mental resources on.
[01:03:10] Speaker E: Things you don't really find interesting, and.
[01:03:12] Speaker B: It's not really building up to anything. I don't really need, like, a new AI for tax laws or whatever.
[01:03:23] Speaker E: Project.
[01:03:23] Speaker B: Really? Yeah. Okay. No, no, this is not an actual project. But I was thinking of something which seems important but not really. Yeah.
[01:03:34] Speaker A: Let's just move on then, and ask perhaps the closing question. By the way, the quote is from Isaac Asimov. And I did use ChatGPT to look this up, so hopefully it's correct: the most exciting phrase to hear in science, the one that heralds new discoveries, is not "Eureka!" but "That's funny..."
That's the quote. Yeah, Isaac Asimov. I'm glad I didn't say Einstein, because everybody.
[01:03:59] Speaker B: But we should have this question, the one we prepared yesterday. I was thinking: if for some reason there was a moratorium, almost like the pandemic, where you couldn't do experiments for a year, so that everybody had to work on existing data, would that be a good thing? Because now I have a feeling there are all these things that you've measured and don't really understand, and still it's, oh, let's put in another mouse.
[01:04:32] Speaker E: Could I go beyond that and say that you weren't even allowed to analyze existing data? All you could do is read the literature.
[01:04:38] Speaker A: You got shut down that much? I mean, that's why everyone, like 90% of neuroscientists, wrote their first books during the pandemic. Right. There's a bunch of them.
[01:04:46] Speaker B: Okay, so. Exactly. So would it be a good thing? I mean, I don't want to live in a society where this is happening.
[01:04:52] Speaker E: If all I could do was read papers, I'd be very happy.
[01:04:56] Speaker B: Yeah.
[01:04:56] Speaker A: For how long, though?
[01:04:57] Speaker E: Oh, a year.
I mean, yeah. I don't need to write anything.
[01:05:02] Speaker A: But you don't. You want to do the science, you just want to read?
[01:05:05] Speaker E: I think so. You talked about Francis Crick. That's basically what he did in neuroscience. He just read papers and he tried to figure things out. And then he wrote.
[01:05:12] Speaker A: He wrote a book.
[01:05:13] Speaker E: He wrote a book. He wrote review articles, you know, but that's all he did. Right. It was very valuable.
[01:05:19] Speaker D: It is now. It's true.
[01:05:20] Speaker B: Yeah.
[01:05:21] Speaker D: I think it's different for different people. I mean, I think we need to do more experiments, you know?
[01:05:30] Speaker B: Yeah. I'm more thinking about the moratorium to think about what we have, and then.
[01:05:34] Speaker D: Yeah, I think thinking, you know, stopping and thinking about things, I think is very, very important.
[01:05:40] Speaker E: Yeah. None of us do it, but it's easier.
[01:05:42] Speaker A: It's so much easier to do the next experiment than to think.
[01:05:45] Speaker E: Exactly.
[01:05:46] Speaker B: Yeah. Well, it's also when I do simulations and I change parameters and it doesn't really work, and then it's, oh, let's try another parameter set, because it makes me feel that I'm doing something.
Well, actually, I'm just trying to avoid thinking.
[01:06:01] Speaker A: So we shouldn't have another Covid, or should we?
[01:06:06] Speaker D: I don't think we should. I think there's a lot of pressure to, you know, stay in the rat race, but you have to figure out ways to get outside the rat race and think.
It's always important.
[01:06:22] Speaker B: Yeah.
[01:06:23] Speaker A: I have maybe a quick question before our last question. If you guys have a few more minutes.
How do you know when you have a good idea scientifically?
Like, how do you know, when you kind of feel like, all right, this is a good idea, without much vetting of the idea, before you ask.
[01:06:40] Speaker B: Your fine experimental collaborator.
[01:06:44] Speaker A: Is it just an intuition? But don't you have those intuitions that turn out to be bad ideas?
[01:06:49] Speaker E: Oh, yeah, lots of times.
[01:06:50] Speaker A: Then how do you know, how do you decipher if you.
[01:06:52] Speaker E: Well, you know, you wake up, you know, they normally come in the middle of the night.
[01:06:57] Speaker A: Right. Or in the shower or.
[01:06:58] Speaker E: Yeah. And then if you still think so that afternoon, that's a pretty good sign.
[01:07:02] Speaker D: Okay, I agree. Sometimes, yeah, you think you have a great idea, and then you think more about it.
[01:07:11] Speaker A: Isn't that the best feeling in the world, though? Like when that thing hits you and.
[01:07:14] Speaker B: You're like, oh. But I had this great business idea in the middle of the night, and I woke up and had to write it down. When I looked at it in the morning, it was just, well.
Completely ridiculous. But it felt so good in the middle of the night, at 04:00.
[01:07:30] Speaker A: 04:00 a.m. But that's good, because you could falsify it. I fail to write mine down all the time, and then it's gone, and I think, well, I still come up with good ideas, but who knows?
[01:07:39] Speaker B: Exactly.
So mine was not very good.
[01:07:42] Speaker A: Yeah.
So intuition.
That's.
[01:07:46] Speaker D: Intuition is key. Yeah.
[01:07:48] Speaker A: Okay.
[01:07:50] Speaker B: Yeah. So the last one. So this, like the advice to young.
[01:07:54] Speaker A: Researchers, this is Gauta's question because I've stopped asking this.
Okay.
[01:07:58] Speaker B: Okay. So then it's the.
[01:08:00] Speaker A: I'll include it for sure. Yeah, I like the question.
[01:08:02] Speaker B: Yeah, no, I mean, you are very established researchers, so what would be your advice to them?
[01:08:11] Speaker D: My advice to young researchers, and I think this is generally true, is to spend a lot of time thinking about what is the question that you really want to address, and to talk to a lot of people. Don't just do something because you're going to get some training. Focus on the question more.
And it's okay to kind of explore stuff and be thirsty sort of, for a good question.
And I often find people are being too practical, jumping into a project thinking, I'm going to get training, I'm going to do this. But I think this is very important, because of one thing I've seen in my own experience, and it's also true in the history of science.
It's not only about how motivated you are or how much hard work you put in; of course those are important. But often, I mean, Francis Crick is a perfect example with the double helix.
[01:09:20] Speaker A: Right.
[01:09:21] Speaker D: It wasn't even his official project, but it was a good question, and he made amazing discoveries. Right. I do think we maybe don't spend enough time on this, and it's something we don't get trained in as undergraduates, or in high school. We don't think, okay, what's a good question? We're just taught facts. And then suddenly, when you start doing research, you have to start thinking about questions, and it's a hard thing to do. So I would say to young people: spend more time thinking about what is really the question that I want to work on. That's one thing. Unless you already have something that you're very excited about, very curious about, because that's the best thing for a scientist. If there's something you're very curious about, just follow your curiosity and what really excites you. But if you want to become a scientist and you're trying to say, what should I work on? I'm excited about a lot of stuff, or I can get excited about stuff. Then spend a lot of time thinking about the question.
[01:10:21] Speaker A: Linking that back to that productivity question, would you consider that productive?
[01:10:26] Speaker D: Yeah, I would think of that as very productive, even if it looks like you're not doing anything. That's valuable.
[01:10:33] Speaker A: Even if that takes a month?
[01:10:35] Speaker D: Or a year, or a couple of years, I think. Because, you know, it's like a big boat, right? Once it takes off and goes in a direction, it's harder to steer it. So at least at the start it's important.
Unless you have something where it's like, okay, I'm very curious about, I don't know, some problem in physics, something you're just curious about, you can't sleep, you stay up at night thinking about it, then go for that. But if you're trying to get a PhD and you're interested in, let's say, neuroscience or molecular biology or genetics, there's so much stuff. Just put effort into thinking about what is the question, and educate yourself more broadly. Don't just go very narrow. You're going to have to go narrow and focus for sure, but start broad. What is the impact of what I'm trying to do? Where is this field going?
I think it's important.
[01:11:39] Speaker E: So what I'd say is you have to enjoy it. And if you're not enjoying it, switch to doing something else that you do enjoy.
[01:11:48] Speaker A: Do you have to enjoy it 24/7.
[01:11:51] Speaker E: No, but on balance it has to seem worth it. And if you ever think, why am I doing this? Then think, how can I change it so I actually want to do it? And if you can't change it so that you want to do it, then do something else, because there are other options.
The point of science is that you're supposed to enjoy it. So if you're not enjoying it and you can't figure out how to enjoy it, then do something else.
[01:12:21] Speaker B: But I guess also there is some, I mean, obviously you are successful. It's sort of like survivorship bias, because you actually made it, well, at least you've gotten this far.
[01:12:29] Speaker A: So far.
[01:12:29] Speaker B: So far. Exactly. But I guess young people listening to this will think, well, I enjoy it, I enjoy doing research, and maybe I want to get a position in academia or at a research institution doing basic research.
But what are my chances of actually making it? That's always the thing, right? Investing many, many years in it.
[01:12:57] Speaker E: You know, but it's not like you can't do something else. I mean, for me, I very nearly stopped.
[01:13:06] Speaker A: Oh, I like this. Tell us more.
[01:13:07] Speaker B: Tell your story. You have a quite unusual story.
[01:13:10] Speaker E: Yeah, very unusual. So for my PhD, well, I started doing physics and switched during my PhD, which was funded by a scholarship that let me do anything I wanted anywhere in the world. So I thought about all sorts of things.
[01:13:26] Speaker B: That's a nice scholarship.
[01:13:27] Speaker E: I know. It was in the fine print. Lots of American citizens have this; they just don't know they have it.
[01:13:34] Speaker B: Okay.
[01:13:36] Speaker E: And so I ended up doing kind of neurorobotics in London, and, you know, it wasn't that much of a success, my PhD, really, because.
[01:13:50] Speaker B: But your bachelor's was in mathematics.
[01:13:52] Speaker E: It was in math, yeah. But because the lab I was in didn't have any continuation of funding, I had to fund myself. So I got a job building a very early Internet gambling website, and I very easily could have stayed doing that. It's only a few years ago, actually, that I finally started earning more money than I was in that job.
So, yeah, I mean, there's always other options. You know, pretty much everyone doing this job can code.
[01:14:29] Speaker B: Yeah.
[01:14:30] Speaker E: You know, you have other options. I could have been a stay at home dad. That would have been great.
[01:14:35] Speaker B: Yeah.
[01:14:35] Speaker A: Yeah.
[01:14:36] Speaker B: Because, at least when my students ask me about this, I say, well, doing a PhD where you learn scientific programming, coding, and stuff like that, that's always going to be good. I wouldn't worry about that. It's more that if you decide to go on to postdocs, that's more of a bifurcation point.
[01:14:59] Speaker E: Do you think it cuts you off? Do you think it cuts you off? I don't think so.
[01:15:02] Speaker B: I don't really know.
[01:15:03] Speaker E: I've had postdocs go and work in data science. They've done great.
[01:15:06] Speaker B: Yeah. And they even got good jobs.
[01:15:09] Speaker E: Yeah.
[01:15:09] Speaker B: Yeah, yeah. Excellent. Yeah. Maybe I'm too. Too pessimistic, maybe.
[01:15:16] Speaker A: Thank you, guys. Is there anything else?
[01:15:18] Speaker B: Yeah.
[01:15:20] Speaker A: By the way, Christina, are you. First of all, you're cheating by being in this room. Secondly.
Oh, so you're not. You're not waiting for us, right?
[01:15:30] Speaker B: No.
[01:15:31] Speaker A: Okay. I just.
[01:15:32] Speaker B: I just realized because Tom is out. Tim is out hiking.
[01:15:36] Speaker A: Yeah. Yes. I just. Yeah, I thought.
[01:15:38] Speaker D: Now, you know what we said. You can see the rest.
[01:15:44] Speaker A: Okay. Anyway, thank you, guys.
[01:15:46] Speaker E: Thank you.
[01:15:47] Speaker B: Great.
[01:15:53] Speaker A: Brain Inspired is powered by The Transmitter, an online publication that aims to deliver useful information, insights, and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives written by journalists and scientists. If you value Brain Inspired, support it through Patreon to access full-length episodes, join our Discord community, and even influence who I invite to the podcast. Go to braininspired.co to learn more. The music you're hearing is Little Wing, performed by Kyle Donovan. Thank you for your support. See you next time.