Episode Transcript
[00:00:05] Speaker A: This is Brain Inspired. I'm Paul. Welcome to this special episode of Brain Inspired. I say it's special because it's a panel discussion. I was recently invited to moderate a panel at the annual Bernstein Conference. This one was in Berlin, Germany. The panel I moderated was at a satellite workshop at the conference called How Can Machine Learning Be Used to Generate Insights and Theories in Neuroscience? And you're about to hear that panel discussion, because the Bernstein organizers were generous enough to let us record it and share it with you. This discussion came on the second day of the workshop, after each of the speakers had given their talks, so some of what we talk about may lack a little bit of context, but in general, all the speakers did a good job of resummarizing what they had covered in their talks so that it would make sense during the discussion. There are also questions from the audience, and I cut most of the actual question-asking parts out because the microphones didn't pick them up very well, but I did my best during the discussion to repeat each question. So if it seems jumpy, it's just because I cut out the audience question and re-asked it as best I could. As for the speakers, I'll let them introduce themselves briefly in a moment, because I asked them to introduce themselves in the beginning, but I'll say right now who the speakers were, in alphabetical order: Katrin Franke, Ralf Haefner, Martin Hebart, Johannes Jaeger, and Fred Wolf. And you can learn more about them in the show notes at braininspired.co/podcast/177, where I link to all of their information. I also want to say thank you to the organizers of the workshop who invited me, and especially to Mohammed and Michaela, who, at least for me, made everything run very smoothly and made sure I was having a good time. And I know that they were working hard to set up the recordings and essentially run the workshop. So thank you both specifically, and the rest of the organizers as well, for having me. Like I said, the central question of the workshop was how machine learning can be used to generate insights and theories in neuroscience, but as you'll hear, we go over a range of topics surrounding that central question. So I think it made for a really fruitful discussion, and I really enjoyed getting the perspectives of each of the panelists. I hope you do, too. Thank you for being here. Thanks for listening. Enjoy.
[00:02:28] Speaker B: The first thing that we will do here is go down the line of our distinguished panelists. If you would just say briefly who you are and what you do, and whether you have any pets, and then we'll begin. Fred, can we start with you? And then we'll go down the line.
[00:02:43] Speaker C: So I'm happy to be here with you today. My name is Fred Wolf. I'm a theoretical physicist; I've been working for 15-plus years with systems neuroscientists, cellular neuroscientists, mathematicians, and physiologists, still practicing rigorous, smart ways to understand the brain. And currently, my research interest is mostly on the frontier between evolutionary biology, reconstruction, and computational neuroscience.
[00:03:11] Speaker D: My name is Ralf Haefner. I'm a computational neuroscientist at the University of Rochester. I'm particularly interested in sensory processing, especially through the lens of probabilistic inference — interested in how neural circuits implement probabilistic computations.
[00:03:26] Speaker E: My name is Martin Hebart. I am a computational cognitive neuroscientist, focusing mostly on human cognition and representations and trying to understand how we represent objects in our visual world. I work at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig and at Justus Liebig University in Giessen. And I don't have any pets.
[00:03:52] Speaker F: Yeah. Hi. My name is Katrin Franke. I work at Baylor College of Medicine in Houston, in the US. I'm an experimental visual neuroscientist by training, and I'm interested in understanding how visual information is processed by the visual system, working mostly with the mouse as a model system and using a combination of functional imaging and machine learning tools.
[00:04:15] Speaker G: All right, I'm Yogi, or Johannes Jaeger. I'm not a neuroscientist. I'm an evolutionary systems biologist by training. I'm trying to do philosophical work in biology these days, and I'm interested in the differences and the relations between living systems and machines. And I have a cat called Max.
[00:04:34] Speaker B: Is that really your cat in the picture that you showed?
[00:04:37] Speaker G: That wasn't my cat.
[00:04:38] Speaker B: That was not your cat.
[00:04:38] Speaker E: Okay.
[00:04:40] Speaker B: I kind of pegged you for a cat person.
So this is going to be kind of a continuation of our discussion from earlier, which I enjoyed, and some of you were not here for that, so we'll revisit some of those themes. But I wanted to start by reminding us all of the name of this workshop, which is How Can Machine Learning Be Used to Generate Insights and Theories in Neuroscience? And none of the talks so far have really addressed that, and none of our conversation earlier really addressed that actual question, right? We talked a lot about what the models are doing, how to improve them, how they give us prediction and control, and how that's related to explanation. And I think it'd be fun to continue that discussion. But if you could, describe whether and how machine learning — like, modern machine learning — has shaped your thinking about brains and minds throughout your research.
[00:05:41] Speaker D: Yeah, I'll jump in. So I think there's a deep connection or similarity between the problem that the brain faces and the problem that machine learning is trying to solve, namely how to make sense of large amounts of data and try to extract actionable, behaviorally relevant insights from them. And so I see at least two ways in which machine learning can serve as an inspiration for my work. One has been particularly in terms of looking at what kinds of algorithms and representations statistics and machine learning have come up with, to serve as hypotheses to then test for the algorithms or representations that the brain may employ. And that's really the closest to my work. The other aspect that I tried to mention yesterday is that I think a big open question is what the best levels of abstraction are — which observables in the brain (spikes, membrane potentials, firing rates, whatever) should we try to model, and which are most amenable and productive to model. And I think that's also an area where machine learning may help us find those levels of abstraction without putting in too much of our intuitions, but trying to discover them in an unsupervised way.
[00:07:08] Speaker B: Anyone want to jump in, please? It's an open discussion, so just jump in.
[00:07:12] Speaker E: Yeah, I mean, in my talk I was also trying to distinguish between different goals that we can have when we say that we generate insights or theories. And one place where machine learning can really help is that it acts as a tool — it's just like, okay, here's my tool set, I can use these and these methods. And we now have these really advanced algorithms available that can support our work. For example — I'm just going to cite something that we've done ourselves — we were interested in understanding what are actually the different properties that people use in their language for describing different objects. And what you can do is ask a bunch of humans about the properties of a cup, and they will come up with a list of different properties. And now what you can do is take large language models, show them examples of how humans have been answering these types of questions, and then ask them to generate more such examples. And it actually works surprisingly well — it's almost at a human level. And what's really great is that you don't have to use humans to generate this data anymore. You can now ask machines, and this allows you to get access to an almost human-level account of how people describe the world around them. And this is like a tool, essentially.
On the other hand, of course, what you can also do is use these methods as models of whatever you are interested in. So, for example, if you take deep learning models, what you can in principle do is say, okay, I'm interested in understanding how our visual system actually works. And you can build different models that compete with each other, essentially, for explaining different phenomena, and you can look at which of those models is actually doing a better job. And by seeing which model performs better, you can potentially gain some insights into the mechanisms underlying these computations, from which the representations then arise.
[00:09:18] Speaker B: One of the things that we were talking about earlier — let's say when using them as a model of a system that you want to understand — is how little variance they actually explain. So when you build a system that emulates the system you're trying to understand, would it be fair to say that you're coming at it theory-free, or, I suppose, in an unsupervised manner? And if that's true, if there's a lack of theory, then what insights can we gain from emulating the system that we're trying to understand? Does that make sense?
[00:09:47] Speaker G: It makes sense to me as a philosopher.
I don't believe in theory-free. There is a theory here; it's just one step removed from what we're doing. And that one step, for me — so my current project, which is not in neuroscience but in organismic biology more generally, and in evolution, is about how far we can push computational methods in biology. And so one way of using these really powerful methods is to try and push them to the edge — instead of looking at what we can do, focus on where they break down. And I think that would be an interesting way of using them. And that gets back to your question about variance. So, trying to understand, like we discussed before, what is variance that's just coming from the measurement, for example, and what is actually biologically significant noise. It's a general problem in biology, and it's probably hard to get a handle on that. But if you can somehow get a handle on it, it becomes really interesting to probe how far you can actually go with these models. Using them in this weirdly inverted way is a very interesting way of using them. So how far can you push them, and then what are the kinds of phenomena happening in the brain that are not captured? That's a very interesting question.
[00:11:05] Speaker F: Maybe one comment I have is that I think you're right that maybe in our talks we didn't explicitly address the question of how we use it to generate insights. And I think it's very important for each researcher who uses machine learning tools to explain what the purpose of those tools is. Like you said, you might use them just as a tool versus as a model of the system that you want to study. And I think it's very important to make this distinction, because some of the limitations that come with a method are only valid relative to the purpose — it depends on what your ultimate goal is, for example. And so I think one big advantage is that even though we might not understand the model mechanistically, we can use it as a model of the function of a network, or of specific neurons, or of a brain area. And we can perform much more detailed and thorough characterizations that are disconnected, maybe, from the experimental limitations that we still face.
[00:12:13] Speaker B: So everyone is comfortable using machine learning as a tool, right?
How comfortable are you on a scale of one to ten using it as a model for explanation?
[00:12:24] Speaker C: Well, obviously it's like three for me, as somebody that actually cares about the physiology of cells, or how a system like the primate ventral stream gradually evolved, or evolved in jumps, over vertebrate evolution. Currently, machine learning based approaches haven't really started to contribute there in a full-fledged way, and that's where I see important open questions that are timely to address.
So I would like to comment on this point of the missing variance, which I think is a great discovery, and machine learning based methods are to be lauded for that. When I saw missing variance 15 years ago, it was probably because the models were not powerful enough, not expressive enough.
But now the modern machine learning tools are nearly unlimited in their expressivity, and so we cannot blame it on the model anymore. If there's unexplained variance, it's a real finding. And then, on the other hand — partially catalyzed, but not really driven, by machine learning making its way into neuroscience — there's a more and more clear view that there's more holistic processing in systems like the neocortex, that there's behaviorally relevant information at all stages of cortical processing. And the way forward is just to set up other kinds of experiments and studies in which we let the animal do what it wants to do with the body that it has. And that's not approachable in science without machine learning approaches. So we can hopefully do that step, or attempt it, but without machine learning it would be undoable — even if I don't think current machine learning models are the great model of the brain.
[00:14:18] Speaker E: Let me push back slightly on this aspect of going all the way up to the noise ceiling and trying to incorporate everything, because essentially what you said is, well, the approaches that we have at the moment are limited by us not being able to explain certain variance that we are currently treating as noise. And I guess the issue that I'm seeing is that it really depends on the kinds of questions you're interested in asking and in answering, right?
And the way that we've been doing science so far is that we're trying to identify specific phenomena and study them in isolation. If you're saying, well, there's no way this research approach would ever work — you have to take everything into account to really understand something — then I would have to agree with you. But if you take the more traditional approach and say, well, I can study, I don't know, object representations in a more isolated way, then I think we can actually learn quite a lot by using these methods, and we can really make quite some advances in understanding what is actually going on.
[00:15:29] Speaker G: So I would even give them a two, not a one, because I think they work as a tool, as you're saying. So the big question here is, again, what is the right level at which you understand? There's huge plasticity in the brain of the mouse and the human, at least, and so looking at mechanistic explanations in terms of the specific connections is probably not the right level to look at. But the danger here, I think, is that we do the typical systems biology thing, where we replace a natural system that we don't understand with a computational system that we don't understand.
And then the question, again — this is what we discussed before — is what is an explanation, what is understanding here? And as humans, we need to have the right level of abstraction, because the explanation needs to be sufficiently simple. We are really bad, as opposed to those machine learning algorithms, at keeping high-dimensional data in our brains, right? So we need to be able to have an explanation that is low-dimensional enough for us to actually grasp and make sense of. And I do think that the machine learning approach is going to be more of a predictive tool and an amazing pattern recognition tool — in all the talks that we saw yesterday — that allows us to recognize patterns that are too subtle or high-dimensional for our human perceptual system and memory capabilities to grasp. I was really impressed by that. But I would agree with Fred that as a model of how the brain actually works, they're quite limited, both in the sense of explanatory power and accuracy about what's actually going on, and in having the right level of abstraction.
[00:17:14] Speaker D: If I can chime in here just to be controversial, I would give it an eight.
I mean, I think it's a somewhat ill-defined question, right? What number would you give to the mouse model, or to Drosophila? These are all model systems, right? And you could also say, well, we've replaced the human brain we don't understand with the mouse brain we don't understand, or the Drosophila brain we don't understand. But we've gone there because we can use experimental techniques that we can't use in humans, and we have a good reason and track record in generating insights that generalize to the human brain, too. And I think the same can be said for artificial, or basically machine learning based, systems of the human brain. Because now, even more so than for the mouse or for Drosophila, we can interrogate them in any way we wish and ask, okay, what might be simpler summary explanations for the complicated processes happening in millions of computational units there, that we can then go back and test in biology? So I think it's really useful. The other aspect I would raise is: what is machine learning? I mean, for me, machine learning in some ways is math, right? So if you include, I don't know, spiking neural networks, recurrent spiking neural networks, I see no reason why we couldn't get a lot closer to the human brain, to even include mechanistic explanations and be able to predict the outcome of causal interventions in the biology.
I think we need to distinguish between the current approaches that we have right now and where we might get if we keep pushing that research program.
[00:19:03] Speaker B: Katrin, you have an experimental background, and now you use these machine learning tools a lot. So could you reflect on that? Because in an experiment, presumably you form a hypothesis and test it — although that never actually happens, but presumably you do — and then that is supposed to generate insight, right? You interpret your results. And in the machine learning world, it's a very different world.
Can you just reflect on the differences, and maybe on whether you think an experimental approach is more amenable to generating insights relative to a machine learning approach? More or less amenable, yeah.
[00:19:41] Speaker F: So maybe before that, I just want to make the point that I think I agree with all of this — that it's very important to define your level of abstraction. And I agree that maybe the machine learning tools that I also use are not a good model to gain mechanistic insights about the circuit or the cell-type correlates of what we see. However, I think they are a good model of each neuron's function and of how neurons represent visual information. And in that sense, I have been surprised how well the models — although they might be black-box models — when you verify it in the experiment, truly capture the relationship between the visual input and the response, but also other variables, like behavior, for example. And so, as an experimental neuroscientist, everyone struggles with experimental limitations. And it's kind of a dream to have a model where you can present all kinds of stimuli you want, predict responses, and derive predictions, to then plan specific experiments to test those predictions. And I think in that sense it has been very powerful in my research.
[00:20:54] Speaker C: That makes a lot of sense to me. I mean, one of the great promises, or the great perspectives, of neuroscience now is how multimodal and complex these experiments can be. But that means the design space is so much bigger than what you can think through seriously. And so we need machine support to design experiments.
There are limited resources in time, in person-power, and so on. And for that reason, there has to be a shift towards machine-aided experimental design.
[00:21:29] Speaker G: Yeah.
[00:21:29] Speaker D: Building on this, I think this is related to the bigger question of what the role of simulation is in the scientific process, which isn't specific to neuroscience or computational neuroscience. I think this is just a special case of that. And active experimental design — trying to identify the experimental conditions that are the most informative for our scientific theories — is really important, because experiments are expensive compared to the simulations that we're using to design them.
[00:21:59] Speaker F: Maybe one question could also be this: say I want to learn a relationship between the stimulus and a response, but also, let's say, an intrinsic goal or behavior, and I use a black-box model, and it learns a relationship and correctly predicts, let's say, the neural response to one of those variables. Maybe how it learns that is different from how it is implemented in the circuit. But then we could ask, do I care about that? And I would say at this point I would not care how it's implemented in the model, because I'm using the output or the predictions of the model to derive predictions that I'm going to test anyway in the experiment, and then look at the implementation, maybe using other techniques.
[00:22:41] Speaker G: I think there it's really important to recognize the use of the model as a tool rather than as an accurate representation of what a brain is. Because if we suddenly forget that distinction, then we draw all kinds of conclusions about our brains. What was the book that came out two or three years ago? The Brain Is Flat, or something. The Mind Is Flat.
[00:23:01] Speaker B: The Mind Is Flat.
[00:23:02] Speaker G: And so that's a conclusion that I don't think is immediately warranted — let's put it carefully — from that sort of thing. So there's a big difference. I really love how everybody here is actively reflecting on whether they use the model as a predictive tool versus as itself a representation of how the brain works. That was a really positive surprise for me, coming here and not knowing the community. So that was really great to discuss with you guys, because I think that distinction is crucial, and in the literature it's rarely made, actually. I wonder, should it be made more explicitly, maybe also in print?
[00:23:43] Speaker E: But I also have to say that there's quite a sizable fraction, I think, of researchers really interested in using these methods as computational models of the processes that they're interested in understanding. So in my field, with object recognition, there's a lot of push towards actually building a model that mimics the processes that we think the brain is carrying out. And obviously, I think it's a really important aim, because if you have something like that, then you can start manipulating this model, you can try to see, well, what does the model not explain? And you're really gaining a much better theoretical understanding of what could actually be going on.
One of the issues I'm seeing with the approach — and it's just where we are currently heading — is that people are taking just off-the-shelf machine learning models, fitting them to the data, and seeing which model explains the most variance. And in a sense that's a good first step. But with this approach you're obviously always limited to trial and error, if you want, because you always find new computational models — oh great, we can try these out — but you're not necessarily building in specific mechanisms yet. So that's something that I, at least, have not been seeing: that you can actually get, if you want, state-of-the-art predictions by building in very, very specific types of manipulations that really reflect mechanisms.
[00:25:12] Speaker C: Maybe you can go a bit further in that object recognition domain and say something about the other terms that we haven't mentioned here, which are insights, and then theory. What is a theory to you?
In the old days, when I was a student, I thought there's initially the two-and-a-half-dimensional representation, and then it goes to something 3D, and then I build categories from that, like Marr imagined. I would have thought we would arrive by now at something much better. So what is this theory? How can you use deep neural networks that model the ventral stream to extract a theory that gives us notions and relationships — something that you can express in a couple of principles and derivations, right?
[00:26:04] Speaker E: I mean, to me it's a no-brainer that you can't generate a theory just from data alone, right?
You have to look at the data, you have to interpret the data, you have to make sense of it, and you have to generate a theoretical understanding of it yourself. I think what I find really nice about using these machine learning approaches here is that they at least give you the possibility of doing this, because you now actually have the ability to simulate — you have the ability to look at whether the model is actually expressing the types of behaviors that you would expect based on your theoretical approaches. So, to give you one example — I discussed this recently with Alex Ecker.
Just one example could be: well, I'm proposing that divisive normalization is a really important mechanism in the brain. And you can build such mechanisms into your models, and you can see whether these mechanisms would then actually help you make better predictions. And from that, you can start to formulate basic principles that would go into your bigger theoretical understanding.
[00:27:18] Speaker C: So you're constraining the machine more and more to adhere to certain processing principles or representational principles, and then working towards one that's still human-like, but understandable, or at least more principled than pure optimization.
[00:27:35] Speaker E: Yeah, exactly. I think, at least, that's the hope. And that's one possible pathway. There are lots of limitations, obviously, that you can come up with — like isolating individual parameters. Maybe they actually only act in concert; maybe I actually have to manipulate all of them together to get the kinds of phenomena that you're interested in. But I think that's kind of the hope.
[00:27:56] Speaker G: Yeah, that's very far away from a sort of ideal, or myth maybe, of hypothesis-free application of machine learning, though, right? I mean, is there an acknowledgment of that? I think it's still very widespread, this idea that we can infer interesting theory — in the sense that you used the term, Fred — just from reading patterns out of data. And that's not confined to neuroscience at all. And I think that's a very problematic way to use these tools, because they're very theory-heavy, just in a different way. And to come back to Ralf's comment about simulation, I mean, there's a really big philosophical question about that, because of that joke I made before about replacing a natural system we don't understand with a computational system we don't understand. We now have the power to create computational systems that are just as obscure as the natural systems that we're studying, right? And that's going to become a really big problem for theory in biology in general, also in the organismic and evolutionary field, that we need to think about.
[00:29:02] Speaker B: Should we talk about dynamical systems theory? A lot of people are kind of taken with it — going back to the idea of what's the right level of abstraction. So if you reduce the dimensionality of a high-dimensional model and you compare its dynamics, compare its manifold, to that of high-dimensional neural recordings while they're performing cognitive tasks, you find some interesting similarities, right? Just thinking about generating insights and theories, at first pass that's very exciting, right? Because there's a low-dimensional structure to this high-dimensional data that is shared, and I can describe both the natural data and the artificially generated data on the same kind of low-dimensional manifold. And so that has kind of sat with me for a while. And now I'm questioning, well, what does that really tell me, though? Is a trajectory a thought? Not really.
What does dynamical systems theory — what does projecting activity into low-dimensional spaces — gain us in terms of insight? Do you have insight on that? And is that a good level of abstraction?
[00:30:13] Speaker C: The question of lower-dimensional, effective state spaces is very intimately linked to how we think about dynamical systems.
That the search for lower-dimensional state spaces is somewhat successful doesn't necessarily mean that it's the fingerprint of collective dissipative dynamics during the computation.
And so I think that's where my feeling is that we are just at the beginning. There's a task dimensionality that kind of puts a ceiling on how high-dimensional something that you can systematically extract can be.
And then, even without feedback, if there is low-rank connectivity in the system, it would confine the dimensionality of a representation even without recurrent dynamics playing a role. But of course, anatomically, we know that these are densely connected, recurrent, nonlinear systems. So eventually their dissipative nature will also play a role, and the aim is to understand how that is a tool for computation. And for things like persistent activity and working memory, it's getting substantiated more and more.
And in beautiful experiments, like in the Drosophila central complex, where theoretical predictions from that line of thinking and modeling, made nearly 30 years ago, pan out with every prediction — that's fantastic. But on the whole, in systems like the primate brain or the mouse brain, we should keep all these different hypotheses that can constrain state spaces of neural states in play.
My expertise is not in these systems, but I think it's just one way to make sense of these discoveries, and we have to test these hypotheses against each other and become more systematic in nailing down what's really behind particular low-dimensional brain dynamics.
[00:32:20] Speaker G: There's another really important limitation here, and that is that these classical dynamical systems have to have a fixed topological configuration, or phase space, and that is not able to capture the organizational aspects of the brain, because those are actually reconfiguring the system on the go. So you have configuration spaces that are constantly changing dimensions and topology, qualitatively, and there is work — by Walter Fontana in the 1990s, for example — about those limitations. Dynamical systems theory is, again, a tool that's really useful to basically bring the dimensionality of your problem down and generate understanding, in the sense that you can actually grasp something, but it has its own limitations built in, and I think this is something we bump against. So you can then use something like lambda calculus, which is another implementation of computation theory, but it's almost intractable — I mean, you can run programs in it, but it's almost intractable to do analysis in systems like that. So again, you bump against this complexity barrier, where you're generating a predictive tool that is not generating the kind of insight you were maybe hoping to generate in the end. And this balance between the tractability of the tools we have and the ability of those tools to capture what's really important underneath the phenomena that we are trying to explain is a huge challenge right now, I think — not just in neuroscience and the life sciences, but also in the social sciences, in higher-level intentional systems in general.
[00:33:59] Speaker B: So, Yogi, your talk yesterday argued that brains are not computers, and a lot of what we discussed earlier this morning was in that vein. Even if I completely agree with your arguments, I'm still left wondering how that shift in perspective changes my science. Do I need to do anything different? And what does it gain me?
[00:34:25] Speaker G: No, the good news is you don't have to do anything different. People have a weird idea about philosophy nowadays — and I think it has something to do with a lack of education in philosophy among scientists — which is that if you have different philosophical worldviews, you have to have some empirical means of distinguishing them. But the problem is that even a really radically empirical view — the world is a mechanism, or the world is computation — is based on a bunch of unprovable assumptions. And so you have a set of assumptions that you have to base your knowledge and your view of the world on, and you have to choose among them. And ultimately, these are not empirically testable. So the question should be different. The question we should ask, for practical purposes, is: what kind of context does it give?
The sort of work I do in the lab — I have a master's that's called a master's in holistic science, and to this day I don't know what that is, because science is not that: it's reductionist, it's analysis. And the point is not that reductionism is wrong; the point is to use it in its right context and to see it in its right context. So that's one thing. And the other thing is: what is the range of questions you can legitimately ask in your field with this view of the world, and what kinds of answers do you accept? And my argument is that if you switch your frame like I suggest, then you just have a framework with a broader explanatory potential than the mechanistic, computationalist one. And so that's the practical implication. The question is not whether there is an empirical test to show the world is a machine or the world is not a machine — no, there isn't. It's a choice you make, and then you get a certain amount of explanatory oomph from that. And I would argue, very practically, that the view that the world is not a machine, that the brain is not a machine, gives you more explanatory power in the end, and a better-grounded context for your empirical work. But the good news is, you just have to reflect on that every once in a while and not forget it.
These methods we saw yesterday in the talks are super powerful. And as you guys said, they allow you to ask entirely new questions. So it's great stuff. As a tool, go for it. Yeah.
[00:36:44] Speaker B: Does that perspective... Please, yeah, you guys just yell out questions as well.
When does a model or tool become a theory? Just repeating the question.
[00:36:56] Speaker G: I guess I'll have to answer that personally. It's not a black and white thing, but it goes beyond just prediction, right?
It gives you sort of an insight in the sense that it either clarifies concepts that you're using or it puts them in a specific relation. And again, I have a very pragmatic view of what theory should do in science and also a very broad view of what theory is. It's not just like a theory of evolution or broad theories like we have in physics. It can be a local model. And it's a conceptual framework, maybe, that you use to inform your empirical research.
And I think it has to do work, in the sense that it has to give you new ways of querying your system. If it doesn't do anything like that, then it's not useful theory — it's armchair theory then. So theory has to do work. And it's interesting: philosophers are increasingly looking at what role theory plays in the life sciences and neurosciences, and there is a lot more of it than we thought there was. It's just not the kind of theory that physicists have of the world, because the kinds of problems we look at are structured very differently. They're more local, they're more diverse. But there's theory — like, you have specific concepts that you use in a certain way, and you have local models that you use. And so it's much more interesting to see how that is working. And the aim is not to get an overarching grand theory of everything about the brain; it's to understand specific phenomena, what's happening. And that's theory too, for me.
[00:38:30] Speaker C: But also in physics, there are sub-theories.
We have light here, it's sunlight.
And we've all been taught the theory that that's because there's a nuclear reaction going on in the sun. And there were alternative theories to that. It's a nice example, because there were no experimental studies of the sun.
Science, in a way, comes from celestial mechanics, and there are no experiments in that domain — it's all inference. And so it's an example to remind us that if you work toward theories, you can come to firm conclusions about really untouchable things far away. And I think that's a legitimate goal for neuroscience as well. Many of you have the hunch, or belief, that there are these latent states, lower-dimensional things. You can't observe them; they have to be inferred, and they are inferred in this and that way. And so, can we move this to a level where there's kind of conclusive evidence — okay, now this study proved beyond doubt that there's this system of latent variables, that's really what's going on? That's a statement like "there's a nuclear reaction in the sun." And so, for me, that's a different activity than building a model: really caring about a certain qualitative class of explanations and whether they really pan out. They should be able to be wrong. That's another thing: the great advantage of these modern models is that they have such large expressivity that they cannot be wrong. Maybe you don't have enough data to fit them, but the models are never wrong — they are powerful enough to describe everything. That's why we have this great progress. But a theory has to have the feature that it can fail.
There should actually be alternatives; otherwise, we are bound to exhibit expectation-bias-driven research.
[00:40:46] Speaker E: There's a lot of philosophers of science who have been thinking about this kind of question of when a model becomes a theory. At what point would you consider something a theory?
How does one theory evolve from another? Or how does this change across the course of science? In this specific context, I think we should ask ourselves what we mean when we talk about a model. Are we talking about a conceptual-level description, like a more descriptive model? Are we talking about a quantitative model? Are we talking about a mechanistic model? These are different levels of description. And I think that for something to be called a theory, it has to be something more general than a model — I think most people would agree with that. So what is it, then, that makes it more general? And my guess is it has to do with the explanatory power and the number of predictions you can generate from it. And something that I think most theories have in common is that you have some very basic principle, something that works for a lot of different aspects of what you want to explain. So, let's say, one phenomenon that we observe a lot in the brain is that you get this mirror reversal in retinotopic maps, right? And if you have a good model explaining why this is the case — why we find the regions and the selectivities where they are in the brain — I think that would then maybe start to qualify as a theory of topographic organization in the brain.
[00:42:24] Speaker F: Johannes, may I ask, do you have any examples in mind of theories that you would call, like, a theory — in neuroscience, or more broadly in the life sciences or biology? I think that would probably make the point much more clear, right?
[00:42:39] Speaker G: So I'm not a neuroscientist. I mean, the obvious candidate in biology is always evolution, but it's also problematic. Well, it's kind of a good example, because the basic principle of Darwinian evolution is very simple. You need three things: basically, heritable fitness and some variation in it.
And then, if you start thinking about it — just like Fred said about physical theories — the overarching theory is very simple, and then the application to specific problems requires a lot of work to make it into a working model. And then the particulars of how that works, and questions like what is the right level of selection, what is the unit of selection, become really hairy and complicated. So I think that's a good example — obviously not a very interesting or surprising one.
[00:43:33] Speaker C: If you think of an organ system, maybe the immune system would be an example. Acquired immunity, okay — you can have a parts list of all the genes and the scrambling mechanisms, and know that there are antibodies. But we have a conceptual framework for how acquired immunity is a system function that emerges from the selection of many cells.
And so that would be, for me, a theory of a distributed organ system where we have a handle on it. And there must be something similar for how cognition and information processing and behavioral organization are generated by networks of nerve cells and other cells in the brain.
[00:44:14] Speaker G: And also, what you presented yesterday — the idea that you can recognize these patterns using a machine learning approach — the way you use it is highly theoretical, right? I mean, it's an approach that can be generalized across different problems in neuroscience. And although it doesn't come as a sort of overarching theory, like evolution or physical theories, it's a theory for me. It's a way of guiding the practice of empirical research across a bunch of different contexts, basically. And that already qualifies as a theory, and it generates insight.
It would qualify as such.
[00:44:49] Speaker F: I guess, for me, the most important takeaway from yesterday and today is that definitions matter, right? And so with the examples you mentioned about theories, like in biology or in immunology, I wonder: if it's at that high level, what would be the theory of neuroscience? Do we even have it yet? So I guess it's a definition problem — what is the theory?
[00:45:15] Speaker B: Can you define neuroscience?
We're going to continue this conversation.
[00:45:19] Speaker F: Yeah, not sure I can define that. It's like, everyone who goes to SfN or something. I don't know.
[00:45:24] Speaker G: So why "the" theory, right? I mean, it can be a bunch of things that go together — practices, theories, models that are more local. And I think we really need, in biology and neuroscience, to move away from this idea that there's going to be a grand synthesis of everything, because there's not going to be one.
[00:45:44] Speaker F: Yes. I would have a question.
We have these different definitions, and it's important to clarify them and reflect on them. It also illustrates the limitations of the approaches we use. But then I would have a similar question to Paul's at the beginning: if we come up with a definition of theory, what would that actually change about how I do the science, practically, right?
[00:46:09] Speaker G: I could say something to that. So I think one problem we have in biology is a neglect of, and maybe even a disdain for, theoretical work at the moment. That's a real problem. We are actually bumping into all kinds of problems right now which have to do exactly with this part of generating insight: big data sets and huge algorithmic approaches to learn from them and predict things often fall short of actually generating insight into what we wanted to understand in the first place, which is how the brain works and how maybe higher-level phenomena come out of that. I think that's why it's actually important: because it allows you to more explicitly reflect on what you're doing. We've had about 80 years of technological progress driving the life sciences and neurosciences now, and maybe it's time to reflect a little bit on what we're going to do with those technologies — we have a bit more conceptual work to do at the moment. That's one of my pet peeves with my fellow biologists, that we're neglecting to do that at the moment.
[00:47:23] Speaker B: But on the other hand, this is the time to run rampant with data.
[00:47:26] Speaker G: And analyses, because we can. Yeah — I'm not against empirical data. That would not be very scientific of me.
[00:47:38] Speaker E: Yeah, but this is a really interesting point that you're making, which — I think we're all aware of it — is that we do have a lack of theory, and that we're trying to model our individual little units and trying to run our experiments, but it's kind of difficult to put this all together into understanding the whole system, or even, at a specific level, understanding, okay, what is this specific mechanism, how is this actually implemented?
And we're generating more and more and more data, because we're kind of realizing that, well, we need more data to understand this really complex system, which is the brain. And it's just so incredibly difficult to understand what's actually happening in there. And now we're actually reaching a point where we have so much data that we have trouble even making sense of that data again, because, okay, you have this whole universe of data in front of you — how do you extract the meaningful information from it?
That's a big problem. And currently, actually, I don't really know what a really great approach for addressing this issue would be.
[00:48:52] Speaker G: And there's a model behind the data already. You decided to sample some things and not others, and all that. So there's a lot of theory behind the data already. I'm not a neuroscientist, but in biology — in Drosophila developmental biology — they created these live-imaging constructs for all the genes that are expressed during development. And they made these movies of the entire development, with a sample size of only one for each gene, and a huge data set. And nobody can do anything with it, because you can't register the embryos against each other; you can't use the data in the end. So we created this data set and, again, it stopped short of creating insight, right? And I guess you have similar examples in neuroscience.
[00:49:32] Speaker E: I mean, one issue that we have is the selection bias, right? Like, people are selecting certain types of stimuli because that's what they think is relevant, but we have no idea which stimuli are relevant. Right?
In machine learning, people have been working — for more than a decade now — with models that are built for discriminating hundreds of dog species, right? And that's obviously not something that you want to be using for generating very specific insights about what the brain is doing, right? If you have dog-detecting units, that's not necessarily something that's very specific. Some people are claiming that there are dog-preferring parts of the brain — and, maybe coming back to your question about the pets, for some very specific people that may be the case — but obviously this is not going to help us generate the kinds of insights that we want.
Yeah, but then it comes back to the question: what are actually the stimuli that we're supposed to be selecting?
[00:50:36] Speaker G: Right?
[00:50:36] Speaker E: And if you do a completely random kind of selection — I'm coming from the object recognition side, of course, right? If you're doing random stimulus selection, you go on YouTube, let's say, and you select some movies — people have done this in the past — and then you say, okay, I'm doing a data-driven approach, and the data will tell me what's going to come out. Guess what one of the really important components was: cats. Because there's just a ton of cat movies. So we're coming back to the same kind of issue here: yes, we can generate data, we can generate masses of data, but what the data gives us back is also going to reflect the selection biases that we put in when we actually select the stimuli.
[00:51:17] Speaker D: I want to provide a slightly complementary perspective. One way to think about the whole scientific process, in my mind, is as model comparison, right? I mean, the data is not an end in itself. It's about the insights we extract in the form of models. And how do we do that? We compare different models. We compare them with respect to their explanatory power. And so just having lots of data doesn't really buy you anything, right?
It only gives you insights in the light of models. And so two things are needed. Number one, for the candidate models or candidate hypotheses that you have, you need data that can actually distinguish between them — and there can be lots of data that is equally or similarly well explained by any of these models, and then it doesn't matter how much data you have; you still haven't really gained new insights. And the other thing that you need is a process to expand the model space, to consider more and different models, and then selectively go out — and yes, you may need lots of data to distinguish these more esoteric or advanced models from simpler alternative models. But if that's not a directed process, but just a bottom-up "the data is going to tell me how the world works," I think that's not nearly as productive as it could be. So I think there is a lot of need for theory, and there's a lot of need for models. Model comparison, I think, is the right way to think about the scientific process — not necessarily hypothesis testing by itself.
[00:53:01] Speaker E: What I find really interesting about this idea — and I guess you're also playing into the direction of Tal Golan's controversial images, and experimental designs that really distinguish between different conditions — is that most of us are coming from this data-driven kind of perspective, and it kind of turns the whole approach upside down again. Because now, instead of saying, oh, I can just use the system identification approach and just test all of these models against each other, you actually have to go back and specifically collect data in order to distinguish individual models. And then you have to go back to the more traditional cycle of how science works, which is what I —
[00:53:45] Speaker D: Would call the right side up. Yeah, not upside down, right? I mean, this is direction you should go. And that's where I think machine learning and simulation, et cetera, can be really helpful to exactly find those experimental conditions that are actually going to generate new insights.
[00:54:04] Speaker F: Maybe I want to come back to Johannes's point about the lack of reflection. I think there's a lot of very careful science done in neuroscience, right? But I agree that maybe we all should reflect a little bit more, especially now that we have the possibility to record as much data as never before. And then, in the end, the most important thing is, like you said, that we reflect on the biases that exist before we do the experiment, right? We need to be aware of them before we do it, and try to maybe eliminate them or not. But I think reflection is definitely something all of us could use a little more of — at least I can tell that for myself. Sometimes we are trapped in our own little thing we do. And this is, I think, the best thing I take away from yesterday and today.
[00:54:56] Speaker E: 100% agree. To add one little thing to this: I think we also have to come together, because what we're seeing at the moment is that lots of different initiatives are starting, everyone collecting their own big data set, because people can collect these huge data sets now. And I think you could actually gain much, much more if people were really coming together, deciding which directions they should be exploring, and then making it a much more targeted exploration — rather than several labs competing, collecting the same kinds of data — instead saying, hey, okay, you go this direction, I go this direction, both directions are equally interesting at the moment, and then together we're actually going to figure out which direction is the right one.
[00:55:42] Speaker G: So at this point, I just want to put in a complaint about the academic system that we've set up, because it's not allowing this to happen. And I'm really happy about this workshop, because it's one of the rare exceptions: it has an unusual format that allows us to have this conversation in front of other people. But it's happening way too little. And that's also the fault of a system that only incentivizes the production of results, not insights.
[00:56:10] Speaker B: What is happening way too little — time for reflection?
[00:56:16] Speaker G: We had this conversation yesterday about how, especially as a junior researcher, you don't have time for this. And it should be part of your training to set aside time for this, because the most important thing is to learn how to ask the right questions. And that's going to save you a lot of time — save you... I don't know how you gain or save time, it's a bit of a weird concept — but it's going to save you a lot of time, let's say, further down the line, if you actually do something that has an impact.
So I hear this over and over again, that people say, oh yes, I'm interested in philosophy, I'll do that when I'm retired. And no, this is not the kind of philosophy you want to do. The point of philosophy — again, philosophy is a kind of theory that underlies your assumptions — is that it's supposed to do practical work, in that it informs what you're doing in your practice. And there is a lot of philosophy that doesn't do that. And so we need more targeted interactions between those philosophers that are useful for scientists — those philosophers that pay attention to what scientists are doing, who are a minority over there — and the people, the minority over here in this workshop, who are interested in these kinds of philosophical questions. And I think there really needs to be a synergy and a space. But what I'm saying is, it's really hard to create that space at the moment in an academic system that is not built for that.
[00:57:41] Speaker E: Is it that what you want people to do is use the insights that they've learned from philosophy to motivate their way of doing research? Or is it more that you want people to actually learn how to think in a structured way for approaching problems — in a similar way as a philosopher would dissect the theoretical way in which we are approaching things, the worldview that we're imposing upon our research?
[00:58:14] Speaker G: I mean, both, because one simple example is that the systems we're studying are extremely complex and underdetermined by the evidence. So there are often multiple completely valid ways of interpreting them. So it's not a competition of who has the better model and who wins out. Most of the time, innovation doesn't come from the mainstream of a field, but from fringe ideas that are not lunatic but are well argued. And we need to have more awareness of that. But if you try to get stuff published that's not in the mainstream — I don't know if you guys have this experience — it's very difficult. There's a lot of gatekeeping, because there's so much pressure on people to publish, and so on and so forth. But I could go on for days. If you have such a highly competitive system, it doesn't allow for these things to happen. And if you had more practitioners of science who are a bit more aware of these issues — that science may not really work the way they think it works, it's not quite as simple as that — then we could build a lot from the bottom up and create those spaces even in the current system.
[00:59:19] Speaker B: What Martin was saying earlier about needing more shared reflection, shared data, shared methods — in some sense, that's what we do have, because everything is open source now, and there are these initiatives and huge collaborations, like the International Brain Laboratory, I think. But the worry there is that there's still value in — it doesn't have to be fringe, but in what I think Tony Movshon calls kind of a cottage-industry approach, where you have these single researchers asking their own questions, generating their own data, doing their own experiments. Because when you come together, you're losing out on potential creativity, right? So just in terms of how science progresses, don't we need both?
[01:00:01] Speaker D: Yeah, I completely agree. And when I look at the field, I would love to see more diversity of different theories that then get tested. I think there's too much of an approach of going out to collect data to confirm one's own favorite theory.
And there I agree with Yogi. The structure of how the whole scientific process is implemented as a human system is not conducive to this, and I think we need to actively take measures to overcome that. I mentioned this a couple of times yesterday, and I really want to advertise again the CCN mechanism of generative adversarial collaborations, which encourage researchers who come from very different theories, schools of thought, et cetera, to get together, find a common language, and identify experiments that could actually tell apart the different perspectives, rather than continuing what they have been doing and trying to find evidence in favor of their own view.
[01:01:08] Speaker E: Exactly. That's exactly what I also meant, and I don't think it's the exact opposite, let's say, of these massive data collection approaches that are happening. Another really great example that's currently ongoing, and many of you have probably heard of it, is the adversarial collaboration in which people are now testing different theories of consciousness against each other, with, of course, all the debate around it and how it's been working out so far.
But hey, it has really moved the field forward by bringing people together to discuss the ways in which we can actually test these theories against each other. I'm actually very much looking forward to the second round of testing, because the first one seems to be pretty wild at the moment.
[01:01:58] Speaker G: Talk about an underdetermined field of science.
[01:02:01] Speaker F: Yeah.
[01:02:02] Speaker B: Ralph, it's taken you three years to write a short paper with someone from that adversarial collaboration, right? So debates are fun, right, but often they're not productive, in that people simply talk past each other.
And that's kind of my cynical outlook on the adversarial collaborations, that this is simply what's going to happen. But you find the opposite?
[01:02:30] Speaker D: Absolutely.
I'm part of two, and both of them, I think, have really moved us forward.
Looking at published papers is one way to evaluate the value or the contribution of these collaborations. But I think the much larger one is for the scientists involved, where it has clearly expanded everybody's perspective, at least in putting more probability mass on the other hypothesis. And going forward, even their own papers, ones not necessarily written with the other people, will use a language that both sides can understand.
I don't think we've talked past each other. I mean, it would have been much easier to write a paper if we had just put in our perspectives and essentially talked past each other. What has taken time is agreeing on one language. But the main reason it's taken so much time is this: we could presumably have written that paper in a few weeks if it were the only thing we had to do. But the way science is set up, this isn't the main thing in our everyday life. It's not what pays the bills, or what maintains our jobs, or gets us tenure. So it's clear that for everybody, right, it's not at the top of the priority list, and so necessarily it takes longer. I think it's actually a testament to how valuable it's been that we are still, after three years now, with the new semester, setting up weekly meetings again on Zoom, despite all the other demands on our time, to keep pushing this forward. And hopefully we'll have this published before Christmas.
[01:04:14] Speaker B: And as we all know, meetings always push things forward, right?
[01:04:19] Speaker D: But they're completely voluntary for all of us. Right. And to how many meetings do you go voluntarily if they're not productive?
[01:04:30] Speaker B: As few as I have to go to.
[01:04:31] Speaker D: Exactly. We all try that.
[01:04:34] Speaker B: We're just going to do a really quick roundup, finishing off. But we're going to start with a question from the audience if it's a good question. If it's not, we won't.
Okay, let me restate your question. So: there's no theory right now in neuroscience; we're pre-Newtonian.
If you didn't study what your mentor studied and you studied what you wanted to study, would you get to a first principle that would allow the development of a useful theory in neuroscience?
[01:05:07] Speaker G: But you mentioned Newton. This is exactly the kind of theory we don't need in the life sciences and in neuroscience, the sort of overarching theory of everything. And I would repeat again that theory doesn't need to be like that, right? And maybe I would provoke a little bit and say that cognitive science has too many theories: symbolism, connectionism, embodied and enactive approaches, stuff like that. It's not even clear whether they contradict each other or whether they're just different perspectives on the same thing. And you're right, the field is in a state where there is no decisive way of saying this is better than that. So it's maybe good to explore. But I would encourage you to open your mind towards what theory is, and to see that there are different frameworks and different models, and they are also theory, just not the kind of theory you would maybe expect from learning about the history of physics.
[01:06:02] Speaker H: Okay, you make me defend the history of physics, but I can take it down a similar route. So I think what you're saying is that we are pre-Keplerian.
There are empirical laws to be found that tell us something about general principles. And that search should happen whether you believe there will be a Newton of the blade of grass or not.
[01:06:33] Speaker B: But this is why physicists should not become neuroscientists, people, because you think in laws, right?
[01:06:39] Speaker H: Well, then there's another thing. So I think we shouldn't short-sell some of the achievements of theoretical and computational neuroscience. There is a theory of expansion recoding, and if you don't understand that, you cannot properly think about certain parts of the brain, like the dentate gyrus. And there's a theory of pattern completion. There's a kind of day-to-day folklore notion of pattern completion, but there's also a mathematical theory of pattern completion, and if you don't understand that, you don't know how to think about CA3 in the hippocampus. And we have a kind of.
[01:07:19] Speaker D: Question.
[01:07:20] Speaker H: ...theory of how systems consolidation works, that memory traces are shifted from one part of the brain to another. It's not a complete, coherent whole, and maybe it never will be. But I would really push back on the claim that there is no knowledge in neuroscience that has taken some form of theory.
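A quick illustration of what the mathematical theory of pattern completion just mentioned can look like: a Hopfield-style attractor network, a standard simplified model of CA3. This is a minimal sketch; the network size, number of stored patterns, and corruption level are assumptions made for the example, not anything specified in the discussion.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                      # binary (+/-1) neurons
patterns = rng.choice([-1, 1], size=(3, N))  # three stored memories

# Hebbian learning: sum of outer products, no self-connections
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

def complete(cue, sweeps=10):
    """Asynchronously update neurons until the state settles on an attractor."""
    s = cue.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Corrupt 20% of one stored pattern, then let the network clean it up
cue = patterns[0].copy()
cue[rng.choice(N, size=20, replace=False)] *= -1
recovered = complete(cue)
print("overlap with stored memory:", recovered @ patterns[0] / N)  # ~1.0
```

The Hebbian weights make each stored pattern an attractor, so a partial or noisy cue relaxes to the nearest memory; that dynamics is the formal content behind the everyday notion of pattern completion.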
[01:07:42] Speaker D: So I'm going to say something controversial too.
I think we may have something, right? And I think the closest to it may be something like neural sampling, based on the idea that the brain is a stochastic biophysical system.
So evolution has basically created, in the form of a brain, a stochastic system that then learns to shape its intrinsic stochasticity in such a way that it looks like it may be approximating something we would call sampling, from a more mathematical or machine learning perspective. Now, how close it really gets to that is an empirical question, and how far the idea takes us is an empirical question. But I think it might actually serve as a unifying idea for how stochastic neural activity comes about and how it can be interpreted functionally as performing a computation, if I may say that.
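As a rough illustration of that idea, here is a minimal sketch in which fluctuating activity is read as a Markov chain whose states are samples from a posterior over a latent stimulus. The generative model, the random-walk proposal, and every parameter below are assumptions made for the example; the hypothesis itself is about real circuit stochasticity, not this particular sampler.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed generative model: latent stimulus x ~ N(0, prior_var),
# noisy observation y ~ N(x, noise_var)
prior_var, noise_var = 4.0, 1.0
y = 2.0  # one sensory observation

def log_posterior(x):
    return -0.5 * x**2 / prior_var - 0.5 * (y - x)**2 / noise_var

# Random-walk Metropolis chain: each step is a new "activity state",
# and the fluctuating trajectory is read as posterior samples
x, samples = 0.0, []
for _ in range(20000):
    proposal = x + rng.normal(0.0, 0.5)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(x):
        x = proposal
    samples.append(x)

# Compare against the exact Gaussian posterior
post_var = 1.0 / (1.0 / prior_var + 1.0 / noise_var)
post_mean = post_var * y / noise_var
print(np.mean(samples[2000:]), "vs", post_mean)  # sample mean ~ 1.6
print(np.var(samples[2000:]), "vs", post_var)    # sample variance ~ 0.8
```

On this reading, trial-to-trial variability is not noise to average away but the signature of the computation: the distribution of activity states encodes the posterior.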
[01:08:54] Speaker C: You have a question? Yeah. So what I would like to ask is: does that basically come down from the idea that you can describe behavior as statistical inference? Because there's uncertainty in the world, and to navigate the world we have to perform probabilistic inference, in order to extract the right variables, recognize things the right way, and proceed the right way. And then that comes down to what individual neurons are doing in order to support such probabilistic behavior. Is that the line of thought?
[01:09:31] Speaker D: Exactly. I mean, the evolutionary forces will act on the behavior, the output of the system, and the closer the output of that system is to optimal, by whatever useful definition of what that means, the closer some parts of that system, maybe those near the sensory periphery, may be to what we would call statistically optimal inference.
[01:09:58] Speaker B: Okay, so we're up against time, but final thoughts. Again, the title is how can machine learning be used to generate insights and theories in neuroscience? So my final question, and Yogi, you can choose to answer this if you want to, is especially for you two in the middle.
How can machine learning be improved to help generate insights and theories? What would you like to see? What's holding you back in terms of machine learning models, if anything?
[01:10:28] Speaker F: So I think one thing, from my research specifically, is more multimodal models, so that you predict neuronal responses not only based on one modality, like the visual input, but also include other parameters that drive the activity, like we discussed before with the variance that we can explain. I think that would be one important direction.
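One way to picture what such a multimodal encoding model could look like, sketched with made-up data: predict a neuron's response from visual features plus behavioral covariates, and compare explained variance with and without the extra modality. The use of scikit-learn's RidgeCV and all of the feature names, weights, and noise levels here are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(3)
T = 500                                    # time points

# Hypothetical predictors: visual features plus behavioral covariates
visual = rng.normal(size=(T, 50))          # e.g., features from a vision model
behavior = rng.normal(size=(T, 2))         # e.g., running speed, pupil size

# Simulated neuron driven by both modalities plus noise
y = visual @ rng.normal(size=50) + behavior @ np.array([1.5, -1.0]) \
    + rng.normal(scale=2.0, size=T)

both = np.hstack([visual, behavior])
vis_model = RidgeCV().fit(visual[:400], y[:400])
multi_model = RidgeCV().fit(both[:400], y[:400])

# Held-out R^2 should improve when the non-visual drivers are included
print("visual-only R^2:", vis_model.score(visual[400:], y[400:]))
print("multimodal  R^2:", multi_model.score(both[400:], y[400:]))
```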
[01:10:53] Speaker D: Yeah, from my perspective: machine learning models that are a little more closely modeled on the aspects of the brain that I happen to think are important, beyond the distributed and possibly hierarchical aspects that are already incorporated.
It's the stochastic nature of neural activity and the spiking nature of neural activity. Obviously there is a part of machine learning that already looks at spiking neural networks and recurrent neural networks, and I think more work in that direction, on how to train them and how to do inference in them, or use them for inference, would be really useful.
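A minimal sketch putting the two ingredients named here, spiking and stochasticity, into one model: a leaky integrate-and-fire neuron driven by noisy input. The time constant, threshold, and drive level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

dt, T = 1e-3, 1.0                 # 1 ms steps, 1 s of simulated time
tau, v_rest = 20e-3, 0.0          # membrane time constant, resting potential
v_thresh, v_reset = 1.0, 0.0      # spike threshold and post-spike reset

v, spike_times = v_rest, []
for step in range(int(T / dt)):
    I = 1.2 + 0.5 * rng.normal()           # mean drive plus stochastic input
    v += dt / tau * (v_rest - v + I)       # leaky integration of the input
    if v >= v_thresh:                      # threshold crossing emits a spike
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes, rate ~ {len(spike_times) / T:.0f} Hz")
```

Training such models is where many of the open questions he mentions live, since the spike nonlinearity is not differentiable; surrogate-gradient methods are one commonly used workaround.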
[01:11:37] Speaker B: Fred, do you have anything to add?
[01:11:39] Speaker H: Yeah, well, I think machine learning should be used well, and I have a specific wish. So one thing that I believe is really low-hanging fruit for gaining insight is the direction of comparative work: human brain representations, non-human primate brain representations, other model animals. In past comparative work, my feeling is that the mainstream has been to look for the general principles. But there's so much in these comparisons about key decisions in the design of neural systems. So from a brain-evolution perspective, this should be low-hanging fruit that can be really informative about the strengths and the advantages of different experimental settings, and that gives us insight into the deep history of this machine that we're all using to think.
[01:12:34] Speaker B: Okay, I don't want to keep us any longer. So thank you to the panel again for entertaining the discussion, and thanks for coming, everyone.
[01:12:58] Speaker A: I alone produce Brain Inspired. If you value this podcast, consider supporting it through Patreon to access full versions of all the episodes and to join our Discord community. Or if you want to learn more about the intersection of neuroscience and AI, consider signing up for my online course, NeuroAI: The Quest to Explain Intelligence. Go to braininspired.co to learn more. To get in touch with me, email paul@braininspired.co. You're hearing music by The New Year. Find them at thenewyear.net.
[01:13:26] Speaker B: Thank you.
[01:13:27] Speaker A: Thank you for your support. See you next time.