BI 220 Michael Breakspear and Mac Shine: Dynamic Systems from Neurons to Brains

Brain Inspired | Sep 10, 2025 | 1:25:05
Show Notes

Support the show to get full episodes, full archive, and join the Discord community.

The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.

Read more about our partnership: https://www.thetransmitter.org/partners/

Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released: https://www.thetransmitter.org/newsletters/


What changes and what stays the same as you scale from single neurons up to local populations of neurons up to whole brains? How do tuning parameters, like the gain in some neural populations, affect the dynamical and computational properties of the rest of the system?

Those are the main questions my guests today discuss. Michael Breakspear is a professor of systems neuroscience and runs the Systems Neuroscience Group at the University of Newcastle in Australia. Mac Shine is back; he was here a few years ago. Mac runs the Shine Lab at the University of Sydney in Australia.

Michael and Mac have been collaborating on the questions I mentioned above, using a systems approach to studying brains and cognition. The short summary of what they discovered in their first collaboration is that turning the gain up or down across broad networks of neurons in the brain affects integration (working together) and segregation (working apart). They map this gain modulation onto the ascending arousal pathway, in which the locus coeruleus projects widely throughout the brain, distributing noradrenaline. At a certain sweet spot of gain, integration and segregation are balanced near a bifurcation point, near criticality, which maximizes properties that are good for cognition.

In their recent collaboration, they used a coarse graining procedure inspired by physics to study the collective dynamics of various sizes of neural populations, going from single neurons to large populations of neurons. Here they found that, despite different coding properties at different scales, there are also scale-free properties that suggest neural populations of all sizes, from single neurons to brains, can do cognitive work useful for the organism. And they found this property is conserved across many different species, suggesting it's a universal principle of brain dynamics in general.

So we discuss all that, but to get there we talk about what a systems approach to neuroscience is, how systems neuroscience has changed over the years, and how it has inspired the questions Michael and Mac ask.

0:00 - Intro
4:28 - Neuroscience vs neurobiology
8:01 - Systems approach
26:52 - Physics for neuroscience
33:15 - Gain and bifurcation: earliest collaboration
55:32 - Multiscale organization
1:17:54 - Roadblocks


Episode Transcript

[00:00:03] Speaker A: And I hope that we are starting to put down some of the principles of how the brain works that aren't just a trend, but are actually going to, you know, stand the force of time. Like some of the things that were achieved in physics in the early 1900s. [00:00:24] Speaker B: The dynamical systems to me was such a bedrock and it is now of all my work, because it says something a little deeper, which is, what could a neuron even do or a population of neurons do, given the information that they have or what they want to do next or something? You can kind of frame problems from a neuron's perspective. And I found that so, so much fun to think about. [00:00:47] Speaker A: Plasticity plus criticality in very general circumstances can lead to self organized criticality in which you no longer need to tune anything because the system will self tune. [00:01:02] Speaker B: So I think we need to kind of keep that humility as we go through and realize it's a really difficult thing to go across scales. But we, as Michael said, we need to take the neurobiological detail and the computational 'so what?' and try to make them connect in as many ways as we can. So it's exciting to me. [00:01:26] Speaker C: This is Brain Inspired, powered by The Transmitter. What changes and what stays the same as you scale from single neurons up to local populations of neurons, up to whole brains? How do tuning parameters, like the gain in some neural populations, affect the dynamical and computational properties of the rest of the system? Hello everyone. I am Paul. This is Brain Inspired. And those are the main questions my guests today discuss. Michael Breakspear is a professor of systems neuroscience and runs the Systems Neuroscience Group at the University of Newcastle in Australia. And Mac Shine is back. He was here a few years ago. Mac runs the Shine Lab at the University of Sydney in Australia. Mac and Michael have been collaborating on the questions I mentioned, using a systems approach to studying brains and cognition. The short summary of what they discovered in an early collaboration is that turning the gain up or down across broad networks of neurons in the brain affects integration, working together, and segregation, working apart. And they map this gain modulation onto the ascending arousal pathway, in which the locus coeruleus projects widely throughout the brain, distributing noradrenaline. And at a certain sweet spot of gain, integration and segregation are balanced near a bifurcation point, near criticality, which maximizes properties that are good for cognition, which you know about if you've been listening to this podcast over the past few episodes. In their more recent collaboration, they used a coarse graining procedure inspired by physics to study the collective dynamics of various sizes of neural populations, going from single neurons to large populations of neurons. Here they found that despite different coding properties at different scales, there are also scale free properties that suggest that neural populations of all sizes, from single neurons to whole brains, can do cognitive stuff that's useful for the organism. And they found that this is a conserved property across many different species, which suggests it's a universal principle of sorts of brain dynamics in general.
So we discuss all that, but to get there we talk about what a systems approach to neuroscience is, how systems neuroscience has changed over the years and how it has inspired the questions Michael and Mac ask. Thank you for being here. You'll find links to Michael and Mac's information and many of the papers that we discuss in this episode in the show notes, where I also link to a few good reviews for kind of an overview of this kind of approach to neuroscience. And at braininspired.co, you can also find out how to support this show through Patreon. Thank you, Patreon supporters. All right, here are Michael and Mac. You both are in the business, it seems, and you can correct me if I'm wrong, of studying brains and cognition across scales, across levels. And that is a systems approach. However, when people ask me this, it's hard; I don't like to be pinned down to one certain phrase. Like when someone asks me what I do, I generally say I'm a neuroscientist, but something feels unsatisfactory about that. However, Mac, last time, actually the episode title, which was three and a half years ago, was systems neurobiology, because that's what you say or have said that you do. And Michael, at least on your website, it says you do systems neuroscience. And so I'm wondering what the difference is between systems neuroscience and systems neurobiology. Michael, you want to take a crack at it first? [00:05:18] Speaker A: Yeah, thanks, Paul. I mean, I think systems neuroscience is just a very broad umbrella term that, yeah, you can start with detailed neurobiological descriptions of the brain and start stripping away some of the details. And I mean, in our group we're searching for unifying principles and underlying theoretical frameworks for describing how the brain works. And so sometimes we strip away a lot of detail and we don't leave that much of the neurobiology left. So it would seem to be doing a disservice to use the term neurobiology if we're studying very, very simple mathematical systems that somehow relate back to what we think is happening in the brain. So that degree of abstraction and that search for underlying principles that might relate something that we're looking at in the brain to other systems that aren't the brain, living systems or even complex behavior in non-living systems. So maybe that lack of bounds is why I still use this term systems neuroscience, 25 years after starting to use it. [00:06:35] Speaker C: Mac, it sounds like what Michael does just sort of encompasses what you do and then some more. Do you agree with how he describes it? [00:06:43] Speaker B: Yeah, I think that's how I think about it. I think of systems neuroscience as this sort of really big broad space that defines a sort of perspective that you would take on the system, the kind of ways that you'd like to frame your questions because of the ways that you'd like your answers to look. And I think the reason that I sort of pushed on the neurobiology part back in the day, I probably do so less now that I've kind of come to learn just how broad of a church systems neuroscience is, was really to kind of differentiate the fact that I just at the end of the day really cared so much more about the kind of biological implementation level details than the math side, which I wasn't coming from. And, I think, a little bit of imposter syndrome:
I didn't want to kind of come across as a kind of card-carrying computational neuroscientist with a background in applied mathematics or physics, but rather to say I want to take really seriously the kind of complexity of all the biology and then think how on earth could that give rise to something as interesting as cognition or attention or consciousness or something, but really starting from that point. So to sort of differentiate it from other groups that I saw that were really advancing the computational side, I wanted to sort of push on that. But I think in retrospect it is, I think, all encompassed by this term systems neuroscience in general. [00:07:59] Speaker C: So what is the systems aspect? I mean, Mac, you and I were just conversing a couple weeks ago and you said you really like to, I think you said geek out maybe, I'm not sure if that's a phrase you would use, but you know, on the neurophysiological details and the biophysical details and the cell types and all that jazz. And in that sense it's almost like a bottom up sort of approach. But that belies the actual systems approach. So then there's neuroscience, neurobiology, but then both of them are preceded by systems. So what's special about a systems approach relative to, and this is for both of you, relative to like the historical approaches in neuroscience and neurobiology? [00:08:39] Speaker A: Well, I think the biology matters, and maybe, you know, the nice tension in the field, with the neurobiology versus just the pure systems neuroscience, is that the implementation does matter. I mean, you might get underlying universal principles like self organization, complex dynamics, multiscale coordination. But when you look at it in the brain, there are specific differences and additional levels of complexity that matter, like neuromodulation on different timescales, the way the actual substrate of the brain can change in response to its activity because of long feedback circuits. These do matter and they differentiate the brain as a system from other systems. So I think there's a nice tension there that sometimes the implementation is really important and pulls the framework of what you're looking at back towards the specifics of the brain and even that circuit that you're looking at. And in other instances you can strip away some of that detail and say, hey, in this way the brain actually looks a lot like other self organizing systems. And that's, I think, been part of the creative use of collaboration with Mac when we're having these discussions. And my tendency to pull away into a theoretical framework and Mac's sort of groundedness in bringing back all the beautiful neurobiology. [00:10:19] Speaker C: Well, you never think, oh, I wish he would leave off talking about the 1700 neuron cell types. What does it matter? I want to abstract them away. [00:10:31] Speaker B: All the time. All the time. [00:10:34] Speaker A: No, I mean, yeah, sometimes there is. When we're writing a paper together and we're editing, doing this iterative process of editing, there'll be more detail in it after Mac edits, and then sometimes less detail, and then we converge on some sweet spot. But Mac's already thinking of the brain from a systems perspective. Whereas, you know, sometimes you find yourself in a forum and people are like, well, what about the gap junctions or what about this cell type?
And that's sort of when the dialogue begins to fracture, when there's a sort of insistence that a particular piece of detail is really important. That's where the systems perspective sort of disappears or is no longer that useful. But what we've seen over the last few decades is that the systems approach is finding more and more applications and showing deeper and deeper insights. [00:11:39] Speaker C: I think, yeah, it has been a few decades now. Mac, was I interrupting you? Were you about to add something? [00:11:45] Speaker B: Oh, no, I was just going to sort of echo what Michael said. And I think it's really fun to kind of go back historically and read some of the really kind of impactful papers that sort of set us on the particular course that we're on now. Like, I just so happened to be reading the old Hopfield and Tank paper from 1986, when I was four years old, the other day. And it's a beautiful paper, really kind of one of the sort of starting points of much sort of modern deep learning, neural network, kind of parallel distributed processing ideas. But Hopfield and Tank say in their paper, it's amazing how far you can get by just ignoring all the details of the neurons. And so then they go off and have this really interesting hypothesis that led to this idea of parallel distributed processing. But it's also not clear that all those details don't matter too. Right. It could be that the gap junctions end up doing something computationally that we don't understand yet that will end up being really beneficial, but we don't know. This is why I really like to push on the neurobiology, to say there's all this hard-won stuff, all this broken symmetry down at the level of biology, and some of it will end up mattering for the computational outcomes that the brain's capable of and some of it won't. And right now we've gotten really far with one subset of it, which is really, really cool and interesting. But let's go and see what else we can get from the rest of it. And maybe there'll be some stuff to learn there. Odds on there will be. Because evolution and natural selection really want to find good answers. Right? It's like water finding the cracks on a pavement. It's going to find the solutions that work. So I think at the end of the day, taking that historical perspective, but as Michael said, not just saying, oh look, I found a little squiggly wiggle on the brain, maybe it works. Think to yourself, what on earth could that do for the system and how could it benefit an entire organism wandering around its world? And I think the thing that's really exciting to me is that we're now starting to get these kind of paths between the scales that weren't there for a long time. Right? You can now say, look, here's one hypothesis about what a chemical like noradrenaline could do. Let's go see if we can make that go up the scales and make some predictions and test them. But it's not to say that it's the perfectly correct way. The Ptolemy model of the solar system was really popular for a long time before we realized that we were thinking about the solar system the wrong way. We had this sort of Earth-centric rather than a solar-system-centric perspective. So I think we need to kind of keep that humility as we go through and realize it's a really difficult thing to go across scales. But we, as Michael said, we need to take the neurobiological detail and the computational 'so what?' and try to make them connect in as many ways as we can. So it's exciting to me.
[00:14:20] Speaker C: Yeah, well, you were actually alluding, with the noradrenaline, to one of the first projects, or maybe the first project, that you guys had together. And I want to get to that and sort of reflect on that. But what Michael was saying earlier sounded like it's also really dependent on what perspective you're taking and what questions you're asking, whether you're going to add the details from gap junctions or not, or whether you're gonna consider them, like, within the system. Because it depends on, like, the timescales you're asking about. Well, I mean, if you're going across scales, you have to somehow look at all of it. And there need to be systems properties. But it does seem, like the noradrenaline thing that you were just mentioning, Mac, in that particular system, yes, it eventually goes across scales, but it begins by studying noradrenaline. So, I don't know, does it depend on the perspective that you're taking for the particular project you're working on? I guess that's the question. [00:15:22] Speaker B: I mean, I think undoubtedly that's a lesson I learned from Michael. I still vividly remember a talk he gave at Human Brain Mapping about the sort of observational lens that we take as a scientist, and that if we don't realize that we're looking through a lens, we can think that the way we're looking at the system is just the way that the system is. But if we step back from that and take ourselves into the picture, then all of a sudden we're like, oh, actually, the whole world isn't just 40 Hz oscillations synchronizing together. That's just one way that you have to look at the system. And that might be how it looks through that lens. But if we go look through a different lens, like through the BOLD signal, or if we go all the way down scales and we're looking at electron microscopy, it's all the same system. We're just looking at it in these different ways. So I think that's a really important step for us to take as a field, to kind of step out of our epistemological clothes into a more ontological space where we can say, what is the actual system underneath all of this? And how am I bringing something to the table? But how is it different? Or how does it make a different prediction than what you would have made? And how can we reconcile those? To me, that's a really important step for us as a sort of maturing field in neuroscience. [00:16:27] Speaker C: Michael, you've been doing this for a couple decades, right? And I mean, you know, you both mentioned, alluded to, you know, the early systems papers, right, that are a few decades old. But my historical context is that, probably because of the reductionist molecular biology revolution, the mainstream in neuroscience, yes, the systems approach has kind of always been there, but it's maybe kind of waxed and waned a little bit. But I grew up in a very reductionist approach to neuroscience, and that seemed like the mainstream. Is that historically, like, thinking back, Michael, has it waxed and waned, this kind of systems approach, in popularity? [00:17:15] Speaker A: I'd say more waxing than waning. I mean, you're right. I did medicine in the, it sounds like I'm very old, late 80s, early 90s. And everything that we were taught in neurophysiology and physiology and neuroscience was this reductionist approach.
Yeah, at that stage, you know, the single gene leading to the single knockout neuron, leading to the phenotype, was still very much the dominant way of looking at neural disorders, including schizophrenia. And it was also the time of the so-called grand, what was it? The grandfather cell. [00:17:52] Speaker C: Grandmother. Grandmother cell. [00:17:54] Speaker A: Sorry. A very particular person in your life. And so when I started doing my PhD in '99, I basically was in physics and maths. At that stage I wasn't really interacting a lot with the neuroscience community because I wanted to learn all this beautiful nonlinear dynamics. And at that stage, it was apparent already that there were some pioneers who had been applying this for decades in neuroscience. People like Walter Freeman, Paul Nunez, many others. I worked with Peter Robinson and met Viktor Jirsa, who was starting to do neural field theory. So this was an active field, but it was published in journals like Biological Cybernetics, physics journals. There'd sort of been the rise and fall of chaos theory in a very sort of cartoonish manner. When we started then applying our mathematical tools to neuroscience, if it was time series analysis, you could get it published. But if it was a model... I remember sending a paper in to the Journal of Neuroscience, probably 2005 or something, and one of the reviewers saying, look, this looks like an interesting paper. I don't find any fault in it, but this type of paper just does not belong in the Journal of Neuroscience. [00:19:18] Speaker B: What year was this? [00:19:20] Speaker A: I think it was about 2005. Okay, so there was still a lot of gatekeeping around the type of modeling material, or the type of material, that could get into a mainstream neuroscience journal. [00:19:33] Speaker C: Now you can't publish without having a model. [00:19:36] Speaker A: Yeah. PLOS Computational Biology. Yeah. And so it was hard to get the mainstream neuroscience field to engage in what we were doing. And we started doing epilepsy neural field modeling. And bit by bit, through presenting the work internationally and making connections, slowly these types of papers started to get published: Journal of Neuroscience, Human Brain Mapping, NeuroImage, Cerebral Cortex. And then there was a sort of watershed moment in the mid-2010s, like 2015. And from then on, people like Mac and others have really pioneered what the field can do and come into mainstream neuroscience. That's my recollection of it, anyway. [00:20:41] Speaker C: Mac, are you part of mainstream neuroscience? [00:20:43] Speaker A: Now, publishing in mainstream neuroscience, or at least pioneering work with that lens? We talk about lenses and apertures to look at the brain, but there's also a lens and an aperture to look at neuroscience as a process. And that's kind of what I'm talking about. [00:21:03] Speaker B: It was really interesting, Paul, sort of growing up in the human brain imaging community, where Michael was already a really well established and respected kind of person in that community. And I think Michael, by your own admission, you were, you know, kind of a little bit different than a lot of the other groups. Right. There'd be groups out there doing whole brain imaging. And, you know, look at this map that I found. I found this network of the brain or something like that. And it was, you know, anti-correlated with that other one.
And then you'd go to Michael's tutorials and he'd be like, okay, here's a bifurcation in a little simple toy model of a neuron. Now what happens if we scale it up? And so he was really coming from this completely different perspective. And I remember when I first saw you, actually, Michael, it was my first ever Human Brain Mapping, at Quebec City. So this is really way back in the day. And it was an education course again. And it was just overwhelming. Like, as someone who came from a background in medicine as well, we'd been taught all these really beautiful stories about physiology in different organ systems, like how cells in the heart can combine together to form a pump at a sort of systems level. And you'd go to the brain and you wouldn't find those same stories. But in part, it's because it's so complicated. Neurons are this incredibly difficult mathematical object on their own, and then when you put them together into groups, they do even more complicated stuff. And then even that's an approximation of the actual complexity you see at the whole systems level as it's interacting with the world. And, you know, it's just this sort of bewilderment that you have when you go, wow, how is this thing that I'm scanning in an MRI scanner, that looks, you know, like this, related to this thing that's completely different? It feels like different languages. And I think something I've always admired about you, Michael, and why I was so excited to start working together back in the day, was that you were sort of fighting that fight for that community. Right, this community. You could see how it could benefit from that constraint that was imposed by the reality of a neuron or neural systems as they were, but that also had this beautiful access to the whole brain, in a way that a lot of other communities didn't have. And I think it's that connection between those that's so exciting. The new scientists that are coming to the field now, you can now come in with a mathematics background or a physics background and immediately get shown four or five just fascinating questions that you can come in and try to solve. And so now you don't have to go and stare in a telescope at some binary star system that's wiggling around each other in a weird way off in the distance. You can just look at the neural data right in front of you and try to see if you can apply principles from your training to that problem and help solve it. So I think the fact that that's been set up by this tension that's been held onto, rather than just, like, the fields separating, they've tried to come together. And I think you really were a pioneer on that front, Michael. To me, that's what's so exciting about this sort of next generation that we have in front of us, these new students. [00:23:50] Speaker A: Yeah, I think there's a really good analogy there, Mac. If you look at the field now with Neuropixels and other wide field. [00:23:58] Speaker B: Whole. [00:23:59] Speaker A: Whole field calcium imaging and optical imaging, and this is a paper that Mac and I recently published as well, with Brandon Munn, you know, the complexity is overwhelming. So you need whatever tools you can avail yourself of. And you know, PCA is going to get you a little bit of the way. But if you really want to understand the dynamics, you need to bring in the tools that have been used by physicists to understand other complex systems: renormalization tools, which were used in this paper.
Bifurcations, phase transitions, lots of beautiful methods that are available. [00:24:40] Speaker B: Network science and graph theory. [00:24:42] Speaker A: Yeah, yeah. Without those, the data itself is so complex. And I think this is a good analogy for the brain, that if the brain was really as complex and complicated as all of its components looked at through a microscope would suggest, you know, we wouldn't be sitting here having a conversation, and as three different brains, making sense of what each other is about to say and thinking and the narrative, and moving with fairly low dimensional arcs through space. You know, our motor system has to collapse all these bewildering degrees of freedom. And so one of the really important principles from physics is the notion of the enslaving principle and how many systems with many degrees of freedom can contract down or collapse onto low dimensional manifolds and be enslaved by the order parameters on those manifolds. So it's sort of like reverse causation, where now you've got the mass action of many components on a low dimensional manifold. And the tiny little systems, the tiny little guys, the neurons, can't fire independently anymore. They're trapped in this low dimensional manifold. So their firing rate can go up and down slowly, but it's really enslaved by what's happening around them. And through this massive collapse of the degrees of freedom, I mean, we're still maybe left with seven or eight or nine degrees of freedom. I mean, it's very hard to imagine that in your mind; even 4 degrees, a 4-dimensional space, is pretty bewildering. But yeah, that's kind of what's required for humans to converse and to make sense of each other and to make sense of other brains. And if that's what the brain is doing, collapsing down its degrees of freedom to make sense of other brains and other agents in the world, then that's a sensible method to use in the laboratory. So there's a nice sort of hermeneutical principle there. [00:26:51] Speaker C: I think you just said a lot, and I was sort of furiously taking notes because I wanted to interject and ask questions about almost everything that you just said. One of the things that came to mind as you were talking: so the physics inspiration has been very successful. I mean, you've used it, right? From dynamical systems theory, essentially, and what you called studying complex systems. But it made me wonder, and this is sort of a naive question, because I love the dynamical systems theory stuff too. I love making things low dimensional, I love manifolds, all of that stuff. But then I thought, you know, there's this huge influx of physicists into neuroscience, and you know the cartoon where it's, you know, why are you looking for the keys under the light post? It's because that's where the light is. I wonder, and I know right now it's a burgeoning time and there's a lot of influx and inspiration from physics to study the complexity of brains and brain functioning, if in a sense there is this: well, this is what we've done in physics, let's apply it to this other system. And it's sort of analogous, like, that's where the light is, in physics. Right. I'm not trying to insult anyone, but I'm wondering if that even makes sense. Right.
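To make the enslaving-and-manifolds picture Michael describes above concrete, here is a minimal sketch, not from the episode or either lab's code, with purely illustrative parameter values: a population of noisy Kuramoto phase oscillators is coupled through its own mean field, and PCA on the recorded activity shows the variance collapsing onto a few collective dimensions once the coupling is strong enough to enslave the individual units.

```python
import numpy as np

# Minimal sketch (illustrative parameters, not from the episode): N noisy
# Kuramoto phase oscillators coupled through their own mean field. With
# strong coupling the collective mode "enslaves" the units, and PCA on
# the recorded activity concentrates almost all variance in a few
# dimensions.
rng = np.random.default_rng(0)
N, T, dt = 200, 4000, 0.01
omega = rng.normal(1.0, 0.1, N)            # heterogeneous frequencies
K = 4.0                                    # coupling strength
theta = rng.uniform(0, 2 * np.pi, N)
X = np.empty((T, N))
for t in range(T):
    z = np.mean(np.exp(1j * theta))        # complex order parameter
    r, psi = np.abs(z), np.angle(z)
    theta += dt * (omega + K * r * np.sin(psi - theta)) \
             + 0.05 * np.sqrt(dt) * rng.standard_normal(N)
    X[t] = np.sin(theta)                   # what an electrode might see

# PCA via SVD: fraction of variance captured by the leading components.
s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
var = s**2 / np.sum(s**2)
print("variance in top 3 components:", round(float(var[:3].sum()), 3))
```

Dropping K toward zero desynchronizes the units and spreads the variance back across many components, one crude way to read the integration-segregation axis that comes up later in the episode.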
The physics approaches are all flashlights or torches or different types of light emitting objects that all have their own sort of challenging lens with them. So one of them is a laser pointer, the other one's a kind of big waffle light. And your job is to then go around looking in the space to see whether or not any of those particular lenses are helpful for you to find the keys. I think the drunk looking for the keys under the spotlight to me is a little bit more apt to think about the way that in neuroscience we've been looking at parts of the system and assuming that the whole thing is in there. [00:28:48] Speaker C: Well, that's the obvious example. Yeah, I mean that's. [00:28:50] Speaker B: Yeah, yeah. [00:28:52] Speaker C: Sorry, I didn't mean to interrupt you, but yeah, I. [00:28:54] Speaker A: Not at all. [00:28:55] Speaker B: But I think we still, it's hard not to do it right, because end up if we are as pedantic as we need to be and then everything is just overly defined and hyper precise. Like the classic example is we found examples of this, this and this. And then you have to go dot dot dot IN mice. Right. And another one that I love is this idea of calling a neuron that's involved in pain. A pain neuron. Yeah, well, pain is a, is a concept that we use to describe how a person, an agent, feels. And it's a very, very complex, coarse grained description of that state. And that neuron might very well be involved causally in that process. But I don't think if you took that neuron out and put it in a dish that there'd be any pain anywhere near it. It's just part of a big complex system and we're not, we're not going to hold each other accountable to being that pedantic. But we have to be careful, because otherwise we start looking for a function in a microcosm of the system, of which it's not really responsible in the way that we want it to be. So now all of a sudden, as scientists, we've already got hard enough jobs going across all these scales and learning all these technologies. Now we've got to be philosophers too. And I think it just suggests that we need to be really humble and bring a lot of different perspectives onto problems and be open to the idea that neuroscience is still really young. We need to learn how to think about the system and ask the best questions of it. [00:30:09] Speaker A: I think Mac answered that very nicely. But physics itself is not static. It's not a set of tools or one illuminating light. And it's interesting to see, as physicists find their place in the neuroscience world, how the questions and challenges of the brain are also motivating new types of physics and new types of mathematics, particularly this interplay of stochastic high dimensional systems and low dimensional systems. So we know the brain certainly not evolving on a very low dimensional chaotic like manifold. There is a lot of noise and sometimes the noise is really important and this high dimensional noise gets in and explodes what the brain's doing and that's really important as well. So there's quite a lot of interesting stuff happening about stochastic dynamics, the interplay of high dimensional systems and low dimensional systems. And that's partly being motivated by the questions that the brain and field of neuroscience is asking. But I do think we're in a pretty golden period at the moment. We've got the beautiful tools of physics, we've got wonderful new ways of measuring the brain, you know, neuroimaging behavioral neuroscience, and of course these whole field recordings. 
And, you know, then we've got AI. What's really interesting in AI is these very high dimensional deep neural networks are doing brain-like things, and at the heart of them, we don't really know what, why, or how they're doing it. And there seems to be interesting dynamics going on in deep neural networks, like phase transitions and, you know, latent manifolds and dimension reduction and all this stuff, which is brain-like and maybe not what the brain's doing. But it's also very interesting for physicists. We just saw a Simons Foundation grant last week on the physics of artificial intelligence. But that's a very fertile area to be working in. And I hope that we are starting to put down some of the principles of how the brain works that aren't just a trend, but are actually going to stand the force of time. Like some of the things that were achieved in physics in the early 1900s. And we could mention a few of those and explore a few of those. None of them are watertight. But, yeah, I think we're in a pretty good period at the moment, and I'm still quite young, thank you very much. And I'm excited about the future as well. I'm excited about my own future in neuroscience, so. Yeah. [00:33:14] Speaker C: All right. So Michael alluded to this most recent collaboration that you guys are a part of, 2024, the Munn et al. paper, which looks at multiscale organization across multiple different species. And I want to come to that. But let's see, that's 2024. Your first collaboration was published in 2018, which was six years before. And this was the noradrenaline work, basically, that Mac was mentioning earlier. And, Mac, you had told me that that was your first foray into that computational world that now you're an expert in, of course. [00:33:55] Speaker B: Whatever that means. [00:33:56] Speaker C: Yeah, so, but that kind of started your collaboration. Right? So how did that come about? And how have you guys sort of weaved in and out since then? [00:34:06] Speaker B: Yeah, it's a great question. So this sort of takes me back to my postdoctoral years. I was at Stanford University working with Russ Poldrack, who was just a fantastic mentor for a kind of young scientific mind. He really kind of showed me how to do modern science at a really high level. He's a big proponent of open science practices and working in a kind of really community effort to make the science as a whole better. And I have to give Russ a lot of credit here, because we had essentially been working on a purely neuroimaging project together. I was interested in dynamics in brains and dynamic network perspectives. So people had been applying things like graph theory to neuroimaging data. Michael had some really beautiful pioneering work in our space. Again, the Honey et al. paper was a real groundbreaker there and really opened up our eyes to the fact that, the patterns that we've been looking at in the neuroimaging, we've been taking these static snapshots of something like 10 minutes of data and describing those static snapshots. But if you start to go and look down at subsections of the data, it actually looks quite different. And people had just started really kind of pulling back the curtain there a little.
And so we did some work there, applied graph theoretical tools to this data, found some really fun patterns in the data that kind of fluctuate in these different extremes, and spent, God, the better part of a year really just scratching our heads trying to think what on earth could be causing these fluctuations in the graph signature of the brain. And in a roundabout way, we kind of came across this hypothesis that it was the ascending arousal system in the brainstem, sending these projections up to the whole brain, dousing them in neurochemicals, that could then change how well the two different areas could communicate in sort of neural terms. And we got to this hypothesis, we're like, oh, cool, this is really interesting. But as soon as you get to that point, you think, well, what I've just done is describe the system. I'd really like to causally interact with it. And back then, opto-fMRI was really just starting. No one at the time, that we knew of, had done stimulation of the locus coeruleus, the main noradrenergic hub in the brain. There was a group at Stanford that had done some cool stuff on the zona incerta, but nothing was quite there. The other two options we had: one was to go and do pharmacological fMRI. We ended up working with Sander Nieuwenhuis on a really fun project where people were given atomoxetine, which is a noradrenaline reuptake inhibitor, and you can look at what happens in the brain with that. But the other idea was to go into computational modeling, where now you're much more abstract, so you have to make a lot of assumptions, but you have direct control over exactly what you do in the system. And I talked with Russ and said, look, I'd really like to get into this. There was this really cool technology, this tool out there called The Virtual Brain, which was created by Viktor Jirsa and colleagues, Petra Ritter, and was out there and available. And it's kind of a bit of easy access for someone without the mathematical background. You can kind of immediately download it, start playing around with all the different types of models that you care about. But it's also just so overwhelming in terms of all the choices you have to make. And so I played around with a little model. Russ and I had made one where we'd swept a parameter that we thought related to noradrenaline, this sort of slope of the sigmoid transfer function. But the first thing we learned together was like, man, if we touch this one parameter, everything changes. And then we touch this other one, and nothing changes, and then all of a sudden it changes. We're like, whoa, whoa, whoa, whoa, whoa, pump the brakes. We need to get someone in who can help us do this at a high level. So Russ, I think, in his infinite wisdom, suggested that I reach out to Michael, who at that time I was terrified to speak to, because I used to really put Michael up on a pedestal. Since we've had a few beers together, the pedestal's now diminished slightly, as I'm sure mine has as well. But it was such a great idea. Russ basically said, you're about to head back to Australia. Go link up with Michael. Let's do this properly. Michael had a really talented grad student, Matt Aburn, who jumped on the task, took all the code, redid it, we built it up again. It was really my first foray into this space. We used what's called a FitzHugh-Nagumo model, which is a simplified reduction of a Hodgkin-Huxley model.
So it's got the elements of neurons in it, but it's kind of simple enough that you can still play with a couple of parameters. And it was beautiful. It was just this thing that you could control. These super complicated equations, tiny changes that had interpretable outputs. It was just this completely different world to me. And it really opened my eyes to this access, to this intuition about what's happening deep down at the micro scale, in this case probably about the meso scale, where you can now say something interesting about what's happening. And it was a completely different language than the one that I'd learned doing neuroimaging, even in the context of cognitive tasks. And so I really owe Michael so much, because that eye-opening moment for me was like, oh, wow, I need to learn more about this. I need to bring people in around me as I build my group that can help me do this at a high level. And it really was a huge catalyst for my early career. I don't know what it was like from your perspective, Michael. [00:39:18] Speaker A: No, I think that story brings out a few really important insights. One is, you know, bringing a question that you want to ask into the modeling space. As we all know, the Human Brain Project set out to model the brain, but there was not really a question about what to model. And so when Mac and I started having a chat about this particular project, it was clear that Mac wanted to understand how changing neural gain in a population of neurons could change the integration and segregation of the system as a whole. So there was already a really well defined question. And the second aspect is to take a model and strip it down to something that's simple enough, but not too simple. So not a Hodgkin-Huxley model, but a FitzHugh-Nagumo model, but nothing simpler than that. Otherwise you end up with something that's basically not a neuron at all anymore. So just finding the sweet spot in terms of the modeling. So that's one thing: bringing a question and taking the model at its right level. I think the other thing is, if you're not working with someone who's had some experience in doing this, the model can be as complicated as the original system. In fact, there's this sort of adage: the best model of a cat is a cat, and preferably the same cat. I mean, it's a nonsense statement, because the model of the cat, which is the same cat, is as complicated as the original cat. And there's this Jorge Luis Borges short story about cartographers making more and more complicated maps of the world until they set about making a one-to-one map of the universe. So the moral of all of that is that models can be very complicated, and can be too complicated. But the advantage of being in the modeling space is you can use your insights and make reductions and make simplifications and come up with something that will work. And when you're working with someone, like in this case Mac and Russ, you've got a question to ask. You can make pretty good progress, and you can get some really nice innovative insights, as I think that paper brought to the field: that, you know, changing neural gain in individual neurons changes their individual bifurcations. But you didn't need to get them all to bifurcate, because they are quite heterogeneous. When you get enough of them to go from one type of behavior to another at some point, then the whole collective action of the whole ensemble goes through that phase transition.
So there's a really nice insight that Mac brought to that paper. But then you could say, well... [00:42:14] Speaker B: Can I say something about that, Michael? Because I think it's worth clicking on this. That was such a fascinating thing to me that I hadn't heard about since what, year 11 chemistry or something, right? You learn in year 11 chemistry about boiling water, and it goes from a liquid phase and you put temperature on it consistently, and then all of a sudden it starts to boil, this sort of phase transition. But never in my wildest dreams did I think that that was what was underneath the hood. I wanted to see whether or not, if I changed this neural gain parameter, and then took this neural model and put it through a balloon model so it looked like the BOLD signal, could I see the integration and segregation of the network change? You could, with some caveats. That's a very complicated model. There were pockets where it looks like that and other pockets where it didn't. But the really crucial thing that Michael just brought up was that the change happened abruptly. A small, smooth change in a control parameter and a rapid change in an order parameter. That is indicative of a concept that you brought up on the podcast already, Paul, of criticality, which was not something that I was familiar with as a scientist. I'd never really come across the term before. I hadn't been exposed to it. And what a gift to be given, right? As a curious scientist, something you've never even thought of before could be an explanation for what you're trying to understand. And so to me, it was just this beautiful moment of, like, now I've got to go learn all this other stuff. I mean, it's like the curse of knowledge a little bit, right? Now I know that the world is more complicated than I thought it was beforehand, but I say this to my students all the time. Try to surprise yourself. Try to put yourself in contact with a world where you don't already know what the answer even looks like. And I think that's been kind of a clarion call for my group. We've been trying to really hold on to that notion of sort of being afraid of not knowing what's going on and leaning into it rather than running in the opposite direction. And I don't know if I've ever thanked you enough for that, Michael, but it was like such an incredible exposure for me to that whole new world in a lovely way that made sense. [00:44:14] Speaker C: Let me jump in here real quick. I want to make sure that we actually. I don't think we described, like, the ascending arousal, like, what the paper was about. What I mean is, how does it relate to the brain? [00:44:25] Speaker B: Oh. Oh. So the idea was, there's actually a really beautiful paper from back in 1990, Servan-Schreiber, and John Cohen was the senior author, where they were trying to understand this kind of classic Yerkes-Dodson relationship that's really pervasive in the psychological community, which is that your performance is sort of optimal at the intermediate stages of arousal. So if you're too sleepy, you don't do very well. If you're too stressed out, you don't do well. But right in the middle you do quite well. So we all kind of know this from taking exams. And they'd done this really sort of elegant model where they use the slope of a transfer function connecting nodes in a little toy model to demonstrate that you could get this kind of sensitivity to the conditions in the model. At an optimal point there was kind of like a working regime.
And if you went too far to the left or the right of that regime, then you're in big trouble. And the way they did that was by mimicking the slope of this transfer function. So we took that notion. We knew that there were transfer functions in the neural models. Michael and I chatted a little bit about it. And then there are ways that you can sweep that from an optimal sort of starting point. That actually ends up being one of the hardest things to do in a model: knowing where to start. Like, where should I start to look? Because if you're in the wrong regime, pushing the parameter might do nothing at all. I think Eve Marder's work is a really beautiful example of this. Right. If the temperature is in this zone, it's doing one thing, and you flip it, it's a completely different thing. But yeah, that was the idea, was to sort of take this mathematical abstraction, put it onto a biological system and see whether or not it could recreate the imaging patterns we saw. And the answer was yes, it could. But it also then did all this other cool stuff that then became, you know, the start of lots of other questions. [00:46:00] Speaker C: And this is the criticality road that you were just mentioning, for one thing. [00:46:04] Speaker B: Yeah, and really it's where neuroscience meets nonlinear dynamics. I think that's neurobiology and nonlinear dynamics, the sort of intersection between them. Here's a thing that a brain has been shaped over evolutionary time to be able to do with cells. It can do this sort of set of things, and here's the language that you need to describe them. And before that paper though, most of the work that I'd done was really descriptive of a system. You know, oh, here's the things that are sub- or supra-threshold, or here's the set of all the things and here's the configuration. The dynamical systems to me was such a bedrock and it is now of all my work, because it says something a little deeper, which is what could a neuron even do, or a population of neurons do, given the information that they have or what they want to do next or something? You can kind of frame problems from a neuron's perspective. And I found that so much fun to think about the implications of that. And why would you have a basal ganglia, which looks so different than a cerebellum or a thalamus? What kinds of constraints would they impose on dynamics? And off you go into this weird and wonderful world of trying to guess what the emergent property of a system will be. And so it really kind of opened my eyes to that possibility.
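For readers who want a feel for the kind of simulation being described in this stretch of the conversation, here is a hedged sketch, not the actual Shine et al. (2018) pipeline: a small network of FitzHugh-Nagumo units coupled through a sigmoid (here a tanh) whose slope plays the role of neural gain, with mean pairwise correlation standing in for integration. The connectivity, every parameter value, and the readout are all stand-ins.

```python
import numpy as np

# Hedged sketch of a gain sweep in a FitzHugh-Nagumo network, with
# made-up parameters (not the published model). The slope of the tanh
# coupling plays the role of neural gain; mean pairwise correlation of
# the fast variable stands in for "integration".
rng = np.random.default_rng(1)
N, T, dt = 20, 20000, 0.05
C = (rng.random((N, N)) < 0.3).astype(float)   # random structural links
np.fill_diagonal(C, 0.0)
C /= C.sum(axis=1, keepdims=True) + 1e-12      # normalize inputs
I_ext = 0.4 + 0.2 * rng.random(N)              # heterogeneous drive

def simulate(gain, coupling=0.5, noise=0.05):
    v, w = 0.1 * rng.standard_normal(N), np.zeros(N)
    vs = np.empty((T, N))
    for t in range(T):
        inp = coupling * C @ np.tanh(gain * v)  # gain-modulated transfer
        v += dt * (v - v**3 / 3 - w + I_ext + inp) \
             + noise * np.sqrt(dt) * rng.standard_normal(N)
        w += dt * 0.08 * (v + 0.7 - 0.8 * w)    # slow recovery variable
        vs[t] = v
    return vs[T // 2:]                          # discard the transient

for gain in [0.2, 1.0, 3.0]:                    # coarse sweep of the gain
    r = np.corrcoef(simulate(gain).T)
    print(f"gain={gain:.1f}  integration={r[np.triu_indices(N, 1)].mean():.3f}")
```

Swept finely rather than at three coarse points, this family of models shows the behavior Mac and Michael describe above: the order parameter does not drift smoothly; there is a narrow range of gain, the bifurcation, where the network jumps between segregated and integrated regimes, the boiling-water behavior Mac mentions. The same sigmoid-slope logic in even simpler two-input settings gives the Yerkes-Dodson inverted U: too little gain and the transfer function is flat, too much and all responses saturate.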
So anyway, my, my question though was. Well, and I'll. I'll use your language, Michael. Then what tunes the, the tuner. What tunes the local locus coerus there? Right. If, if this, you know, parameter is it, is it hunger, is it, you know, is there a circular causality? [00:48:33] Speaker B: Yeah, well, I think it's a good kind of circular causality and the kind that Michael was talking about alluding to before, this idea of constraint closure that you can imagine that the locus coerulus activity and the release of noradrenaline will have an impact on downstream targets. Anything that expresses a particular type of receptor, which will then change the excitability or the susceptibility of that individual neuron or population of neurons, which then have to compete in this massive space, almost like a little new species for their survival. And then they do that by setting projections elsewhere, which then recruit more help, or they send projection that then ends up bringing online the competitor that then kills them off because it's a better fit to the challenge at hand. So in some ways these things end up being the constraints that imposed on the network change the way that the network can evolve. But then you can play the other game and say, what are the areas that are downstream to the locus coerulus? There are local populations. Michael Bruha's group just had a beautiful paper in Nature where they studied the perice, a bunch of GABAergic neurons that sit just underneath and nestle in on the locus coeruleus. They found that the different subpopulations of the perice are actually inhibitory of the locus coeruleus targets in particular behavioral context, and can actually be recruited by different neuropeptides. And so now you've got this really beautiful complex circuit which is neurochemical and electrical all working together to change the context in which different systems, subsistence of the brain can be brought online to energize the rest of the system. So I think this is just going to get even more complex as then you look at the cortical targets or the, you know, the habenula inputs to the parasolis, all these different complexity will become just overwhelming. And what we need is more hypothesis driven science there. We need to be able to say, what if I knock out this receptor? What if I take an animal that's now the dominant one rather than the passive one and I turn this neuropeptide on? What changes is it via this circuit? And as Michael was alluding to, we have the tools now where we can do this. Not saying it's easy, it's just doable, right? The kinds of technology we need is there now. We need time to really refine this. [00:50:42] Speaker A: I would sort of pull back in the other direction as well and say these critical transitions are information rich. And that's in the paper that we're talking about, the Shine 2018 eLife paper. In some of the subsequent figures we pulled out that at the boundary between these different regimes is when the capacity of the system to do computational work is highest. So it's a very complex information rich emergent dynamic that comes out of the system. So then you just need to imagine a slower feedback circuit that says, you know, if your information poor, if, if the noise is too flat, just slowly increase the gain and you'll slowly push into this information rich regime. Whereas if the noise is too high, if the entropy is too high, just pull back the neuromodulation in the other area. 
[00:51:41] Speaker C: But you need something to read the noise in that case, right? [00:51:43] Speaker A: You need something to read out the behavior that the system's exhibiting and feed that back. And I'm going to come to that in a moment. And so at that level, I think I put my systems neuroscience hat back on and say, look, it doesn't really matter what the circuit is. This could be enacted in a whole range of different mechanisms. But what's important is the overall information landscape and the notion that if you read it out and you have a slow feedback circuit, it'll be self tuning. It could be, you know, it could be some of the mechanisms that Mac just mentioned, or it could be a mechanism that we don't yet have on the board, but the principle will be the same. And then there's another principle: if you put a bit of plasticity into the synapses, the system will start self organizing even without these slow feedback loops. Plasticity plus criticality in very general circumstances can lead to self organized criticality, in which you no longer need to tune anything because the system will self tune. And this is another principle that came out of physics, self organizing criticality. Not just criticality, in which you can tune the system to be critical, but self organized criticality in pretty generic circumstances. You just need a little bit of plasticity, local plasticity, blind to the global behavior of the system, and you'll start to get these self organized critical dynamics. In fact, that was exactly the topic of a paper in 2009, I think, Rubinov, blah blah blah, and Breakspear, that was gatekept out of mainstream neuroscience because it was a modeling paper that wasn't relevant to what neuroscientists were currently looking at. So that's a funny reflection to have. [00:53:43] Speaker B: It's also an example of what we call Breakspear's Law in my group, Paul, which is that anytime we think of an idea, he's got a paper from 15 to 20 years before that. [00:53:53] Speaker A: That's awful. [00:53:54] Speaker C: That's awful, Michael. [00:53:56] Speaker A: I have become a bit of an old mate who goes, yeah, thanks, that's an interesting idea. But actually, yeah, I try and be humble, and in fact, if you go back and you look at the old Breakspear papers, they actually didn't do what I believed. I'm filling in a lot of details here, and it would have been very rudimentary, if at all, compared to the sort of full richness of what's being proposed in the modern era. If I was doing important stuff back there, it was because I'd learned stuff from other people and other, you know, luminaries of the field who many times now have been forgotten. People are rediscovering, you know, prescient insights that people were having in the 80s and 90s that I was drawing from. [00:54:52] Speaker C: So Michael, you don't know this, but I've had a few guests on to talk specifically about criticality, and I've become very interested in it in my own research. But I'm also interested in metastability and the interactions between criticality and metastability. So, and I'll point people to this, I think it's Nature Reviews Neuroscience or Nature Neuroscience, the Hancock et al. 2025 paper, which I really enjoyed, and talks about the differences between criticality and metastability and multistability and all these terms in dynamical systems that you can really go down the rabbit hole in, and maybe we'll come back to that.
But I'm aware of the time, and I want to make sure we fast-forward six years and talk about this most recent effort, which I think we've all discussed a little bit already in our conversation. This is led by Munn, it's 2024, and it's the multiscale organization paper, which touches on a lot of the principles that we've been discussing. And maybe one way to lead into that is this. Michael, you've been studying neural mass models for many years now, and there are mean field models, neural mass models, and others. And then there's this iterative coarse graining procedure that you use in the 2024 paper. So there's that kind of set of analyses. And you mentioned PCA earlier, Michael. Then there are the popular dimensionality reduction techniques these days: PCA and other nonlinear dimensionality reduction techniques. But the neural mass models and so on that I mentioned before are also dimensionality reduction techniques, in a sense. So I'm hoping that you would contrast the neural mass or mean field approach, as a dimensionality reduction technique, with PCA and some of the more commonly used techniques, because I think the neuroscience community is less aware of, or less well versed in, neural mass models and iterative coarse graining approaches, et cetera. [00:57:26] Speaker A: Well, that's a huge area to explore. But to try to simplify things: at some level you say, this is the level at which I'm going to engage this particular complex system, and I'm going to assume the mean, and maybe the variance, of this system in this local area will tell me a lot about how the system is behaving. And then, using different approaches, you can come up with this neural mass or mean field approximation at that particular scale. It might be the scale of a cortical mini-column or a column; it might be the scale of a couple of centimeters. And then you get some principled account of how that small system behaves: its inputs, its local operations (there's normally a sigmoid curve in there, there's some filtering), and then an output. And then, you know, Mac mentioned the Honey paper, the 2007 paper. We took hundreds of these little local patches, and using the best possible connectomics at that stage, CoCoMac, we wired them together. So even though we had three or four dimensions in each area, we still had 80 or 90, these days 500 or 1,000, of these little populations. So you've still got a system with, say, 3,000 degrees of freedom. But that's when the nonlinear dynamics takes over. And then you get principles of synchronization, you get underlying low-dimensional manifolds, but that's what the dynamics are giving you. There are no mathematical tools doing it anymore. The system itself is finding these low-dimensional manifolds, these symmetry manifolds, and contracting onto them. And we had a paper, Roberts 2016, on metastable brain waves, which had these 500 nodes, and it contracted onto something maybe eight or nine dimensions. You get these metastable traveling waves, breathers, rotating waves. [00:59:43] Speaker C: Are those the ones that kind of diffuse out? [00:59:46] Speaker A: Yeah, these are things that applied mathematicians have been finding in other systems. So that's where you no longer are forcing a dimension reduction.
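As a rough illustration of the setup Michael describes, here is a minimal sketch, not the actual Honey et al. or Roberts et al. models: each node carries just a mean activity, input passes through a sigmoid, and nodes are wired together by a weight matrix. The real work used connectome-derived weights (CoCoMac); a sparse random matrix stands in here, and every parameter value is hypothetical. With realistic connectomes and stronger nonlinearity the dynamics contract onto low-dimensional manifolds; this sketch only shows the mechanics of the construction.

```python
# Minimal caricature of many coupled neural-mass nodes; illustrative only.
import numpy as np

rng = np.random.default_rng(1)
N = 100                                              # number of local populations
W = rng.random((N, N)) * (rng.random((N, N)) < 0.1)  # sparse random stand-in "connectome"
W /= W.sum(axis=1, keepdims=True) + 1e-12            # normalize total input per node

def sigmoid(v, gain=4.0):
    """Mean firing response of a population to its net input."""
    return 1.0 / (1.0 + np.exp(-gain * (v - 0.5)))

x = rng.random(N)                                    # mean activity of each node
dt, tau, c = 0.1, 1.0, 0.8                           # step, time constant, coupling strength
traj = np.empty((2000, N))
for t in range(2000):
    inp = c * (W @ x) + 0.05 * rng.standard_normal(N)  # network input plus noise
    x += (dt / tau) * (-x + sigmoid(inp))              # leaky mean-field update
    traj[t] = x

# Eigen-decomposition of the trajectory covariance shows how the coupled
# dynamics, not an imposed method, distribute variance across modes.
evals = np.linalg.eigvalsh(np.cov(traj[500:].T))[::-1]
print("share of variance in top 5 modes:", round(evals[:5].sum() / evals.sum(), 3))
```

The contrast with PCA is then easy to state: PCA imposes a linear projection on data after the fact, whereas here any low dimensionality that appears is generated by the coupled dynamics themselves.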
The underlying symmetries in the system are then doing the work of their own dimension reduction. And that's the really beautiful stuff. [01:00:05] Speaker C: Okay. [01:00:06] Speaker A: But for a long time, these assumptions had been somewhat untested. There was a leap of faith saying, at this level we can throw away a lot of the statistical moments, the kurtosis, the skewness, et cetera, et cetera, and we can just look at the mean and maybe the variance. And when I was visiting Mac's group, Brandon Munn was saying, oh, we can use iterative coarse graining on these whole-field recordings. And, you know, this is a way of no longer having the leap of faith: by iteratively coarse graining, you start to see, in these beautiful whole-field recordings, what the statistics are, what the moments are. We know, and this is what Mac wrote in the paper, that the single cells are very sparse. They spike, and then they don't spike, and they don't spike together at the same time very often. They're very kurtotic and relatively uncorrelated. [01:01:10] Speaker C: Define kurtotic, please. Just real quick, for the audience. [01:01:14] Speaker A: A kurtotic system has a very high central moment. It's very peaked close to the mean, and it's got very long fat tails. Most of the activity in that system will be close to the mean, but every now and then there'll be what you call an extremal event, which is a long way from the mean, much further than the standard deviation would normally suggest. [01:01:40] Speaker C: All right, thanks. [01:01:42] Speaker A: So through iterative coarse graining, and I'll let Mac tell some of the narrative as well, you begin to build up the statistics; you let the behavior of the system tell you the statistics. And what we found in that system is that at the mesoscopic scale, finally, the system became more or less Gaussian. So the kurtosis, or what's called the excess kurtosis, more or less disappeared. And it wasn't at the massive scale. It was a few scales down, at the mesoscopic scale, which was beautiful for someone like me, because we'd been doing mesoscopic neural mass modeling for years. [01:02:22] Speaker C: What's mesoscopic in the brain? What counts as mesoscopic? [01:02:28] Speaker A: Millimeters to tens of millimeters. [01:02:31] Speaker C: So this is spatial. And I just want to say, the iterative coarse graining approach is really very basic and simple, right? Just to say what it is: you take one neuron, and then you find the next neuron that is most highly correlated with that first neuron, and then you pair them and sum their activity, right? So you're just iteratively pairing up the most correlated signals and building up from there. Because iterative coarse graining sounds kind of fancy, but it's a really simple procedure, from what I understand. [01:03:05] Speaker B: Yeah, that's right. It's actually kind of fun to go back one step here to explain how we came to this, because I think, like many really fun things in science, it actually comes from disagreement rather than us all kind of joining together in a kumbaya circle and talking about how great we are.
When Brandon joined my lab, he had a background in physics, but had worked on neuroscience applications, including some really fun work looking at the Hurst exponent in thalamic cells recorded in slice from marmosets. So he was already neuroscientifically quite literate. All of his intuitions about how the brain worked were built on the things that he'd seen in spiking data, right? He'd seen this cell spike, and it was very different from that other cell's spike, and it would change with the brain state. For me, all of my intuitions were based on whole-brain imaging: taking the BOLD signal, with all of its smooth, lilting character, like tides sloshing around, or like a lava lamp or something. And so I had all these stories about robustness, and he had all these stories about sparsity and efficiency. And when he first joined the lab, we just had a series of five or six arguments at the whiteboard. No, you can't be right. There's no way that's right. Well, it is. No, it's not. [01:04:24] Speaker C: Oh, I thought you meant, like, someone would post something one day and then the next day there'd be an argumentative statement. [01:04:28] Speaker B: No, I wish we were that clever. We're not mathematicians, Paul. Okay, we're biologists. But anyway, we had these really great, very respectful debates. And Brandon, in his own unique way, thought, screw that, I'm going to go find a tool that can actually test this and show Mac that he's wrong, that the system doesn't look the same; it looks different depending on how you look at it. And so he had read really beautiful work by Bill Bialek, which introduced this idea of the renormalization group, a particular way of doing it, which is this coarse graining procedure. In their case, they actually renormalize at every level so that they can directly relate the different features to one another across scales. What we wanted to do was not renormalize. We wanted to be able to say something about how the variance or the timescale scaled as we went up the different levels. But as you say, it's very, very simple. You take all of your data, you create a correlation matrix of all the different unique elements in the system, whatever scale it is. You then rank it according to the highest correlations. You pair those two neurons together, take the sum, and then you do it again and again and again. So at every single level you're going up and iteratively doing this procedure. It's very similar to something like hierarchical agglomerative clustering, but what you're doing is forcing yourself to only pair once at every scale. In agglomerative clustering, what you do is say: I'm going to pair neurons 1 and 2, but they're the most correlated with neuron 3, so I'll pair 1 and 2 with 3, and then maybe 4; 1, 2, 3, 4 become a blob, and then you keep going until you have done the whole system. So that's great, and that's going to show you local populations, but it doesn't tell you how something like the variance scales as you go up the different levels. The other thing that we did was a little bit different. A lot of the old coarse graining work that was done in physics, which was really helpful, say, for taking the movement of a fluid and trying to come up with a mean field approximation of it, does local spatial clustering.
So what they'll do is say: I'm going to take this pixel of my video and the one right next to it, and I'm going to put them together, and then I'll do it for 4 pixels and then 8 and then 16, until I get to the coarsest level. What we did instead is use temporal similarity as our guide. We're not doing spatial coarse graining, we're doing temporal coarse graining. So we're looking at Pearson correlations. One of the things you find really early on is that it's not just blobs that grow in size in the brain. You start to see, very early, these distributed networks that cross anatomical boundaries. Part of the tectum is connected to the cerebellum, is connected to the brainstem, is connected to the telencephalon of these little larval zebrafish. [01:06:58] Speaker C: And these correlations are with no lags between the brain areas? Sorry to interrupt. [01:07:03] Speaker B: No, no, it's all zero lag at this point. [01:07:05] Speaker C: Yeah, zero lag. [01:07:06] Speaker B: We played around with a bunch of different things: using different measures than Pearson's, playing around with the lag, using triplets rather than pairs. There's a whole bunch of things. But in all the cases we see basically the same type of idea, which is that there's really no privileged scale. I think Michael's comment about the mesoscale being where you start to see the Gaussian sort of statistical representation is accurate. But one of the other really interesting take-homes is that the variance scales as a function of the size of the system. So it really has this beautiful relationship where it's not scaling such that all of the different elements of the system are just noise, like we would maybe assume in a mean field approximation; if we did that, then what we'd expect is that the variance would scale with a particular exponent. (For independent noise, the variance of a summed cluster grows only linearly with the cluster's size, whereas for perfectly redundant signals it grows with the square of the size.) And it could be the opposite thing, which is that there's no unique information at all: whatever you see at the bottom is exactly the same thing you see at the top. But it was somewhere right between those two extremes. There's a really beautiful set of intuitions that comes from that, a little bit like back in the day with the modeling paper in eLife, where you can start to say to yourself: how would you build a system that could do that? How could you build a system that would have that feature? Because with a lot of our toy models, if you went to build them... Let's say I put a network of a particular size in my system. Well, guess what? It's not going to scale like that, because it's going to scale together until the resolution of the coarse graining is above the level of the module you put in, and then it'll scale randomly, as if you're just adding random noise. And so now, all of a sudden, you have this tool that you can use to interrogate what a built system looks like. And that's why it was so exciting to me, right? I'd come from this world of whole-brain imaging and networks and constructing null models of networks, like small-world networks and things. And we can now start to say how many of those could actually give rise to what we saw. So it's this really, really great opportunity to essentially take an argument and then let it play out with data and just see. Again, I didn't see this coming. I was surprised and really excited by where that led us, in terms of a conversation about what's coming next. [01:09:13] Speaker A: I think Mac summarized it really beautifully.
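Because the procedure really is this simple, here is a bare-bones sketch of the greedy pairwise temporal coarse graining described above, run on synthetic heavy-tailed data rather than the actual recordings. The data construction and all parameters are hypothetical stand-ins, and scipy.stats.kurtosis returns the excess kurtosis (zero for a Gaussian) that Michael defined earlier.

```python
# Bare-bones sketch of greedy pairwise temporal coarse graining, on synthetic
# data (the paper analyzed zebrafish, mouse, fly, and worm recordings, not this).
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(2)
T, N = 5000, 128
shared = rng.standard_normal(T)                      # a weak common signal
X = rng.standard_normal((N, T)) ** 3 + 0.2 * shared  # sparse-ish, heavy-tailed units

level = X
for k in range(6):
    var = level.var(axis=1).mean()
    kur = kurtosis(level, axis=1).mean()             # excess kurtosis; 0 = Gaussian
    print(f"scale {k}: {level.shape[0]:4d} units, "
          f"mean variance {var:8.2f}, mean excess kurtosis {kur:6.2f}")
    C = np.corrcoef(level)                           # correlate every pair of units
    np.fill_diagonal(C, -np.inf)                     # never pair a unit with itself
    order = np.argsort(C, axis=None)[::-1]           # highest correlations first
    used, merged = set(), []
    for flat in order:
        i, j = divmod(int(flat), level.shape[0])
        if i not in used and j not in used:
            used.update((i, j))
            merged.append(level[i] + level[j])       # coarse-grained unit = the sum
        if len(used) == level.shape[0]:
            break
    level = np.array(merged)
```

Watching the excess kurtosis fall toward zero over a few scales is the toy version of the Gaussianization at the mesoscale, and how the variance grows from one scale to the next is the kind of scaling relationship the paper interrogates.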
And I think what is interesting is that it is a really simple tool. It's just dyadic, in other words, pairwise pairing at each scale, and off you go. And to this day I still wonder, because we have null models to see if it's trivial: if you just had any system scaling up with the same sort of microscopic statistics, would you get this? And the answer is no. You do need this delicate balance of order and disorder, of synchrony and asynchrony, to get it. And that's what the brain's doing. It's just tuned in the way that Mac described. But because the tool's so simple and all of this pops out, I think that's the power of this insight. It's just a very simple tool. There's no high-dimensional, nonlinear PCA stuff going on here. [01:10:23] Speaker B: Can I actually just add to that quickly, Paul? One of the things I want to do is give a shout-out to just how hard Brandon in particular worked on this, because this was a classic example of him having to explain a complex physics concept to me, a non-physicist, and then deal with all of my annoying questions. And then, after he got through me, he had to deal with Breaky's annoying questions, and then we had other collaborators' annoying questions. It really was a Herculean effort. Probably the thing that I love the most about this is, I think, one of the more surprising results. We challenged Brandon to take this really simple approach and go look across species. So we started out in the larval zebrafish, which is just beautiful: some data from Ethan Scott, a collaborator, where they essentially have access to pretty much the whole larval zebrafish brain. [01:11:15] Speaker C: And that's why you started off with that data set, because you could see the calcium signaling across the whole brain, essentially. [01:11:21] Speaker B: That's exactly right. We wanted the sort of recordings that gave us the most of the brain at the highest resolution. But Brandon then went and said, oh, actually, there's some really beautiful mouse data out there. I think he took a Stringer data set, analyzed that data, found the same scaling relationship, and, like, oh wow, okay. Again, this was now just V1 rather than the whole brain, but still amazing to get that many neurons. So then we thought, well, how much further could we go and look? And he looked at ferret data; he ended up looking at Drosophila, right, so, like, an insect. And then he even went down to a C. elegans worm and found the same scaling relationship. So this is fascinating to me. If you look under the microscope at these different nervous systems, they couldn't look any more different. In Drosophila's nervous system, all of the cell bodies sit on the outside, like in a shell, and all the inside is neuropil. And their synapses look fundamentally different from a mammalian synapse. And yet somehow, some way, the temporal organization of their system followed the same type of principle that we saw in both the whole and a part of a mammalian brain. And as someone who grew up with an evolutionary biologist father and has a deep reverence for thinking about nervous systems across phylogeny, what a beautiful result to think about. And why would you want to organize a brain like that? I think the answer is really simple. The world is really complicated, and it has many, many different things in it, of lots of different shapes and sizes, different lengths.
Some things happen quickly, some things happen slowly; some things are small, some things are big. And you don't know what you're going to need to deal with next. And so what you need is a system that can handle whatever comes in next. You should be optimally susceptible to all different shapes and scales, such that when the change happens, you're ready to take advantage of it and adapt to it with plasticity or whatever inbuilt features you have to solve that problem. I know that's an oversimplification, but what a beautiful idea to explore further and to ask questions about in this space. It took me into a world that was way beyond where I thought we were going to get when we started from an argument on a whiteboard. But a really lovely paper to be involved in. [01:13:30] Speaker C: So let me see if I can summarize the take-home, and then you guys please correct me. At the very microscopic scale, a single neuron or a few neurons, the activity between those entities is fairly uncorrelated, sparse, et cetera. And as you grow to the larger scales, the mesoscales, then you start to get more robust and more highly correlated signals. And what this is good for, and we talked about criticality earlier, is that it puts you at sort of a critical point where you can do integration well and you can do segregation well. The small scale promotes segregation, the larger scales promote integration. But you're at this sort of bifurcation point, perhaps a critical point, where you could go either way depending on the needs of your current environment. Where did I go wrong there? [01:14:36] Speaker B: Yeah, I think the thing I'd add to it, and I'd love to hear your thoughts on this, Michael, is that it's kind of that, but all the way up and down. It's like that old turtles-all-the-way-down idea. At every single level you need to be perched at this balanced state. You can't really afford to have hyper-ignorant neurons. There is a fat tail of that kurtotic distribution that Michael mentioned before: those neurons do have zero-lag temporal correlations with one another. It's just that the vast majority don't. And, you know, if we just took the vast majority to be the coding principle of the brain, we might be persuaded by something like Barlow's efficient coding hypothesis and say, you know what, neurons don't want to do the same thing as any other neuron; that's a big problem; why waste the energy? If we go the other direction and only measure local field potentials, all we'll see is that everything's correlated together, and we'll say, oh, what you really want is redundancy and robustness: if I lose a neuron, so what, I'll just use one of the other ones that I've got in my big pool. And the answer is that it's actually not either one, but kind of both. The system as a whole is poised such that it can have the benefit of both of those features, efficiency and robustness, all the way across the scales, from down at the microscale up to the macroscale. It's a really beautiful idea. [01:15:50] Speaker A: Yeah, I agree. And as a nice example of that, if I hold up a phone and it slips, the firing of a single cutaneous slip receptor can very quickly help me detect what the slip is. So there are these examples where a spike can matter.
A single spike or a few spikes are really important for the way that we adapt in the world, because of these complex multiscale systems that we're embedded in. But if every spike mattered, I mean, there are so many billions of spikes, we wouldn't be sitting here having this sensible discussion. And so the way the brain, and other nervous systems, as Mac has pointed out, appears to work, the shared unifying principle, is to allow information to percolate very quickly up across all these scales by having this multiscale balancing act. [01:16:57] Speaker B: It's actually led to some fun discussions with my wife recently. She's been listening to a lot of podcasts on cults. And you think about a cult, socially, as a group that kind of pushes itself off from the rest of the world and just listens to its own little story. There's no outside input. That would be like having a particular scale imposed on the group, one that doesn't allow inputs from all the different areas to affect it, that doesn't allow the little word from the side to nudge it in the right direction. And I think it actually makes you start to think about, you know, how we've constructed this world that we live in, with the Internet kind of leading us into this gossip-heavy, he-said-she-said sort of thing. How do we build that better, so that we're open to communicating more effectively and not getting swayed by these little pockets? It's really fun to think about all this. In other words, the physics is a tool that you can use to look at whatever system you want. It doesn't just have to be a brain. It could be anything. [01:17:54] Speaker C: Okay, Mac and Michael. We kind of started late, so this is going to be on the shorter end for episodes. And Michael, I know we've only touched on this one facet of your work, and we didn't talk about the eigenmode stuff and things of that nature, aligning maps, all of that good stuff. But I know time is of the essence, and it's Monday morning for you guys and Sunday night for me here in the Northern Hemisphere, so maybe we can just end on this. We talked about these modern approaches and the systems way of thinking, but we've also talked about how complex everything is. So, for both of you: what are you stuck on right now? What obstacles are you facing at the moment? What's in your way? Mac, you want to go first, and we'll end on Michael, perhaps. [01:18:47] Speaker B: Yeah. What obstacles are in the way? [01:18:50] Speaker C: Put you on the spot. [01:18:52] Speaker B: Yeah. So my strategy as a group leader is to meet my students slash research group where they are, find out what's interesting to them, and let their natural motivation for a problem lead the questions that we're interested in. One of the benefits of that is that I don't have to spend very much time motivating my group; they're all really passionate about what they're doing. The challenge is that they're all so clever and hardworking that they end up working in different directions, and so I have to read so much literature to be around the ideas that they're interested in, to help curate and shape and hopefully make their science more profound and robust. And I find that enjoyable. But the biggest challenge is just digesting the amount of information required to do integrative neuroscience.
I think that's just a career challenge. If anyone has, like, really good ChatGPTs that are trained up, let me know. [01:19:49] Speaker C: That is a career challenge. But I was actually wondering, and thanks for sharing that, whether there's something research-wise, next-project-wise, like, if I only had this. [01:20:01] Speaker B: This. [01:20:01] Speaker C: If I only. If there were only a couple more Michaels in the world, I could, you know. Something like that? Is there something like that? And it's fine if there's not, because I know you're juggling a lot. [01:20:13] Speaker B: Recordings of all the different cell types in the cerebellum during complex behavior. That's the one that is really fascinating to me at the moment. The cerebellum is this beautiful neuronal machine, but for quite practical reasons it's really difficult to actually measure cerebellar neurons and know exactly which part of the complex circuit they come from, because the cerebellum is so convoluted and smushed down into this tiny space that the elegant patterns you see when you blow the cerebellum up like a balloon are all hidden deep in the structure. So if you wanted to understand the algorithmic basis of cerebellar function, you'd need to record from it, which is really hard because it's so deep, and the brainstem parts of the circuit are closely intertwined with a lot of physiologically important structures. So it's physically really hard to do, but even if you do it, it's very hard to untangle. There was actually a really awesome paper that came out in Cell a few months ago from a consortium group where they're using deep learning: they take optogenetics, so they can turn on and off all these different cell types; they then have a Neuropixels probe down; and then they learn how to infer what they actually did from the complex signal they get in the Neuropixels probe, so that they can now go through a new data set that hasn't got any optogenetics and say, what was the likely cell type that I was actually recording this waveform from? So you can go in with a kind of guess ahead of time, a little bit like spike sorting, but using the power of deep learning. I think they use a variational autoencoder or something like that to essentially get a guess about which cerebellar cell type you're measuring. So to me, that's deeply exciting, because the cerebellum is kind of terra incognita for me, and I just love thinking about it. [01:21:55] Speaker C: I was going to say, it's too bad the cerebellum is something that is easily abstracted away when building any useful models of brain activity. [01:22:01] Speaker B: Right. [01:22:02] Speaker C: No, I'm just. [01:22:02] Speaker B: We'll see, we'll see, we'll see. [01:22:04] Speaker C: Michael, is there anything that is keeping you from going to sleep at night, aside from getting to the surf the next day? [01:22:14] Speaker A: I did have a surf this morning, actually. [01:22:16] Speaker C: I was wondering. [01:22:20] Speaker A: No, look, there are process issues, similar to Mac's, and funding and governance that I could complain about, but putting them aside, actually, I'm really excited about systems and theoretical neuroscience and what we're working on in my group and through collaborations at the moment.
I think the work that we've done that we've discussed here, the eigenmode stuff, the neural field stuff, it's a real golden era that I'm really excited about. And we've got some papers, one about to be posted with Mac, on cortical-hippocampal neural field modeling. And that's where my passion currently resides: in this relationship between a relatively low-dimensional hippocampus and a relatively high-dimensional cortex, and how that interaction underlies so many of our complex cognitive behaviors. So I'm excited. I just need time to write it all up and to continue to learn from my colleagues. [01:23:30] Speaker C: That is the best thing about what I do, and it sounds like you guys agree: you can just continuously learn super interesting stuff. It doesn't become boring, which is so wonderful. All right, this was a very high-level and somewhat hurried discussion today, but hopefully I'll fill in some of the details in the introduction to frame it. Thank you both for being here. I appreciate the time. Have a wonderful Monday. [01:23:58] Speaker A: Okay, thank you very much, Paul. And thanks, Mac. [01:24:09] Speaker C: Brain Inspired is powered by the Transmitter, an online publication that aims to deliver useful information, insights, and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. If you value Brain Inspired, support it through Patreon to access full-length episodes, join our Discord community, and even influence who I invite to the podcast. Go to braininspired.co to learn more. The music you hear is a little slow, jazzy blues performed by my friend Kyle Donovan. Thank you for your support. See you next time.
