BI 212 John Beggs: Why Brains Seek the Edge of Chaos

May 21, 2025 01:33:34
Brain Inspired

Show Notes

Support the show to get full episodes, full archive, and join the Discord community.

The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.

Read more about our partnership.

Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.

You may have heard of the critical brain hypothesis. It goes something like this: brain activity operates near a dynamical regime called criticality, poised at the sweet spot between too much order and too much chaos, and this is a good thing because systems at criticality are optimized for computing, they maximize information transfer, they maximize the time range over which they operate, and a handful of other good properties. John Beggs has been studying criticality in brains for over 20 years now. His 2003 paper with Dietmar Plenz is one of the first, if not the first, to show networks of neurons operating near criticality, and it gets cited in almost every criticality paper I read. John runs the Beggs Lab at Indiana University Bloomington, and a few years ago he literally wrote the book on criticality, called The Cortex and the Critical Point: Understanding the Power of Emergence, which I highly recommend as an excellent introduction to the topic, and he continues to work on criticality these days.

On this episode we discuss what criticality is, why and how brains might strive for it, the past and present of how to measure it and why there isn't a consensus on how to measure it, and what it means that criticality appears in so many natural systems outside of brains, yet we want to say it's a special property of brains. These days John spends plenty of effort defending the criticality hypothesis from critics, so we discuss that, and much more.

Read the transcript.

0:00 - Intro
4:28 - What is criticality?
10:19 - Why is criticality special in brains?
15:34 - Measuring criticality
24:28 - Dynamic range and criticality
28:28 - Criticisms of criticality
31:43 - Current state of critical brain hypothesis
33:34 - Causality and criticality
36:39 - Criticality as a homeostatic set point
38:49 - Is criticality necessary for life?
50:15 - Shooting for criticality far from thermodynamic equilibrium
52:45 - Quasi- and near-criticality
55:03 - Cortex vs. whole brain
58:50 - Structural criticality through development
1:01:09 - Criticality in AI
1:03:56 - Most pressing criticisms of criticality
1:10:08 - Gradients of criticality
1:22:30 - Homeostasis vs. criticality
1:29:57 - Minds and criticality


Episode Transcript

[00:00:03] Speaker A: The heart of criticality lies in the laws of physics. There's something about setting up a brain or a flock of animals, or, you know, the ear, setting them up at this point where information that comes into the system is not extinguished. It's sort of preserved for as long as possible. It lingers. And you don't over-amplify the response, and you don't over-damp the response. You just let the information kind of echo within the system for as long as it can before it dies out. The reason why I think the cortex is probably most likely to be critical is because it has to simultaneously optimize multiple tasks. It's got to be good at transmitting information, it's got to be good at storing information, it's got to be good at dynamic range. It's got to be good at computing. All these things at the same time. The field has benefited from the blowback. I have benefited from the blowback. I have grown in my appreciation for the subtleties and the nuances of criticality. [00:01:13] Speaker B: This is Brain Inspired, powered by The Transmitter. You may have heard of the critical brain hypothesis. It goes something like this: brain activity operates near a dynamical regime called criticality, poised at this sweet spot between too much order and too much chaos. And this is a good thing because systems at criticality are optimized for computing. They maximize information transfer, they maximize the time range over which they operate, and a handful of other good properties. John Beggs has been studying criticality in brains for over 20 years now. His 2003 paper with Dietmar Plenz is one of the first, if not the first, to show networks of neurons operating near criticality. And it gets cited in just about every criticality paper that I read. I think every single paper that I have read cites this paper.
John runs the Beggs lab at Indiana University, Bloomington, and a few years ago he literally wrote the book on criticality, called The Cortex and the Critical Point: Understanding the Power of Emergence. I highly recommend this book as an excellent introduction to the topic. It is filled not only with great explanations, but also points to a lot of the historical and very recent literature related to criticality. And John continues to work on criticality these days. On this episode, we discuss what criticality is, why and how brains might strive for it as some sort of, like, homeostatic set point. We talk about the past and the present of how to measure it and why there isn't a consensus on how to measure it, and what it means that criticality appears in so many natural systems outside of brains, yet we want to say that it's a special property of brains. So these days, John spends plenty of effort defending the criticality hypothesis from critics. So we discuss that and much, much more. You'll hear that John is super scholarly about the subject, even when I ask him about topics outside of his main wheelhouse. So he drops a lot of references, and I link to many of the papers that he mentions in the show notes. I really enjoyed speaking with John and I continue to feel lucky to get to speak with so many interesting and really kind and well meaning people in this field. In this broad, broad field. You would think, based just on the guests that I have on this podcast, that when you walk out into the world, everyone will be kind and well meaning. Oh, I'm not sure. Is that so? Is that so? We can only hope it's trending toward that. Anyway, thank you to The Transmitter for helping support this podcast. Thanks to all of you Patreon supporters who do the same. Carry on and be well and enjoy. John. John, I am. I have been awash, much thanks to you and others, in criticality stuff for the past couple months. I'm working with data sets. So.
And you know, your book is. First of all, nice job writing the book. It's a really readable, it's very enjoyable. The cortex and criticality. Is that the cortex and the critical. Yeah, yeah, the cortex and the critical brain. So. And since then, kind of like in, you know, once you start thinking about low dimensionality and manifolds in neuronal research, you see them everywhere. And now I see criticality everywhere. So I want to get to the bottom of this. Like, what does it mean? So anyway, so I'll point people to that book and was it 2003 when the Beggs and Plenz original. Is that the original neuronal avalanche? [00:05:22] Speaker A: That's when we first talked about neuronal avalanches. But as I explained in the book, that wasn't the first time that anybody came up with connecting neuroscience to criticality. There were many people before that who were working on it. [00:05:34] Speaker B: I see, but so that was in a petri dish and across electrodes. Yes, right. In a culture preparation. And so that's, oh, 20, 21, 22 years ago now. You had your 20th anniversary. So I mean, is that the main thing that you think about is criticality these days? What I want to know is like how your views have changed over time. First we should talk about what criticality is. Sure, maybe let's. All right, so let's start there then. So why do I care about criticality and what is it? [00:06:04] Speaker A: Yeah. Okay, so first, what is criticality? So criticality is a special setting on complex systems. And let me start out with a simple example. So let's imagine I have a whole bunch of neurons that are connected to each other. And now I can excite one neuron and I want to measure something really simple.
I want to find out, if I excite that one neuron, how many other neurons does it excite in the next time step. And if exciting one neuron leads to more than one neuron being excited, and so on and so on, then what you'll get is, you know, I activate 1, then I get 2, then I get 4, then I get 8, and activity will just spread. Okay, this is the real simplistic version of it. So that's not good for a brain because it would lead to seizures. On the other hand, if I have one neuron that's excited and it leads to less than one neuron being activated in the next time step, then activity is going to die out. So you don't want to over amplify and you don't want to dampen the activity. So what you want is something where, if I stimulate one neuron, it leads to one other neuron, which leads to one other neuron on average. Sometimes it'll activate two, sometimes it'll activate one, but the average is about one. That's really simple to explain, but it actually has really complicated and interesting implications. So if you're going to transmit information through, let's say, a network of neurons, you don't want that ratio, we'll call it the branching ratio. You don't want the branching ratio to be greater than one, because then what happens is I have some input for an input layer, and then it goes through a layered network, and at the output, it's all going to be saturated, and then you won't be able to guess from the output what the input was. It will be lost, because in almost all cases the output's going to be totally saturated, all neurons on. So that's bad for information, but it's also bad for information if it's damped, for more obvious reasons. If I activate one, it's going to die out and I'll have all zeros at the output. And so you don't get anything. And so if I want to transmit information through a multi layer network, it's best to do it with the branching ratio kind of near one, as close as possible.
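The branching-ratio picture John describes is easy to simulate directly. The sketch below is illustrative only — the contact count `k`, the explosion cap, and the Bernoulli contact scheme are my assumptions, not anything from the Beggs lab's models. Each active unit contacts `k` downstream units, each of which activates with probability `sigma / k`, so `sigma` is the expected branching ratio.

```python
import random

def simulate_branching(sigma, n_steps=30, n_start=100, k=10, cap=10_000, seed=0):
    """Toy branching process. Each active unit contacts k downstream units,
    each of which activates with probability sigma / k, so the expected
    branching ratio is sigma. Returns the number of active units per step."""
    rng = random.Random(seed)
    p = sigma / k
    active = n_start
    history = [active]
    for _ in range(n_steps):
        # Every one of the active * k contacts succeeds independently.
        active = sum(1 for _ in range(active * k) if rng.random() < p)
        history.append(active)
        if active == 0 or active >= cap:  # extinct, or clearly exploding
            break
    return history

subcritical = simulate_branching(0.5)    # activity dies out
supercritical = simulate_branching(1.5)  # activity explodes ("seizure")
critical = simulate_branching(1.0)       # on average, neither grows nor shrinks
```

At `sigma = 1` the expected activity at each step equals the current activity, which is the "information lingers as long as possible" regime described above; below one it decays geometrically, above one it blows up.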
And that's also where they run nuclear reactors, you know, so you talk about your branching ratio. If it's greater than one, you've got a bomb. If it's less than one, it's subcritical and you don't generate energy, so you want to keep it kind of steady, percolating along like that. So that's the real simple picture of what criticality is, and a little hint at why it's important. If you want to transmit information through a neural network, it's best right at that spot. But one more little bit that I should throw in that maybe if people like complex systems, and I know you do, and you've been working on those things, if you. You've heard of the Mandelbrot set. [00:08:36] Speaker B: Yeah. [00:08:37] Speaker A: And so that basically is very similar to this idea of a branching ratio equal to one. So what you do is you take a complex number, you square it, and you add a constant, and you iterate this again and again and again. What gives you the Mandelbrot set is where that iteration doesn't grow and it doesn't shrink. And so if you zoom in and you see these pictures of little snowmen and fractal snowmen at smaller scales or bigger scales, that little boundary line is determined by something that essentially has a branching ratio of 1. It has complexity at all scales. So a very simple rule can lead to really complex outcomes. [00:09:21] Speaker B: So I guess I'll just jump right into it. I mean, it seems like criticality is everywhere. And what I want to figure out is why is it. What's different about criticality in brains relative to other systems? Right. There's fractality everywhere. Is it special to living systems? I know, Per Bak, the original sand pile stuff. So there's criticality there, which is not a complex life system. So. But, but so. And then the idea is often criticality is stated as being at a phase transition, Right? [00:09:54] Speaker A: Yes. [00:09:54] Speaker B: At a transition between states.
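The Mandelbrot construction mentioned above is the iteration z → z² + c starting from z = 0; a point c belongs to the set when the orbit stays bounded. A minimal escape-time check (the iteration cap of 200 is an arbitrary choice):

```python
def in_mandelbrot(c, max_iter=200):
    """Escape-time test: iterate z -> z*z + c from z = 0. If |z| ever
    exceeds 2, the orbit is guaranteed to diverge, so c is outside the set."""
    z = 0 + 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

# c = 0 stays at 0 forever (inside); c = 1 gives 1, 2, 5, ... (outside).
```

The "branching ratio of one" analogy lives on the boundary of the set: just inside, orbits settle down; just outside, they escape; the boundary is where growth and decay balance, which is why it shows structure at every zoom level.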
And you and others have written about perhaps there's a homeostatic set point in populations of neurons, or neurons basically trying to maintain near criticality. Whether it's exactly at criticality, which we'll get into. [00:10:14] Speaker A: Get to that later. Yeah, sure. [00:10:17] Speaker B: So I mean, maybe what is special then about brains with respect to criticality? Why do I care about brains if it's everywhere, then if it's also in brains, is it that special? I see criticality everywhere now. [00:10:33] Speaker A: Yeah, so that's an excellent question. And it is often raised by reviewers of papers and grants, and we deal with it in certain ways. So let me see if I can start with the brain business. Okay, so brains are inherently interested in information processing. And so what I talked about earlier was just this idea. If you want to transmit information through a network, you want to be near the critical point and then you'll minimize your losses. But other things that brains do would be, let's say, computations. So this is a little bit harder to measure. But some people have been playing with this wonderful idea of reservoir computing. And I don't know if you're familiar with that or not. [00:11:11] Speaker B: Liquid state. Liquid. [00:11:12] Speaker A: Yeah, liquid state machines. Another is echo state networks. So this, this whole idea is you take a system and now you ping it with some sort of stimulus and now it's going to propagate through the system. Now imagine if the system is too excited. So I ping it and now everybody just turns on. So it's an explosion that would be like a branching ratio greater than one. Or if it's a very damped system, so like it's just molasses, I drop something in here and then no waves go anywhere. That's an over damped system. That'd be a branching ratio less than 1.
Well, it turns out if you want to get optimal computing, and there's a famous paper by Legenstein and Maass, there's also Bertschinger and Natschläger, I can give you the references and you can link them to this later. But basically what these people have done, it's really beautiful work, is you take a network, a reservoir, and you want to use it as an input output mapping device. And you can get the best input output mappings, the most versatile input output mappings, if you have a reservoir that has complex dynamics. And what they do is they take their artificial networks and they're tuning them and they tune them basically to the critical point. And when they get at the critical point, it can compute the widest variety of input output mappings. [00:12:25] Speaker B: So it's a capacity issue. [00:12:28] Speaker A: Exactly, it's computational capacity. So information transmission, computational capacity. If you talk to Woody Shew, he's going to talk about dynamic range. So brains are faced with this issue of dynamic range. So we go out to, I don't know, a beach in Belize, a white sand beach in Belize, and we're getting trillions of photons. You lock yourself in a room and you're trying to go to bed and you black out your curtains. Your eye, once it's adapted, can still detect one photon. So this is many orders of magnitude. Right? So if you want a system that can basically respond to huge dynamic range. Again, you want to be at the critical point. So for sensory reasons, you want to be there. Mauro Copelli has also looked at this kind of thing. He published a paper a long time ago that was, how do you optimize dynamic range? Operate at the critical point for sensory systems. So computational capacity, information transmission, dynamic range for sensory stuff. The other thing about being at the critical point is if you're subcritical, you kind of go into an attractor.
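The dynamic-range point above (the Kinouchi–Copelli line of work) has a standard quantitative form: dynamic range is the width, in decibels, of the stimulus interval whose responses span 10% to 90% of the full response range. A sketch of that definition; the response curve in the example is fabricated for illustration, and the function assumes responses increase monotonically with stimulus:

```python
import math

def dynamic_range_db(stimuli, responses):
    """Width, in dB, of the stimulus interval whose responses run from
    10% to 90% of the response range. Assumes `responses` is monotonically
    non-decreasing in `stimuli`."""
    lo, hi = min(responses), max(responses)
    f10 = lo + 0.10 * (hi - lo)
    f90 = lo + 0.90 * (hi - lo)

    def stimulus_at(target):
        # Linear interpolation along the response curve.
        points = list(zip(stimuli, responses))
        for (s0, r0), (s1, r1) in zip(points, points[1:]):
            if r0 <= target <= r1:
                t = (target - r0) / (r1 - r0)
                return s0 + t * (s1 - s0)
        raise ValueError("target response not bracketed by the curve")

    return 10.0 * math.log10(stimulus_at(f90) / stimulus_at(f10))

# Fabricated response curve spanning three decades of stimulus intensity.
dr = dynamic_range_db([1.0, 10.0, 100.0, 1000.0], [0.0, 1.0, 2.0, 3.0])
```

The theoretical result is that this width is maximized when the network's branching ratio is tuned to one: subcritical networks ignore weak stimuli, supercritical ones saturate on them.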
You're sort of sucked into some really stable state, which might be good for memory recall. [00:13:37] Speaker B: Oh, see, I was thinking of the critical point as itself an attractor. Is that the wrong way to think of it? [00:13:43] Speaker A: Well, it could be. You could rig it up that way. But if I go back to the simple branching model, so I could stimulate some neurons and then see how activity travels through the network. Now, if I've got a network that's got varying strengths of connection, so some are stronger and some are weaker and so forth, what will happen is when I stimulate this thing, it'll go through a bunch of paths, and it will tend to pick the path that has the strongest connections most often. It'll sometimes go through the paths that have weaker connections. That's fine. If I make the network really subcritical, in other words, I make all the connections really weak, I really don't get any repeating patterns at all because the signal just dies out. If I make all the connections strong, then what happens is I do have an attractor, but it's really just one giant state. I stimulate and the whole brain goes on and everything's lit up. I have the widest variety of potential repeatable paths, above chance, when I'm at the critical point, and you can show that with computational models. I had a paper with Clay Haldeman back in 2005 where we did that with different models. And these repeating pathways, you have the widest variety of them when you're at the critical point. And that's related to memory storage. So if you want to store information with networks of this type, you're actually better off being at the critical point. So transmitting information, computing, dynamic range, storing information. [00:15:11] Speaker B: Universality is another one that you talk about in the book. [00:15:16] Speaker A: That is another interesting thing. Yeah.
Now that, I don't know, maybe we want to bracket that for a minute, because that's more of something that is good for us in terms of understanding the brain. But I don't know that the brain itself directly benefits from It. But. But it is a juicy topic. Yes. [00:15:33] Speaker B: So. And one of the interesting. Well, one of the confounding things about this notion of the critic critical brain hypothesis, Right. Is there's no one measure. So you've done a lot of work measuring what are called neuronal avalanches, which is what you were talking about when you. When activity goes from, let's say 0 to then 1 unit is active and then 2 and then 1 and then 3 and then 4 and then 0, that's a neuronal. That's defined as a neuronal avalanche. [00:16:01] Speaker A: Right. [00:16:02] Speaker B: And the way that one measure of one indicator of criticality is if the distribution of the sizes and the durations of those neuronal avalanches follow a log. Log scale free plot dynamic. [00:16:19] Speaker A: Right, exactly. [00:16:20] Speaker B: But you're in. You mentioned branching ratio, which is another indicator. Long range temporal correlations are another indicator. Why can't we just measure. Why do we have to triangulate about criticality? How do we. That's the thing is like, different analyses have occurred over time. And I told you offline, I'm talking with Woodrow Shoe as well. And you know, this is a cottage industry of creating new analyses. And so where are we with all that stuff? Why isn't it easy? [00:16:51] Speaker A: Right, Right. Why so many toothbrushes? Why can't we all just use the same toothbrush? So. Well, first of all, I would say this, that in part, there's a good reason why people want to come up with different measures. So this is scientific rigor. And then there's another reason, which we'll get to in a minute. 
But in the beginning, when I started out as an assistant professor, I was kind of naive and I thought, okay, I looked at the data that Dietmar and I got and I said, oh, it follows a power law, therefore it's critical. And I was reading Per Bak's book, How Nature Works. Very modest title. Right. [00:17:25] Speaker B: You know, do you recommend that book? But I haven't read it. Do you. Do you recommend it? I can take this out of. [00:17:32] Speaker A: I recommend it as sort of a historical perspective of the ferment at that time. And so they were explaining everything from stock market crashes to evolution extinction. [00:17:40] Speaker B: Everywhere. It's everywhere. [00:17:41] Speaker A: Yeah. To the brain, to everything. Yeah. So I think it's a useful way of framing things. But Per Bak was later criticized, and we can talk about that more, for perhaps being a little too sweeping in his claims. But anyway, this idea of measuring whether something's critical, I started out and I was swept up with Per Bak and other advocates of this. And I said, oh, as long as I got a power law, then it's critical. And I kind of said that in talks and almost said that in papers. And then I started getting a bunch of blowback. And this is while I'm an assistant professor and really severe criticism and so forth. And then I started thinking, oh my gosh, I'm not going to get tenure, I'm getting ripped to pieces and this is not going to survive. And so then I started looking at other measures. And fortunately I had really good colleagues who were talking to me about things. And so one of them was Karin Dahmen. She's a condensed matter physicist over at the University of Illinois. And she and colleagues Myers and Sethna came up with this idea of the crackling noise relation. And what that basically says is, okay, let's say you get avalanches and you get the exponent for the avalanche sizes, and then you get the exponent for the avalanche durations.
Those things can be related by a simple algebraic equation to another power law, which is the average avalanche size for a given duration. And so all these three things form power laws, and there's a simple algebraic relation called the crackling noise relation that relates them all to one another. And you should really only get this if you're close to the critical point. And so that was one of the things that kind of rescued me. So I collaborated with her and we said, oh yeah, you can get that. And our data ended up showing that. And that's a more stringent test for whether you're really at the critical point, because you can get power laws, scale free power laws, even if you're not at the critical point, from a different type of process. So let me give you a quick example of that. There's this observation that meteorite sizes or crater sizes on the moon follow a power law. You get lots of little ones, you get a decent range of medium ones and a very small number of huge ones. How do you explain that? So one way of explaining that is through something called successive fractionation. So let's say you have a stick, and now you crack that stick at some random location. And then you take the fragments and you crack them at random locations and you do this n times. Now what you're going to get is a small number of long fragments that rarely got cut, but you're going to get a lot of little ones that are like dust. Okay? And if you do this, you get a power law of lengths. You just take a stick, you could do a matlab program and just divide it, pick a random spot, then divide the fragments and keep doing it. You get a power law of lengths that isn't. I mean, you could call it critical, but it's not. It's really just sort of a. Some process, you know.
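The stick-cracking picture takes only a few lines to simulate. A sketch (the number of rounds is an arbitrary choice; this kind of repeated multiplicative splitting produces a heavy-tailed length distribution that can look power-law-like over a range, which is exactly the cautionary point — no critical tuning is involved):

```python
import random

def fractionate(length=1.0, rounds=12, seed=0):
    """Successive fractionation: start with one stick, and in each round
    crack every fragment at a uniformly random point. Returns all
    2**rounds fragment lengths."""
    rng = random.Random(seed)
    fragments = [length]
    for _ in range(rounds):
        next_fragments = []
        for frag in fragments:
            cut = rng.uniform(0.0, frag)
            next_fragments.extend([cut, frag - cut])
        fragments = next_fragments
    return fragments

pieces = fractionate()
# Total length is conserved, and tiny fragments vastly outnumber long ones.
```

A histogram of `pieces` on log-log axes shows the "lots of dust, a few long survivors" shape without any phase transition in sight, which is why a bare power-law fit is weak evidence for criticality.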
So a meteorite's coming in, it gets hit by something, it fragments into two parts, it fragments again, and then whatever ends up hitting the surface creates a crater. So there's no need to posit a phase transition or a critical, you know, tuning thing that allows you to have optimality right there. It's just something that's produced by randomness. So what you've got to do is distinguish a phase transition power law from a power law that could come about some other way. And so this crackling noise relation that we just talked about, that's something that you're not going to get with these other mechanisms. [00:21:14] Speaker B: What do you get with. What does the crackling noise look like? If I'm not near criticality or. Yeah, if I'm not near criticality. [00:21:20] Speaker A: Yeah, yeah. Okay, so what'll happen? And this is something that Keith Hengen and colleagues have looked at. So he's at Washington University, St. Louis. They're looking at homeostasis of criticality. And they basically say, let's plug in the exponents for the size and the duration and see what it predicts the other exponent should be, and then let's empirically measure the other exponent and we'll see the difference between them. He calls that the distance to criticality coefficient. So the distance to criticality coefficient, the DCC, tells you how far away you are. And so you're right, you aren't always going to be at criticality. And you can measure how far away you are from criticality using that. [00:21:58] Speaker B: But isn't criticality itself a. An infinitesimally. Like, it's an abstract notion that you can't be. You can never be exactly at criticality, or else we would all explode or something. [00:22:08] Speaker A: Right, exactly. You're correct. Yeah, you're correct. So it's really sort of a concept that only works for extremely large numbers. And it works in the thermodynamic limit where something's infinite.
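The bookkeeping behind that coefficient reduces to one line. Given fitted exponents τ for avalanche sizes (P(S) ~ S^−τ) and τ_t for durations (P(T) ~ T^−τ_t), the crackling-noise relation predicts the exponent γ of mean size versus duration, ⟨S⟩(T) ~ T^γ, as γ = (τ_t − 1)/(τ − 1); the DCC is the gap between that prediction and the independently fitted γ. A sketch:

```python
def distance_to_criticality(tau_size, tau_dur, gamma_fit):
    """DCC in the style of the homeostasis-of-criticality work: the
    crackling-noise relation predicts gamma = (tau_dur - 1)/(tau_size - 1)
    for <S>(T) ~ T**gamma, and DCC is |predicted - fitted|. Near
    criticality the DCC should be close to zero."""
    gamma_pred = (tau_dur - 1.0) / (tau_size - 1.0)
    return abs(gamma_pred - gamma_fit)

# Mean-field branching-process exponents (tau = 3/2, tau_t = 2) predict gamma = 2.
```

The appeal of the DCC is that it turns "are we critical?" into a graded quantity, which is what makes the perturb-and-recover experiments described next interpretable.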
Now you can approach it, and so you can get a power law over some range. Maybe you get it over two orders of magnitude, or if you're really good. There's some work that Michael Breakspear and others have done on sort of seizure type events in infants. So if infants are exposed to hypoxia, so they have episodes where they don't get enough oxygen when they're coming out of the womb, they can have these bursts and you can capture these things with EEG caps and these bursts. If I remember right, they have like five orders of magnitude in terms of fractal scaling. They'll see a burst and they'll see another one bigger, another one much bigger, five orders of magnitude. And so you get a power law of five orders of magnitude, but it's not infinite. Right. So in the ideal, right, you're only going to get something that's a perfect power law spanning over all orders of magnitude if you're at the perfect critical point. We're never at the critical point. Right. We're never going to be there for lots of reasons. We're finite size. But the other thing is I'm constantly being driven by sensory inputs and that will perturb the system away from criticality, but homeostasis might bring it back. [00:23:29] Speaker B: So let me, let me. Now I'm being selfish. What I want to do is share my screen and show you all my data here. But that's okay. [00:23:35] Speaker A: We can. [00:23:36] Speaker B: It's a podcast, but so, you know, I measure. So I'm recording in mouse motor cortex and basal ganglion while they're doing various things, and I get, and I get exponents for the size and duration distributions and I get the crackling noise exponent. And they're not, they are logarithmic, Right. But they're slightly away from criticality. What, you know, 1.2, whatever that range is. Like, it's kind of frustrating because, I don't know, like, is 1.6 not critical? Does it have to be, you know, what is that range and how close to critical am I? Does that matter? 
Or is it how, like, the slope of the line? Does that actually matter? You know, as long as it's a straight line. Right. As long as it fits. But anyway, this is so what I wanted to ask you about. I think I didn't articulate this well when I was speaking with Woody this last week. So we can't ever be at perfect criticality, because that would be across all scales, right? You'd have infinite dynamic range. Does it make sense to ask if a system is in criticality within the dynamic range for which it is functionally operative? Right. So our brains operate at a certain speed, our behavior is at a certain mesoscopic temporal scale. And so it would make sense then to have a fairly narrow dynamic range in which, let's say, a population of neurons is acting in a critical manner relative to the information that it's sending to other parts of the brain or to the spine or. Or otherwise. Does that make sense? [00:25:16] Speaker A: Oh, it totally makes sense to me. So if criticality is a place where healthy operation occurs, then you'd expect the brain to on average get near it, but not always be exactly at it. In the same sense that my heart rate has a certain optimal range, but sometimes it's going to go up and sometimes it's going to go down. But I have mechanisms that kind of bring me back into this range, or my blood pressure or my pH or anything like that. These things are within some band that we consider healthy. And that's what I think you can see in beautiful experiments by Keith Hengen and his colleagues. They basically take animals and they perturb them. So one of the examples they gave is they close one of the eyes and they record from the contralateral visual cortex. They show that right after this thing being closed, it goes subcritical. And then they look at that crackling noise relation, and he uses the distance to criticality coefficient.
And then over a little bit of time, I think it's like slightly more than 24 hours, basically the brain restores itself to being critical even though it's not receiving any inputs anymore. [00:26:22] Speaker B: And I think, and you can correct me, the criticality markers recover first. So when they suture one eye closed, the firing rates in the contralateral side of the visual cortex go down also, and the metric of criticality resurfaces before the firing rates catch up, which is really cool. [00:26:45] Speaker A: Yeah, that is pretty cool. And we're still trying to understand that. But I mean, one of the things it suggests is that the distance to criticality is at least as important as firing rate. Right? I mean, so the brain gets that in line really quickly before firing rate comes back into its zone. So, yeah, we don't know fully what that is, but I saw that paper and I reviewed it and I wrote the intro piece for it in Neuron, 2019, Ma et al. And I love that paper. I just thought, wow, he's really nailed it. He's really shown that if you perturb, the system will come back. And they've got much better stuff since then. I mean, I don't know if you've seen this, but they were looking at a tauopathy model of Alzheimer's, and this also appeared in Neuron, I think maybe last year. But what happened is they have these mice and they express tauopathy and that causes them to lose their memories. Right. And so little mice, they only live two years. And what they're doing is they're looking at cell to cell correlations and they're looking at firing rates, and they're also looking at distance to criticality. And what they find is the distance to criticality, that metric, predicts better than any other low level metric looking at single cells when the animal will go bad, how bad the symptoms will be. Right. So it's a better biomarker than anything you.
It's an emergent biomarker, a population signal, that is a better indicator than anything you look at in terms of single cell firing rate or pairwise correlations. So I think it's very useful for potential biomarkers in humans. And you can record things with EEGs on humans. Right. Or MEG, right? [00:28:29] Speaker B: You originally got excited about this idea and you're an assistant professor, and then you start getting the blowback. I mean, how has your. And it seems like these days you're still writing sort of pieces, defending. Right? [00:28:43] Speaker A: I'm still in the fetal position, yeah. [00:28:45] Speaker B: Oh, is that okay? So where's your company? [00:28:47] Speaker A: No, not really. Not really. But yeah, I mean, I think this is part of the scientific process is to have skeptics and to throw stuff at you. And I can tell you a story about Alain Destexhe. He is, he is the gadfly. He's constantly biting me. You know, Socrates called himself a gadfly of Athens. He was asking them questions, you know, and I think that was good for them and I think it's good for me too. And he and I are friends and, you know, how bad can it be if he invites me to Paris twice to give talks and then takes me out to dinner? Right. So he writes things that are. [00:29:19] Speaker B: Yeah, but he'll knife you right when he's doing it, right? [00:29:22] Speaker A: I don't know. I don't know. He and Jonathan Touboul have been extremely nice. Like, they would send me advance copies of what they were going to put out and they'd say, hey, look, I want your feedback on this. I mean, these guys are scientific gentlemen. And so they'd send it to me and I'd say, well, I disagree with this, blah, blah, blah, blah, blah. And they'd incorporate some of my comments, but not all of them. And then I'd write a rebuttal piece or something like that. So I think that the field has benefited from the blowback. I have benefited from the blowback.
I have grown in my appreciation for the subtleties and the nuances of criticality through the things that Alain and Jonathan have challenged me with, and others. There have been others out there as well that have said, you know, hey, how do I know this isn't that? So that in part led to this proliferation of measures. How do I really know if I'm critical or not? And that's still going on, even now. [00:30:17] Speaker B: Yeah. One of the issues that I've thought about: it seems like, because there are a lot of different measures now, if I want to find criticality in my data, I can find it by hook or by crook. And that worries me. [00:30:36] Speaker A: I would disagree with that. And here's one of the things that we did very early on. For example, we'd take the power law distribution, and then we'd shuffle the data in time and show that the shuffled data would not produce a power law distribution. I think you should always have some kind of control measure. If you shuffle your data in time, your criticality measures will at the very least drop in quality. Your distance to criticality will go up. And that's something that Keith Hengen does. That's something we do. Woody does that. A lot of people do that, and they're constantly looking at it. So I think you have to be rigorous about it. Now, if you have too small a data set and you shuffle and then it looks kind of the same, then I would say you have too small a data set; you're just not really in the game to play criticality. If it's looking like there's no difference between the actual data and the shuffle, then let's not talk about this. But you have to have numbers large enough to play this. And that's why, as people record more and more neurons, it's more of an option, I think, to look at. [00:31:42] Speaker B: Where are we right now in the critical brain hypothesis? Are we in the heyday now, or are we at maximum. [00:31:50] Speaker A: Criticism peak? It's passed, right?
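To make the shuffle control concrete, here is a minimal sketch (my own toy construction, not the lab's actual pipeline). It simulates a driven branching process, extracts avalanche sizes as runs of consecutive nonzero time bins, and shows that shuffling the bins in time destroys the temporal structure: the biggest avalanches vanish and the mean avalanche size drops, exactly the kind of control described above. The branching ratio is set slightly below 1 so the constantly driven process stays stationary.

```python
import numpy as np

def branching_trace(n_bins, m=0.95, h=0.05, rng=None):
    """Toy driven branching process: each active unit in bin t spawns
    Poisson(m) descendants in bin t+1, plus Poisson(h) spontaneous
    drive.  Near m = 1 activity comes in correlated bursts."""
    rng = rng or np.random.default_rng(0)
    a = np.zeros(n_bins, dtype=int)
    for t in range(1, n_bins):
        a[t] = rng.poisson(m * a[t - 1]) + rng.poisson(h)
    return a

def avalanche_sizes(counts):
    """An avalanche = a run of consecutive nonzero bins; its size is
    the total number of events in the run (standard binning method)."""
    sizes, current = [], 0
    for c in counts:
        if c > 0:
            current += c
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return np.array(sizes)

rng = np.random.default_rng(1)
counts = branching_trace(50_000, rng=rng)

real = avalanche_sizes(counts)                   # real temporal structure
ctrl = avalanche_sizes(rng.permutation(counts))  # time-shuffled control

# Shuffling keeps the same event total but breaks bins apart in time,
# so the large-avalanche tail collapses.
print(real.max(), ctrl.max(), real.mean(), ctrl.mean())
```

The shuffle preserves the total number of events, so any drop in avalanche statistics is attributable purely to destroyed temporal correlations.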
[00:31:53] Speaker B: You think it's passed? [00:31:55] Speaker A: I don't know. I don't know. No, no. I think here's the thing. So a lot of the work that's happened roughly up to this time has been, I would say, more fundamental science, where you're looking at the scientific question of: is it viable to say the brain might operate near criticality? And what does that mean scientifically? How do we measure it? How do we verify it? How do we make sure that we're not talking rot? And then once it gets accepted to at least some degree, then medical people start testing it to see if it's related to neuropathologies. And so in 2022, there was a really nice review by Vincent Zimmern in Frontiers. And what he did is he basically looked at the medical implications of the critical brain hypothesis. He said, okay, it could be relevant for schizophrenia, for depression, for Parkinson's, for epilepsy. And he just went down a whole list of things. And that paper, even though it was published in '22, at least on Google Scholar, has something like almost 180 citations now. So I think what's happening is clinical people are picking this up and they're beginning to say, okay, how can I find out if, when somebody has a bout of depression, they go subcritical? How can I find out if someone showing signs of schizophrenia is in some way departing from criticality? I don't know. I'm not a clinician, but I'm delighted to see that people are picking this up and starting to show interest in it. So that might be the second wave of this stuff. It would be much less focused on the fundamental science; in some sense, when they start using it, at least in their minds, they think it's settled, that it's good enough to at least try as a hypothesis for diagnosing things. [00:33:35] Speaker B: So I'm trying to think also about causality and criticality. Like, is criticality causal or is it epiphenomenal?
How do we think about that in terms of function? [00:33:47] Speaker A: Yeah, yeah, okay. So it could be epiphenomenal. That's possible, that it's just something that happens. It's sort of like noise that comes out of a... I don't know, a radio. If you were to look at just one resistor in it and measure the voltages, you'd say it's just noise. It could be a byproduct of what the brain is really doing. Now, if that were the case, though, then you wouldn't expect it to show such nice homeostasis, where when you perturb it, it comes back. If it's an epiphenomenon, why does the brain care about it more than firing rate? Right. So I would offer those as counterpoints. Now, the truth is we don't have enough data to answer that question exactly. It's a valid question. I think we need more data. I think we need causal interventions. If I perturb criticality, does it always come back? What Keith and his colleagues did is what I would call a negative perturbation: you take it from critical to subcritical, and it comes back. But what if I do a positive perturbation? Somehow I throw it into a supercritical state. Does it fight its way back down? Is it symmetric both ways? I think that's a little bit harder to test. [00:34:56] Speaker B: Why is that harder to test? [00:34:57] Speaker A: Well, because it's a little more unnatural. At least the first way that occurs to my mind is you put a bunch of bicuculline on the brain, and so now you knock out the inhibition and you're going to have seizures. So if you have a bunch of seizures, does the brain fight its way back from seizures, coming down to. [00:35:13] Speaker B: But it has to in order to survive, right? I mean, you can't stay in that state. So it has to go somewhere. [00:35:20] Speaker A: Right. Right. Maybe it does, but maybe the mechanisms of coming back down are very different from the mechanisms of coming back up. Right.
So they could be different. But I agree with you. Right. If you're going to survive, you've got to somehow overcome seizures. So I don't know how it does that. And there might be different time constants on the feedback coming from the positive side and coming from the negative side. So there are interesting questions, and you could even come up with a generic model. Phenomenologically, is it better to come back fast from below, or from above? Like, here is seizure territory: if this is critical here and you're up here, that's seizure territory. So you might expect that's the more urgent thing you've got to deal with; come hell or high water, I'm going to get that thing back down. Whereas from below, you don't want to come back up too fast. Maybe you'll screw up your wiring and you'll erase memories or something. And so you do it over 48 hours. I don't know. Right. These are still open questions. They're prompted by the idea that maybe the brain is homeostatically tuned to operate near criticality. And then maybe that's a golden age of research, where people are taking it; it's no longer kind of under scrutiny, like, is this a real idea or not? But then people are using it as an operating, working hypothesis to investigate clinical things. [00:36:38] Speaker B: I mean, I'm not supposed to be biased as a scientist, but I love the idea of the homeostatic set point being criticality, essentially. But then how does that set point get set, and how does the brain achieve it? That's the big question, right? [00:36:53] Speaker A: Yeah, yeah. So Keith has some ideas about that. You should have him on this podcast as well. What I can probably say without giving too much away is that they've done a lot of really high electrode count recordings in mice, and they have also done some really sophisticated models working with people from the Allen Institute.
And I think they'll be coming out with some statements about how that works. Even their 2019 paper, the Ma et al. paper in Neuron, had some decent models. And I should say Ralf Wessel is on some of this stuff as well. He's another physics guy, out at WashU in St. Louis. And one of the things that they found was, they created a number of different classes of models, and the classes of models that matched the data best were ones that had inhibitory neurons basically driving the recovery. And so that's a big clue. Okay, so you could think that the excitatory neurons drive the recovery, but you. [00:37:51] Speaker B: Mean by, like, settling down or speeding up or whatever, like, as an inherent process? I mean, the excitation-inhibition balance seems to be a critical. Forgive the term here, but it's always a critical parameter in models. But what you're saying is it's actually the inhibitory neurons that allow the system to get back to criticality. [00:38:13] Speaker A: Right, right. So that's at least the latest that I know about it. I'm not an expert in this, but I think it's fascinating. And, yeah, they will probably come out with some stuff that indicates exactly what type of inhibitory neurons and what the dynamics of this are. And I'm sure they've got a model of it. But that's a really great question: how does it come back? And is it working the same way above criticality and below criticality? Do those inhibitory neurons only pull up, and do you have different neurons that push down? I don't know. There could be distinct circuits for that. [00:38:48] Speaker B: That's one of the fun things: there are so many unanswered questions, so much work to do. It's ripe, a lot of fruit to pick. Yeah. As an aside, I mean, we're talking about brains, but I want to expand the conversation.
For example, if I record my mouse wandering around in a box, I can look at the kinematics, the behavioral data, some of the tracked positions, and some of the metrics we use have these long-range temporal correlations. So the behavior itself is scale free, parts of it anyway, in addition to the neuronal activity. Is all biology critical? Okay, stepping way back: I'm part of a discussion group and we've been talking about evolution lately. And, you know, I think people talk about evolution as if it's like a force. Right. But really it's a description of what happens, what has to happen for things to survive. Is it right to think of criticality in that same way, that to be alive, you have to be at criticality just to be a living organism? [00:40:03] Speaker A: Well, that's a really interesting question. Okay, so I don't know enough to speak authoritatively about that. I certainly have some opinions; I'll offer them in a minute. But there's a really nice review by Mora and Bialek, and I think the title of it is something like "Are biological systems poised at criticality?" And they look at things like the immune system. There's work by Magnasco et al. on the hearing system. It's poised at a critical point, at a Hopf bifurcation. So it's amplifying things, but not too much. It's just hovering at that spot. They look also at the neuronal avalanche stuff. They also look at flocks of starlings, or I guess I should say murmurations of starlings. Or a murder of crows. Crows don't act like that, right? There's an English word for each type of flock of birds, and I've already exhausted my knowledge there. But anyway, they show that there are lots of other things too, like swarms of bacteria and schools of fish. A lot of these things seem to organize at a point where they're operating near a critical point.
Why is that? It might be for information processing purposes. So let's say you're in a murmuration of starlings and some hawk is coming down. If you had maximum susceptibility, then, let's say, a bunch of these starlings notice that the hawk is diving, they turn, and the turn propagates through the entire flock very quickly; that propagation is best at the critical point, and then they break up. So for survival, it might be useful. So if information processing is important for evolution, then being near the critical point would be something that evolution could push toward in multiple ways. And the way I like to think about it is it might be something like wings. You know, wings appear in dragonflies, bats, birds, of course, and even some snakes and lizards. Right? Some lizards have certain ways of flying, and some snakes, I don't know, in Southeast Asia, they make their bodies really flat and they kind of fall from trees in a certain way. Flying squirrels, too. So what the heck is going on? Well, I think there's some evolutionary pressure to say, okay, let's use the Navier-Stokes equations to, you know, gain some distance here, and we can fly. We can fly if we have the following things. And so you get some sort of surface that catches the air and maybe creates lower pressure on top, and now you can fly. And so evolution independently arrives at this in multiple organisms, at multiple epochs in evolution, because it's just there in the. [00:42:39] Speaker B: Laws of physics. It's the best thing to survive. Yeah, it works. [00:42:42] Speaker A: It works. And so if we were to take this as an analogy, you might think that, okay, the heart of criticality lies in the laws of physics. There's something about setting up a brain or a flock of animals or the ear or a swarm of bacteria at this point where information that comes into the system is not extinguished. It's preserved for as long as possible. It lingers.
And you don't over-amplify the response, and you don't over-damp the response. You just let the information kind of echo within the system for as long as it can before it dies out with a power law tail. And so new information that comes in can be jointly processed with what's already there, I don't know. But it could have something to do with the laws of physics. That's my generic hunch. [00:43:33] Speaker B: It has to have something to do with the laws of physics. [00:43:35] Speaker A: Well, yeah, in that generic sense. But okay, let me put it a little bit more forcefully; you're challenging me to say a little bit more than that. The laws of physics have very interesting properties, like symmetry. If I do an experiment in this orientation and now I rotate it, it's the same. If I do it here and I translate it over here, it's the same. If I do it yesterday or today or tomorrow, it's the same. These symmetries are related to very deep things in physics through something called Noether's theorem, together with the principle that we always minimize the action. And the action, in the typical sense, is the difference between the kinetic energy and the potential energy, integrated over time; you always want to minimize that. [00:44:31] Speaker B: Yeah, that's a weird law. I've always found that strange. [00:44:34] Speaker A: Yeah, it is weird. And you know what's funny is I'm talking to all these people in my department and others, and I say, why is the action always minimized? Why is that? And they go, oh, we don't know why, we just know it always is. Okay, so that's something in the laws of physics. There's a deep connection between minimizing the action, which we find works, strangely, for classical mechanics, quantum mechanics, and general relativity. [00:44:58] Speaker B: Another way of saying that is you always take the shortest path, right? [00:45:01] Speaker A: Correct. Yeah.
Okay. So for people listening to the podcast who may not know what the minimization of action is: it's always taking the minimum path. So this minimization of action is a fundamental observation of physical law. It leads to these symmetries. These symmetries lead to conservation. Conservation is like the branching ratio: if the branching ratio is one, things are conserved. If the branching ratio is greater than one or less than one, things are not conserved. So if you have a system that's conservative and it's wired up in the right way, some population with links between all these guys, it'll be critical. Okay. But it obeys this idea of conservation, which is linked to minimization of the action and linked to the symmetries that we find. There's an interesting paper by Lin and Tegmark, and the title of it is something like "Why does deep and cheap learning work so well?" And this is related to deep learning. Why the heck can we get away with just firing up a network with all these layers and just training it on a bunch of examples? And now it seems to have the concept. It hasn't just memorized everything it's seen; it now has the concept of what a cat's face is. And it can see new versions of the cat's face that it's never been exposed to before and correctly identify them. So it really seems to be capturing the concept, almost like Plato's forms. Right? You know, what is the essence of cat-ness? It gets it. Okay, why does it do that? And one of the things that they argue there is that because there are these symmetries in nature, because things are rotationally invariant, translationally invariant, there are all these symmetries, when you want to write an expression to explain the dynamics of a system, the equation that you write is very compact. And so they say, oh, the Hamiltonians are very, very compact. They're small.
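The "branching ratio of one conserves activity" point above has a one-line mean-field form: each event spawns on average m descendants, so expected activity after t generations is a0 * m**t. A tiny sketch (my own illustration, not from any paper) makes the three regimes explicit:

```python
def mean_activity_after(m, steps, a0=100.0):
    """Expected activity of a branching process after `steps`
    generations: each event spawns on average m descendants,
    so E[a_t] = a0 * m**t.  m is the branching ratio."""
    return a0 * m ** steps

# Branching ratio below 1: activity decays away (subcritical).
# Exactly 1: activity is conserved on average (critical).
# Above 1: activity blows up (supercritical).
sub = mean_activity_after(0.9, 50)    # well under one event left
crit = mean_activity_after(1.0, 50)   # still 100
sup = mean_activity_after(1.1, 50)    # over ten thousand
```

Only at m = 1 does the expected activity neither die out nor explode, which is the conservation property being linked to symmetry here.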
And we know this is true. So if you want to explain the whole world, from quantum mechanics to general relativity and classical mechanics, you could write three equations on a T-shirt: Schrödinger's equation, general relativity, and then, you know, minimize the action. You cover almost everything there is to cover. Right. Why is the world so compact in that way? It's related to conservation and symmetries, and that allows us to learn things. But my hunch is that it's related to criticality somehow. [00:47:35] Speaker B: Well, okay, so I had David Krakauer on the podcast a few episodes ago, and he has written this book. And I asked him, because he talks about broken symmetries a lot in the book. [00:47:46] Speaker A: Sure. [00:47:46] Speaker B: And I read the book and I'm like, yeah, broken symmetries. And I realized I didn't know what he was talking about. Right. It's one of those things where you read the phrase over and over and then you realize, oh, wait, I actually don't know what he means. But what you're talking about with the symmetries are idealized physical laws. [00:47:59] Speaker A: Yes. [00:47:59] Speaker B: But anything interesting that happens is a broken symmetry. Right. So there's this conundrum. [00:48:06] Speaker A: Yeah, yeah. Okay. Sure. I'm glad you brought this up. All right, so, right: if everything is perfectly symmetrical, often it's totally boring. So, for example, at the Big Bang, you get a certain amount of antimatter and matter created. If that's totally symmetric, if it actually is perfectly balanced, they cancel; we have nothing. Okay. The brain: we talked about E-I balance. You've got excitation and you've got inhibition. If they are perfectly balanced at all moments in time, we have no activity. The thing that typically happens is there's a pulse of excitation, and it's followed by a ring of inhibition afterwards.
So there's a slight asymmetry, and that asymmetry allows activity to go through. The same is true with anything at a phase transition point. So at a phase transition point, typically what happens is... let's say you have a bunch of molecules in a volume here, and they're just bouncing around. Okay, so they're a gas. Basically, for every little voxel in there, there's an equal probability that there's a molecule in it. So in some sense it's isotropic, it's symmetric. But now if I slowly condense this thing into a fluid, what you get is fluid appearing on the bottom. And now there's a much higher probability that voxels on the bottom are going to be occupied than the ones on top. You've broken the symmetry and you've reduced. [00:49:33] Speaker B: The entropy as well. Correct? [00:49:34] Speaker A: Yes, yeah, yes. And this is related. So what you do is you break the symmetry. But right at that point where you're breaking the symmetry, at the gas-liquid phase transition, that's where you get power laws, that's where you get all the interesting stuff. Right at the point where you're breaking symmetry. So yeah, I'm not going to contradict Mr. Krakauer, head of the Santa Fe Institute. I know him, and we've met before, and he invited me out once and it was great. But I think that he is absolutely right. You've got to break the symmetry. He's totally right. Now, the symmetries, though, are fundamental and interesting, and you need them in order to get close to this phase transition point. [00:50:15] Speaker B: So we are far-from-equilibrium thermodynamic systems, let's say just as human beings, right? Is the right way to think about it that we are shooting for symmetry, but we're far from thermodynamic equilibrium, so we're never going to get there, and it's just a constant battle to be at that homeostatic set point? Man, that was a mouthful.
[00:50:36] Speaker A: Yeah, yeah. So there are a lot of ideas in there, and people are grappling with them. These are the things that we're all talking about. So first of all, let me just make one distinction. There's a difference between what I would call an equilibrium model and a non-equilibrium model. An equilibrium model is a model where, let's say, you've got some system and now you just let it cool down. You're not driving it, you're not adding anything, you're not heating it up; you're just letting it cool down. And that type of model is very useful, and it can approximate many things in the brain. For example, Bill Bialek and Elad Schneidman have used maximum entropy models to map the Ising model onto the brain. And you can get a lot of traction with things like that. It's really wonderful work, and it even points toward criticality. But the brain, as I'm sure they would agree, is fundamentally a non-equilibrium system in the following sense: it's constantly receiving inputs. So this would be like your system, but you're not letting it cool down; now you're heating it up. You've kind of put it on a hot plate, and you're turning this little hot plate up and driving the system. And when you drive it, you can get boiling, or you can get convection roll cells forming in the fluid. There are all kinds of structures that can appear when you drive it that don't appear if you don't drive it. Okay, so you've got equilibrium models and non-equilibrium models. You can approximate the brain with an equilibrium model; the Hopfield network is one of those things, and maximum entropy models are too. But a non-equilibrium model is one that's being driven. And I think if you take a look at a little patch of cortex, it's being driven. [00:52:14] Speaker B: Also by itself, right? [00:52:16] Speaker A: Correct. Yeah, both things.
It's being driven by itself, and it's being driven by inputs from other areas. So you need a non-equilibrium model. And if you have a non-equilibrium model, then it gets more complicated, because people have made great progress in equilibrium statistical mechanics, but non-equilibrium statistical mechanics is not a settled discipline by any stretch. [00:52:42] Speaker B: Why is that? Just the complexity of the causality? [00:52:44] Speaker A: It's very, very hard, right? It's super hard. Now, we have attempted to say, okay, look, for these ideas of criticality at a phase transition point, we use a branching model; that's a non-equilibrium model. But even more than that, it's being driven. Any patch of brain is getting driven. So we've gotten into this idea called quasicriticality, and that's essentially trying to accommodate this idea that a given patch of cortex is driving itself, as you point out, but it's also being driven by external inputs, and that's going to push it away from the critical point. [00:53:17] Speaker B: What's the difference between quasi- and near-criticality? [00:53:21] Speaker A: Yeah, so you could have a classical system that's not being driven from outside; it's just near criticality because maybe it's not at the critical temperature yet. Okay, so I could have some system... let's take water. You take water and you put it at the right pressure, and now I bring it to some temperature. I can bring it close to the critical point. That's an equilibrium system. I'm not putting anything in. I've got this little cell with water in there, a mixture of gas and liquid. And now I can change the temperature, and I can either bring it right to the critical point or I can be slightly away from the critical point. That's an equilibrium system that is slightly away from critical. Now imagine I had that system and I put a little hole in it, and now I can put water droplets in.
If I take that system and put it at exactly the critical point and now start squirting little water droplets in, that would be what we're trying to describe with quasicriticality. So we're saying that through homeostasis and everything, the network brings itself as close to critical as possible, but it's got a little leak and water is dripping in. And so now it's not really at the critical point anymore. It's close to the critical point, but the reason it's pushed off is because I'm squirting water in there. It's a non-equilibrium system. And so now things are happening that aren't quite right. It's definitely not perfectly symmetric; we're breaking symmetry. And that's where interesting stuff happens. But the picture of it being apart from criticality because it's driven is slightly different from the picture of it being apart from criticality because it's not at the critical point yet, because the temperature is not there. [00:55:02] Speaker B: I see. Okay, so this is going to be a bit of a left turn here, but we're going to go for it. Why the cortex? Why not the whole brain? [00:55:12] Speaker A: Yeah, that's an excellent question. So first of all, I think that other parts of the brain could be critical; subcortical parts could be critical. And there's a really nice paper out by Miguel Muñoz and colleagues, and it was in Proceedings of the National Academy of Sciences, maybe last year, something like that. And what they did is they looked at Neuropixels recordings from striatum, amygdala. [00:55:40] Speaker B: I'm afraid you're going to review my paper, because I have striatum recordings, and this could be like a huge conflict of interest that I'm on this podcast now. [00:55:48] Speaker A: Well, anyway, they record from all these different regions, and then they have a way, not using avalanches, but using principal components and things, to look at these different regions.
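The "drive pushes you off the critical point" intuition has a compact mean-field form. This is a toy of my own, not the actual quasicriticality model from the literature: iterate E[a_{t+1}] = m*E[a_t] + h for a branching process with branching ratio m and external drive h. Sitting at exactly m = 1 while being driven makes expected activity grow without bound, so a driven network has to settle slightly subcritical, near but not at the critical point.

```python
import numpy as np

def expected_activity(m, h, steps, a0=0.0):
    """Iterate the mean-field update E[a_{t+1}] = m*E[a_t] + h for a
    branching process with branching ratio m and external drive h."""
    a, history = a0, []
    for _ in range(steps):
        a = m * a + h
        history.append(a)
    return np.array(history)

# Driven at exactly m = 1: expected activity grows without bound
# (it gains h every step, like the leaking water droplets).
at_critical = expected_activity(1.0, 0.5, 1000)

# Driven slightly subcritical: settles at the fixed point h/(1-m) = 25.
subcritical = expected_activity(0.98, 0.5, 1000)
```

The fixed point h/(1-m) only exists for m < 1, which is one way to see why external input forces a homeostatic system to park itself just below the critical point rather than on it.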
And what they say is that a lot of these regions are really close to the critical point. And so there might be an argument for that. I don't know enough about it to rule one way or another; I'm just commenting to say that some people would argue that many parts of the brain are. Okay. Now here's why I think it might not be that. There's nice work by Viola Priesemann and colleagues on neuromorphic computing. And what they do is they take this sort of neural-like chip and they train it on many different tasks. They train it on complicated tasks, and they train it on simple tasks. And then they ask, after training, is it close to the critical point? And what they find is that for really simple tasks, it's not close to the critical point. But for more complicated tasks, it is closer to the critical point. [00:56:49] Speaker B: That's because you need the capacity to accomplish a more complicated task. [00:56:54] Speaker A: That's the thought. Right. Okay, so let's say I've got a specialized circuit, and let's say all I need to do is, I don't know, keep a rhythm. So there's this pre-Bötzinger complex in the brainstem. It just kind of keeps a rhythm; it's just got to keep you breathing at a certain level, and maybe it gets inputs about your oxygen, breathe faster or breathe slower. That thing might be... I mean, maybe I'm wrong, but I have no reason to believe it needs to be critical. Right. It just needs to be reliable. [00:57:23] Speaker B: Yeah. You don't want your heartbeat to be critical, right? [00:57:26] Speaker A: Maybe not. Now, on the other hand, heartbeats do have long-range temporal correlations, so there might be something going on. So you can't escape it. It's everywhere. But I wouldn't expect that the pre-Bötzinger complex has to be critical in the same sense that the cortex does.
The reason why I think the cortex is probably most likely to be critical is because it has to simultaneously optimize multiple tasks. It's got to be good at transmitting information, it's got to be good at storing information, it's got to be good at dynamic range, it's got to be good at computing, all these things at the same time. And for a given patch of cortex, before you're born and exposed to the environment, you don't know what kind of associations you're going to learn. You might learn that red means stop and green means go, but if you live in another country, maybe it's different. Or maybe you learn that in America we drive on the right side of the road, but in England you drive on the left side. Right. So there are a bunch of arbitrary associations that we all have to learn to get along in the world, and that cannot be pre-wired into the cortex. So the cortex has to be generic. It has to be a generic computational unit that's largely specified by its inputs. And if you have something that's not highly specialized but needs to be ready to do anything decently well, a jack of all trades, then being critical makes sense. [00:58:50] Speaker B: Okay, so I'm born. We've been talking mostly about neuronal activity, but there's criticality in structure as well. And you were talking about being born: you're massively connected when you're born, and then there's a pruning that takes place. I bet you know the answer to this. Does that pruning get us to, like, an optimal small-world network state? And I know small-world networks are related to criticality, but is it the pruning itself? Do you know? Do we end up with a fractal, critical structure? [00:59:27] Speaker A: This is something that has been investigated in primary cultures. So there's a paper from, I think, 2010 by Tetzlaff et al., and Ulrich Egert is on that. And one of the things that they do there is they grow these cultures.
They basically take neurons from, let's say, rat hippocampus or cortex, I can't remember which, and they enzymatically dissociate them, so they're floating in a solution, and then they pour it down over a 60-electrode array. And they record these guys from right after making the culture to four weeks later, something like that. And what they find is that generally the picture is it goes through exuberant connection growth, and then some of these connections are pruned. [01:00:09] Speaker B: But they're not tasking the culture with anything. It's just letting it grow. [01:00:12] Speaker A: They're just letting it spontaneously be active and listen to itself. And over that period, it does eventually approach criticality. If I remember the cycle: first it goes supercritical, which would make sense if you're over-connected. Then it goes subcritical, which would make sense if you've pruned a lot. And then it gradually approaches criticality from below. If you start learning and you strengthen those connections that have not been pruned, you get closer and closer to that. So that's an interesting thing. And there have been models to look at that kind of thing. So a really good question would be, does that happen in deep learning? Right. You train a really deep network and you find out. Definitely they prune them, definitely the weights change. But does it recapitulate the same thing that Tetzlaff et al. found back in 2010? Does it basically act like it's supercritical at the beginning, then go subcritical, and then gradually approach from below? I don't know; that's an open question. [01:01:09] Speaker B: Okay. Since you mentioned AI... I mean, this is ostensibly a podcast about neuroscience and AI, and I've only read about criticality in biological organisms in my studies thus far. But AI, you can turn it off, turn the computer off, come back the next day, turn it on.
There's no necessity of dynamics, of ongoing dynamics, right? And so in that sense, criticality is not part of that story. I suppose if you run the model, maybe. Do you know, is there work in artificial intelligence looking at criticality and whether tuning to criticality improves model performance, for example, or generalizability? [01:01:48] Speaker A: There is a little bit of work on that. [01:01:50] Speaker B: Okay. [01:01:50] Speaker A: But it seems that they are unaware of the work in neuroscience, and so, in a sense, they're independently discovering it. [01:01:57] Speaker B: That's good. [01:01:58] Speaker A: That's exciting. Yeah, yeah. [01:02:00] Speaker B: So AI, always unaware of neuroscience. Always unaware. [01:02:04] Speaker A: Yeah, well, whatever. But, you know, there are a lot of smart people working in that area, and then they eventually hit on this. It may be that that's kind of where you want to poise your networks for optimal training. In fact, I think there were two papers at NeurIPS that a colleague of mine showed me, and I'm blanking on the guy's name, but I could probably send you the paper titles after this. And they were looking at something kind of related to that, which is: you have an input vector, all these x's, you know, ones and zeros. You take the length of that input vector, and as it goes through the layers, you want to look at the length of that vector. And these networks will learn best if the input vector length does not grow and does not shrink as it passes through. It's preserving it. [01:02:57] Speaker B: It's conserving. A branching ratio of 1. [01:03:00] Speaker A: A branching ratio of 1, or a conservation principle, yeah, something like that. So that's very interesting.
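The norm-preservation idea John describes can be sketched numerically. This is my own minimal illustration, not code from the papers he mentions: random orthogonal weight matrices scaled by a gain play the role of a branching ratio, and only a gain of exactly 1 carries a signal through many layers without shrinking or exploding.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_layer(n, gain):
    # Random orthogonal matrix scaled by `gain`. Orthogonal matrices
    # preserve vector length, so the gain alone decides whether the
    # signal norm shrinks (<1), is preserved (=1), or grows (>1).
    q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    return gain * q

def norm_ratio(gain, n=64, depth=50):
    # Push one input vector through `depth` layers and compare
    # its final length to its initial length.
    x = rng.normal(size=n)
    start = np.linalg.norm(x)
    for _ in range(depth):
        x = random_layer(n, gain) @ x
    return np.linalg.norm(x) / start

results = {g: norm_ratio(g) for g in (0.9, 1.0, 1.1)}
print(results)  # gain 1.0 keeps the ratio at exactly 1
```

The sub- and supercritical gains change the norm exponentially with depth (0.9^50 and 1.1^50), which is the "branching ratio of 1" intuition: only the critical setting lets information pass through many layers undistorted.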
And I look forward to hearing what people have to say about those types of potential linkages between these areas. [01:03:12] Speaker B: I mean, just zooming out, what is your take on... I'm not going to ask you what your take on current AI is, but... [01:03:19] Speaker A: But. [01:03:23] Speaker B: What do I want to ask you about this? I didn't prepare any questions about artificial intelligence because I'm so wrapped up in the biological world. But do you look at AI and think, oh, they need criticality? What are your thoughts there? [01:03:39] Speaker A: They're doing really well without criticality. Although, yeah, sure, I'm very open to that kind of thing, and I'm increasingly interested in it. And I have colleagues who are getting me interested in it. So, yeah, we'll see how it all turns out. [01:03:56] Speaker B: Like we mentioned before, you spend a good deal of your time sort of in defense mode, right, because of objections and criticisms of the criticality hypothesis. Are there any that are more worrisome to you? What is the big obstacle right now that keeps you up at night? [01:04:14] Speaker A: At the beginning, all of them are worrisome to me. [01:04:18] Speaker B: You have tenure now. [01:04:19] Speaker A: Yeah, I do, right? So I can do anything. Well, I think they should all be taken seriously. And the latest one that I addressed was interesting work by Alain Destexhe and Jonathan Touboul. I don't know if you got to see this or not, but maybe it would be instructive for me to go over it. Essentially what they said was the following. They said, hey, look, you think that satisfying an exponent relation is important for criticality. You think that avalanches are fractal, so that avalanches at different sizes can all be collapsed and look like the same shape at different scales. We call that avalanche shape collapse.
You think that avalanche shape collapse is really important. But guess what, John? We can get this with an Ornstein-Uhlenbeck process. [01:05:03] Speaker B: What's that? [01:05:04] Speaker A: Well, I'll simplify it. Let's say we're flipping a coin, okay? Here's our timeline. Every time you flip the coin and you get a heads, you go up one, and every time you flip a tail, you go down. So now this thing's going to go up and down, up and down over the line, and you can look at the times where it crosses over the line, from when it goes up to when it comes down. And you can look at those excursions and you can average them over many scales. And they're fractal. I've done it. I've created a program in Matlab and run it a billion times, and sure enough, I get superb avalanche shape collapse. I get good avalanche power laws. They satisfy the crackling noise relation. So that's all wonderful. It's a fair coin. [01:05:54] Speaker B: What you're talking about right now is in Frontiers in Computational Neuroscience 2022, right? "Addressing skepticism of the critical brain hypothesis." [01:06:02] Speaker A: Yes, exactly. Okay, so you've seen it. So that is a fair coin. It's a critical process. It is exactly a critical process. And what they were basically arguing is: how do we know that this isn't what's going on in the brain? You just get a random walk, it's a coin flip, and we get random activity in neurons. And they created a model and it matched all this stuff. [01:06:29] Speaker B: However, if you randomize your neural data, then you get different exponents, right? You get different numbers, which is... [01:06:36] Speaker A: Yeah, that's a good point. That's an interesting thing. So let me first address their stuff, and then maybe we can go back and address this other business of shuffling the actual data.
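The coin-flip demonstration is easy to reproduce. Here is a minimal sketch of the idea (my own version, not John's Matlab program): treat the excursions of a fair-coin random walk between returns to zero as "avalanches" and collect their durations, which are known to be scale-free, since the first-return times of a simple random walk fall off as a power law.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fair coin: +1 for heads, -1 for tails, accumulated into a random walk.
steps = rng.choice([-1, 1], size=2_000_000)
walk = np.cumsum(steps)

# An "avalanche" is the excursion between successive returns to zero.
zeros = np.flatnonzero(walk == 0)
durations = np.diff(zeros)

# The walk only returns to zero at even times, and the excursion
# durations span several orders of magnitude, with no typical scale.
print(durations.size, durations.min(), durations.max())
```

Histogram these durations on log-log axes and you see the heavy tail that mimics critical avalanche statistics, which is exactly why Destexhe and Touboul's challenge had teeth: the power laws alone don't tell you the process is an interacting network.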
So, in terms of taking their model seriously: the coin flip model, or the Ornstein-Uhlenbeck process, which is kind of like that except it's got a potential well, so it's a little bit more likely to come back to the center than a coin flip. I had to admit it satisfied everything. And so then I went through a period of, like, shock and mourning. [01:07:11] Speaker B: Was this one of those? Yeah, seriously, like when you get news of that, it's like, oh, another one I have to deal with. [01:07:17] Speaker A: You're right. I ran all the simulations. I'm like, holy smokes, what the heck is going on? But then I thought about it more and more, and I'm glad they challenged me. And I thought, okay, well, what if I do admit that the coin flip is critical? Because, in a sense, it's at a point of symmetry, right? The chances of getting heads are exactly equal to the chances of getting tails. If you slightly break the symmetry, if you make heads slightly more likely than tails, then you lose the power laws, or you begin to lose the power laws. So it's a phase transition between equal probability and something that's not equal; right there is where you get the phase transition. Now, here's what I convinced myself of. The coin flip is critical. The Ornstein-Uhlenbeck process might be critical, but it's a different type of criticality. And so this is what I said. The criticality that we get in the brain is a result of the collective interactions of all the neurons. So it's emergent, kind of like the criticality you see in a flock of starlings. But the criticality that they're claiming might work for the brain is just the coin flip. Well, if it is just the coin flip, then we're being driven by random stuff outside. So what if I go into the brain and cut the connections, and it's all being driven by a random source from outside?
That's like an Ornstein-Uhlenbeck process or a coin flip. If my neurons are just responding to that, like, some of them go on and some of them go off, and if I cut the connections and I still get power laws, then I agree with them: the brain is doing nothing but reflecting the random statistics of the input stream it receives. But we know from experiments that if you go in and you block with AP5 and CNQX and other things like that, if you block synaptic connections, you don't get power laws anymore. You don't get avalanche shape collapse. These things fall apart. And so what seems to be essential for the brain is the neurons and the neuron-to-neuron couplings. And the same thing is true in a model. If I get a model to run at criticality, with integrate-and-fire neurons, all the standard stuff, and now I cut the connections, it also falls apart. So that's not what I think is going on. Although, you know, Ilya Nemenman at Emory would tell you a contrary story, and he's got a good argument, and it continues to go on. His argument is basically: okay, the criticality is coming from some other region over here, and it's very low dimensional. And I said, well, what if I create a culture and I have two wells and now I cut the connection between them? According to your hypothesis, that should severely dampen the criticality over here. I haven't done that experiment, but this is something we could look into. Is it being driven by an outside source, or is it intrinsically generated by the network itself? [01:10:08] Speaker B: I mean, do you think of, like... So, you know, the different Brodmann areas of the brain? Or let's just say, like, prefrontal cortex as a whole needs to be at criticality, visual cortex itself needs to be at criticality. They can be at different levels near criticality, and then they have to coordinate. [01:10:29] Speaker A: Yeah.
[01:10:29] Speaker B: Or is it just across the whole cortex? Like, do we add one neuron to a population of neurons and it destroys criticality? Right? [01:10:36] Speaker A: Yeah. No, no, I think actually there is a gradient of criticality in the brain, and there's some data to suggest this. So there are people looking at autocorrelation time scales and... let me see if I can remember this paper. I have a good book here I can look at. [01:10:53] Speaker B: Oh, there it is. [01:10:54] Speaker A: I forget what I said when I wrote the book, though, and then I forget what the references are, so I've got to look at this. [01:11:00] Speaker B: There's a lot in there. [01:11:02] Speaker A: So. Yeah. Okay. So this is... [01:11:04] Speaker B: And I'll just say, it's such a readable book. It seems to go fast. It doesn't seem like a long book, but there's a lot in there. [01:11:11] Speaker A: Oh, good. It went really slow for me. But anyway, Murray et al. look at the autocorrelation time scales. This book, by the way, you don't have to pay for it. It's also open access, so you can get all the PDFs for free if you go to the MIT Press website. [01:11:29] Speaker B: How has the book's reception been? First of all, before you go into this, how has the criticism of the book been? [01:11:37] Speaker A: I have not received anything that said it was awful or whatever. I think mostly everyone who's gotten in contact with me has said they really liked it. So far, it's been pretty positive. [01:11:52] Speaker B: I mean, were you asked to write it, or is this an endeavor that you chose? [01:11:53] Speaker A: They asked me to write it. Yeah. [01:11:55] Speaker B: They did. [01:11:55] Speaker A: Okay. Yeah. They invited me, and I was thrilled. I'm like, yeah, great, I'd love to do it. [01:11:59] Speaker B: Yeah. It's a cool book. [01:12:00] Speaker A: Yeah. Thank you very much for your help with it. So, Murray et al.
What they do is... they're a bunch of primate electrophysiologists, and they all get together and record from different parts of the primate brain, and they look at autocorrelation time constants. And what they find is that in the frontal cortex, the autocorrelation time constant is longer, and in the visual cortex, it's shorter. And there's work by Viola Priesemann, and I've been itching to see her get it published. She and Jens Wilting have come up with a very good way to measure the branching ratio at short time intervals. You should also look at their work. [01:12:36] Speaker B: This is MR. Estimator. [01:12:38] Speaker A: Yes, exactly. [01:12:40] Speaker B: Yeah, MR. Estimator. [01:12:41] Speaker A: MR. Estimator. Yes, exactly. And they have applied that to different brain regions, but they haven't published it, at least as far as I know. I talked to her a little while ago and she said, no, we haven't published it yet, but I want to get to it. She got distracted by Covid, published amazing work in Science on that, and now she's coming back to neuroscience. But anyway, they had results that more or less mirrored Murray et al. in terms of time constants. So in other words, the branching ratio is closer to one when you get to prefrontal cortex, and it goes a little bit further away from one as... [01:13:14] Speaker B: You go back lower. Subcritical. [01:13:17] Speaker A: It's subcritical, yeah, it tends to be subcritical. So one of the things that she consistently says is that you rarely see the branching ratio go over one. It's almost always close to one, but not over the line. Because if you're over the line, then maybe you hyperexcite and you risk seizure. [01:13:36] Speaker B: But what would that mean? That would mean, like, evolutionarily, a different species who maybe doesn't have a prefrontal cortex like we do, right?
Their most anterior or newest brain region should be close to one, and then everything else can be a little bit under, right? The longest time ranges should perhaps be in the most abstract, concept-related kinds of brain regions. [01:14:05] Speaker A: Right, exactly. Yes, that's what I would predict. That's what I think should happen. [01:14:09] Speaker B: Okay, so a snake, right? It will still have some part of its brain that is at one. Near one, maybe. [01:14:18] Speaker A: Yeah, I mean, I haven't seen snake recordings. I've seen turtle recordings, you can talk to Woody about that, and zebrafish recordings, and those both look like they're near the critical point. [01:14:29] Speaker B: Well, that makes our prefrontal cortex look a little less impressive, if turtle cortex has a branching ratio like our best brain area. [01:14:39] Speaker A: Yeah, yeah. Well, it's not all about criticality, though. I mean, we've got a pretty big prefrontal cortex and they don't. And it's receiving a wider variety of inputs that have been processed through many more layers. Yeah, of course, you know this, you're a multi-neuron electrophysiologist. But anyway, I agree with you. If they're all kind of operating near the critical point, what do we gain from that? I don't know yet. There are still all these other questions that have to be explored. [01:15:11] Speaker B: What's your level of confidence these days about the critical brain hypothesis? Is it at an all-time high? You seem like a humble individual, so I think you probably have a measure of self-skepticism as well. And how has it changed over time? [01:15:26] Speaker A: I think we should always be skeptical of things. We should always be questioning: how do I know this is true? What is it, the Dunning-Kruger curve? Have you ever heard of this? [01:15:39] Speaker B: Yeah. Where are you? [01:15:40] Speaker A: I've followed the Dunning-Kruger curve.
In other words, the first time I saw our data, I was convinced that the brain, and all brains, were absolutely critical. [01:15:47] Speaker B: You kind of have to have that to pursue something. [01:15:50] Speaker A: Right. So I was totally convinced. Then I went through a really low period where I'm like, oh my gosh, I don't know squat. And that was before I got tenure, and fortunately we published a little bit in time, and then I got tenure. And now, hopefully, I'm trying to have the right mix of skepticism and confidence. I mean, I'm confident to the extent that it's being picked up by a lot more people and it's producing seemingly useful results. It's a great joy to see someone like Woody Shew go so far with it, someone like Mauro Copelli, someone like Keith Hengen. There are a lot of people out there who are playing with it. Dante Chialvo, he knew Per Bak long ago, so he's really old guard. He had models about the critical brain when I was still in diapers, almost. So he's really one of these people who's been around for a long time thinking deeply about these things. So I think it's good to see it being picked up by more and more people, the diversity of things. But I think everything can be oversold, right? And you don't want to do that. [01:17:03] Speaker B: Yeah. I mean, it sounds like a theory of everything for the brain, like the free energy principle. You know, there are others, like, here's the answer. And it sounds sexy and it sounds cool. [01:17:16] Speaker A: Yeah. [01:17:16] Speaker B: And do you resent that at all? It seems to be a lot of hype around it. [01:17:21] Speaker A: Being sexy is not bad. Being hyped is not bad. As long as you're... [01:17:25] Speaker B: I wouldn't know, man. [01:17:26] Speaker A: Yeah, yeah, right, right. Okay, I should retract everything. I don't know either. Right.
I've been married to the same woman for a long, long time, and I'm fine with my sexiness as long as she thinks it's okay. But here's what I would say: as long as you're grounded in testable hypotheses, as long as it can be refuted. And this can be refuted, right? [01:17:48] Speaker B: How can it be refuted? Going back to what I said earlier, I feel like I can finagle my way to finding criticality signatures in my data if I just... It's like p-hacking almost, right? [01:18:00] Speaker A: Don't worry, I'll give you a couple of examples where it's not critical. Okay, so go to the cortex, even, and take a look at recordings from layer 2/3 versus recordings from layer 5. [01:18:10] Speaker B: I've got to send you my data afterward, because I'm going to disagree with what you're about to say, but go ahead. [01:18:14] Speaker A: That's fine. But I'll at least say, at least in some people's data, layer 5 doesn't produce avalanche shape collapse, and it doesn't produce a satisfactory crackling noise relation. So that at least gives me comfort that it's not always everywhere. [01:18:29] Speaker B: Why is that? Why do you think that is? [01:18:32] Speaker A: Yeah, and maybe you'd be able to correct me on all this. Here's my naive idea of what's going on, based on what I've read in other people's work: layer 5 might be doing something different from layer 2/3. So layer 2/3, as we know, is where all the cortico-cortical connections are traveling. [01:18:47] Speaker B: A lot of input from other areas. [01:18:49] Speaker A: And so basically, if you want to go from one region of cortex to another, to another, to another, you could travel along layer 2/3 connections and go through the visual stream, for example. Layer 5 is outputting to targets that are often distant or subcortical. And one of the things that at least some people have found... there's this paper by Peters et al.
I think it was 2014, in Nature. And what they saw was that there's this increasing orthogonalization of the outputs of layer 5. So over time they basically get separated. You can imagine, you know, you're driving a car and you want to be sure that when you put your foot down, you definitely know the difference between the gas pedal and the brake pedal. You don't want those things to be neurally overlapped where, you know, 50% of the time you hit the wrong pedal; you want them to be completely orthogonalized. So the output of motor cortex, if it's coming through deeper cortical layers, is performing a different task: it's orthogonalizing the outputs. And I wouldn't expect that process to be critical. But if you're in some deep network that's basically processing all the way from on-off cells to edge detectors to face detectors in the fusiform gyrus, then you probably have a different type of goal. Your goal is to allow information to travel all the way through with minimal loss. Then you want a branching ratio close to one, then you want to be close to criticality, etc. [01:20:20] Speaker B: So maybe a layman's simple way of saying that, and somewhat the way that I think about it: if your job is to do something, enact something, you might go away from criticality. Whereas if your job is to receive, process, and transmit for further processing, you might be better served near criticality. [01:20:41] Speaker A: Yeah, that's my intuition for it. Now, there are papers where people talk about focusing on a particular task. There was a paper in NeuroImage, I don't know, maybe within the last year, where they gave people audio and visual tasks. And according to the paper, I haven't read it carefully, but the abstract at least said that when they focused on a visual task, their visual cortex came closer to criticality, and when they focused on auditory tasks, auditory cortex got closer to criticality.
So, I mean, these are the types of things that you'd want to look at to see if the theory can be refuted. [01:21:16] Speaker B: Right. [01:21:17] Speaker A: Is that what you'd expect, or is that not what you'd expect? I think there are lots of ways you could refute it. Why would there be homeostasis? You wouldn't expect that unless criticality were a valuable thing. What's the relationship between IQ and proximity to criticality? There have been papers on that. I don't know how good they are, but certainly we could talk about it in the future. If criticality is important for information processing, then we would expect people who do better on behavioral tasks to be closer to critical. [01:21:52] Speaker B: In what brain area? [01:21:53] Speaker A: Well, yeah, right. So you've got to pick properly. So maybe you give them a purely visual task, or maybe you give them a purely auditory task, or something like that. But I can at least conceive of ways to refute this. And so a negative response would be: hey, guess what, they go in and out of being close to critical, and it has a random relationship to the task that you give them. If there's just a chance relationship, then criticality seems to lose on that. On the other hand, if it's statistically significantly related to when they focus, then maybe we've got a different story. [01:22:30] Speaker B: John, what have I not asked you that I should have asked you about criticality? [01:22:35] Speaker A: Gosh, I don't know. Oh, maybe here's something. What's the difference between homeostasis and criticality? [01:22:46] Speaker B: Good question. [01:22:47] Speaker A: Can I just offer a little opinion? [01:22:49] Speaker B: Hold on. John, what's the difference between homeostasis and criticality? [01:22:53] Speaker A: I would be happy to answer. I'm glad you asked. So.
So I know many people have come to me and said, well, criticality is just the same thing as homeostasis. And initially when I heard that, I thought, well, wait a minute, I know they're not the same, but I had to think about why they weren't. So here's my answer to that. Homeostasis is the process of returning to a set point. So, like, if you have a thermostat in your house and you want to set your temperature at 68 degrees: if you open up a lot of windows and you lose a lot of heat, your temperature is going to go down and now the furnace turns on. Likewise if you come from the other direction. Okay, but being at 68 degrees is just a point, right? There's nothing magical that happens at 68 degrees. So homeostasis is the process of getting to 68 degrees. Now, what is criticality? Criticality is a point you can get to through homeostasis. But the critical point has special properties; that's different from homeostasis. Homeostasis and criticality are linked because it seems that the brain uses homeostasis to get near the critical point. But when you get to the critical point, magical things happen. You have power laws, you have scale-free activity. [01:24:13] Speaker B: But isn't the hypothesis then that that homeostatic set point is at criticality for that particular reason? [01:24:20] Speaker A: Right. Yes, yes. No, I would agree. So they're definitely related. All the beautiful work that Gina Turrigiano and colleagues did on homeostasis and synaptic scaling and things like that basically shows that the brain has a process of getting to some point, and it wants to return to that point: whether you perturb it above or below, it's going to come back and get there. But that didn't say much about the point itself. It said that there is something the brain is doing to get to some location. What is that point?
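The distinction can be made concrete with a toy model. This is my own construction, not a model from the episode or from Turrigiano's work: a homeostatic rule that only tries to hold a target firing rate nonetheless parks the branching ratio just below the critical value of 1, because the stationary rate of a weakly driven branching process is h / (1 - sigma).

```python
# Toy homeostatic loop: adjust the branching ratio sigma so that the
# stationary firing rate h / (1 - sigma) matches a target rate.
# The fixed point is sigma = 1 - h / target, i.e. just below critical
# whenever the external drive h is small relative to the target.
h, target, eta = 2.0, 100.0, 1e-4

sigma = 0.5  # start far from criticality
for _ in range(200_000):
    rate = h / (1.0 - sigma)                 # stationary rate at current sigma
    sigma += eta * (target - rate) / target  # homeostatic rate correction
    sigma = min(sigma, 0.999)                # stay subcritical

print(sigma)  # settles at 1 - h/target = 0.98, near but below 1
```

The point of the sketch is John's distinction: the feedback loop is pure homeostasis (return to a set point), while the special properties live at the point the loop happens to land near, the critical value of 1.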
So now Keith Hengen has worked on this. He and Woody have a nice review that I think will be coming out soon, which basically posits that criticality is the end goal of what homeostasis is driving at. They're related, but they're two different questions, right? What happens at the critical point is not told by the process of getting there. Once you get there, you get things like fractals, the Mandelbrot set, you get infinitely repeating patterns on multiple scales, information transmission with the least loss, you get a correlation length that is infinite. In the Ising model, that means all scales can communicate. That only happens at the critical point. The critical point has special properties, but homeostasis is the way to get there. [01:25:40] Speaker B: How in the world... Okay, so let's talk about internal reference signals, right? And so cybernetics, or the thermostat example: I externally set it to 68, and it's a machine I've built so that it goes to 68 with feedback. [01:25:59] Speaker A: Right. [01:26:00] Speaker B: One criticism from people like Henry Yin, who's been on this podcast before, is that where cybernetics got it wrong is that the set point is something we generate ourselves. It's like a self-organized set point, right? And so one of the questions then is, if criticality is this space that we want to be near, and therefore we have a homeostatic signal to get there, how the hell... how do we even begin to think about how that occurs? [01:26:37] Speaker A: Well, I guess I could imagine some of the mechanisms that Gina Turrigiano and colleagues have looked at, right? So if the activity is too high, some way, somehow, the neurons realize that. [01:26:54] Speaker B: You're going to die off if you're not in the right spot, right? [01:26:57] Speaker A: So it's got to do that.
So if the activity is too high, what happens is the neurons begin to pull in their receptors, so now they have less input from other neurons. Or vice versa: if the activity is too low, they start inserting receptors into the membrane so that any glutamate that's out there, they just grab it up. [01:27:21] Speaker B: But you have to know about your neighbors then, and their neighbors, right? [01:27:25] Speaker A: So the neuron itself may have a set firing point. Somebody's probably worked on this, and I just don't know the literature well enough, but I think it may have a firing rate set point, and it realizes if it's above or below that, and either removes receptors or adds receptors. And maybe those set points aren't established very early on. Maybe you have to get the network to evolve into a state where it's roughly mapping the external world correctly, and then it says, okay, it looks like we're running properly, and it locks these guys in. I don't actually know. Gina would know. She's someone you should have on. [01:28:01] Speaker B: Okay. It's going to become the Criticality podcast. [01:28:04] Speaker A: Yeah. And definitely Keith Hengen, definitely Woodrow Shew and Mauro Copelli. I can put you in touch with any of these people if you don't know them. And Ralf Wessel is related too. Yeah, there are a lot of good people out there doing cool stuff. [01:28:16] Speaker B: I mean, it's just such a beautiful field. You're a physicist by background, right? [01:28:21] Speaker A: Yeah, mostly. Well, I have to be careful, so not entirely. I got my undergrad degree in engineering physics. Then I got a master's in engineering physics. Then I went off into the Peace Corps, and I said, you know, I really want to be a professor. So I came back, and then I actually studied neuroscience in a lab where I was patch clamping.
I was patch clamping neurons and looking at, you know, time constants and... exactly, exactly. [01:28:48] Speaker B: No, that sounded like... that looked like a joint thing. [01:28:51] Speaker A: People might think that everybody who does patch clamping smokes pot, but no, this is basically breaking a seal through a little tube. It's hard to believe, but by sucking, the pressure breaks through, and then you record the neuron's voltage; you become electrically one with the neuron. So, yeah, that's what I was doing. And I was doing stuff that you might call biophysics, right? But I had this real strong inclination to see the world through physics. And so when I started studying neuroscience, I was always trying to ram neuroscience into a statistical physics package. [01:29:27] Speaker B: As a physicist would do. [01:29:28] Speaker A: Yeah, yeah. And so, even before criticality, I wrote a paper on a statistical theory of long-term potentiation and depression. That was my idea: let's get this into some stat mech framework. So that's my preferred framework, physics. But the most curious and interesting object in the whole universe is the brain. [01:29:49] Speaker B: Isn't it so beautiful? It's so beautiful. It's amazing. It really is. [01:29:53] Speaker A: It is. [01:29:55] Speaker B: Oh, last question here. Okay, and maybe you don't have an answer to this, and that's okay. So we've got brains. That's cool. They're complex, interesting things. Behavior itself is complex. And we have minds. [01:30:10] Speaker A: Yeah. [01:30:11] Speaker B: Is criticality related to our minds at all? Is there a link there? [01:30:16] Speaker A: Ah, yeah. Again, you're taking me out of my field of knowledge, right? But... [01:30:24] Speaker B: Well, that's out of everyone's field of...
[01:30:26] Speaker A: So I would say that if criticality is related to information processing, then yes, criticality has got to be related to mind. Now, is it related to consciousness? That's another aspect of mind. And there have been people like Giulio Tononi and Olaf Sporns who have looked at that. [01:30:47] Speaker B: What do you think of integrated information theory? That's been in the news and on this podcast, like this past episode and upcoming episodes and stuff. So I'm sorry to derail you. [01:30:57] Speaker A: No, no, it's good. I mean, I interfaced with their ideas a little bit earlier, where they had something called neural complexity. And we did a paper looking at their measure of neural complexity and comparing it to criticality. [01:31:12] Speaker B: I use your code. I use that. [01:31:14] Speaker A: Okay, well, I shouldn't say it's my code. That's Nick Timme. He was a graduate student, so he wrote the code. He was very good. And Nick's code measures neural complexity, and it turns out that they peak at roughly the same points, but they're not exactly the same thing. You can create a model that has criticality but not neural complexity, that has different curves for these things, so they're not identical. So in that sense, some of the early ideas that Giulio Tononi and Olaf Sporns and Gerald Edelman were coming up with, that might be related to consciousness... maybe it's related to criticality, at least in that respect. I think we don't know enough about what consciousness is, but I certainly would expect that consciousness is related to emergence, that it's a collective phenomenon produced by large numbers of neurons interacting with the world. And so in that sense, maybe criticality will be relevant. [01:32:07] Speaker B: But IIT says that logic gates that aren't even on have consciousness, right? [01:32:12] Speaker A: Yeah. So I don't know enough about integrated information theory. I've got to study that.
It's on my list of things I want to do, but I haven't kept up with it. Yeah. [01:32:22] Speaker B: Okay. John, thank you so much for going down this road with me. [01:32:26] Speaker A: Thank you for asking me. [01:32:27] Speaker B: Lots of twists and turns. Thanks a lot. Keep up the good work. [01:32:30] Speaker A: Thank you. [01:32:38] Speaker B: Brain Inspired is powered by The Transmitter, an online publication that aims to deliver useful information, insights, and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. If you value Brain Inspired, support it through Patreon to access full-length episodes, join our Discord community, and even influence who I invite to the podcast. Go to braininspired.co to learn more. The music you hear is a little slow, jazzy blues performed by my friend Kyle Donovan. Thank you for your support. See you next time.
