BI 191 Damian Kelty-Stephen: Fractal Turbulent Cascading Intelligence

August 15, 2024 01:27:51
Brain Inspired

Show Notes

Support the show to get full episodes and join the Discord community.

Damian Kelty-Stephen is an experimental psychologist at the State University of New York at New Paltz. Last episode with Luis Favela, we discussed many of the ideas from ecological psychology, and how Luis is trying to reconcile those principles with those of neuroscience. In this episode, Damian and I in some ways continue that discussion, because Damian is also interested in unifying principles of ecological psychology and neuroscience. However, he is approaching it from a different perspective than Luis. What drew me originally to Damian was a paper he put together with a bunch of authors offering their own alternatives to the computer metaphor of the brain, which has come to dominate neuroscience. We discuss that some, and I'll link to the paper in the show notes. But mostly we discuss Damian's work studying the fractal structure of our behaviors, connecting that structure across scales, and linking it to how our brains and bodies interact to produce our behaviors. Along the way, we talk about his interest in cascade dynamics and turbulence to also explain our intelligence and behaviors. So, I hope you enjoy this alternative slice into thinking about how we think and move in our bodies and in the world.

0:00 - Intro
2:34 - Damian's background
9:02 - Brains
12:56 - Do neuroscientists have it all wrong?
16:56 - Fractals everywhere
28:01 - Fractality, causality, and cascades
32:01 - Cascade instability as a metaphor for the brain
40:43 - Damian's worldview
46:09 - What is AI missing?
54:26 - Turbulence
1:01:02 - Intelligence without fractals? Multifractality
1:10:28 - Ergodicity
1:19:16 - Fractality, intelligence, life
1:23:24 - What's exciting, changing viewpoints


Episode Transcript

[00:00:03] Speaker A: We have fractal structure in the brain, and I think that's not a coincidence. And I think that forks directly into all the other fractal stuff happening across the body. Sometimes we're white noise, sometimes we're fractals. Sometimes, like, we can be all these things and it might matter to help us understand what sort of perception action system we're being. Ontological monism might have, might require epistemological dualism or something. I don't. I mean, it's a mess, but I try not to go back to the big bang. [00:00:41] Speaker B: Welcome to Brain Inspired. Hi, everyone. I'm Paul. Damian Kelty-Stephen is an experimental psychologist at the State University of New York at New Paltz. Last episode with Luis Favela, we discussed many of the ideas from ecological psychology and how Luis is trying to reconcile those principles with those of neuroscience. In this episode, Damian and I, in some ways, continue that discussion, because Damian is also interested in unifying principles of ecological psychology. Ecological. Ecological, I don't know how to say it. Of ecological psychology and neuroscience. However, he is approaching it from a different perspective than Luis. What drew me originally to Damian's work was a paper that he put together with a bunch of authors from various fields offering their own alternatives to the computer metaphor of the brain, which has become the dominant metaphor in neuroscience. So we discuss that some, and I'll link to the paper in the show notes, but mostly we discuss Damian's work studying the fractal structure of our behaviors, connecting that structure across scales, and linking it to how our brains and bodies interact to produce our behaviors. Along the way, we talk about his interest in cascade dynamics and turbulence to also explain our intelligence and behaviors. So I hope you enjoy this alternative slice into thinking about how we think and move in our bodies and in the world.
Find the show notes at braininspired.co/podcast/191. Support Brain Inspired on Patreon for full episodes and to join the Discord community; you can go to braininspired.co to learn more about that. Thank you so much for being here. Thank you for your support. I hope you enjoy Damian. So I had recently on, and I'm not sure if your episode will come out before or after his, I had Luis Favela on the podcast talking about his book, and he's all about trying to reconcile, or at least that's what his book is about, trying to reconcile ecological psychology and neuroscience. And so you're an experimental psychologist, correct? [00:02:57] Speaker A: Yes. [00:02:58] Speaker B: And I know that you have a background interest in ecological. Is it ecological or ecological? [00:03:05] Speaker A: Okay, either one's fine. [00:03:07] Speaker B: Psychology. How did you get interested in that? And I'll just start off by saying one of the reasons that I find your work interesting. There's lots of reasons, but one of the things is that you don't care to throw out kind of a computationalist perspective of how brains operate. However, you, like many other ecological psychologist-type people, also want to kind of re-situate where the mind is, what we think of as the mind. And so anyway, there's lots of interesting things that we'll get into. But how did you, were you always interested in ecological psychology? Or, it seems like people eventually fall into that. [00:03:54] Speaker A: Yeah. So I had never, I did, and I kinda counsel students now. I did all the wrong things going to grad school, like, to make my application the best and do the best interview. Like, I had no idea what I was doing, and I'm very lucky. And I keep thanking the people who were very kind and generous at a time when I was not doing my share. So I was introduced to my thesis advisor through my undergrad advisor, who used to be a coworker. Jay Dixon used to be at the College of William and Mary.
And my undergrad advisor saw my work ethic and was like, you would like Jay. And so he sort of said, you should email Jay. He was in the developmental division. I had not done developmental psychology. I did some infant research over the summer after I graduated to sort of brush up and get some cred. And I showed up to grad school, and some of the first things Jay shared with me to read were Bickhard 2004, The Dynamic Emergence of Representation, and Gottlieb. So he started saying, like, basically, here's the problem. Like, you start out as, like, a single cell, and then you get all this stuff. That's the problem. [00:05:13] Speaker B: I mean, that's the problem. [00:05:14] Speaker A: You know, he was sort of, you know, there are other problems too, but just sort of like. So from his developmental psychology perspective, he said, you know, somehow things come out, and the big question is how. And so he wanted me to sort of read those sources as sort of a leaping-off point. And then he said, I don't know what they're doing, but sort of in the neighboring division in that department, we have ecological psychology. I was like, what's that? I should have done my homework. He was like, they want to explain lots of psychology stuff without referring to memory. And I was like, that sounds crazy. And then I, so I hung around with them, I looked at the reading list, and then I was like, you know, I want to do the whole thing. I want to see what it's like. And so I guess I converted myself, and then I switched over while maintaining the same advisor. Some people think I worked for Turvey, and I didn't. I was sort of like the stone in Turvey's shoe while I was working for someone else. Bureaucratically, it was a nightmare to pull off, but everyone was very kind and supportive. So then Jay is now director of CESPA, and so he's sort of taken on that role.
So I did sort of bring him in, but he did sort of, like, point them out to me, saying, like, they're doing some interesting stuff. I don't understand it. But so, to the point, he had research with embodied cognition where he was looking at the gear system problem, which we published on a long time ago. I had actually participated in the study as an undergrad when he was at the other place. But so he was very interested in this idea, 'cause he kept finding that what predicted strategy changes, aha moments, from sort of tracing the gears to an alternating sequence to eventually counting parity, sort of progressively more abstract ways of thinking about the gear system problem. So Dixon was unsatisfied with just sort of the representational change, 'cause he was like, I bet it is. But how and where? And I don't see it. Like, how are we just not restating the problem? And so what he and I were thinking about was sort of like, those people over in CESPA do motor coordination, and we're talking about motor coordination of a gear system strategy. And how does that work, and what sort of indicators could we pull out? And then we started doing recurrence analysis. We looked at entropy. We started looking at fractal stuff, and then sort of, you know, that's where. [00:07:42] Speaker B: The fractal stuff came in, because everything's fractals these days to you, right? [00:07:45] Speaker A: Sure. Well, I mean, a lot, yes. And so, but so I think it did come from this idea that sort of an outsider looking in was sort of hearing people say, Gibson seems to say that, like, information is built out of these bodily interactions that we have. So what is the shape of those bodily interactions, then? But it did sort of crucially start with this very cognitive, representation-hungry issue of sort of insight problems, aha moments, that kind of thing. And so I've never really wanted to trade up on, like, I've always wanted to make sure we're.
We're all talking about the same issues. I don't. Sometimes what I don't like about some ecological stuff is it becomes a sort of thing of, well, you do your thing and we'll do our thing, and we're just not going to talk about certain issues. That's not how I want to do it, at least. [00:08:35] Speaker B: So that's an interesting perspective, because I often think that people who have different opinions are just talking past each other, talking about different things. So this idea of, like, you know, kind of re-situating where the target of explanation is, I don't know. Are we all just talking about different things? Like. Like, what do you think about brains, for example? [00:09:02] Speaker A: I like them a lot. So I think that they. And what I say to my students sometimes is the same thing I say to them about genes. They do more and less than we think. I think that there's, you know, there's undeniable participation of the brain. The brain has specific structures and elements that do specific things, but it's also, you know, what has this wild neural reuse capacity that Michael Anderson is talking about. Sort of like, it has all this flexibility. So I have no problem with there being a brain. I actually have brain data, you know, that we're working on actually publishing someday soon, we hope, this summer. And so I think it's a wonderful example of how we have, you know, we have fractal structure in the brain, and I think that's not a coincidence. And I think that forks directly into all the other fractal stuff happening across the body. So, like, I think that's. I mean, I think it's part of the whole system. So what I found compelling is that, you know, the nervous system is very flexible, and there's where it's traded in: it's got its flexibility, and it paid for it with speed, so it's a slow system.
And so when you look at stuff, the work from the eighties with Tuller and Fowler and Kelso and Vatikiotis-Bateson, I can't ever remember the whole pronunciation of that name. But when they were looking at perturbations of speech sounds, there were corrections and these sort of autonomous-seeming synergies that seem to complete the job faster than neural transmission, even within the head. So I think that the brain is definitely important, but the body is important, too, in a way that it's not just sort of like a meat puppet that the brain holds. And so I think that there's stuff that the body can do that the brain maybe can't, at speeds that it sometimes can't. But I think, when you start getting into the connective tissue and the tensegrity stuff that Turvey and Fonseca and other folks have gotten into, it's sort of like they're working out sort of a way for these things to, like, collaborate and not be, like, antithetical, like. [00:11:20] Speaker B: So not be independent, one could say, sure. [00:11:23] Speaker A: Right. So when Don Ingber was early, early on, or at least long before Turvey and Fonseca got into it, when he was talking about tensegrity structures in systems bio, he came up with a psychology example. He talked about a hair cell, and hair cells in the ears have this wonderful sort of specificity. You have specific hair cells for specific frequencies. And he said, that's, you know, this is one of the. I mean, I think that's one of the coolest examples. And I think it shows. I mean, that was sort of like one of these synthetic a prioris, when we, like, found a Fourier transform inside the ear. Like, it's amazing. And yet what Ingber was saying is, if it's not situated in the right, and I'm sort of broad-brushing here, fascia-like goo, the extracellular matrix that's going to keep it taut enough to fire, nothing doing. It doesn't matter how specific it looks.
The situation of it, the embodiment, the housing, the context for its activity is actually the premise for that specific, independent activity of that single hair cell. So I think the brain is sort of just like a bigger story of that, with lots of different hair-cell-like activities. It looks independent, and it is at some scale of observation. But then the bigger picture is that it's resting in collaboration with this bigger thing, the task and the body, much like Randy Beer was talking about long ago. [00:12:56] Speaker B: So you're not a brain-in-a-vat or brain-organoid kind of person? Yeah. [00:13:03] Speaker A: No. [00:13:04] Speaker B: So do neuroscientists have it all wrong, or are they just looking at a time? So in your view, the brain doesn't have the privileged position of being the seat of the mind. Right. I'm curious, like, how you view neuroscientists, because. Let me back up. Because the way that I see ecological psychology is they eschew the brain, right? It's almost like a behaviorism sort of outlook in that sense. But my sense of you is that you do appreciate the brain. But I'm curious if you think that neuroscientists just have it all backwards or wrong or just misplaced. [00:13:46] Speaker A: So I think, and sometimes it's not even like some ecological psychologists, I would say even it's some specific people, many of them maybe, who are saying things in certain contexts. And there's a lot of, like, politicking that goes into making a scientific movement. And I would say it goes on both sides. I don't think it's just eco and I don't think it's just neuro, but I think that folks have, when put on the spot, said very, like, snotty things. And I think, I mean, many people in many camps and many different scholarly disciplines have done that, but I think that some people have said, well, we don't need the brain, so let's forget it. And I disagree with that last part. [00:14:30] Speaker B: Isn't that crazy? [00:14:30] Speaker A: It is.
[00:14:31] Speaker B: That's just a crazy thought. [00:14:32] Speaker A: I think it is. So they will cite sort of lots of interesting sort of reflex activity in decapitated or decerebrate animals, but that doesn't, like, we still have our brain attached. And so I haven't ever gotten on board with that. And I think when you actually sit and talk with some of those sort of extreme-sounding folks, they do actually have a nuanced view, and they understand that. Yeah, that was a heated moment, and it's hard to pick through all those little encounters. Neuroscientists, I don't think they're doing something like, I don't think it is a wholesale wrong field. I think they're doing what many different sorts of scientists do. You have a model system. You mine it, you come up with insights from it, and sometimes you bump up against its limits. And this is true for ecological psychologists as it is for neuroscientists, in that, like, you know, there's points where you bump up against the boundaries of your model system, and you sort of have a choice: well, am I going to go learn something new? And, like, very often this is what I hear. It's not just sort of like, I have a theoretical, you know, limitation or objection. It's like, no, I just, you know, I have limited time to do a certain set of things for an academic job, and I can't learn new things. And, like, I totally get it. And I have not felt sort of like, I mean, some people will sort of say, well, your stuff sounds weird, but I don't necessarily feel like, well, that's wrong, and I cannot, I'm not on board with this. [00:16:04] Speaker B: Yeah.
I want to sort of think not just about how the brain is controlling things, but how the physics of the body move and then the sort of bad faith things on the other side that, that gets, well, we can't reduce it. All the particle physics, it's like, well, that's, there's a very far distance between, let's not just do that and particle physics. And I think that there's, I think, I think there's like, people understand that things need to change and things are siloed, and sometimes people just don't know what to do. And I think they'll just say, well, here's my thing. And like, I get it. I don't really, I'm not worried about like the course of science or something like that. [00:16:56] Speaker B: So what is your thing? [00:16:58] Speaker A: My thing? Yeah, the thing I am trying to focus on and the thing I'm trying to sort of run through as many different model systems. So I guess, so if you took your sort of, like, I mean, the thing I'm trying to do is sort of make a ladder that goes all the way up and down these scales, all the way across different model systems, across different species, and look to see, does this fractal stuff have any leverage in sort of addressing the structure and having any sort of entailment for what? For what someone else who doesn't care about fractals would say it's doing. So I don't like, I don't like the proliferation of sort of like, oh, here's this thing, it's fractal. Oh, here's this thing, it's fractal. Like, that strikes me like I like fractal things. [00:17:52] Speaker B: And I know, yeah. For people listening, he has, was it Mandelbrot? [00:17:57] Speaker A: Mandelbrot, probably Mandelbrot. He was french, but that's the Mandelbrot set. [00:18:01] Speaker B: So, like, I like, so people watching can see a fractal behind you. But I get, you know, just for, just for clarity, what is a fractal? Why, why is a fractal interesting? And why do you see fractal structure everywhere? 
[00:18:13] Speaker A: So you will see fractal structure. So, okay, so it is a kind of scale-invariant process where, as you zoom in, you will see progressively more texture. We sort of first had a glimmer that it was something we needed to worry about when the founder of numerical weather forecasting was starting to ask pesky questions about whether or not there was such a thing as the velocity of wind. He wrote in the paper, I know this sounds silly, but just wait. That's the modern version. But he was saying, when you look at an average, you can calculate one. But the actual theoretical definition of an average means that as you get to smaller and smaller scales, you're going to get stability, and we don't get that. And so as you zoom in, you get a very slow diminishing of variability. You get more than you would expect from sort of a scale-dependent model where you just expect, okay, here's my scale, here's the behavior. Fractal systems are ones that have a rapid growth of variability as you scale out, and a very sort of persistent, slow growth of it as you zoom in, because you keep on seeing more new structure. So sometimes it's called hyperdiffusive. And so the wind particles that Richardson studied were hyperdiffusive in the sort of garden-variety physics sense of, like, a particle spreading through space. And so his views of diffusion were actually different from Karl Pearson's and Einstein's idea in 1905 about ordinary diffusion. And ordinary diffusion was sort of, the particle moves randomly in independent directions at each time step. What Richardson was finding around the same time was, like, oh, no, actually, things are clumpy and move together in clumps, and there's a lot of correlation across space and time. So things can sort of accumulate. Things aren't just completely homogeneously random.
So, so there are so, so the. The clumpiness of a structure of a movement, of a process, that that's sort of hard, like when you. When you make a geometrical model of that. And Richardson once tried to sort of the, make a model of like a peat field because he was supposed to sort of figure out where it should be cut to get the most efficient drainage. And he realized that, like, you can't do a euclidean model of this that's accurate. If you could, you could, he said, but it wouldn't. It would take so long. Everything would be dead and over by the time you were done. And so, so what he was noticing was that you get sort of like in the coastline of Britain, as he sort of helped point out later, you get these nooks and crannies, and it's really hard to sort of make a perfect sketch with lines, planes, solids and stuff like that, like from euclidean geometry. And so instead, what you have, rather than. So you sort of have to take stock. If you care about this stuff and want to get into it, you have to take stock of how dimension, in the euclidean sense, is really about diffusion. It's about how many different ways or directions a thing is spreading out. So a point isn't spreading out at all. A line is spreading out one direction. A plane has two directions of extent. And so really dimension is about diffusion. And so Richardson and Mandelbrot were sort of saying, yeah, and diffusion could actually go between the cracks. It could be non integer. You could have lines that are craggy enough to be almost planes. So fractal modeling really doesn't have to be anything spooky and new. It sort of goes back to that old problem that Pearson was working on, and people like the solution there. And it's much more popular, the idea that standard deviation grew according to a square root of time. Fractal stuff is like, well, yeah, it can, but it can also grow according to other rates of time, depending on how viscous your circumstances are. 
So it's a more continuous range of possibilities. And so, yeah, that's all very interesting for those who care, but I sort of quickly realized that many people were sort of trying to generate these results, and then they're like, and it's fractal. And, like, I'd say 85, 90, 95% of the audience doesn't care, because they don't know what it means. And so what I'm trying to do is I'm trying to say, okay, what do people care about? And I want my own sort of textbook questions answered about how does perception work. When I come up with a judgment, when I come up with an interpretation, I choose a path, can I predict those things? So what I sort of do is I look at the standard explanatory account of a perceptual judgment or cognitive response, and I try to dig into the task, look for all the usual suspects, and say, okay, here are the usual suspects. And then I'm also going to measure the fractal structure of different parts of the body as they engage in the task. And then I've started slowly getting into, like, okay, now I'm going to actually start manipulating the fractal structure of things within that task, or the stimulation that I give to them. And all these things, it's like, I haven't been able to. So at every turn, and maybe this is my own bias and maybe I'm just seeing things, but it seems that the fractal structure does matter to how people use the existing task structures or constraints that everyone knows and everyone agrees on. So the usual suspects matter. [00:24:26] Speaker B: But then what are the usual suspects? Sorry. [00:24:29] Speaker A: Well, just. Yeah, so, for instance, in ecological psychology, they have perceived length as a function of the inertial moments of a handheld object.
That's just sort of like, that's a real simple version where it's sort of like we understand when you wield an object that it has an inertia tensor to it, that as you hold that object and wield it around, according to the ecological folks, this information is invariantly specifying the response when you average those responses, as it turns out. But there's all sorts of reasons for. So they also found that, well, some people have different impressions of these, so different conclusions about perceived length. You can give feedback, and you can change how people use those same inertial quantities. And so that's sort of so helping them figure out which inertial properties matter is attunement, and figuring out how to scale those properties to an eventual judgment is calibration. And then, so feedback can help all that stuff. And that's, I mean, sort of, that's like a psychophysical learning study. I don't think there's anything unusual about that. But sort of within that scale, that sort of all the things I named, those, those inertial properties, the presence or absence of feedback, the number of blocks in the sequence, those would be the usual suspects. So I would do sort of shotgun models, full factorial, let it all interact, control for all the different factors, how they might interact across many trials in this study. And then I would add the fractal parameter that I would have estimated off of each person's movement on each single trial. The ecological psychologists when I first did this were like, this isn't going to work. This is bonkers. So like. And I was like, why? [00:26:36] Speaker B: It's like, were they against it in principle, or were they just thinking, I mean, it wouldn't work. [00:26:42] Speaker A: So the thing, one thing that I'm, that I'm struggling with in that on that end is this whole idea of the ecological or behavioral scale or scales. 
And then, so it's this idea, and it comes in with sort of how we talk about affordances and how we talk about action-based variables. I like the idea that intention is real. I like the idea that intention feeds into it. But ecological scale, that suggests a bandwidth below which, and above which, we will not go. And if you read Gibson '79, in the opening pages, he sort of says some pretty brash things about it. We don't need the small stuff. We don't need the big stuff. And so, anyway, he said other things, and it's like quoting the Bible, honestly. You could find various things in there. Yeah, but he had this idea. He also changed his mind. God forbid, like, you know, we're allowed to change our mind. So basically, this idea that dynamic touch is, like, a wielding experience that you do intentionally to extract, to detect information you're going to use for a purpose, that's led people to say, well, or that did lead people to say, I don't think this will work, because the fractal stuff is scale-invariant, and the tiny stuff, that's not at the ecological scale. [00:28:03] Speaker B: I'm worried that we're losing people with some of the terminology already. But no, no, that's okay, because you mentioned cascades, and I wanted to jump on that. Just sticking with fractals for a moment. So you see multifractal structure across scales, right? And you can do this in many tasks. An example that you often give is walking, right? So your body movements while you're walking have a fractal, can be described or modeled as fractal in nature, just simply the movements, right, within your environment. Sorry, this is going to be a naive question, but is fractality, in that case or in all of this, is it causal, or is it descriptive of what's happening? [00:28:52] Speaker A: No, I think that is the question.
And that's sort of the thing I've been trying to get at, because, like, there was a heyday in the eighties where fractal structure became sort of latched onto self-organized criticality. And some people said, if it's fractal, it must be self-organized. And they have this model where they have the word self-organized in it. And it's about self-organized criticality. And that's actually a very limited model. It is a cascade. It's real. It has two scales to my. [00:29:27] Speaker B: What is a cascade? Sorry, I'm gonna. [00:29:28] Speaker A: Yeah, that's fine. [00:29:29] Speaker B: So what is a cascade? [00:29:30] Speaker A: A cascade is anytime. So you can think about it as, like, an actual waterfall. You have sort of the stream toppling off the edge, and it breaks into successively smaller streams. So mathematically, you can think about that big current as, like, a uniform probability distribution, one big bin. And then as you let it unfold, it breaks and it breaks again. And so it's sort of like a cookie. You know, as you break parts of the cookie, inevitably there are asymmetries. And those asymmetries stay as you break those asymmetrical parts into further smaller parts. You can also get aggregations. It's sort of like the branchings of a tree. So you'll often hear about branching. But so, anytime you have this sort of splaying out or sort of siphoning in, what you have, if you're measuring downstream, is the product of many successive breakings-apart. [00:30:37] Speaker B: Okay. [00:30:38] Speaker A: Rather than. So the opposite, or a different thing, not necessarily the opposite: one is white noise, which is independently sampled items across a sequence, each independently selected from the same distribution repeatedly.
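The cookie-breaking picture has a standard minimal model, the binomial multiplicative cascade: start with one bin of mass one and keep splitting every bin unevenly. A short sketch (the 70/30 split ratio here is an arbitrary choice):

```python
import numpy as np

def binomial_cascade(generations, p=0.7):
    """Start with one bin of mass 1 and repeatedly split every bin
    unevenly: fraction p to the left child, 1 - p to the right."""
    measure = np.array([1.0])
    for _ in range(generations):
        measure = np.column_stack([measure * p, measure * (1 - p)]).ravel()
    return measure

m = binomial_cascade(12)
print(len(m), m.sum())   # 4096 bins; total mass is still 1
print(m.max() / m.min())  # wildly uneven: the asymmetries compound
```

Measured "downstream," the bins are the product of many successive splittings, which is exactly why the resulting variability is multiplicative and intermittent rather than white.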
And white noise can be treated as the sum of oscillations at many different frequencies. Okay, so sometimes people will say, well, if everything's fractal, doesn't that mean nothing's fractal? And doesn't that mean we can't. [00:31:13] Speaker B: Yeah, that would be my. Yeah, right. [00:31:14] Speaker A: So, white noise. My response to that is white noise was like that. Like, before, you know, pesky people brought up pink noise or fractal noise and all these things, it was all white noise, and there wasn't a problem. It was all white noise, and we still survived and we still made progress and learned things. White noise was just sort of the background premise for the linear model that lets us fit and test these deterministic components. There's no big problem. The question is, what's your background? What sort of background framing assumptions are there for the modeling that you do later? So the cascade idea is that you can't just assume independently, identically distributed noise that's just a sum of all the things that you haven't looked at. [00:32:02] Speaker B: So I'm not sure if this is, I mean, our conversation can go anywhere, but one of the pieces of writing that you participated in, and I guess led, was this list of alternative metaphors for the brain. And your alternative metaphor was cascade, the brain as cascade instability. Can you just describe that? [00:32:28] Speaker A: Yeah. [00:32:29] Speaker B: Well, so, I mean, sorry, the background is that the computer metaphor for the brain is tired and old now. And whatever modern technology is around, we always liken the brain to whatever fancy best technology we have. And in fact, a couple days ago, I was participating in a day-long retreat for a program that I'm a part of at my university. And in one of the panels, someone kind of nonchalantly said, the brain is a computer. Just definitively, without any irony, without any consideration of anything else.
I almost screamed out, no, it's not. But that would have been the wrong thing to scream out. [00:33:16] Speaker A: Yes. [00:33:16] Speaker B: If I'd said it's not, I should have said, well, it's not only a computer. But then I didn't want to derail us or whatever. So that's kind of the background: everyone likens the brain to a computer these days. And I know you're a fan of Turing as well. Yeah, yeah. So what is it? Yeah, go ahead. [00:33:31] Speaker A: Well, I think I can weave in what I didn't answer about your earlier question, which was: is fractal a descriptor? Is it causal? I think it's an observation you can make about a current set of affairs that can actually entail something farther ahead. So to get to a cascade: one of the big cascades in the room is evolution. And evolution is much bigger than just neo-Darwinist selection of genotype and selection of phenotype anyway. But with evolution, you can look at those trees of speciation; that's a cascade. And actually you can find an imprint of that cascade in the multifractal structure of the genetic code, which sounds weird. In 1995, when I was in academic diapers, someone was looking at the multifractal structure of genetic sequences across multiple different species, and they were able to use that to do cluster analysis of the different phyla. [00:34:40] Speaker B: And so you can predict using that. [00:34:42] Speaker A: Yeah, yeah, basically. [00:34:43] Speaker B: But that doesn't mean that it's causal, right? [00:34:47] Speaker A: Well, what's causal is, I mean, it's an operationalization of a cause in which the causes are these nonlinear interactions through a cascade. As for the difference between cause and operationalization, we've all got it on all sides of this, where the brain is just a storehouse of operationalizations. There are things happening.
So we've all got our operationalizations; I'm not so worried about that. What I'm saying is that the underlying construct here is the cascade process that we are engaged in currently, and that ultimately has a ragged edge, where at some point it does kind of matter what you had for breakfast, not in a way that is easily compartmentalized, as we might like in a linear model. But if we've got a cascade generating all these species, this evolutionary cascade made a brain, made a body, made all these other species, helped shape this whole environment. And so, experimentally, you can jump in and inject a manipulation that has multifractal structure and show that when you manipulate that multifractal structure, you can get a difference in response. That's not proof positive, and we'll never get proof positive with many of these manipulations, ever. But something about cascade structure speaks to the construction of a cognitive-perceptual response. And I suspect that, though I cannot latch hold of all the different cascades going into our conversation right now, we can use the same experimental framework, where we grab hold of some of them, change them, and measure the response. So it's at least falsifiable. The numbers themselves aren't proof positive; they are currently the best possible way to describe the cascade that is under the curtain, basically. And so it's the best way for me to say: it's Gottlieb again, and what Gottlieb said is true, and it's falsifiable. It's not a theory of everything, though some people think that when they see the Gottlieb picture, it looks like everything is everything. I know, but now we can be clear: okay, if something like this is true and it means something, then we can actually quantify that, whether it's in an experimental manipulation or in observable covariates.
And we can put that into our model, and it can live right alongside all the other independent factors that we also think matter. [00:37:37] Speaker B: So the metaphor replacement is cascade instability. Where does the instability come in? [00:37:43] Speaker A: Oh, cascades are always instability. Well, that's not quite true: you can have perfectly regular cascades. You can do a perfectly deterministic cascade where you always take one quarter and three quarters out of the stream at each step. You could do that, but it's very rare. I don't think that's. [00:38:05] Speaker B: But in self-organized systems, I mean, with the recurrence and nonlinearities, that just falls apart immediately. [00:38:12] Speaker A: Yeah, I would say so. Okay, so right now my colleague and I have a bunch of cascade simulations trying to figure out how many different kinds of noise you can pipe into the cascade to get what sort of diversity of responses. Because, you know, we have a long history of empirically saying, okay, this seems to matter correlationally, and we have some experiments. We'd like to look at these metrics and have some more theoretical expectations about what kind of cascade produces them. Can you start making a model, as Paul Bogdan, I'm not sure how to pronounce that name, has been showing? It could be useful to understand physiological systems by having a network of multifractal observables. And could you look at how each of those nodes' multifractal contributions spreads? How contagious is this?
So think about linking a bunch of actual waterfalls to each other, and you make some sort of Rube Goldberg machine of, you know, massively interactive fluid dynamical systems. And so we've got cascade simulations where we're starting to ask exactly that question. If, at each generation, you are fragmenting according to a multifractal noise signal, what are you going to see at the bottom of the waterfall? And then how does that correspond to what we see in our measures? Interestingly, one thing we've been doing is also modeling cascades where, rather than a nonlinearity, we'll inject generations of additivity, as if to begin simulating what it's like to jam in independent constraints, which we have to do if we're going to do clinical interventions or experimental manipulations; we pride ourselves on those being independent. And interestingly, some of the most realistic-looking series, to our eyes, come from that, when you have this compromise in the mathematics between interactivity and these add-on constraints. [00:40:44] Speaker B: Just backing up a little bit. We didn't talk about this upfront, but your worldview. So I've come to really appreciate process philosophy and viewing everything as a flow. And in modern neuroscience, yes, dynamical systems theory is used more and more to explain the activity of massive populations of neurons, and yet the language, and I think the worldview, of most neuroscientists is still all about states; it's a much more static thing. And when you're talking about cascades, I imagine you have that same everything-is-a-flow worldview. Is that right? [00:41:28] Speaker A: I like that you're recognizing that. But I'm more of a Pattee-an, and I try to cite him often. He's been very kind.
When I've asked him if I made a mistake saying certain things with his work, he hasn't said anything negative. [00:41:49] Speaker B: Is he still active? [00:41:51] Speaker A: My understanding is he's retired, and he said he was reading things that he never made time for before. And I don't want to trouble that. [00:41:59] Speaker B: This is Howard Pattee. [00:42:00] Speaker A: Howard Pattee, yes. SUNY Binghamton, I believe. And he says that you can't just have flows and you can't just have constraints, and that we need some sort of complementing of the two. And I like that idea; I can't not like that idea. And it sounds mush-mouthed, it sounds like, well, I guess it's another everything-matters kind of thing. But I think that's how I actually feel when I fit those experimental constraints and then look at how the body is flowing within them. My statistical models, I think, are at least trying to do that sort of thing, where I'm acknowledging, I don't want to say you don't need a brain. I don't want to say that it doesn't matter. You know, when I do single-word recognition studies, I'm using word frequency; I'm looking at the corpora that the psycholinguists have. Those are real things and they matter, and, for the purposes of the study, they're fixed. They are constraints. And so I think those. [00:43:01] Speaker B: A process philosopher would just call those constraints. And while you're talking about constraints, I was thinking, oh, man, are fractals just constraints? But a process philosopher would say, well, a constraint is just a slower flow than the thing that it's constraining. Right? [00:43:15] Speaker A: Yeah, yeah. And I. Yeah.
So, okay, in that sense, I might be a process philosopher, and I admit entirely that that's frustrating, because then it's a question of when scale happened. And I have a colleague, David Farrokh, who will continually ask me, what came first? And so that's. [00:43:35] Speaker B: Wait, back up. What do you mean it's frustrating? Because it's a problem? Because then you're just saying. [00:43:41] Speaker A: Yeah, because, okay, there are parts where Pattee says that the constraints are built. So maybe he wasn't saying they really are flows, but that they're built from flows at another scale. And I like that idea, because it's at least not saying, well, I just told you it's a thing, and it's actually just the other thing again. But something I'm really struggling with is how we ourselves, with our measurement circumstance and our experimental paradigm, put ourselves into the behavior. When we're deciding what to study, we do set a frame, and within that frame there are these things, whether or not they were flows once, that become the constraints. That's part of our description, and we sort of can't get away from it, I think. And so that's the sense in which I would still call a constraint a constraint without blinking. And then, when you look at, say, the boiling point of water, people have said, well, that's fixed; that's scale-dependent. And I was like, right, but if you change the shape of the pan and you go up a mountain, you're going to get something different. It's actually not fixed. And that right there is process-y, I guess. But it's also acknowledging that, yes, given the circumstances that you've set up, this is fixed, and gravity is 9.8 meters per second squared; we're not changing that right now.
So I do struggle. One time, I think I told David Farrokh, what did I say, ontological monism might require epistemological dualism, or something. I mean, it's a mess, but I try not to go back to the Big Bang and what came first. That isn't within my qualifications, and that's not where I fell in love with the psychological content, so I'm not an expert there. It's probably wildly inconsistent as you go back to the Big Bang, and I don't know what to say. [00:45:49] Speaker B: Let's not do that. Yeah. [00:45:50] Speaker A: Okay. Yeah. [00:45:51] Speaker B: Which may or may not have happened, it turns out. [00:45:53] Speaker A: Right. That's the other thing. Right. [00:45:55] Speaker B: So, yeah, I lost track of where we were. We were talking about cascade instability and really. [00:46:01] Speaker A: Process. [00:46:03] Speaker B: Yeah, yeah. Well, yeah, I interrupted and you were saying it's more Pattee-like, your view. [00:46:08] Speaker A: Yeah, I think so. So I was looking at your questions, and you had stuff about AI and what is AI missing. And I think, so, Dennis Waters, who's a student of Pattee's, wrote in his recent book that much of the problem is that it's all constraints. So it's all symbolic logic. I mean, you're just doing math. And what is. [00:46:33] Speaker B: Oh, the problem with AI? [00:46:35] Speaker A: What AI is missing, yeah. [00:46:37] Speaker B: Oh, okay. Yeah, it's missing constraints. Is that what you're. [00:46:40] Speaker A: No, no, no. It's missing flows. [00:46:43] Speaker B: Flows. [00:46:44] Speaker A: It is actually not acknowledging the flows. [00:46:47] Speaker B: Right. But see, that's the thing. Is our flow that important, then? Because AI is awesome, right? It's doing cool stuff. Yeah, yeah.
And it's doing some, I mean, you know, connectionist, static version of computing, essentially. And it's not acknowledging flows. But maybe we don't need flows, then. [00:47:06] Speaker A: I think so. The thing is, our computers have flows in them. We only notice them when our computer breaks, and then we're like, oh, crap, now I have to think about the material embodiment. We build these things to do a job, and they do it really well. But I still think we're dealing with the same old problems as with the computer model. It's the same limits. I remember reading stuff about whether we solved the frame problem. The GPT-powered diagnostic programs don't understand that people have to leave to go to the bathroom. And that's maybe a pain-in-the-butt kind of complaint, but it shows these programs are great at what they do in a narrow task space. They don't have context. They are really cool models; I'm fascinated by them. I am not convinced that's all of it. [00:48:11] Speaker B: But maybe it's all of, quote unquote, intelligence. So I'm curious about your views on even what intelligence is. Like, if AI can perfectly answer all of our questions and will ask us if we want to go to the bathroom every, whatever, hour or two. My own views of thinking: I think AI is awesome also. I think it's missing flows also. But I can't articulate why I find that to be important, except for Moravec's paradox, which is: playing chess is easy, grabbing a cup is hard. The things that we think are easy are actually hard to implement in AI, and vice versa, it turns out. [00:48:58] Speaker A: I mean, I go back to the whole developmental perspective where I started from, where it's, where did it come from, and how does it get there? I think it's a very cool device we've made. We made very smart software, but it's still just doing math.
[00:49:20] Speaker B: But is that what intelligence is? [00:49:22] Speaker A: No, I think intelligence is not just mathematics. I mean, it seeks. [00:49:33] Speaker B: Ooh, that's agency, though, right? So. [00:49:38] Speaker A: If I'm reading you right, I think agency is part of it. I have a difficult time thinking about an inert intelligence that is incurious and just has all of what it has. You need to go find what you're interested in. I mean, intelligence, whatever people report, it's not just that. ChatGPT was really good at the AP Psychology exam really early on, which I find amusing, but it can answer these very narrowly constrained questions. It's a very useful tool as people are ideating and mocking up, you know, scripts that they're going to use somewhere. But I guess intelligence for me has more of the emotionality and the interest and the bias that we have in what we recognize as intelligence. Intelligence for us isn't just having a base of facts you can dump out. We are impressed by people who can do that, but I don't think that is clearly what we think of as intelligent. [00:50:57] Speaker B: See, I worry that my own view is evolving in such a way that I am, in my mind, equating intelligence with life processes. But what I want to be able to do is just let go and say, okay, maybe intelligence is a thing that we can name, and maybe I'm not as interested as I thought I was in intelligence. Maybe I'm actually more interested in the life processes that seem super intelligent. But then it's just, oh, how can that thing exist? That's pretty damn intelligent. [00:51:29] Speaker A: Right? [00:51:30] Speaker B: So then I just wonder, am I just collapsing back to your worry that everything is everything? And.
[00:51:36] Speaker A: Yeah, well, I remember my students. So I was teaching a seminar in ecological psychology to my master's students here last semester, and some of them had just heard about Anthrobots. Michael Levin came out with some really cool, I forget the proper name, but they were self-building, wet robots that learned to solve a maze. I don't have all the details. I'm sorry, Michael, if you're seeing this. But the students were terrified. They're like, wait, so there aren't any past experiences here? All the stuff they thought about learning, like, oh, you train this model, give it this experience. And I was like, right. I'm not smart enough to have built what he built, but sometimes the output doesn't look like whatever generating process gave it to you. Learning is not just mimicry. Train the model on a set, and you can get some really phenomenal stuff, but I think we have other cases where you're getting intelligent-seeming behaviors, solving problems, coordinating with a social group, making decisions, stuff that seems to fall outside this paradigm of training the model on the heap and then seeing what comes out. And I think some of it comes down to what Turing said, about how it'd be cool, and he was spitballing and potentially just trying to be funny, but it'd be cool if you could have these nodes walk through the countryside and build up a set of experiences. This would never work, because that's not how actual humans work either. So I just feel like there's embodiment. When Marr and all of them started talking about multiple realizability, it was sort of like, hey, folks, you.
You have flows. You have the electrons running through; that's always working. You have the housing; you always need something to make it. But, the claim went, that actually doesn't matter: we're just going to work strictly in terms of constraints that can be built however you like. And that's where I think, yes, we're able to make constraints do a lot of cool stuff, but we built that. And we did that not because we are a set of constraints; there's a lot of exploration and a lot of inquiry and agency, if that's what it is, and if that's a different thing, that's a different thing. The housing matters in a way that I think doesn't always come out in the telling. [00:54:18] Speaker B: I'm fairly certain I could not pass an AP Psychology test right now. [00:54:23] Speaker A: Okay. I'm sure I couldn't right now. [00:54:27] Speaker B: So if you were going to build AI, you would start with turbulence? [00:54:34] Speaker A: Well, that would be cool, I think. Actually, Robert Wood at Harvard was working on Turing-instability, reaction-diffusion spiders that were given a plastic mesh. I think it's really cool. And I think potentially a lot of these robots actually have turbulence in them and they can't help it. [00:54:59] Speaker B: I'm sorry, I'm going to interrupt you, because this is the first time I've mentioned turbulence. So why is turbulence interesting to you? [00:55:08] Speaker A: Turbulence is fractal. And so there it is again. [00:55:12] Speaker B: Everything's fractal. [00:55:13] Speaker A: Well, turbulence was recognized as this strange case where suddenly laminar flow breaks down.
Laminar flow does happen, but when you push a fluid beyond the bounds of what the container and the fluid can maintain, it'll generate these coils. You start to see that with a rolling boil in a pot of water, for instance. So turbulence is this case where the whorls start to contain whorls and vortices, and you get these structures blinking in and out of existence. And the thing that got sort of terrifying for people, I think, is when they realized, oh, we can model these eddies that didn't exist before, and we can model them as if they're particles. Like the Great Red Spot on Jupiter: that's not a dot, that's a fluid vortex that's just very self-sustaining. So when people started developing these, I'll just fast-forward: I recently had this very exciting conversation with a scholar who works on network models. I don't know much about what the framework was or what the background was, but he is having these models talk to each other, and then he's looking to see what they do, and I think that's really exciting, to see how these things interact with each other. And when I was at Harvard Medical School, I would talk to these folks who were making these robots, and I'd say, you know, you're coming up with really exciting machines here that we don't know what they're going to do. And that's cool, because we're taking away all the control, the Rodney Brooks idea of just letting the solutions emerge. I was like, you have some robots here, and it'd be cool to measure what they're doing, to see if successful, intelligent performance aligned with any of the stuff that I've gotten. And they didn't just say. [00:57:11] Speaker B: Go away, get away from me.
[00:57:12] Speaker A: They didn't quite say, go away. But I feel like, without needing to stick these robots into a pot and boil them or something, we're already seeing robots with fluidity of movement and randomness in such a way that I think there's already stuff in there. And I think that would probably help the models, so you could understand: oh, that's the run, and that's when it toppled over, oh, and that's when it hit the target. I'm really excited with how flexible and adaptive AI and robotics have gotten. And as soon as we're building in that noise, it's like, oh, I have some ideas about what sort of noise might actually contribute to this. Whether I ever get to test that, I don't. [00:58:00] Speaker B: I don't know. But yeah. Okay, so this is interesting. I think of fractality and scale-freeness as something that emerges from, you know, the self-organized, interacting parts. Right? Yes. But, and so I haven't thought, I mean, would you build fractality in using the right kind of noise, or is it something that emerges and also contributes? Right, there's the causal loop. [00:58:33] Speaker A: Right. So it's like I was saying: as you let a bunch of pieces fall into a circumstance, I think that you will get interactions, and they will proceed across scales. And what that means for a system, you can't really know without modeling the fractal structure of those behaviors. So, in that sense, I think it goes back to your earlier question: is it a description or is it a cause? I think it's a description of causes of a class that we haven't talked about when we look at individual pieces and what they do.
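"Modeling the fractal structure of those behaviors," as Damian puts it, is commonly done in this literature with detrended fluctuation analysis (DFA). The transcript doesn't name the method, so treat the following as an illustrative sketch rather than his exact pipeline: the returned slope is the Hurst-like scaling exponent, near 0.5 for white (uncorrelated) noise and near 1.0 for pink/fractal noise.

```python
import numpy as np

def dfa_hurst(x, scales=None):
    """Detrended fluctuation analysis.

    Integrates the series, measures RMS deviation from a linear trend in
    windows of several sizes, and returns the slope of log-fluctuation
    vs. log-window-size (the scaling exponent)."""
    x = np.asarray(x, float)
    y = np.cumsum(x - x.mean())  # integrated 'profile' of the series
    if scales is None:
        scales = np.unique(np.logspace(2, np.log10(len(x) // 4), 12).astype(int))
    flucts = []
    for s in scales:
        n = len(y) // s
        segs = y[: n * s].reshape(n, s)
        t = np.arange(s)
        # detrend each window with a least-squares line, keep mean squared residual
        rms = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
               for seg in segs]
        flucts.append(np.sqrt(np.mean(rms)))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

white = np.random.default_rng(1).standard_normal(4096)
print(round(dfa_hurst(white), 2))  # close to 0.5 for uncorrelated noise
```

Running the same function on a cumulative sum of white noise (Brownian motion) gives a much larger exponent, around 1.5, which is one way to check the estimator behaves sensibly.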
[00:59:19] Speaker B: Right. So, okay, so you wouldn't build fractality in; you'd build the cause of the fractalness into it. [00:59:28] Speaker A: Yes. But honestly, in order to be a polite scientist, and also out of bare curiosity, I'd want to build in the manipulations, to actually explicitly build fractal structure into different energy sources and textures. We would want that experimental control to say: oh, yes, this mattered. Whatever cultivated, or allowed to happen, that pattern before we experimentally manipulated it, that's going to give you this class of fractal structure. And we'll know that thereby, not because of the fractal structure itself, but because the fractal structure is indicative of what it operationalizes: these patterns of cascade. So when we're doing the simulations, we're trying to figure out what kinds of interactions give you this, that, or the other kind of response. And I think you want to do both, in the same way that we do this in neuroscience, too, where we have the chemical that we think runs through this pathway, and the chemical that you're genetically disposed to have more or less of, or this or that pathway for. And we'd like people to be born with the happy, effective flows of specific neurotransmitters, and that might predict all sorts of happy intellectual, cognitive, developmental outcomes. But then we want to know: how can you get in there, and how can you enter into that? What language are those causes speaking in? They're speaking to us in terms of how we find them in fractal structure, and so we can enter into them, and have a handshake with them, in fractal structure as well. [01:01:12] Speaker B: So if you're measuring an artificial intelligence system and you don't find fractal structure, is fractality a necessary marker of whatever we consider intelligence?
So you don't find it. Can you even fathom that something could act intelligently in the world without fractal structure? Does that make sense? I know it's an unfair question. [01:01:39] Speaker A: No, no, that's fine. So the whole motivation for multifractality is that there isn't just one fractal dimension. So I want to preface. [01:01:48] Speaker B: That, because, yeah, talk about multifractality, like what it is, also. [01:01:51] Speaker A: Right. So when you take the coastline of Britain, or that coastline right there, you can estimate one dimension for the whole thing, the whole coastline, the whole Mandelbrot set. You can sketch out exactly how craggy it is and how craggy it remains as you unpack it. If you measure different places within that coastline of Britain, you will get different values, higher and lower than that sort of one dimension to rule them all. And that variability, a linear modeler would say, oh, that's just unsystematic error: you have small samples and you're getting wobble. So you can test that. And very often, it's not that. Very often, what you get are variations in the fractal dimension across the coastline based on these nonlinear correlations. So you often get an excess of variation beyond that. Linear models can generate multifractal-looking noise too, and so there's a heated debate, if anyone cares. But I think the reason people care is because lots of folks don't care about multifractals, and they want to say, I'm pretty happy doing my linear model; why are you troubling me with this? And that's very honest, I think. And also, even if it is just small samples, we've got small samples all the time, and we want to know that this is not that. [01:03:15] Speaker B: Well, also, linear models are interpretable, right? And so even.
Even if you're using them knowing that they're incorrect, because all models are wrong but some are useful, it's sort of an estimate, or maybe a bird's-eye view, of what's going on. Right? [01:03:33] Speaker A: Yeah. Let's put a pin in that point, because I want to get back to it. [01:03:37] Speaker B: Okay. [01:03:38] Speaker A: So when I was first coming out with the fractal analysis of behavior and seeing how that changed with cognitive or perceptual changes, the thing I have seen repeatedly is that as people get into a groove, into the constraints of the task, as the rule stabilizes, they get less fractal, so they start looking more like white noise. And that is in some sense a. [01:04:14] Speaker B: Blessing, depending on the task. Like when they're performing well, when they're engaged, like in a flow state or something. [01:04:21] Speaker A: And so it gets swept off the table very often. So, for instance, think about it in terms of temporal estimation. Wallot and Kuznetsov (2011) did a wonderful study that exemplified exactly what I wanted people to see: that actually, no, fractal isn't always good. Fractal means a thing. So in temporal estimation, you give people feedback to say, okay, no, you're a little long, you're a little short. You're giving people information, and they are, however they do it, scooping their error into a more acceptable range. They're reining in their overestimations and their underestimations and not letting them spill across the next 10 or 20 estimations. [01:05:14] Speaker B: So fractal is the default, and expertise overcomes. [01:05:18] Speaker A: Fractality. It can, depending on the task.
So if the task needs you to rein in your error and not let it slip and slide across the terrain, then yes. And so, when people are standing still with their eyes closed, more multifractality, so more variety in fractality, is correlated with a greater standard deviation of the center of pressure. So you're getting less stable sway; you're getting wobblier sway with more multifractality. And when you open your eyes and you're looking ahead at a point on the wall, you're suddenly doing a different thing, and the same multifractality actually goes in the reverse direction. When you change the task and someone is using their visual system to latch onto something out in the world, then multifractality actually stabilizes the sway. [01:06:17] Speaker B: Sorry, wait, distinguish multifractality from what would be the opposite of that. Unifractality? So I think you mentioned it briefly, but it's different levels. Could you just describe it for the listeners? [01:06:32] Speaker A: So monofractality is exactly what I was saying before, about how you have one coastline and you estimate a dimension for the whole thing. Multifractality is saying, well, let's look at all these different regions, here, here, here, and here, and we're going to get slightly different ones. And so the debate, a debate, not a very heated or exciting debate for a lot of folks, is: is the variation that you get in those what you would expect from the linear model, a normal distribution, just unsystematic noise, or is that variety suggestive of nonlinear interactions? And so, if we just scoot aside from that and say the variety is due to nonlinear interactions, then you get this sort of variety in how fractal you are in all of your behaviors.
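The mono- versus multifractal distinction can be made computable with a q-order generalization of DFA (the approach follows Kantelhardt-style multifractal DFA; this is a simplified sketch, not code from the episode). Negative q weights the small fluctuations, positive q the large ones: if the estimated exponent h(q) is roughly flat across q, the series is monofractal (one dimension fits everywhere); if h(q) falls as q rises, different regions scale differently, which is multifractality. A multiplicative cascade makes a handy multifractal test case.

```python
import numpy as np

def generalized_hurst(x, q, scales):
    """q-order scaling exponent via simplified multifractal DFA (q != 0).
    Roughly constant across q -> monofractal; decreasing in q -> multifractal."""
    y = np.cumsum(x - np.mean(x))  # integrated profile
    F = []
    for s in scales:
        n = len(y) // s
        segs = y[: n * s].reshape(n, s)
        t = np.arange(s)
        # variance of each window around its own linear trend
        var = np.array([np.mean((g - np.polyval(np.polyfit(t, g, 1), t)) ** 2)
                        for g in segs])
        F.append(np.mean(var ** (q / 2.0)) ** (1.0 / q))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

def binomial_cascade(gens, seed=None):
    """Random binomial multiplicative cascade with 0.25/0.75 splits."""
    rng = np.random.default_rng(seed)
    m = np.array([1.0])
    for _ in range(gens):
        w = np.where(rng.random(m.size) < 0.5, 0.25, 0.75)
        m = np.column_stack((m * w, m * (1.0 - w))).ravel()
    return m

scales = [32, 64, 128, 256]
multi = binomial_cascade(12, seed=2)
width = generalized_hurst(multi, -3, scales) - generalized_hurst(multi, 3, scales)
# a wide h(-3) - h(3) spread signals multifractality; white noise gives a
# much narrower spread at the same scales
```

The "is it real or is it small-sample wobble" debate in the transcript corresponds to comparing this h(q) spread against surrogate series (shuffled or phase-randomized copies), which destroy the nonlinear cross-scale interactions while preserving linear structure.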
So, okay, this is an important reason to care: when behavior goes from monofractal, like Guy Van Orden liked, to non-fractal, just a 0.5 Hurst exponent, uncorrelated. That was the first ten years of my argument, trying to get people to understand: okay, you're not just one thing. There's variation. And so that's the first way to understand multifractality. Your capacity to vary is potentially very important. And it's not just all fractal or all white noise. It's a more patterned landscape, and that's. [01:08:01] Speaker B: What you consider a swarm. Am I bringing that in at the wrong time? [01:08:07] Speaker A: I would say a swarm is a kind of cascade process. I mean, actually, Pearson, the story I've heard is that he was actually trying to understand mosquito swarms, and he framed it in terms of the drunkard's walk instead, to talk about the independent direction changes. So the swarm, for me, would be a case where you have this aggregation that can spread apart and coalesce in interesting ways. [01:08:40] Speaker B: I shouldn't have thrown that wrench in there, getting us off topic, but you were saying that there's the debate, the current debate that you think no one is interested in. [01:08:48] Speaker A: Oh no, let's talk about that. So I was just actually scooting that aside and saying, we have this idea that you can get different fractal dimensions on different parts of that coastline. And there's an honest question about whether anyone needs to care about that. And inside that question was me, for the first ten years of my fractal work, saying, on average: look, we can be fractal, we can be pink noise, we can be white noise, we can do all of these things.
We can contain many, many variations. It's not a one-or-another thing. So when people engaged in card-sorting tasks, gear-system tasks, temporal estimation tasks, they would show that as they're doing more exploratory, freewheeling stuff and learning the task, more fractality was helpful as they grasped the task. So they're hyper-diffusing through the task space, perhaps. I mean, these are all schematic ways of thinking about it. But then as people latch down on the rules and figure out mastery and get good, they actually can scoop their error into what looks more like white noise, what looks uncorrelated. And so my first step into multifractality was me trying to say, no, the sky is not falling. Sometimes we're white noise, sometimes we're fractal, we can be all these things, and it might matter to help us understand what sort of perception-action system we're being. [01:10:28] Speaker B: So there's a capacity for fractality. And as we move through the world, we are also exploring that space, and it may help us in certain contexts and hinder us in others. [01:10:42] Speaker A: Yeah. So when I was talking about the postural case, I was saying you can just look at a center-of-pressure time series. Some of the most boring research out there, I know, but standing still is not, like, you can't stand still. You wobble, and the wobble has structure. And in terms of linear description, the thing you need for linear models, like clear description, is ergodicity. [01:11:08] Speaker B: You gotta define that term. [01:11:12] Speaker A: I know, that's fine. So standing still seems like the most simple, stable thing that you could imagine. Linear models require just that.
They require all of their variables, all of the DVs, to have an ergodicity to them. They have to have stable averages. In some sense, it's the same point about the wind having a velocity. You have to have some sort of representativity. It's the problem of generalizability, actually: as you measure a system, can you say that this sample you've taken of this single system, can you take an average of that? Is it stable enough that that average is then important and can be pooled with other averages from other people? And then, God forbid, could you take all those sample averages and then predict to another person not in the sample? That's the clinical diagnosis problem, and then it's the whole generalizability problem. What does the individual person mean in light of any of this work that we do in the linear model? The linear model requires that all these averages are kosher, that they are stable, that the mean can represent, pun intended, the ongoing process. And so the linear model needs that even when you're standing still. Back to that boring example: it seems like it should be the simplest case of ergodic measures. You should be able to take center of pressure, and we're not going anywhere, so intuitively that should be the simplest, most stable thing, and it's not. Linear models of raw postural sway data would be nice, but you can't train them on that without something better. And you can't make a predictor, put it inside your linear model, and say, I want to know what the unique effect of that is. That unique effect that you estimate with your interpretable linear model hinges on it being a representative sample of that predictor that you put in there. In a sense, all the old challenges with the computer model still stand, as far as I know.
But the new one that I like bringing up is this idea: take the simplest, most stable behavior you think we have. You want to have an internal model of how upright am I? You cannot have a system that is operating on that raw variation. Either the internal model isn't linear, or the models we make shouldn't be linear. I mean, there's an incapacity of linear models to represent what isn't actually statistically, mathematically ergodic. [01:14:16] Speaker B: But does that mean that we have less purchase on an explanation if we're using a linear model as a descriptor? [01:14:26] Speaker A: Well, it means you have less purchase if you're shoving non-ergodic things into it. [01:14:30] Speaker B: Okay. [01:14:31] Speaker A: Which, very often, lots of folks are. The good news is that cascades are very, very non-ergodic, but when you use these geometries that are made for cascades, those estimates are actually ergodic. You can make the ergodic description. And so, in a sense, it inoculates the linear model against all the original problems. And in terms of understanding how stuff works, having discourse, the linear model is here to stay. We're using it to discuss. We have to know how to fit it, and we just need to know how we can generalize. So I take heart in the fact that, beyond any sort of theoretical thing, at least these multifractal estimates are behaving. They are ways of describing the ergodicity-breaking parts of us, and they're describing them in ergodic ways that submit to causal modeling in the old traditional linear framework. What these things can describe is the variation in fractal dimensions, so that capacity to vary from pink noise to white noise.
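[Editor's aside: the ergodicity point above can be made concrete with a toy simulation. The sketch below contrasts an ergodic process (i.i.d. noise, where each series' time average converges to the ensemble mean) with an ergodicity-breaking one (random walks, whose time averages depend on each walk's own history and scatter widely). The random walk stands in here as a simple non-ergodic example, not as a model of the cascade processes discussed.]

```python
import numpy as np

rng = np.random.default_rng(1)
n_series, n_steps = 200, 2000

# Ergodic case: i.i.d. Gaussian noise. Each series' time average is a
# trustworthy, stable estimate that can be pooled across "people."
noise = rng.standard_normal((n_series, n_steps))
ergodic_means = noise.mean(axis=1)

# Ergodicity-breaking case: random walks (cumulative sums of the same noise).
# Each walk's time average is dominated by its own idiosyncratic history,
# so the averages scatter instead of converging on a shared value.
walks = np.cumsum(rng.standard_normal((n_series, n_steps)), axis=1)
nonergodic_means = walks.mean(axis=1)

# The spread of time averages across series is the tell: tiny for the
# ergodic process, enormous for the non-ergodic one. Feeding the latter's
# raw values into a linear model as if the mean "represented" the process
# is exactly the mistake described in the conversation.
print("ergodic spread:    ", ergodic_means.std())
print("non-ergodic spread:", nonergodic_means.std())
```

The design point: it is the *description* you extract, not the raw series, that must be ergodic before it goes into a linear model, which is the role the multifractal estimates play in the argument above.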
So if you give the model the information about how much this can vary, then that is actually less troubling because of ergodicity. And it's potentially predicting. I mean, one of the big problems with ecological psychology and embodied cognition is that the body is contributing, and there's not a whole lot of purchase for saying how. Like, what part of movement? And what do you mean? [01:16:25] Speaker B: Yeah, can you describe that more? [01:16:28] Speaker A: Well, okay, so take ecological psychology. There, when you talk about perception-action, they're talking about the detection of information, and it's invariantly specified. So don't worry, just go with the flow. In some sense, you don't have to learn anything. And so that's why it's been strange for people to talk, and interesting, I like it, strange for people to talk about direct learning, the detection of non-specifying variables, and then also ways of thinking about affordance and information that are more transactionalist and less dispositionalist. Like, Turvey would have said that the information is out there. And for me, that just turns the whole problem inside out. Instead of an intelligent executive, you made an intelligent universe, and you're done. So what I'm saying is that the ecological approach has an idea that we do something with our bodily movement. We reach out for information, and it's never been all that clear how that contributes, because everyone's going to reach out in a different way. It's going to be non-ergodic, it's going to be idiosyncratic. So what I'm trying to do with multifractal structure is come up with an ergodic description that fits a linear model, one that aims to say: here are this organism's nonlinear interactivities.
So I think that what we're doing is trying to quantify and make modelable the behaviors we're looking to explain on the DV side, and we're also coming up with measures that will behave and go in a linear model that can address how organisms do behaviors that produce or contribute to new information. Because that latter part is what I don't see very much of elsewhere. So people will talk embodiment, they've heard about that stuff. There's motor imagery and motor resonance: if you see people moving in a certain way, your brain will activate in a certain way as if you were doing that. There's all this really cool stuff, but then it sort of drops off, and the body is doing who knows what. So with the turbulence and the multifractality, I'm trying to say the turbulent stuff that we do with our body in between stimulus and thought fills in some of those gaps for embodied cognition. That's what we can capture, what we can quantify, with multifractality. [01:19:16] Speaker B: Okay. Okay, so then zooming way out again, right? So, you know, fractality, the coastline example is always given. The coastline is not alive. The coastline is not intelligent. So fractals are everywhere, right? I mean, not everywhere everywhere, but lots of natural structure that has no agency, no intelligence, can probably be described as multifractal. Right? And so then what is. This is not a criticism. [01:19:49] Speaker A: No, there's no simple. Just like there's no simple "fractal is good, fractal is bad," there's no simple "fractal is intelligent, fractal is not intelligent." I've never aimed for that, and I don't encourage it. There's sort of the.
I mean, I think where it makes some sense is taking that zooming out, as you said, zooming out to the scale where these organisms and these brains came from somewhere, where you have an evolutionary process unfolding in a structured terrain of some sort. The Gibsonian idea, and he wasn't saying all of this in exactly these terms, so you sort of have to varnish it and maybe see another glint of light in it, is that all the intelligence that we have is thanks to our participation in and our growth from these cascades. And those cascades happened in a patterned world. All those ambient arrays have structure that we are responding to. And I guess my problem has never been who's the intelligent one, which is the intelligence. I agree that we are intelligent, you and me and all those other humans out there, and we're trying to figure out: where did that come from? And some of it did come from the environment, and some of this environment did have self-organizing systems. And some of those self-organizing systems could, you know, talk. And some of that talking was self-organizing, too. So I'm not looking to exclude someone from a club of intelligence. I'm more trying to understand this intelligent thing we're doing. How did we get to it? And if it's self-organizing, what is the language of self-organizing systems, and can we play it out here? It's not animism. I'm not trying to make coastlines live and speak and breathe. [01:22:06] Speaker B: I wasn't trying to pin that on you. [01:22:07] Speaker A: Oh no, I know, but I think it's a risk, obviously, because you can't make the simple fractal intelligent. But I wouldn't pursue that at all. I would say that when you have it. So I would want to know.
I mean, the Anthrobots that Michael Levin made, they're nothing. I mean, people would debate whether they're alive. They were. [01:22:32] Speaker B: Are they fractal? [01:22:33] Speaker A: I would love to know. I bet that their growth and their performance rest on cascades, and I'm sure we could model and understand different outcomes and different performance using multifractal geometry. So, yeah, I think that multifractal is not a halo. It's just a way to operationalize the cascades that we think got us here and that we think are going to take us to the next chosen experimental point. [01:23:11] Speaker B: In other words, you don't feel like your entire career has been a waste? Just kidding. [01:23:16] Speaker A: No, no, no. [01:23:17] Speaker B: That was a joke. [01:23:18] Speaker A: Okay. [01:23:19] Speaker B: Yeah, because, I mean, no wonder. [01:23:20] Speaker A: Mandy. [01:23:22] Speaker B: Okay, Damian, last thing. What are you doing right now that's exciting? What can we look forward to in the near future? [01:23:30] Speaker A: So I am working on some more problem solving, some more memory stuff. I've been doing a lot of posture, and that's been cool, but I want to get out of it for a little while and get back to some cognitive stuff, which I feel is what got me into this in the first place. [01:23:48] Speaker B: Right. Yeah. Just reflect on that for a second. You said at the very beginning that you did everything wrong, or you messed up when getting into grad school or something, and I didn't know exactly what you meant. It's hard for me to reflect on my own changing, evolving viewpoint of what brains do and what they are and what mind is and all that stuff. Do you have a sense of that?
You know, when I got into neuroscience, I was interested in figuring out consciousness, whatever that means. Right. And then as you go further along in your professional life, you start to work on very specific, smaller problems. And when you work on that, quote-unquote, smaller problem, it gives rise to a bunch of other very specific, very small problems. And so you become narrower and narrower and narrower. And what you just said is that cognition is the reason you got into this in the first place. So how do you view your own trajectory? [01:24:49] Speaker A: I actually feel like I get these echoes back from the past when I read old books or see things. So someone saw my Turing paper, and he pulled me aside and said, do you know Kabbalah? And I was like, yes, I do. For those of you who don't know, this is a mysticist tradition in Judaism. Some of the patterns in the Turing patterns were pentagram-like shapes, and so this rang a bell for him from something else. And way back before I got into any of this stuff, I was interested in Carl Jung. I was interested in archetypes, and I wanted to know, where do these patterns come from? Why do we keep seeing these patterns similarly? So I dipped my toe into the Joseph Campbell stuff. But I wanted to know, how does this have any leverage? Is there anything that physics has about self-organization? Is there any reason that this could happen? And is there any reason that it suffuses, goes through, some of our psychological, cognitive experience? So in a sense, I feel like I'm still doing exactly what I want to be doing and answering very old questions. I'm pretty happy with how things have turned out.
I would say, and I hope to keep going. [01:26:15] Speaker B: Good. Well, I hope we didn't confuse many of our listeners. I'm sure a lot of people are very confused right now, but go ahead. [01:26:23] Speaker A: I'm available for contact and for clarifying any questions anyone has. I love questions. [01:26:27] Speaker B: Yeah, I'll point to plenty of your work in the show notes as well. So Damian, thanks for coming on. I really have enjoyed the conversation. I'm going to see fractals everywhere I go today now, so thanks. [01:26:37] Speaker A: Careful, don't trip. Okay. Thank you. It's been a lot of fun. [01:26:56] Speaker B: I alone produce Brain Inspired. If you value this podcast, consider supporting it through Patreon to access full versions of all the episodes and to join our Discord community. Or if you want to learn more about the intersection of neuroscience and AI, consider signing up for my online course, Neuro-AI: The Quest to Explain Intelligence. Go to braininspired.co to learn more. To get in touch with me, email Paul at braininspired.co. You're hearing music by The New Year. Find [email protected]. Thank you for your support. See you next time.