Episode Transcript
[00:00:03] Speaker A: I mean, the main point was, what is a field of inquiry? What is a domain of knowledge? You know, what is a neuroscientist? What is a particle physicist? What is a complexity scientist? I was interested in that almost as a meta question. What you realize when you go down that path is how difficult that is to answer.
Agency is intentionality.
Well, all of those ideas break the fundamental assumption of all of physics, right? I view that as fundamentally, paradigmatically new, and it is the origin of complexity science. And as you say, it's the origin of life.
And that's what all science is other than fundamental physics. Right. And that's why emergence is so important a concept, because the processes of emergence are what explain why other disciplines other than physics have to exist in the world.
[00:01:09] Speaker B: This is Brain Inspired, powered by The Transmitter. David Krakauer is the president of the Santa Fe Institute, whose mission is, officially, searching for order in the complexity of evolving worlds. When I think of the Santa Fe Institute, I think of complexity science, because that is the common thread across the many subjects that people study at SFI, like societies, economies, brains, machines, and evolution.
David has been on the podcast before, and I invited him back to discuss some of the topics in his new book, The Complex World: An Introduction to the Fundamentals of Complexity Science.
The book, on the one hand, serves as an introduction and a guide to a four-volume collection of foundational papers in complexity science, which you'll hear David discuss in a moment. On the other hand, The Complex World became much more: a discussion connecting ideas across the history of complexity science. Where did complexity science come from? How does it fit among other scientific paradigms? How did the breakthroughs come about? And so on. During our conversation we discuss the four pillars of complexity: entropy, evolution, dynamics, and computation, and how complexity scientists draw from these four areas to study what David calls problem-solving matter, another term for what the complexity sciences are about. We discuss emergence, the role of time scales in complex systems, and plenty more, all with my own self-serving goal: to learn and practice how to think like a complexity scientist, to improve my own work on how brains do things. Hopefully our conversation and David's book help you do the same. So I will refer you in the show notes to his website to discover all of David's other accolades over the years, along with a link to the book that we discuss.
If you want this and all full episodes of Brain Inspired, support Brain Inspired on Patreon, which is how you can also join our Discord, have a voice on who I invite, even submit questions for the guests, and otherwise just show appreciation. So thank you to my Patreon supporters, and thanks to The Transmitter for their continued support. Show notes are at braininspired.co/podcast/203. Here's David. I had the thought in reading your book, and I'll hold it up here. So we're going to be talking about concepts from The Complex World.
My overarching goal, I think, in our discussion is to get a feel, personally, for how to think like a complexity scientist. And this book is deceptive, David, because, I don't know if you can see, it looks fairly thin, and it's elegantly and concisely written. And I read the whole thing, but I don't know if I really read the whole thing, because now I have to go back and read it much, much more slowly; there's so much material in here.
So this book is interesting because it is out before the four volumes containing the foundational papers. Oh, you have physical copies already.
[00:04:42] Speaker A: Yeah. I'll tell you the history if you're interested, Paul. I'll tell you what happened.
So a few years ago, I mean, this is a long story. Many people, or your listeners, might know SFI through its books, those brown and red books, if you remember, from the 80s and 90s: Christopher Langton's Artificial Life; Arrow, Pines, and Anderson on the economy as a complex system. And on and on it goes. So we've always published and been interested in communicating complexity science right from the beginning, from the 80s.
But we decided to bring a lot of that in house and have our own press, as opposed to working with McGraw-Hill or Oxford or MIT, all great presses. But we wanted the authors to be closer to the publishers, and we wanted to make the books more affordable. And the big project of the press is the four-volume Foundations of Complexity Science, spanning a hundred years, which you see behind me, those beautiful yellow books. Those are the first three volumes; the fourth comes out in December. So it's just under 4,000 pages. And it grew out of asking the community, what is complexity science? If you had to pick one paper or two, whatever, that you thought were absolutely archetypical of this endeavor, what would they be?
And we amassed tons of suggestions and it was surprising how concentrated they were.
And then we asked each person who was expert and had recommended a particular paper to write the history of that paper and its enduring impact. And that's what those four volumes are: just under 90 papers, all placed in historical context and annotated.
So I wrote an introduction to those four volumes because I thought, oh, my God, what is this thing? What has come together?
[00:06:51] Speaker B: It's more than an introduction, though. I mean, one of the things that you do is you reference the people who wrote about the papers, who annotated and sort of introduced those papers. So you're giving a roadmap of a roadmap to the papers, in one sense, but it's more than that.
[00:07:08] Speaker A: Exactly. No, thank you. Exactly. So that's the history. I thought I'd write the introduction. Then I realized I'm not really writing an introduction, because it sort of got out of hand, because, to your point, each of those papers represented a perspective on the complex world. How so? Where did they come from? How did they influence each other?
And so on. And so in weaving that tapestry, I wrote a little book. I didn't expect to. That was not the plan, but it became sort of black hole dense, so.
[00:07:46] Speaker B: That's a good way to put it. Yeah.
[00:07:48] Speaker A: But I decided to keep it that way. And the reason it's actually published as a separate book has a funny story. It is the first opening part of the four volumes. But my colleague Sean Carroll at Hopkins was giving a course on complexity with Jenann Ismael, and they said, you know, we were looking for a book to teach the course, and then we realized we're just going to use your opening introduction, but the students don't want to buy all four volumes. So I said, oh, Sean, I'm just going to do that, and we'll publish the opening introduction as a separate book. And that's the history of it, to make it available for students.
[00:08:30] Speaker B: That's interesting, because I was kind of envisioning how this book could be used, like a sort of dedicated study group. If I formed one of those, right, a Foundations of Complexity Science study group, should I use your book? Because all of the papers are in chronological order in the foundations, and I can imagine going through every single paper with the annotations, but that would take a really long time. What would you recommend? If I were going to do something like that, how should I approach it?
[00:09:11] Speaker A: I would recommend just that.
No, because I think, you know, if you felt like it, read my shorter book first.
[00:09:22] Speaker B: No. Yeah, I think that that's essential because then you get the whole context and why.
Why you can go through chronologically, which you actually probably took pains to figure out how they are related to each other and put that into the.
[00:09:39] Speaker A: Well, the main point. I mean, again, the main point here was not only to organize the papers, but to ask where they came from in the 19th and late 18th century.
[00:09:51] Speaker B: Well, what was your eventual task, eventual goal in writing the book? Because you set out to write this introduction and then it became something more. So, yeah.
What did you envision here? What did you hope to achieve?
[00:10:04] Speaker A: I mean, the main point was: what is a field of inquiry?
What is a domain of knowledge? What does it mean to be expert in X?
You know, what is a neuroscientist? What is a particle physicist? What is a complexity scientist? I was interested in that almost as a meta question.
And what you realize when you go down that path is how difficult that is to answer in a thoughtful way. Right? You can say silly things: neuroscientists study the brain. Okay, whatever.
Physicists study the universe. They're not particularly informative answers, right? And so one question was, what is this paradigm that we call complexity? And a lot of books have been written that confuse it with methods. And that really drove me nuts.
[00:11:03] Speaker B: Well, yeah. Okay, we'll get to that. I'll ask you more about that.
[00:11:07] Speaker A: Yeah, no, it's important, because the methods matter, right? It's a very difficult thing. But let me just give you a couple of examples. If you said to me, well, what is quantum mechanics?
[00:11:21] Speaker B: Right?
[00:11:21] Speaker A: And I said, oh, you know what it is? It's functional analysis and linear algebra, right?
[00:11:29] Speaker B: Yeah.
[00:11:29] Speaker A: You'd say, well, that doesn't answer it. What's general relativity? And I said, oh, that's the calculus of variations and differential geometry.
They're important. They're absolutely foundational, the study of tensors and so on, but they're not the problem. They're not the conceptual issue. And so part of it was, what is the relationship between the technologies of knowledge, the methodologies, and the ontology, the domain of inquiry? And they are deeply entangled, as I think you're alluding to, in very interesting ways, particularly in the complex domain.
And so I wanted to resolve that. So I had to go back and establish where all this started. And just in a nutshell, the phrase I use is: just as modern physics and chemistry have their roots in the scientific revolution of the 17th century, complexity science has its roots in the industrial revolution of the 18th and 19th. We study machines, and we study machines that were made, engineered, or evolved. And understanding that kind of matter, as opposed to the ordinary matter of physics and chemistry, is the nature of complexity science.
[00:12:46] Speaker B: So philosophical works, recent and old, have debated, and I'm throwing us into a tangent already, so I apologize, debated, or rather pushed back on, the idea that organisms could even be equated with machines. But now you're saying that organisms are evolved machines. So I just want to clarify: do you view organisms as machines, or do you see a distinction?
[00:13:13] Speaker A: Yeah, I mean, I have such a capacious definition, just in terms of mechanisms that perform adaptive work and that are metabolically fueled. I'm willing for that to be an ecology or a machine. I don't mean necessarily classical machines.
[00:13:32] Speaker B: Okay.
[00:13:33] Speaker A: I'm not talking about, you know, watches and grandfather clocks. Right. But I am talking about mechanisms that do work.
[00:13:41] Speaker B: Yeah.
[00:13:41] Speaker A: And the work is dependent on certain internal degrees of freedom that produce motion that we view as informational or computational. So it's quite capacious. And I'm willing for those machines to be distributed, to be organic, and to be very noisy.
[00:14:00] Speaker B: So, yeah, you have an inclusive definition of machine then, it sounds like, which is fine. So just a moment ago you talked about what it is to be a, let's say, a quantum physicist, right? Which is a different question than, what is quantum physics? Which you also kind of asked. And I know you sort of bristle at the question, what is complexity science?
But then I thought, right when you were talking, I thought, oh, well, maybe a better question is what is a complexity scientist? Or at least you could wrangle a better answer. And one of the things that struck me about reading your book about complexity science, one of the many, many things.
Is it fair to say that one goal of complexity science, or of its scientists, is integration rather than unification?
[00:14:53] Speaker A: Oh, that's interesting. I mean, there's so much to say here.
Let me make a slightly different point and then edge into that question in terms of the development of a mind that is interested in this kind of problem.
In my life, I sort of make a distinction between two kinds of scientists, in their early formation.
The first kind I sort of called foveal.
And these are the people who looked at the stars when they were 12 and said, I have to be a cosmologist.
Or they saw a suffering animal and they said, I have to be a vet.
They looked directly at their target and established an ambition.
Another kind of scientist, I think, is the peripheral vision scientist. They see a pattern in their peripheral vision, and as you know, peripheral vision is really crap, so you're not sure what you saw. It's sort of diffuse, and it haunts you for your entire life, and you're constantly trying to bring it into focus.
And that peripheral pattern, for me at least, evolved into complexity science. And I found it in writings by people like Doug Hofstadter and Martin Gardner and Margaret Boden, you know, and so on, and then Nietzsche and Schopenhauer, quite frankly, who slowly gave me a sense that I wasn't hallucinating.
Some validation, a validation of a certain order of nature that isn't a pattern you can directly look at. So if you say, I study the sun, or I study gas giants, or nuclear fusion machines, that's one thing. But when you say, I'm interested in that pattern that unifies what the brain does and what markets do and what societies do, that's a harder description. And so I do think complexity science as an ontology, which I think I now understand much better, had that character. Now, when you say, is it synthesis or unification? That's a really interesting question.
I think it's a bit of both, quite frankly. It's synthesis because you're looking for the horizontal connections across domains like economies or biology and so forth, but you're also looking for the underlying shared principle, for example the principle of information or computation or cognition, and that's unification. So I think it's a bit of both.
[00:17:42] Speaker B: Well, I said integration, but I guess if you're looking for the common underlying patterns, that would be the synthetic part of it, I suppose.
[00:17:50] Speaker A: Yeah, synthesis can be merely comparative.
Right. You could write an associative book pointing out commonalities. But I think it would be more profound to say or ask where do the commonalities come from?
And that's also a part of this enterprise.
[00:18:15] Speaker B: Okay, since you mentioned your own personal story, I was going to ask you this later and I'll just ask it now.
And this is, I guess, in terms of how the Santa Fe Institute operates, and maybe how it optimally operates: do you want a bunch of people who are studying complexity science, the science of complexity itself, kind of like you? Or do you want people in their individual domains, maybe some foveal people who have started to appreciate their peripheral vision over time, and appreciate what complexity science approaches have to offer within their fields? Is it better to have a bunch of those specialists who can then widen their view? Or do you want everyone just studying complexity science, if that makes sense?
[00:19:06] Speaker A: I mean, again, I don't know the answer to that question, and it's very idiosyncratic.
But at the core it has to be people who are looking for the fundamental principles of the self-organized, selective universe. That is, we all study problem-solving matter and the fundamental principles governing problem-solving matter. That has to be at the center. And you can have expertise in other fields, archaeology, linguistics, neuroscience, but that has to be primary, not secondary, because there's a lot of rigorous technology and methodology that goes with that pursuit, and you'd be spending your entire life doing catch-up if most of your expertise was in your domain, which is critical. Our expertise is in the interstitial fabric that connects fields.
[00:20:10] Speaker B: How did that feel when you finally found that home? So it had bothered you since an early age, going back to your peripheral analogy. And that's kind of the reason why I'm asking, because I want to know how to approach my own field like a complexity scientist. And I've had that same feeling. I don't think I'm a foveal person, but I think that's also hindered me in my specialties, right? So how did that feel, once you realized, oh, complexity science is kind of my intellectual home?
[00:20:43] Speaker A: Yeah. You know, it's worth mentioning the history briefly here, Paul, because it gives you a sense of where it came from. So in the 19th century, many things were happening, but two things are of interest to this conversation. One is, we're building steam engines, right?
And out of that came the science of thermodynamics and statistical mechanics. How do we build better machines, of various kinds, that were revolutionary both in terms of engineering and in terms of economics? At the same time, we were trying to understand patterns in natural history, post-Linnaeus: we're talking about Darwin and Wallace. Where does all this come from? And why is it that animals look a bit mechanical? I mean, an eye, a lung, a heart: that looks a little bit like some of these machines that we're trying to build, just more efficient.
So all these theories start emerging and essentially there are four what I call pillars that emerged in the 19th century.
[00:21:45] Speaker B: Entropy, evolution, dynamics, and computation. See, I read the book. I read the book.
[00:21:50] Speaker A: Those are it. So everyone could ask these questions about these systems they were studying.
Are they stable? Are they efficient?
How much energy do they require?
How are they engineered or evolved?
What problems are they solving? What we call computation or logic.
So as you say: evolution, entropy, controlled dynamics, and computation and logic. And all of those people, Boltzmann, Maxwell, Clausius, Carnot, Darwin, Boole, Babbage, Wallace, were all part of a society in constant conversation in the 19th century.
[00:22:32] Speaker B: But see, you write in the book that this is before organized academic publication systems, which is how we all talk to each other now. I mean, we can say we all talk to each other, and we see each other at conferences, for example, but you write a little bit about how these people came together and interacted.
[00:22:49] Speaker A: And built upon each other and fought each other. You know, Charles Babbage fought everybody.
[00:22:56] Speaker B: Is that right?
[00:22:57] Speaker A: Oh, yeah. He was surly. Much to say about Babbage, a really important figure. But, you know, he was in correspondence with Charles Darwin, as was Maxwell, in argument with him. They didn't agree on a lot of things. And this is before the tyranny of metrics and the journal system.
And what was starting to happen, and really happened in the 20th century, which is complexity science proper, is that those fields started to coalesce. And those four volumes start in 1922 with Lotka, who said, I want to combine Darwin with Clausius and Carnot; I want to do evolution and thermodynamics in one go. And the whole history follows. When you say, what does it mean to think like a complexity scientist? Essentially it means connecting the four pillars.
That is the game. So you're not allowed to think about something purely in terms of information.
You have to think about the energetic implications.
Right. You have to think about its stability.
Right. Its robustness and so forth.
What problem is it solving? Is it a computationally hard problem or an easy one? So really, thinking like a complexity scientist is having the four fields a little bit under your belt.
[00:24:20] Speaker B: They have to be a little bit under your belt. And that's a big ask as well.
[00:24:24] Speaker A: It's a big ask. But you can tell right away. In this community, you had better know something about theoretical computer science, something about statistical mechanics, something about nonlinear dynamics, and something about adaptive dynamics. You have to. And you don't have to be expert in all of them, right? But any problem is rotated through those four pillars. And that's what it means, I think, to think like a complexity scientist. And we can talk about the history, but it's finding those principles that combine the four.
[00:25:01] Speaker B: I mean, you had just mentioned that people often equate complexity science with the methods, and how that's a mistake, in other sciences as well. But if you have all four of these pillars under your belt, each of these pillars has its own abundance of methods. So I think that's where someone like me gets kind of lost. It's almost a frame problem in complexity science: how do I know which methods to draw from all of these fields, to make the connections, if I only know a few from each field, for example?
[00:25:36] Speaker A: Yeah, no, I think it's a totally reasonable conundrum that we all face. And in the end it's disciplined by the question you're asking.
So let's take the example. I mean, the second paper in the foundations is Szilard's famous analysis of Maxwell's demon. So this is this really extraordinarily surprising thought experiment suggested by Maxwell: that the second law, unlike all the other laws in physics, is not a fundamental law, and it can be violated locally if you have an intelligent demon in your system.
And one can talk about that at length.
[00:26:22] Speaker B: But he didn't call it a demon. Did he call it a demon?
[00:26:25] Speaker A: He did.
[00:26:25] Speaker B: Oh, but it wasn't coined Maxwell's demon until later, is that right?
[00:26:29] Speaker A: Yeah, that's right. I think it was actually Eddington, or Lord Kelvin, who called it Maxwell's demon. Maxwell called it an intelligent being, a discriminating observer. But nevertheless, that weird conceptual sleight of hand that placed an intelligent entity absolutely at the foundations of physics became the field that we now call the thermodynamics of computation, which led to the quantum computation revolution. So there was a good example of someone struggling to understand the nature of a law in physics which was not a fundamental law based on conservation principles and symmetries, as the other laws were, and realizing that the right way to think about it was through a computational, informatic lens.
So to me, they were natural methods. I mean, the methods weren't developed, let's be quite clear. I mean, information theory didn't exist until the 40s.
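For orientation, here is the quantitative core of Szilard's argument, stated in modern notation rather than as it comes up in the conversation: the demon's one bit of information buys a bounded amount of work, and erasing that bit costs at least as much, which is what saves the second law on average.

```latex
% Szilard (1929) / Landauer (1961), in modern notation: a demon holding one
% bit about a single-molecule gas can extract at most
%   W_max = k_B T ln 2
% of work per cycle, while resetting the demon's one-bit memory dissipates
% at least the same amount, so the second law survives on average.
\[
  W_{\max} \;=\; k_B T \ln 2 \;\approx\; 2.9 \times 10^{-21}\,\mathrm{J}
  \quad \text{at } T = 300\,\mathrm{K},
  \qquad
  Q_{\text{erase}} \;\ge\; k_B T \ln 2 .
\]
```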
[00:27:37] Speaker B: But going back, this is how complexity science was. The pre-foundations, or the foundations, were inspired by technology and the machines coming along, to a degree.
[00:27:50] Speaker A: Right.
[00:27:50] Speaker B: To even have these processes to study and understand, is that right?
[00:27:57] Speaker A: Yes, I mean, that's a very good point.
The whole concept of the second law comes out of Carnot's analysis of steam engines: how do I make an efficient thermodynamic cycle? That question wouldn't otherwise even have been asked. You know, how do I make an efficient machine by minimizing heat dissipation? How do I do it?
And all that follows from that question.
[00:28:29] Speaker B: And you're right, I interrupted you with an aside. I thought they were connected to what you were saying.
[00:28:35] Speaker A: No, I think, I mean, I just want to make the point that I think it's rarely the case that you start with the method.
I think that you start with the question and then you start in these papers, of course, because they're foundational, they develop their own methods. I mean, there was no chaos theory before Ed Lorenz invented it. Henri Poincare had made the observation, based on his analysis of the three body problem, that there was this thing in deterministic systems that was a bit of a shock that they weren't perfectly predictable without perfect precision of measurement. Lorenz then says, takes that to meteorology and again, very interesting history, a very rich history, and has to develop techniques for the analysis of chaotic systems. Nearly all of these papers are making methods, not just applying them. And that's not unlike the history of physics. I mean, if Leibniz and Newton had to invent the calculus.
Network theory as we now know it, which is what happens when sociology meets statistical mechanics, was invented to deal with systems that are naturally described as network systems.
But at a certain point what happens is something a little bit decadent, perhaps: the method moves to center stage, and then it just gets overused and over-deployed and becomes the thing itself, as opposed to the instrument for understanding the thing itself.
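As an illustrative sketch of the Poincaré-Lorenz point (not from the episode; the parameters are Lorenz's classic 1963 ones, and the integrator is a deliberately crude Euler step): two trajectories started a billionth apart in one coordinate diverge until they are macroscopically different.

```python
# Sensitive dependence in the Lorenz system: integrate two copies whose
# initial conditions differ by 1e-9 and watch the separation grow.

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)   # perturb one coordinate by one part in a billion

for step in range(40001):
    if step % 10000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * 0.001:5.1f}  separation = {gap:.3e}")
    a, b = lorenz_step(a), lorenz_step(b)
```

With these parameters the separation grows roughly exponentially (the leading Lyapunov exponent is about 0.9 per time unit) until it saturates at the size of the attractor, which is why finite measurement precision forecloses long-range prediction.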
[00:30:15] Speaker B: So now I'm jumping way ahead, and then we'll come back. But where is complexity science now? Is complexity science continuing to evolve and develop new methods, or is it in danger of the methods becoming so centralized that it could be mistaken for the methods? I mean, I know all sciences continue on and develop new methods when they need to, but it seems like complexity science is so fluid and evolvable. Essentially what I'm asking is, where are we now in complexity science?
[00:30:54] Speaker A: I mean, I don't know. I think it's right at the beginning.
[00:30:56] Speaker B: Okay.
[00:30:57] Speaker A: I mean, I think we're right at the beginning. There are fields, string theory is a good example, that nominally started with an effort to resolve contradictions between discrete and continuous formalisms in theoretical physics, and then became the method, dominated by mathematicians, not physicists. And now it's sort of drying up, because at some point people woke up and realized that they weren't answering the questions; they were just building more and more elaborate techniques.
I think again, you have to look at the history. Let me just give you an example of why I think we're at the early phases. One of the papers that we include in this volume is the canonical McCulloch-Pitts paper, the original paper on neural networks.
So this is the paper that established the entire field in 1943.
And two extraordinary people, weird people, as you know, right? Walter Pitts: child prodigy, homeless, writes letters to Bertrand Russell when he's 12 or 13, or whatever it was, and gets replies inviting him to Cambridge.
Russell not realizing that this kid who was pointing out errors in the Principia was a homeless kid. And then Pitts doing the same thing by auditing Carnap's lectures at the University of Chicago. And then Warren McCulloch, trying to discover the psychon, the elementary atom of psychological processing.
These two come together to try and develop a formal logic based on thresholded units, what we now think of as a neural net, and in the process make all sorts of criticisms which are still valid to this day, that haven't been addressed since 1943, particularly issues of circular causality.
[00:33:07] Speaker B: You just said that they make criticisms.
[00:33:10] Speaker A: Yeah, well, it's very interesting. In that paper they make many points about what these kinds of machines, in my sense, can do, what they can't do, and what their future problems will be.
One of them is that in these neural networks that are recurrent, it's very difficult to establish causality. So how will we understand them if we've got circular causality in millions of units? Which of course is the problem of today, the problem of interpretability of neural networks. This is 1943.
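As a concrete gloss (illustrative notation, not the 1943 paper's), a McCulloch-Pitts unit is just a threshold on a weighted sum, and wiring even two of them into a loop already produces the circular causality they flagged: activity reverberates with no single external cause to point to.

```python
# A McCulloch-Pitts unit: output 1 iff the weighted input sum reaches threshold.

def mp_unit(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Feed-forward logic is easy to interpret: this unit computes AND.
print(mp_unit([1, 1], [1, 1], threshold=2))   # -> 1
print(mp_unit([1, 0], [1, 1], threshold=2))   # -> 0

# Circular causality: two units that excite each other. The state at time t
# depends on the state at t-1, indefinitely; activity persists with no
# ongoing external input to blame.
a, b = 1, 0
for t in range(6):
    a, b = mp_unit([b], [1], 1), mp_unit([a], [1], 1)
    print(f"t={t}: a={a} b={b}")   # activity bounces between the two units
```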
[00:33:50] Speaker B: I remember them saying something about, you would know the phrasing better than I, but with their drawings, they do put recursive loops in there. And then I feel like they punted and said, these of course are in the system and will have to be addressed at some point.
[00:34:07] Speaker A: Yes, essentially that is what they say, and they put it very floridly and beautifully. Warren McCulloch had very colorful language.
But that paper has really come into its own in the last 10 years, and there are many other papers like it. And so that's what I mean when I say we're at the very beginning. Because the interpretability problem, which is trying to understand the logic of large decentralized thresholding units, even today we're not even dealing with circular causality; most of these are strictly feed-forward, right, autoencoders. But I don't think we actually have the methods, or the frameworks, for analyzing such systems. They're just being developed as we speak.
And I could go through all of these papers. Even deterministic chaos. There's so much current debate about the free will problem, and I've talked about it myself at length.
It's so confusing, and based on a really crappy misunderstanding of chaos, and even of quantum mechanics and so forth. So whilst we have expertise in a number of fields, it does feel like a series of disconnected islands without bridges.
And that's what's gratifying. Right? I mean, and again, the history is trying to build them.
[00:35:36] Speaker B: You mean within complexity science or the paradigm of complexity science or within the specialties?
[00:35:42] Speaker A: Both, yeah. Frankly, yeah, both, yeah.
[00:35:47] Speaker B: Again, I kind of want to ask what the current challenges are, but we'll come back to those. So let's stay in the history.
All right, so take us back. You have the pre-foundations, and then in the 1920s it starts to generate, with the advent of people like Turing and lots of other people thinking about how machines work and then applying that to how maybe biological organisms work. So in the early 1920s it starts to gather these disparate parts and try to make sense of them together. What else did that early landscape look like?
[00:36:27] Speaker A: Yeah, I think you have to remember how much happened in the 40s and 50s. Think about it. This is just volume one, right, of those four. You have Shannon.
[00:36:40] Speaker B: Yeah.
[00:36:41] Speaker A: Information theory. You have Turing: computation, the imitation game. You have Nash.
Game theory. You have Weaver. So all of this is happening. Interestingly, complexity, as a phrase that somehow captures this constellation of concepts that the four pillars circumscribe, was first articulated in 1948 by Warren Weaver.
[00:37:08] Speaker B: That was Weaver.
[00:37:08] Speaker A: And Weaver explicitly wanted to make a distinction between simplicity, what in the book I call the world of symmetry, right. And determinism and so forth.
Disorganized complexity, which is the world of Boltzmann, Kahna and Clausius. Statistical mechanics, gases, formally disordered states, both of which we know how to describe mathematically. 1 We average and we treat with ensembles and one we treat with classical differential equations.
And in the middle is the world that he called the world of organized complexity: the things that just seem irreducible, that we're constantly struggling with, societies and brains and the natural world and ecosystems and all the rest. And Weaver says, that's complexity, somewhere between those two extremes. And in that middle we need new methods. So it was both ontological and epistemological in 1948. And the point he makes explicitly is that we need new forms of computation to study that world.
He talks less about new kinds of mathematics, which it turned out we did need.
And so that paper is very prescient, and it established the field; by the 1970s everyone was talking about complexity.
Now, at the same time, in the 40s, Rosenblueth and Wiener are inventing cybernetics.
And cybernetics has a legitimate claim to being the embryo of what developed into complexity science.
[00:39:04] Speaker B: Is that because of the emphasis on, not autopoiesis, but on agency?
[00:39:14] Speaker A: On agency, I think, yes. Autopoiesis is 1970s.
Yes, very much so. Because what the cybernetic framework did is it said the objects have objectives.
[00:39:35] Speaker B: Right.
[00:39:38] Speaker A: Interestingly, William James, in the Principles of psychology in 1918 or 1919, whenever he wrote that book, he makes this very interesting distinction between laws from in front and laws from behind.
And James suggests that the defining characteristic of all mental phenomena is that they follow laws from in front, meaning they have purpose, rather than being driven by laws from behind. The laws from behind, that's physics and chemistry, what we would think of as bottom-up. But there's something peculiar about psychological, mental phenomena, which is that they kind of start with a desire, they start with a goal.
And cybernetics was the mathematical and engineering solution to the William James question.
And of course it came out of radar tracking machines and all that stuff in the war, and was generalized to the study of, in some sense, self-maintaining systems whose parts are integrated through the sharing of information. So Wiener set about trying to develop the framework that would allow him to address that question. Somewhat unsuccessfully, of course: it morphed into control theory, and complexity science branched off in a different set of directions.
[00:41:07] Speaker B: So had Wiener stuck to his original emphasis and goal, he might have been more of a forefather, I hate that term, but more of a progenitor of complexity science. I mean, you started by saying that in some sense we can trace complexity science back to cybernetics.
[00:41:27] Speaker A: Yeah, I mean the problem is, you know, he became absolutely obsessed with feedback.
[00:41:31] Speaker B: Yeah, yeah, nothing wrong with that, but.
[00:41:34] Speaker A: Nothing at all wrong with that. It's one of the four pillars, right? It's the control dynamics pillar. And again, that has a fascinating history, going back to Maxwell's work on governors, which regulated power in steam engines, and which Wiener actually rediscovered. But it was one pillar, and a little bit too much was made of it.
Everything became about feedback and the maintenance of state and the relationship to the notion of homeostasis. But there's all this other stuff going on, right, that was just as interesting, let's say, about adaptation and computation and change. Not stasis, not just tracking targets, but making them. And he was a bit too much of a monomaniac, I think, on this feedback-loop concept, and missed a lot of other interesting material.
[00:42:32] Speaker B: But this goes back to how to think like a complexity scientist, right? Because monomania heralds great discoveries as well. When you become transfixed on something, even if you have your blinders on, you're going to study that thing in great, great depth, and that leads to discoveries, maybe, in that narrow field. So if I want to be a complexity scientist studying, you know, I study the brain, right, and behavior. For example, how do I know how much time to spend studying feedback control, and then moving on to self-organized systems, and then moving on to autopoiesis, right? How much time do I spend on each? And what is the perfect trajectory in terms of when to start integrating concepts and methods from these different fields?
[00:43:26] Speaker A: I mean, let's take the example. Let's imagine that Norbert Wiener had asked himself the question of what would happen if two agents were engaged in mutual feedback. He would have been forced to start thinking about things that von Neumann, Morgenstern, and then John Nash were worrying about: what we now call game theory, the theory of strategic interactions. It's another, higher-order stability concept that comes from reckoning with multiple agents interacting.
So that's just one example. He got stuck with a single agent in an environment with, say, a moving target. And so I think the frameworks suggest themselves by virtue of asking the next logical question, and then you have to go and retool to try and address them. But I do think it's the question that prompts the expansion of your inquiries.
[00:44:28] Speaker B: But often the next logical question is only logical in hindsight or obvious in hindsight. Right?
I mean that's just a question of creativity, I suppose, maybe more than logic, but it's only logical in light of what you know about complex systems and what makes them interesting. But if you don't know that already, it's hard to see where that next logical question is from that context perhaps.
[00:44:55] Speaker A: Well, let me give you another example which brings in John von Neumann, not in game theory, but in his theory of automata.
I think it's quite natural.
So von Neumann's working at the Institute for Advanced Study on building the MANIAC, based on the ENIAC in Philadelphia, to do numerical meteorology and ballistics.
And these machines are very unreliable, right?
And so these machines keep breaking down, so you're constantly having to replace parts.
So von Neumann says, how do you achieve robustness in noisy computational systems of the kind that Wiener was positing to solve problems of feedback control?
Wiener wasn't worrying about the fact they were falling apart, he was theorizing about it. Von Neumann was building a computer that was falling apart.
So he says the only way to ensure continued operation of a system with that many parts is that the parts replicate.
[00:46:01] Speaker B: Oh, I thought you were going to say redundancy, but replication.
[00:46:05] Speaker A: Redundancy was one, yes. He wrote a very famous paper on essentially redundancy, or robustness, in probabilistic automata. Fault tolerance is what we would call it now. You can do it that way, but that only takes you so far. At a certain point, you need to replenish the parts. And the way you replenish them, well, the way life replenishes them, is to replicate parts. So von Neumann suddenly realizes, you know, I thought I was just trying to build reliable computers; what I was really after was a theory for the origin of life. And because it's von Neumann, he doesn't stop and say, oh, I'm not going to go down that path because look at all the things I'm going to have to learn about biochemistry and so on. He does go down that path and invents an entirely new theory, the theory of universal constructors, which has proven out only in the last decade, again, with the work of people like David Deutsch, and then my colleagues Sara Walker and Lee Cronin on the assembly index and so on.
But that's a beautiful example of seeing the problem and then daring to pursue it, and not just saying, I don't have the time or the skills. So maybe you'd say thinking like a complexity scientist is a kind of.
I don't know if it's an immodesty or a bravery or a recklessness that says, I am going to go down that path because I know someone has to, right?
[00:47:31] Speaker B: So, I mean, I hate to bring this in, but people have to worry about their careers as well. And I know SFI, maybe you've used the term maverick in the past to describe the personalities, what kind of people fit in at SFI, although I know it's a wide range of people. But then I have to worry that if I go down all of these different rabbit holes, and spend a little bit of time in each of them, and ask the right question, I have to worry about my career, right?
[00:48:02] Speaker A: You know, I'm slightly less sympathetic. I'm the worst person that way.
[00:48:09] Speaker B: I'm just echoing what I think people might be thinking.
[00:48:12] Speaker A: No, I know, and I know people do think this, and I get it.
But I do think there's something a little wrong with the world, to be honest.
[00:48:21] Speaker B: Yes, but that's the way it is.
[00:48:23] Speaker A: Yeah, well, but then it's our job to defy it. And I have to say, at a certain point: we only live once, and we don't live very long.
And I think you should dare to go down that path. And the reality is, Paul, that if you're sincere and you really work at it, chances are you'll do work as interesting as conventional work in your own field. You might not be the one. You might not be a Lorenz or a von Neumann or a Nash. Most of us are not. But you could still do good work down that path. And that's been my experience.
It's not that it's a totally reckless jumping-off-a-cliff move; it's just a lateral move. And so most people have done very well, surprisingly, and maybe not surprisingly, right? Because the territory has not been saturated with other scientists. So even if you're not doing the best work, you're making discoveries, because there's no one else in the same room with you. So I think there is a kind of safety, weirdly enough, in moving into underexplored territories, because you're not competing with a million other people.
[00:49:41] Speaker B: Right.
[00:49:42] Speaker A: So it's a trade-off, right? You don't have the security of as many peers, but you have the benefit of an unplundered environment. So I think they might even out.
[00:49:57] Speaker B: So I'm sorry, I'm just going off the top of my head here with questions, but this made me think: you have these four volumes of the foundational papers. What role does survivorship bias play in the complexity sciences? So if I go laterally, and maybe I'm not jumping off a cliff, but maybe I took a misstep, and it's leading me down a road that's not going to get me into the foundational-papers volumes, right? Do you see that? Is that a problem, or not a problem, but a phenomenon within complexity science as well?
[00:50:30] Speaker A: I mean, it's a phenomenon in all sciences. Most of us will be completely forgotten. And it is the case that these are the papers that proved to be of enduring value.
But it's worth asking, were they attended to in their time? I mean, you mentioned autopoiesis a couple of times. That's 1970s, that's Maturana and Varela. That was completely ignored. It was considered New Age nonsense for decades.
[00:50:55] Speaker B: Was it decades? Has it only in the past decade or two come back?
[00:50:59] Speaker A: I mean, people like Randy Beer and others at Indiana have really been championing that worldview. But honestly, Paul, even now, if you went into your neuroscience conference and you mentioned it, they'd slap you about the face.
[00:51:13] Speaker B: I don't go to neuroscience conferences. I don't want to get slapped.
[00:51:17] Speaker A: So I think even now. But what they did, and it's interesting to talk about next to von Neumann, is they said life is not about self-replication, it's about self-synthesis.
Right. It was an interesting move. It was very much in the universal constructor lineage.
But they used this language which was quite unfamiliar to people: the concept of autonomy and self-maintenance, and the issues of boundary that we only now get at through people like Judea Pearl's work, Carl Friston's work, this idea of Markov blankets in the Bayesian language. It was actually present in their language, but it was early language, so it felt a bit odd and unfamiliar. So again, I think they were way ahead of their time.
People like Niklas Luhmann, the German sociologist, used their work to interpret societies, building on people like Durkheim and his idea of a society as an irreducible aggregate of individuals, greater than the sum of its parts.
But I think it's fair to say that a lot of papers in this volume, certainly von Foerster's theory of self-observation, which is, you know, 30 years before Doug Hofstadter talking about strange loops, people thought were kind of silly stuff, Age of Aquarius speculation. And part of the problem is we were so good at reductionism, we were so good at simple causality, that anything that was decentralized, collective, complex causality, the summation of many, many important factors, attracted a kind of holistic language prior to the development of its methods, and was kind of marginalized as New Ageism. And, you know, it's there with Alexander von Humboldt in his early theories of ecology. But that's very empirical, right? It's based on the observation of the natural world. Early attempts to formalize these ideas, in things like synergetics or general systems theory, a lot of people thought were just a little strange.
[00:53:27] Speaker B: Are you happy with the term complexity science?
[00:53:32] Speaker A: I am, because it does two things for me.
One is it's opposed to simplicity and reductionism of a certain kind, that there are ways of knowing without taking everything apart. Right. There's that bit of complexity.
[00:53:53] Speaker B: Can I guess your next point? I'm sorry, are you going to use the word pluralism in this next sentence?
[00:53:58] Speaker A: No, I wasn't, but I can.
The other one was much more about the development, also in these four volumes, of what we now think of as algorithmic information.
Kolmogorov, Solomonoff, Chaitin. And this idea, developed subsequently by people like Rissanen and others, that complex phenomena are incompressible phenomena.
And another way of saying that is that they break lots of symmetries and they have long histories.
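A crude illustration of the incompressibility idea (illustrative only; compressed size is merely an upper bound on algorithmic information, which is itself uncomputable):

```python
# Compressed length as a rough stand-in for Kolmogorov complexity: a highly
# regular string collapses to almost nothing, while a random string of the
# same length barely compresses at all.
import random
import zlib

regular = b"AB" * 5000                                       # pure repetition
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(10000))   # random bytes

for name, s in [("regular", regular), ("random", noisy)]:
    print(f"{name:8s} {len(s)} bytes -> {len(zlib.compress(s, 9))} compressed")
```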
[00:54:33] Speaker B: Can we pause on broken symmetries? It's one of those things: when you're reading a book and you read the same phrase over and over, and then you get halfway through the book and you think, I'm not sure I have a great grasp on what that phrase means. Broken symmetries is one of those phrases. So I went back and tried to see if I could find a little more detail. What does symmetry, and therefore broken symmetry, mean? Could you expand on it a little bit, and why it's important for, I don't want to use the word emergence yet in our conversation, but why it's important for complex systems?
[00:55:08] Speaker A: Well, I'll tell you why the opposite is important first: symmetry. So symmetry is the foundation of all physical theory, and you can think of it as symmetry of process, the time symmetry of the equations of motion, which gives you, through elaboration, all of the forces in the Standard Model and so on. So a lot of the gauge theories sit on fundamental symmetries in the processes. But then there's symmetry of outcomes.
There are alternative states you can be in, but you're as likely to be in one as the other.
And that's true at very small scales, and we'll talk about that in a second; that's where things like the renormalization group and so on kick in. So there's the symmetry of configurations versus the symmetry of the fundamental equations and laws, and basically all of physics comes out of that. Now, it's been known for a long time, in physics and beyond, that in certain processes, whilst there is a symmetry of outcomes, once you enter into one of those states, you get stuck in it, and you can get stuck in it effectively forever.
Very famously, one of the founding texts, I think it's in volume two, is the 1972 paper by the Nobel laureate Phil Anderson, "More Is Different". What Phil starts with in that paper is the following thought experiment. He says, take a simple molecule like NH3, like ammonia.
Ammonia has two configurations. It's a pyramid, and those pyramids invert; they go bloop, bloop, bloop, bloop. And it's small enough that it fluctuates between the two configurations, such that if you were to observe the system, naturally, you'd be in one state 50% of the time and the other state 50% of the time. So you have symmetry of outcomes. Okay. But if you make a slightly larger molecule, just more atoms, like PH3, like phosphine, that also has a pyramidal structure that oscillates, but very slowly.
[00:57:21] Speaker B: Well, still quite fast, but slowly relative to the NH3. Yes.
[00:57:25] Speaker A: And so basically the energy requirement to move between the two states, the energy barrier, is now high enough that you basically stay where you started.
And as molecules get larger and larger, the underlying physical laws that give you symmetry of outcomes become useless, because now you end up where you started.
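One rough way to see the freezing, as a textbook Arrhenius-style estimate rather than a claim about the exact NH3/PH3 mechanism (ammonia's inversion actually proceeds by quantum tunneling):

```latex
% The rate of hopping between the two pyramid states falls off exponentially
% with the barrier height \Delta E relative to the thermal energy k_B T:
\[
  r \;\sim\; \nu \, e^{-\Delta E / k_B T},
\]
% so the expected residence time 1/r grows exponentially with \Delta E.
% A modestly taller barrier turns rapid flipping into "effectively never":
% both outcomes are still allowed by the law, but the system is frozen
% into one of them, and that is the broken symmetry.
```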
Okay, now why does this matter?
In a very famous paper on physics, on what he called principles of invariance and symmetry, Eugene Wigner said all physical law tries to do two things.
It tries to come up with a very parsimonious set of processes and a very small set of initial conditions.
The processes are the things we understand, usually based on symmetry. And the initial conditions are things you don't understand; you have to assume them.
What Phil was saying in 1972, in the "More Is Different" paper, is that once things get large enough, almost all the information you care about, that allows you to explain the observable, is in the initial conditions: the thing you know nothing about.
And so now Darwin's theory, and other theories like it, are essentially theories that try to explain the history of initial conditions. And we can get into that; it's a very profound observation. So broken symmetry is when the state that you find in the natural world cannot be explained by the fundamental law, but by something you're ignorant of, namely the initial condition. And there are many ways you can get broken symmetries, and we can get into that. So I'll give you one very simple example and stop.
If you think about a DNA molecule, it's made of four bases.
The sequence of bases matters, right? Because they make proteins that are functional. But with respect to the laws of physics, you could permute it completely and it makes no difference. It's all one molecule, right, with 4 to the N possible configurations, each of which is essentially equally likely by the fundamental laws. But we know, wait a minute, most of those sequences of A's, C's, G's, and T's are rubbish. Only a tiny subset actually make functioning proteins.
[00:59:57] Speaker B: Well, junk DNA. Well, that's another side topic.
[01:00:01] Speaker A: That's another issue, the issue of junk DNA. I'm just making the point that broken symmetry matters because, when it comes to problem-solving matter, the only way to really explain the functional configurations is to look at their history, not the fundamental laws.
[01:00:22] Speaker B: So in that example, the symmetry breaking is the sequencing of the molecules themselves.
[01:00:32] Speaker A: It's the sequence you find at multiplicity. So, meaning that particular DNA sequence is found across all organisms, from flies to humans. Why that one?
Physics doesn't tell you why. Physics says any of them could be found.
[01:00:51] Speaker B: Right.
[01:00:52] Speaker A: Compatible with the laws of physics.
[01:00:54] Speaker B: Yeah.
[01:00:54] Speaker A: Right. So you have to come up with a special story, which is exactly what Eugene Wigner's physics doesn't want, because it wants it all to come down to the fundamental laws.
[01:01:03] Speaker B: Right. So anything that's not a fundamental law is the result of a broken symmetry, is that right?
[01:01:08] Speaker A: Any persistent state where the observation of that state cannot be explained by the fundamental law is going to be evidence of a broken symmetry. Right?
You know, you walk out of your door, you go left or right. Physics doesn't know which way you're going to go.
You choose to go right, because, I happen to know, you need to buy a new pair of socks.
[01:01:35] Speaker B: That's because I had free will, but.
[01:01:36] Speaker A: Okay, well, that's another issue.
But I need to know your history, or your internal states, to know that.
[01:01:43] Speaker B: Yeah.
[01:01:44] Speaker A: And anyway, it turns out to be the foundational concept for all complex phenomena, from DNA molecules to transistors, because these can all store broken symmetries. If you think about Hopfield, who just won the Nobel Prize, in physics of all things.
[01:02:01] Speaker B: Oh, I wanted to ask you about that. Maybe we can come back to it.
[01:02:03] Speaker A: Yeah, I'll come back to that. But what he showed, the reason it's important, the reason he won in physics, is because he was working on spin glasses. And the point about spin glasses is that they can store tons of broken symmetries. They have lots of ground states. And so it's absolutely crucial.
And Parisi, who won the Nobel Prize before him, won it for what he called replica symmetry breaking, which is why Hopfield's model works. So this concept is everywhere once you start looking for it.
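A minimal sketch of the Hopfield point (illustrative, not from the conversation): Hebbian weights carve several ground states into one energy landscape, so a single network stores multiple broken symmetries, and a corrupted input relaxes to the nearest stored one.

```python
# Two patterns stored in one set of Hebbian weights; a noisy version of the
# first pattern relaxes back to it, i.e. falls into that ground state.

def sign(x):
    return 1 if x >= 0 else -1

patterns = [[1, 1, 1, 1, -1, -1, -1, -1],
            [1, -1, 1, -1, 1, -1, 1, -1]]
n = len(patterns[0])

# Hebbian rule: W[i][j] = sum over patterns of p_i * p_j, no self-coupling.
W = [[0 if i == j else sum(p[i] * p[j] for p in patterns) for j in range(n)]
     for i in range(n)]

state = [1, 1, 1, -1, -1, -1, -1, -1]    # patterns[0] with one bit flipped
for _ in range(3):                        # a few sweeps of asynchronous updates
    for i in range(n):
        state[i] = sign(sum(W[i][j] * state[j] for j in range(n)))

print(state == patterns[0])               # True: the stored state is recovered
```

The same landscape holds both patterns; which one you land in depends on the initial condition, the history, which is exactly the broken-symmetry point.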
[01:02:40] Speaker B: So I don't know if we want to bring up emergence right now. I mean, you do spend some time, actually it's one of the whole parts of the book, talking about emergent properties and emergence. And the last time we spoke, you wanted everyone to kind of cool their jets about emergence, that it's not some spooky thing. And you talk a little bit about this in the book, and then at the very end you talk about compilers as a sort of solution to thinking about emergence. So maybe we can come back to that. But here's what I want to ask, and stop me if you want to go somewhere else, because we can go anywhere. So, a broken symmetry. Is there anything that is not an emergent property of something else? And paired with that, I was leading into it because broken symmetries are fundamental for any emergent property, right? Yeah.
[01:03:40] Speaker A: So one way to say it, and again, I don't want to get too weedy, is: you're in this world, right, where the particular state that has been selected depends on a history. I mean history, meaning it depends on time, right? And what are you going to do? If you were Eugene Wigner, you'd throw up your hands and say, we're done. There's nothing to be done. That's just the world of accidents. It's the world of frozen accidents, another name for broken symmetries.
And the physicists and the philosophers of physics have a word for this, a pejorative phrase. They call them the special sciences, like special education, anything that's not fundamental.
[01:04:30] Speaker B: I'm special. I'm special.
[01:04:31] Speaker A: Yeah, I'm special, too. We're special.
But you can do something really clever, which is you can take those broken symmetries and you can aggregate them into what we'd call effective dimensions.
One of them could be a cell, and then you can come up with cell theory. Or you can aggregate them into a particular kind of cell called a nerve cell, which has an excitable membrane, which you can then explain using an effective theory: Hodgkin-Huxley theory. So they're not fundamental, but they're very coherent and consistent and quite parsimonious. And this is the key concept, right: the fact of broken symmetries doesn't imply a world of mere description, because if you aggregate them just right, which is what emergence is about, you can come up with theories which work at their own levels.
And that's what all science is, other than fundamental physics. Right. And that's why emergence is so important a concept, because the processes of emergence are what explain why other disciplines other than physics have to exist in the world.
It's really important.
[01:05:52] Speaker B: So you have an agglomeration of broken symmetries, and this, if arranged just right, leads to an effective theory. And that effective theory just means that you can say something about the causality of the way the system is interacting with the world, or affecting things, that abides at its own level.
And you don't have to appeal then to lower levels.
[01:06:25] Speaker A: Exactly.
[01:06:26] Speaker B: Quarks, for example. You don't have to reduce everything to, like, more micro states. And that is an emergent property, an emergent system.
[01:06:34] Speaker A: Yeah, absolutely. Let me just demystify this.
So let's again go back to a DNA molecule, an RNA molecule.
That's a very complicated bit of chemistry there. Right. But if you're doing diagnostics, medical diagnostics, genetic diagnostics, phylogenetic inference, you don't need to worry about the chemistry. Just give the letters.
So it's got an A there instead of a C.
And the A, the thing that doesn't have any of the detailed chemistry, right, it stands in for it, because the detailed chemistry maps in a consistent fashion to the letter A. The A captures everything you want to know. You can back it out if you want to, and that ability to back it out says something about the chemical processes, because that's not true for everything. Right. Not everything has that degree of coherence and stability in time.
And for example, you know, your beliefs aren't as stable as that. Our political institutions are not as stable as that. And so emergence has different properties at different scales and in different contexts. But the fact that you can do useful scientific work with a list of letters, without going to the chemistry, is really interesting. Right. I mean, I can tell you whether you have sickle cell by looking at letters.
[01:07:59] Speaker B: Although we invented the concept of sickle cell.
[01:08:01] Speaker A: We did, interestingly. But it's a coherent mapping, right? From the chemistry to the category, one that works, that has utility. That's why we believe in it. And I'm sure we could have done it in other ways, but it really works. And that's evidence of emergence. And that is, again, coherent, coordinated dimensions of fundamental matter that can be labeled or tagged.
And then you work with the tags.
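A minimal sketch of "working with the tags," in Python. The fragments below are modeled on the well-known sickle-cell site in the beta-globin gene (HBB codon 6, GAG to GTG); treat the exact strings as illustrative rather than clinical:

```python
# Diagnosis at the level of letters: pure symbol comparison, no chemistry.

REFERENCE = "ATGGTGCACCTGACTCCTGAGGAGAAG"  # start of the normal beta-globin sequence
SAMPLE    = "ATGGTGCACCTGACTCCTGTGGAGAAG"  # sickle variant: one A -> T

def diffs(ref, seq):
    """Positions where the letters differ."""
    return [(i, r, s) for i, (r, s) in enumerate(zip(ref, seq)) if r != s]

print(diffs(REFERENCE, SAMPLE))  # [(19, 'A', 'T')]: one letter, one diagnosis
```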
[01:08:31] Speaker B: Right.
[01:08:32] Speaker A: And that is what emergence is all about. And there are different mathematical ways of saying that, but that's the key.
[01:08:41] Speaker B: So then I go down and I list all of the things that I can observe, and I don't find anything that I can say is not an emergent property of something else. Is there anything that is not an emergent property of something else? I'm sorry, this is a kind of ignorant question, but I've run a short list through my mind at various times, and I thought, okay, so it's just.
[01:09:07] Speaker A: So this property, that you can do work with the aggregate variable, real work, demonstrable work in the world, is not true of most things that you measure.
And so that's very important. There are many properties of the world that are aggregate properties of underlying microscopic things, but they don't show this emergence property. Let me give you a controversial example.
So game theory.
So game theory, for a long time, when it was first starting to be developed, was thought to be a normative, prescriptive model of human behavior. We were going to use this simple model that John Nash and others developed to actually run political institutions, right? And mutually assured destruction was not a problem because we could model exactly what rational actors would do when they're in possession of very powerful thermonuclear weapons. Okay.
You know, many of these ideas were idealized into categories like cooperators and defectors.
Well, it turns out that the notion of a cooperator and a defector is actually not a label the way ACGT is with DNA. It's not a consistent, coherent mapping from psychological states, cognitive states, neural states.
It's an endlessly metamorphosing idea that doesn't have temporal and spatial stability, and therefore it's not very useful. And so at a certain point, game theorists realized this is not a normative theory. It's a way of thinking about thought experiments more rigorously, with math.
Okay? And so that's a failure of emergence. And you find them everywhere. There are things that we think are real, we theorize with them, and they don't work.
And so there's only a small number of things that actually have that consistent property, where you can screen off the lower levels. There's much to say about this, but again, look at your field. Let's say I wanted to treat a psychiatric illness.
Should I do it with psychoanalysis or some behavioral intervention or with pharmacology, quarks?
[01:11:44] Speaker B: You should do it with quarks or.
[01:11:45] Speaker A: Do it with quarks, even better, or even worse. Right. What is the right intervention into the system? These are questions of emergence. And where our behavioral therapies fail, I think that's suggesting the categories we've discovered are wrong. They're not truly emergent categories. They're actually arbitrary aggregations of microscopic degrees of freedom.
[01:12:08] Speaker B: So for it to be emergent, it has to be pragmatic. It has to work.
[01:12:12] Speaker A: It has to work. It has to have this.
And there's lots of language for this. I mean, it has to be effective.
[01:12:20] Speaker B: It has to have.
[01:12:20] Speaker A: It has to be effective. Yeah. And the technical term for when it doesn't work is sometimes called a failure of entailment.
[01:12:29] Speaker B: Okay.
[01:12:30] Speaker A: And when it does work, it's sometimes called dynamically sufficient.
Right. So the example I like to use is mathematical proof. If I'm proving a theorem, why don't I have to use neuroscience?
[01:12:43] Speaker B: Yeah.
[01:12:44] Speaker A: Right. Because it turns out that mathematics, the axioms and the deductive rules, are sufficient. They label consistently a certain kind of logic, so you can operate with them.
If they didn't, if you got up every now and then while proving a theorem and had an outrageous tantrum, I'd have to say, no, I've got the wrong theory. I need another kind of theory here, a theory of mind. I need to go down a level.
[01:13:13] Speaker B: Yeah, right.
Why you keep picking on neuroscience, David? That's unfair.
[01:13:17] Speaker A: No, because that's your field. I'm not.
[01:13:18] Speaker B: No, I'm just kidding. I know. That's why you're bringing it up. I appreciate it.
So we kind of went into the emergence talk right before that. You were talking about stability, and we were talking about broken symmetries, and if something stays in one condition for long enough, it is considered a broken symmetry. But then you sort of hesitated at that "long enough." And then I thought, well, you write in the book that one of the challenges in complexity science is dealing with time. Is that because we need to think of everything in terms of time scales, and what is the right time scale to think at? Because if something flips back and forth quickly on our observational time scale, we can say that it's symmetric. But if something has been on the right side for 100 years, then on a geological time scale it might still be symmetric, and we just might not observe it. Right. So is that where time becomes a challenge in complexity science? Or what is the relationship to time?
[01:14:30] Speaker A: That's completely correct. I mean, that's exactly right. Because, you know, over the lifetime of the universe everything's going to be symmetric, because there'll be some kind of heat death or what have you, and everything will be fully thermalized, which is a symmetric state. So you're completely correct. Time scales, and time, are deeply foundational in our thinking, and the thoughts that we're having now are absolutely dependent on the timescales of chemistry, action potential rates, aggregate circuit properties, and so forth. And it's why this introduces, okay, there's so much to say about this, the notion of subjectivity in the profound sense that you just asked about. By that I mean the choice of time scales for these processes, which is a key concept in complexity science, and it was already a key concept in one of the pillars when the concept of entropy was being formalized.
It was understood that the value that you calculated depended on what's called the coarse graining. So for example, I can turn a six-sided die into a coin just by making half of the numbers heads and the other half of the numbers tails. Right? And if you calculate the entropy of a fair coin, it's a different number from the entropy of a fair die, because one is the entropy of the distribution 1/6, 1/6, ..., 1/6 and the other is the entropy of 1/2, 1/2. And that choice of what we call the coarse graining, that is, the aggregation of the probabilities, is subjective and depends on the time scale of observation, to your point. So this is a field that's nascent now, how we really think about observer-dependent entropy calculations, and it plays into everything.
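A quick sketch of the die-to-coin example in Python, just the standard Shannon formula applied at two coarse grainings:

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

die  = [1/6] * 6       # observer resolves all six faces
coin = [1/2, 1/2]      # coarser observer: {1,2,3} -> heads, {4,5,6} -> tails

print(entropy_bits(die))   # log2(6), about 2.585 bits
print(entropy_bits(coin))  # exactly 1 bit: same die, different coarse graining
```

Same physical object, two different entropies; the number depends on how the observer aggregates the states.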
That's one side of the time question. Another side, which is as profound and which people are perhaps less aware of, is that the concepts of past, present, and future have nothing to do with physics. They have everything to do with observers. There is no past, present, and future in physics, at least not in classical physics. And actually the right way to calculate them is to use the thermodynamics of computation, to use a computational theory.
[01:17:20] Speaker B: Again because of entropy.
[01:17:21] Speaker A: Is it because of measurement? It gets to some of the issues that McCulloch and Pitts were talking about. The reason why there is a past versus a future is because of things like the irreversibility of the logical OR function in a neural network: the mapping is not invertible. I can go forward in calculating OR, but not backwards, because I don't know what the initial states are. They're ambiguous. But the point is that all of these concepts that we use to explain complex systems depend on the limitations of the observer.
Whether that's a cell, by the way, a neuron.
Right, yeah.
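A tiny sketch of the point about OR, in Python: the forward map is a function, but inverting it is ambiguous, which is one way a logical arrow of time appears:

```python
# Forward: OR assigns each pair of inputs a unique output.
forward = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}

# Backward: invert the map; three distinct pasts collapse onto the output 1.
backward = {}
for inputs, out in forward.items():
    backward.setdefault(out, []).append(inputs)

print(backward[0])  # [(0, 0)]                 : the past is recoverable
print(backward[1])  # [(0, 1), (1, 0), (1, 1)] : the past is lost
```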
[01:18:11] Speaker B: Again, picking on neuroscience. No, no, I think it's an important one.
[01:18:15] Speaker A: No, I'm not picking on it, actually.
[01:18:17] Speaker B: I know.
[01:18:17] Speaker A: No, and I know, and I mean this sincerely, because I think it's a very key field because it's about brain and mind, and there's a special role of brain and mind in complexity science.
[01:18:28] Speaker B: Speaking of nested time scales.
[01:18:33] Speaker A: Right. And it's the field that worries about computation and observation, chemistry less. Right. And so I think it's kind of at the nexus of some of these theories in a quite profound way.
[01:18:50] Speaker B: See, we went from.
We went from broken symmetries to emergence to time. I think, before we move on. Is there anything that the. In the book that we haven't talked about yet, and I still have questions that you think is something that you would like to highlight before I continue asking your questions.
[01:19:11] Speaker A: God, there's so much.
So I think it's important to understand the importance of complexity in terms of reconciling, in the 20th century, the social and the natural sciences.
And, you know, take the Austrian School of Economics, and I'm talking about Schumpeter and Hayek and others.
A lot of ideas that we're thinking about now in network science come from the 40s, like how you generate knowledge in decentralized systems. That later became an ideology, the neoliberal ideology, but in the 40s it wasn't so much. And, you know, how do you come to consensus when you have many constituents with partial information, problems of consensus and coordination? That really comes from social science and is now everywhere. And so we tend to think of sciences in terms of going from mathematics and the natural sciences into the social sciences. But actually this is a case where the reverse happened. And one of the things I've been interested in is this approach to the consilience of the disciplines, where you look very carefully at how ideas migrate.
And that was a bit of a discovery for me. I'd say, oh, you know, general systems theory, which is now used in large organizations to build aircraft, came out of the work of Fechner in biophysics and psychiatry. So the oddness of how knowledge actually comes together is an important part of what I talk about, because it's not linear. And I think the educational system we have is misleading in a really profound way.
[01:21:27] Speaker B: But the point that you were just making, well, understanding, I think you used the word consilience, but, you know, understanding where the knowledge began and where it ended up and how it traversed. That is a Herculean scholarly effort. But does it teach us anything?
Does it teach the working scientist anything about how to go about their problem solving?
[01:21:59] Speaker A: I think it does because.
Because of this weird fact that we're addressing now, of the limitations of time, very often the seeds to solving a problem already exist.
You know, as I said, I think there's something very unfortunate about the way we learn science and practice it. And we all do this, right? We write papers and by and large we cite contemporary work, because they're likely to be the reviewers or whatever, some cynical reason, but also because it seems more relevant.
And then we'll put in the occasional historical reference just to demonstrate our scholarly bona fides, you know.
But I have to say, in going back and reading these papers, and these are quite limited, right, because they span 200 years, or in the volumes, just 100 years, there are so many trails that weren't followed, because they didn't have the methods then. Right. Every paper, it's like one tenth of it was realized, because the authors had to work with the methods their peers had. But we now have methods that they didn't. You could go back a hundred years and rewrite that paper in a completely novel form now, based on what we now know. And so really profound work is super generative, and I suspect you could kind of win a Nobel Prize just hanging out in the 1930s and rewriting what everyone wrote.
[01:23:40] Speaker B: I was just thinking what a wonderful exercise it would be, if you're an educator, to assign some of those foundational papers and have someone rewrite them through the modern lens.
[01:23:54] Speaker A: Exactly. I mean, it's interesting. I was talking to a colleague here, the physicist Carlo Rovelli, about this, and Carlo said, you know, he's going back and looking at all of the foundational papers in statistical mechanics and rethinking them.
[01:24:07] Speaker B: But life is short. We only have so much time.
[01:24:10] Speaker A: I know, I know, I know. But it's less about being prescriptive about what one should do and more about suggesting that there is super richness in deep ideas, and that just one path of many was followed.
It's worth bearing that in mind. And we could live in a very different world if another path had been followed.
[01:24:35] Speaker B: Yeah, right, okay. So one of the things that you do in the book is list out what you call synoptic surveys. There have been a number of books over the years giving synopses of complexity science, and, like you said before, there's kind of a cluster of centralized ideas, but each book highlights a different one. One of the things you note among them is that there are common themes, but the earlier you go, the more the books tend to focus on the principles and the ontology.
And as you traverse toward more current times, alluding to what you were talking about earlier, the books tend to focus on the models and the methods.
And you note that, well, this is a good sign because it's a sign of a maturing field, but you're also somewhat hesitant, because it's also a sign of what modern-day society demands of a mature field. So what do you think about that current demand from modern society?
[01:25:49] Speaker A: Yeah, so I should say, I mean, again, this book is just as you've seen, right? It's just full of tables. It's kind of a crazy table book, because given the constraints of length, I wanted to put as much in as I could, and one way to do that is with tables. Yeah, it's quite thin, but.
[01:26:05] Speaker B: Right, it's really thick.
[01:26:09] Speaker A: I think if this had been written the way many books are now written, it would be like 500 pages. And I think people write very long books when they shouldn't, quite frankly. I prefer density myself.
You can look up all the other stuff on Wikipedia. We don't need to re-say it over and over and over again. But as you said, I really wanted to just list all the books that purport to be books about complexity science. Right. People can go and read those for themselves and see what other perspectives are worth understanding. But one thing I did notice, as you said, about these popular books, not the technical ones. You know, Haken's book on synergetics is a very important book, it came out of Germany, but you're not going to read that for fun.
Whereas you will read, you know, Prigogine's book for fun, or Melanie Mitchell's book for fun. And these are edifying in a way that's not quite as challenging.
But I saw that the early books, the ones wrestling with what this field even is, brought a lot to bear on the problem. You know, they're quite poetic, they're quite expansive.
And as the field developed, people would go narrow. My colleagues, Geoffrey West, saying I'm going to use scaling theory to understand, or Mark Newman, I'm going to use network theory to understand. And they're all very illuminating, but they tend to be more narrow in their methodologies.
And one advantage of that is that readers can then use them.
And one advantage of that is that readers can then use them.
[01:27:50] Speaker B: Right.
[01:27:51] Speaker A: They can say, oh, I'm going to do scaling on my data, right, or I'm going to use networks on my data. And I think society likes that, for good reasons. It has utility. But it's a bit like that earlier conversation we were having about what was lost in the history: what other methods would have been useful, what other ideas were there that were neglected because they didn't lend themselves to scaling or something. And I think this is true of all fields. I talk a lot to my theoretical physics friends about this. Theoretical physics, up until the standard model was established in the 60s and 70s, after Gell-Mann and Feynman and Schwinger, then it became more math.
Physics ceased to be conceptual. If you pick up a physics book, pick up Margenau's book on the nature of physical reality. It's just a brilliant read. It's like, wow, this is fascinating. The nature of time, the nature of causality, whether space is fundamental or emergent, those kinds of questions.
And then they become these very technical texts exploring very narrow questions that you're not particularly interested in. And so I think there is a natural evolution, but I think it comes at a cost.
And there's a reason why people still go back and read Gödel, Escher, Bach. It's still meaningful to people, because so many of the questions it raised are unanswered.
[01:29:25] Speaker B: So then what does that mean for the future of complexity science? I mean, is it going to be just more methods and models? Is that the trajectory of a maturing science? Or will there be a paradigm shift, and it'll go back to principles and ontology and rejigger? Oh, by the way, I wanted to mention the paradigm, shoot, the matrix. No, the.
[01:29:47] Speaker A: Yeah, the disciplinary matrix.
[01:29:49] Speaker B: Disciplinary matrix. I think that's one of the more important, maybe not important, but very digestible concepts from Kuhn that you talk about in the, in the book.
[01:30:01] Speaker A: Yeah, so. Oh yeah, this idea. So one of the questions was, is this a paradigm?
If not, what is it?
[01:30:08] Speaker B: Is it a. Yeah, that's a good question, right, and I shall ask you that now.
[01:30:14] Speaker A: Okay, so, I'm curious to know what you think a paradigm is. I marshal a few alternative, related concepts to address this. One is the Kuhnian paradigm and the disciplinary matrix. One is Wittgenstein's language game. And the other one is Dilthey's hermeneutic circle. And they're all attempts to deal with a kind of mereology, or part-whole relationships, of knowledge structures.
And is this a part of physics or part of chemistry, you know, and is this really coherent as an enterprise? And I like what Thomas Kuhn said. He said, look, think about a matrix, or a graph that's connected. The point is, how easy or difficult is it to add or remove an edge from the graph? If I remove this edge, does it really matter? Does it change everything, or does it just locally change something? And for him, the characteristic of a paradigm shift, what he called a revolution, is when you take away or add an edge in a way that completely compromises the underlying graph.
[01:31:34] Speaker B: Is incompatible with that.
[01:31:37] Speaker A: Yeah. You make an observation, an interference pattern through thin little slits, or you see classical behavior appear just because you looked, and you think, shit, there's nothing in my classical mechanics that can explain this. I need a new theory, the theory of quantum mechanics, and so on.
And so the question for me was, what's the paradigm shift from physics and chemistry to complexity?
What becomes incompatible with physics and chemistry? One we've talked about quite a lot, which is a weak incompatibility: all these broken symmetries, which mean you need emergence and effective theories, because the fundamental laws don't do the work. It's kind of weak, right, because the systems still obey the physics. It's just not determinate.
The more profound one for me is when the particle thinks. Life.
[01:32:36] Speaker B: Or does it have to think? Does it matter?
[01:32:38] Speaker A: Life, for me, thinks. Okay, there you go.
[01:32:40] Speaker B: Sure.
[01:32:41] Speaker A: But you know, it's when the particle says, no, I'm not going that way, I'm going home, I'm afraid. Then agency, intentionality, will, all of those ideas break the fundamental assumption of all of physics. Right. These are no longer particles in fields. These are self-determining particles. And I view that as fundamentally, paradigmatically new. And it is the origin of complexity science. And as you say, it's the origin of life.
The origin of life is coextensive with the origin of complexity science, in that sense.
[01:33:19] Speaker B: Is that why I am trending toward being more interested in life as the thing to study? As in, if you want to understand intelligence, you have to understand life.
[01:33:34] Speaker A: I think so. And it's really interesting, Paul. I mean, this is a debate that many of us have been having now for a while, which is what is the meaningful difference between life and intelligence?
[01:33:48] Speaker B: Well, you don't have to conflate them, which is what I'm worried I'm doing, right, defining one by the other. And I want to keep them separate. But I don't know how to.
[01:34:00] Speaker A: No. So I'll give you an example. I think I know how to keep them separate, but it's not easy.
One way to say this that I think will be compelling to you, even though I don't have a fundamental theory.
In physics, a distinction is often made between intensive and extensive variables.
Things that grow with system size, like entropy, versus things that don't, like temperature. You have the same temperature in a room twice the size.
And life is intensive. You're not more alive as an elephant than as a flea. That would be weird, right?
It doesn't seem to be scale dependent, whereas intelligence does. I think any reasonable definition of intelligence would allow that an elephant is more intelligent than a flea. If you don't allow that, I would say you don't have a very good definition. But that distinction, I think, is real. The theory should reflect the fact that in one world scale matters and in the other world scale matters less, but that somehow, at some point of convergence, they are equivalent.
At some point.
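A toy numerical sketch of the intensive/extensive distinction, with exponentially distributed per-particle energies standing in for a thermal distribution (an assumption made purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# One fixed per-particle energy distribution: the same "temperature" throughout.
for n in (100, 10_000, 1_000_000):
    energies = rng.exponential(scale=1.0, size=n)
    print(n, round(energies.sum(), 1), round(energies.mean(), 3))

# The total energy (extensive) grows roughly as n; the mean energy per particle
# (intensive, temperature-like) hovers near 1.0 however large the system gets.
```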
[01:35:19] Speaker B: But could you say if one organism was more minded than another, that it was then also more alive? I mean, these are semantics.
[01:35:29] Speaker A: That's why I don't think you should. I think it would not be useful.
[01:35:32] Speaker B: But that has nothing to do with intelligence. That has to do with, let's say, the subjective experience of a fly versus an elephant versus a. Yeah.
[01:35:39] Speaker A: There might be another concept we need.
[01:35:41] Speaker B: Okay, a third.
[01:35:42] Speaker A: Right. That's a very good point. I don't think intelligence is the concept we want for that. But, you know, for example, a virus is more adaptable than an elephant.
That's true. We learned that a few years ago. Right.
[01:35:59] Speaker B: At the right timescale.
[01:36:01] Speaker A: Okay, but the timescale turned out to matter.
[01:36:03] Speaker B: Yes, yes.
[01:36:04] Speaker A: And so, but so we do have other concepts where there's more or less which don't perfectly align with more or less intelligence. Right. Can be more adaptable and less intelligent. I think that's true of a virus versus a human.
But there is a point Right. At which intelligence and life seem to converge. And I think the origin of life might be that point, which is the origin of both.
[01:36:29] Speaker B: Right.
[01:36:30] Speaker A: Because I don't think you'd want to say that prior to life, the universe is intelligent. There are people who say that.
My colleague Seth Lloyd would say the universe is a computer.
[01:36:39] Speaker B: Well, you're an "it from bit" kind of guy, right?
[01:36:43] Speaker A: Well, right. And I think. But information, to me, I want to make distinct from intelligence.
[01:36:48] Speaker B: Okay.
[01:36:49] Speaker A: I think that's the revolution in the physics of information. Right. Even though Claude Shannon worked on engineering systems, telegraphs and telephones and computers, we now know that you can talk about the information in a black hole.
So it's a theory that's more general than purposeful matter.
I think that's an important point.
[01:37:12] Speaker B: Okay, David, I want to make sure that I get to some. So I had, like I told you at the beginning, I had a few people write in some questions for you. I want to ask you one last question before I get to those Patreon questions. And the questions I'm going to ask you from them also have to do with what we've already been talking about. But. Okay, so.
So, you know, you said early on in the conversation how nascent complexity science is still. And. And we talked about whether it's synthesis or unification and how it's kind of defined by the boundaries of other sciences.
It still seems to me, from the outside, like sort of a disparate collection of entities. Right. And this goes back to, like, how do I know which thing to choose if I have this particular question? Right.
You know, I want to know, like, well, what is complexity? How does complexity science view the brain? And there's not a simple answer to that. Right.
Really what I want to know is, because it feels uncomfortable to me to, like, sit in that world:
Does it, over time, start to feel more comfortable living in that space, where there are so many moving parts and things to choose from, knowing that if I have this question, these are the fields and methods I should draw from, et cetera? Does that start to feel more comfortable over time?
[01:38:35] Speaker A: I mean, it's an interesting question. I mean, this is a very hard question to answer, right? Because it would be unfair if you say, how should a physicist understand the brain? Right? And you'd think, God, what would that even mean? You'd have to sort of think about that for a bit and. Or a chemist or anyone else. So, first of all, it's a difficult question to answer for any field. It's not special to complexity.
And I'm not sure complexity is a field in the way they are, by the way.
[01:39:03] Speaker B: Either. It's a paradigm, but not a field, perhaps.
[01:39:05] Speaker A: Well, that's interesting. And so again, let me just state it, to demystify it: a set of principles to understand problem solving matter. Right. And brains do that. They solve problems.
And what are those principles? We've said they're the pillars. You'd want to understand the metabolism, the thermodynamics of the brain. That's totally reasonable. That goes into fMRI. I mean, that whole evolution of that technology is in some sense an effort to connect the thermodynamic vertex to the informational vertex, through the Landauer principle, which says, you know, that every elementary operation requires a certain amount of energy, kT ln 2 or whatever. And so I think you look at any system that is purposeful in the adaptive sense, not the religious sense, through these different frameworks and their associated methodologies. You know, there are people who study critical phenomena in the brain, I know them, so they're using statistical mechanics. There are other people who are interested in movement on low-dimensional manifolds in the brain, so they're studying nonlinear dynamics of the brain.
They're all out there. Right. And I think the complexity lens at its best combines some of those.
Right. And in the process possibly asks you to reevaluate the boundary of the system you're studying.
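A back-of-envelope check on the Landauer figure mentioned above, in Python; the ATP number is a textbook order-of-magnitude approximation, not from the conversation:

```python
import math

K_B = 1.380649e-23     # Boltzmann constant, J/K
T = 310.0              # roughly body temperature, K

landauer = K_B * T * math.log(2)   # minimum energy to erase one bit
print(landauer)                    # ~2.97e-21 J per bit at 310 K

# Hydrolysis of one ATP molecule releases on the order of 5e-20 J, so a single
# ATP could in principle pay for roughly 17 bit-erasures at the Landauer limit.
atp = 5e-20
print(atp / landauer)              # ~17
```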
[01:40:52] Speaker B: Okay, that's.
[01:40:53] Speaker A: So it's sort of saying maybe what I'm really most interested in is not just in one head, it's in a population of heads, in which case I need to now engage with social networks and information transmission in a way that I could get away with ignoring when I did the work in the lab.
[01:41:12] Speaker B: Right.
But do you think a good exercise for me or a neuroscientist, someone like me? Right. Could I go through your book with these four pillars in mind and sort of, because you do it, things are parsed out in tables. But you also write about all the papers that are in all the tables and stuff. So I could kind of go through and scan with these pillars in mind and think what of these methods and conceptual frameworks and approaches are linked to those four pillars that would be beneficial to me to understand what I'm studying. Do you think I could do that?
[01:41:53] Speaker A: I think one could do that. I think there is a heuristic there.
[01:41:56] Speaker B: Okay.
[01:41:57] Speaker A: But also, I do think it's a separate enterprise. Right. Because it's really looking for, I mean, you said it at the beginning, right, I don't remember which, integration or synthesis. And you'll still do amazing work if you just work on the biochemistry of cell surface receptors. Now, there are people, you know.
[01:42:24] Speaker B: You.
[01:42:24] Speaker A: Know, like Bray at Cambridge who said, you know, I want to study self surface receptors as if I'm looking at flocking of birds.
[01:42:31] Speaker B: Right, Right.
[01:42:31] Speaker A: I'm going to study them as if they're coordinated collective dynamics of semi autonomous agents.
[01:42:39] Speaker B: And this is where pluralism comes in as well. Right. Because then you're taking. And perspectivalism.
[01:42:44] Speaker A: Exactly. He's taking a kind of complexity lens on what normally would be studied using more traditional biochemistry. So I think you can.
I think you are right to say, if you step back and ask, if I thought of the nucleus as flocking behavior, what would that do?
Let's say I applied those methods. I think you could do that. I think it'd be quite a powerful experiment in counterfactuals to use it that way. And, by the way, I remember John Wheeler, the "it from bit" John Wheeler, making this point: his heuristic for doing physics was always to ask the counterfactual. What if there was no gravity? What if there was no strong nuclear force? What if information is more fundamental than energy?
And I think that can take you quite a long way as long as it's disciplined by real methods and real frameworks as opposed to fantasies, which is also interesting. But that's. That's on the fiction shelf.
[01:43:45] Speaker B: Yeah, we haven't even talked about that. But all right, it sounds like I better get on it if I want to do it within this lifetime.
[01:43:53] Speaker A: Right. So, weirdly enough, I don't think so, Paul. Because, interestingly, I mean, just take the example, whether we like it or not, of Carl Friston's free energy.
It really was just.
I'm going to take ideas from information theory and rethink the brain through that lens.
And of course you can take it as far as you like, but I don't know if it's that hard. I don't know if it's that much of a digression.
[01:44:27] Speaker B: David, thank you so much for your time. It's really nice to see you again. I recommend this book, and I'll put links in the show notes to where people can find those foundational papers. And then we'll start the 40-year journey into reading them all and understanding them all.
[01:44:43] Speaker A: Good luck. Thank you for having me.
[01:44:51] Speaker B: Brain Inspired is powered by the Transmitter, an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives written by journalists and scientists. If you value Brain Inspired, support it through Patreon to access full length episodes, join our Discord community and even influence who I invite to the podcast. Go to BrainInspired Co to learn more. The music you're hearing is Little Wing performed by Kyle Donovan. Thank you for your support. See you next time.