BI 228 Alex Maier: Laws of Consciousness

Brain Inspired

Dec 31 2025 | 01:57:54
Show Notes

Support the show to get full episodes, full archive, and join the Discord community.

The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.

Read more about our partnership.

Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.

Alex is an associate professor of psychology at Vanderbilt University, where he heads the Maier Lab. His work in neuroscience spans vision, visual perception, and cognition, including the neurophysiology of cortical columns. Today he is here to discuss where his focus has shifted over the past few years: the neuroscience of consciousness. I should say shifted back, since that was his original love, which you'll hear about.

I've known Alex since my own time at Vanderbilt, where I was a postdoc and he was a new faculty member, and I remember being impressed with him then. I was at a talk he gave - job talk or early talk - where it was immediately obvious how passionate and articulate he is about what he does, and I remember he even showed off some of his telescope photography - good pictures of the moon, I remember. Anyway, we always had fun interactions, even if sometimes it was a quick hello as he ran up stairs and down hallways to get wherever he was going, always in a hurry.

Today we discuss why Alex sees integrated information theory (IIT) as the most viable current prospect for explaining consciousness. That is mainly because IIT has developed a formalized mathematical account that hopes to do for consciousness what other math has done for physics; that is, give us what we know as laws of nature. So basically our discussion revolves around everything related to that, like philosophy of science, distinguishing mathematics from "the mathematical", some of the tools he is finding valuable, like category theory, and some of his work measuring the level of consciousness IIT says a whole soccer team has, not just the individuals that comprise the team.

0:00 - Intro
4:27 - Discovering consciousness science
11:23 - Laws of perception
15:48 - Integrated information theory and mathematical formalism
23:54 - Theories of consciousness without math
28:18 - Computation metaphor
34:44 - Formalized mathematics is the way
36:56 - Category theory
41:42 - Structuralism
51:09 - The mathematical
54:33 - Metaphysics of the mathematical
59:52 - Yoneda Lemma
1:12:05 - What's real
1:26:22 - Measuring consciousness of a soccer team
1:35:03 - Assumptions and approximations of IIT
1:43:13 - Open science

Episode Transcript

[00:00:03] Speaker A: To me, it seems that the consciousness field had to go through that as well. We had to start out with conceptual theories that are metaphorical mostly and intuition based and intuition driven. But once we reach this point of being able to develop and employ some formalism, we can leave that behind. And then you gain something that you don't have in metaphor, which is, again, lack of ambiguity, reliability, and precision and rigor that, if you just use words to describe what consciousness is, you just can't get there. Can we really get to a structured description of experience? Phenomenology, qualia? And then the second one, for people like me that are more interested in the brain, would be: how does this map onto the brain? And so then it seems what we end up with, and so this is putting it all together, then might be in the end like an equation, where we can have on the left hand side what describes the structure of brain activity, and then on the other hand the qualia, or your phenomenology description, the visual space that you look at right now. What we're really trying to do here is to find these laws, these mathematical laws. And so what Fechner obviously did is not only find that we can measure perception precisely in these thresholds, but also that if you do so, you do find a mathematical law.

[00:01:19] Speaker B: This is Brain Inspired, powered by The Transmitter. That is Alexander Maier. I am Paul almost all the time. Welcome, or welcome back. Alex is an associate professor of psychology at Vanderbilt University, where he leads the Maier Lab. His work in neuroscience spans vision, visual perception, and cognition, studying the neurophysiology of cortical columns and other related topics. Today he is here to discuss where his focus has shifted over the past few years: the neuroscience of consciousness. I should say shifted back, since that was his original love, as you will soon hear about. I've known Alex since my own time at Vanderbilt, where I was a postdoc in Jeffrey Schall's lab and he was a new faculty member. And I remember being impressed with him then. I was at a talk that he gave; I don't remember whether it was his job talk or soon after joining the faculty. Anyway, it was immediately obvious how passionate and articulate he is about what he does. And I remember he even showed off some of his telescope photography; in particular, I remember a really good picture of the moon. So anyway, we always had fun interactions, even if sometimes it was a quick hello as he was running by me, running up the stairs or down the hallways to get wherever he was going, always in a hurry. Today we discuss why Alex sees integrated information theory as the most viable current prospect for explaining consciousness. That is mainly because IIT, as it's known, has developed a formalized mathematical account that hopes to do for consciousness what other math has done for physics, namely, give us what we know of as laws of nature. So basically our discussion revolves around everything related to that, like philosophy of science, distinguishing mathematics from the mathematical, some of the tools that he's finding valuable, like category theory, and some of his work measuring the level of consciousness IIT says that a whole soccer team has, not just the individuals that comprise that team. Not only do I link to the work that he's been doing in the show notes, and his lab, but I also link to many of the resources and references that he drops along the way, to learn more.
Those are at braininspired.co/podcast/228. Thanks for being here. Thank you to my Patreon supporters for your continued support. If you do support the podcast on Patreon, you get, for example, all the full length episodes. For example, in this episode we go on to discuss a little bit about the free energy principle and how it relates to integrated information theory. So go to braininspired.co to learn how to support this podcast through Patreon, if that sounds interesting to you. And thank you to the Transmitter for your continued support. All right, enjoy. Alex.

All right, Alex, this has been a long time in the making. I'm really glad that you're here. Um, let's just start off with, I guess, because I'm interested in your... we've talked a little bit about it, but I haven't really learned in depth about how you view your trajectory. So right now you're, like, all invested in consciousness science, but you've done a lot of work in visual perception, studying cortical columns. You're a vision neuroscientist. If someone called your name, would they attach vision neuroscientist or consciousness scientist at this point?

[00:05:03] Speaker A: Yeah. Thank you, Paul. First of all, thanks for the introduction. It's great; I have very many fond memories about the old days. And I've been following you all along. Now, when it comes to my personal trajectory, I should probably start very early on. So when I think back to where did I end up now, it started when I had to decide what to study, which subject to pick for university, and I was at a big loss. I was thinking about law school at some point, I was thinking about medical school. I worked in a hospital for about a year and I realized, okay, medical school is not what I should do.

[00:05:45] Speaker B: Doing what at the hospital?

[00:05:46] Speaker A: Yeah. So, back when I was young, there was a draft in Germany, and you could conscientiously object, and in that case, you had to do a year of social service. And so what I did is I worked in a hospital, actually in a research lab. Mostly it was in cardiology. And so I got to participate in procedures where they used stent implants with people that had heart disease. And then we worked on a gene therapy attempt at trying to prevent these heart vessels that were getting a little too tight from growing back in. It didn't work. But I learned a lot, and I definitely learned that the medical field is not for me.

[00:06:30] Speaker B: Okay, sorry to interrupt. Yeah.

[00:06:32] Speaker A: And so I then really was a little bit in limbo. The clock was ticking: what do you want to do with your life? And I realized, this is going to be a big decision. And I found this book in a bookstore, and it was called What the Soul Really Is. So that was a unique German title, but it definitely grabbed my attention. And it turned out it was Francis Crick's book, The Astonishing Hypothesis. In that book, he lays out the problem of how to explain consciousness from neuroscience. And it's, I think, very dated at this point, but it is still intriguing to read. He just basically talks about vision science, and he goes through the neuroscience of vision, and he toys with this idea that we all have of a homunculus: that when we think about how vision happens, the idea is that it all comes together somewhere in the brain, and then that must be where neurons somehow produce consciousness.
And so what he does is he talks you through the visual system, and you realize, well, it can't be at the retina, because if I close my eyes, I can still imagine a fire truck, and I can see it to some degree. And then he talks about the LGN: it can't be there, and it can't be in V1. And so you step through the entire visual system until you reach the areas where we know there's activity to faces. And then you realize, well, maybe that's right there, then, where we see something. And he shows you that, well, if you get a lesion there, you still see a face, you just don't recognize it anymore. So now we've left perception, or consciousness, and entered the realm of cognition. And so that really struck me; I had this epiphany that, whoa, there's a big question right there. And in the book, he mentions Nikos Logothetis, who was back then in Germany, doing research on binocular rivalry. That's a visual illusion: if you show two different things to the two eyes, you only see one eye's view at a time. And what that allowed the neuroscientists to do back then is to go and find neurons that respond to one or the other pattern and see if they covary in activity with what you see. And that got me. And that was even before I started studying. And so I wanted to then do neurobiology. And so that's what I studied. And towards the end, I approached Nikos and said that I find this work really cool, and asked if I could do an internship in his lab.

[00:08:43] Speaker B: Towards the end of what?

[00:08:45] Speaker A: I wasn't even finished with my undergrad degree.

[00:08:47] Speaker B: Okay. Yeah.

[00:08:48] Speaker A: And so it was a funny time. He was in a different town in Germany. And so I got all my money together and tried to stay there. And we did experiments, and they worked really well. So it turned out that what we found was then published in Nature Neuroscience, and that, I think, got Nikos interested in maybe keeping me on as a graduate student. And that's what happened.

[00:09:10] Speaker B: You forced your way in.

[00:09:12] Speaker A: I forced my way in, yeah. But what happened was interesting. So around this time, that was in the early 2000s, the field had reached a peak of interest in consciousness. And what I didn't realize was that, with Francis Crick's book and the papers that he wrote together with Christof Koch, people had gotten really interested in consciousness. And what they really meant was perception, mostly visual perception. Though, as you probably know, these things in science kind of come and go in waves, so topics get really hot, and then after a while nobody is really interested, because the low-hanging fruit are gone. And so what happened to me is that I stepped into the field right as it became uninteresting. So at least that's what it felt like. The conference became smaller and smaller. And so I realized: what that means is I have to reposition, I have to do something. I can't use that as my research line.

[00:10:06] Speaker B: Oh, okay. To survive.

[00:10:08] Speaker A: Yeah, to survive in the industry. And the plan was to, you know, grow old and then maybe find my way back to it.

[00:10:15] Speaker B: And you've done just that. Except for the old part. Except for the old part, of course.

[00:10:20] Speaker A: Oh, no, no. That is very much what happened. Yeah. But I tried to stay close.
I tried to keep working on binocular rivalry and related phenomena, but it was a little ironic, because what Francis and Christof and many other people at the time did is they made this topic, consciousness, scientifically valid. Before that, in psychology, there was a little bit of an apprehension about talking about it. So it was okay to talk about perception, but even that was the odd one out. Most psychology departments would have mostly cognitive psychologists, maybe some behaviorist psychologists, and then maybe one or two people that study perception. And it seemed a little weird. They tended to be the ones that everybody was a little apprehensive about, because it seems so difficult to take perception and be quantitative and rigorous. And the saving grace that we had was psychophysics that we could point to: well, you can measure perception quite accurately. I think otherwise this would have been a very difficult topic to study.

[00:11:23] Speaker B: Well, I was going to ask you whether Koch and Crick — Crick and Koch — included that. So one of your heroes is Fechner, because of his psychophysics as a window into the mathematics that allows us to form mathematical relationships between perception and, well, perceptual laws — essentially the mathematical perceptual laws Fechner described. But did they point to his work as well? Because correlates of consciousness are not mathematical in that respect. So were they aware of that, and did they feed off of that as well?

[00:12:03] Speaker A: So this is a really interesting question that I only thought about recently. And it's partly because of feedback, and particular pushback, that I get. And so for me, Fechner is interesting exactly for what you said: that it made things more rigorous in the sense that it made things more mathematical. So what he found out is that I can measure very precisely if I make a tone very, very faint and I can't hear it, until I can hear it. And so all of us that went to an audiologist have done psychophysics, in that they play tons of different frequencies and we have to tell when we hear the tone. And so for me that was intriguing. It seemed scientific and rigorous, but I never thought really deeply about what that means in terms of having math in science. I just took it for granted. I think a lot of us do. If we take a physics course or we watch a physics lecture or we pick up a physics textbook, we expect there to be equations. But it's actually kind of interesting: why is that? Why do we never tend to think about it? But we also have the psychological reaction that that seems very rigorous. It seems very precise. It seems very mathematical. I think deep down we understand that something like Newton's F equals MA or Einstein's E equals MC squared is very rigorous, very scientific. We still have a leftover, vague notion that a law of nature should be mathematical, should be expressed in mathematics. But even in the physics books, they just take that for granted. There's never a discussion of it: you pick up a book about particle physics and all you find is equations and diagrams. And so over time I realized that that is actually what a lot of this consciousness or perception research struggles with or revolves around. And Fechner, that's why you're right, he's my hero, made this very crucial step to show, yeah, we can put numbers on perception very precisely by measuring how much do I have to change a stimulus until your perception changes.
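The threshold measurement he's describing has a simple mathematical skeleton. Here is a sketch with made-up numbers, not anything computed in the episode: Weber's law says the just-noticeable difference is a constant fraction of the current intensity, and counting those threshold steps, as Fechner proposed, yields the logarithmic scale that comes up next.

```python
import numpy as np

# Weber's law: the just-noticeable difference (JND) is a constant
# fraction k of the current intensity I, i.e. delta_I = k * I.
k = 0.1    # Weber fraction (made-up illustrative value)
I0 = 1.0   # absolute detection threshold, arbitrary units

# Stepping up one JND at a time multiplies intensity by (1 + k),
# so after n steps the intensity is I0 * (1 + k)**n.
intensity = I0 * (1 + k) ** np.arange(50)

# Fechner's move: let sensation count JND steps. Inverting the line
# above gives S = log(I / I0) / log(1 + k) -- logarithmic in I.
sensation = np.log(intensity / I0) / np.log(1 + k)

print(np.round(sensation[:5]))  # [0. 1. 2. 3. 4.]: the discrete steps
```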
And then we get numbers, we get these discrete steps of: now you hear a difference, let's call it one. Let's make the sound even louder; when do you notice it's louder? Well, let's call that loudness two. It seems that your perception goes up in these discrete steps, 1, 2, 3, 4, and so on. And I think this is a deep insight, but it hasn't been deeply appreciated, or in fact it was maybe discussed to death at some point in the 60s, when people tried different techniques and showed that, well, under different circumstances things are kind of different. And so it added a certain notion of vagueness or relativism to it. And so maybe then we lost a little bit of the fact that, look, what we're really trying to do here is to find these laws, these mathematical laws. And so what Fechner obviously did is not only find that we can measure perception precisely in these thresholds, but also that if you do so, you do find a mathematical law. So he found that our perception tends to be logarithmic. And that's a big insight. It actually has a big impact on us when it comes to, for example, how we make purchasing decisions: if we buy a very expensive car and they add another $100 fee, we accept that, because we already paid $10,000 or $20,000, so $100 doesn't seem like much. And so people are toying with this fact that we have this logarithmic, squished kind of perception, where at the very large end everything seems to be a little closer together. But to see that as a law of nature goes a little far, maybe. But that, to me, seems to be the goal of science. It seems that we debate a lot in science until we reach that stage. What I mean by that is that Darwinian evolution, for example, still gets debated a lot. And I think one reason for that is that it's not as formalized as E equals MC squared. I think there's just less debate about that once you write this down and you show it works. And so that got me thinking a lot about how the goal of consciousness science, as you said, should also be mathematical. It should be to find these fundamental mathematical equations, or other kinds of mathematical structures or objects, that give you the same kind of rigor, precision, and unambiguity that you find in physics.

[00:15:48] Speaker B: So did you appreciate that? Did you come to that realization before knowing about integrated information theory, which is a lot of what you work on now, or what? So we're going to talk a little bit about integrated information theory, but part of what I wanted to ask you is whether it's IIT itself or the fact that it uses formalized mathematics that really draws you in. Like, which of those are you really interested in? Or both. But my original question here, and we'll get into that, was: I mean, is this like a real epiphany that you had one day about mathematics, that this is the way?

[00:16:26] Speaker A: Exactly. This is what happened. So my way into IIT, ironically, is over the evidence. So there's now a lot of debate where some people are criticizing that it is difficult to, what they would say is, falsify IIT or test IIT. And so my, again, autobiographical history has been that back when I was working on consciousness quite intensively, I got to encounter integrated information theory in its infant stage. So that was IIT 0, or IIT 1, or IIT 1.5. And back then, it seemed not very useful to me as an experimentalist. It was kind of unclear what I could do with it with fMRI or my electrodes and so on.
But then what happened was, and I remember this very distinctly, a paper came out in Science where a prediction was derived from IIT. And the prediction was that if you're unconscious and you stimulate the brain with an electromagnetic pulse with TMS and you measure the EEG response, it should be very different. The prediction was more precise. The prediction said that you should see no more integration, because you're not conscious. And the way it should look is that you get local excitation, and it shouldn't really cause some kind of complicated pattern. And the interesting thing was that that had never been tested before. That had just never been done before. And the theory gave this prediction. And because I had this personal history, I knew this was a genuine prediction, which often in neuroscience, at least around this time, was rare. It tended to be that we were in a data-heavy collection period, and often, to publish your papers, you would collect the data, try to make sense of it, and then add a theory post hoc: here's my model of what you're doing.

[00:18:01] Speaker B: You still do that?

[00:18:02] Speaker A: Of course.

[00:18:02] Speaker B: Well, that's also, you know, exploratory science. Right. So there's a place for that.

[00:18:07] Speaker A: But yeah, it doesn't seem to be what physics does. Right. So we all understand it's more powerful to make a prediction and test it than to fit a model to your data. Because I think it was von Neumann who said, give me three variables and I can fit you anything. And so it was definitely interesting that here it seemed to genuinely happen, and it was a theory of consciousness, no less, that made a prediction. And that's what they found. In fact, they've gone on, and throughout the next couple of years they tested different anesthetics, and they found an exception, for example, for ketamine, where we know that people seem unconscious, but actually they hallucinate in a dreamlike state. And lo and behold, you get more of a consciousness-like pattern under those circumstances. In fact, this technique is so powerful that now there are video clips — if you follow Marcello Massimini, a neurologist in Italy who's been leading a lot of that. They now had patients that were misdiagnosed as being comatose. They use this technique, they find out they're conscious, and then they can treat them. And it turns out, yes, they were conscious all along, but they couldn't respond.

[00:19:13] Speaker B: So those were locked-in patients.

[00:19:17] Speaker A: There's actually this one case, and I hope I don't get it wrong, but the one case, if I understood correctly, was kind of interesting. It was a lack of dopamine. So that is, if you watch the movie based on Oliver Sacks' book Awakenings: after the Spanish flu, a lot of these people became catatonic, and he gave them L-dopa, I believe. And so they produced more dopamine, they came back, they could behave. And they said, yeah, I was conscious all along, I just didn't feel like responding. And so this was a similar case. I think that patient seemed unresponsive, and so the physicians didn't know whether it was coma or not. And then they gave L-dopa, from what I understand, and the patient became responsive. And I know that there's a whole company that now wants to make this into a neurological product. I'm just saying, this seems to work. There's a lot of evidence that it seems to work.
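For a rough sense of what that TMS/EEG measure is getting at: the published perturbational complexity index from Massimini and colleagues compresses the binarized evoked response, with low compressibility taken as a signature of lost integration. The sketch below is only the bare intuition with toy data, not the actual published pipeline, which involves source modeling and statistical thresholding:

```python
import numpy as np

def lempel_ziv_complexity(s: str) -> int:
    """Number of phrases in the LZ76 parsing of a binary string:
    low for repetitive, stereotyped signals, high for
    spatiotemporally differentiated ones."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # grow the current phrase while it already occurred in the prefix
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

# Toy "evoked responses", already binarized and flattened to one
# string (made-up data, purely for illustration).
rng = np.random.default_rng(0)
local = np.tile(rng.integers(0, 2, 10), 20)   # stereotyped, local pattern
varied = rng.integers(0, 2, 200)              # differentiated pattern
for name, x in [("local", local), ("differentiated", varied)]:
    print(name, lempel_ziv_complexity("".join(map(str, x))))
```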
And for me it's quite compelling that it really came out of this theory making a prediction. So that gave me a lot of interest in IIT. And so the next thing that happened was that IIT evolved. So now we're at IIT 4. And what happened in particular is that it got more and more fleshed out mathematically. It's not actually really a new theory every time: if you read the very original papers by Giulio Tononi, in the early stages of it, you see it's all there.

[00:20:29] Speaker B: It's actually really interesting that the axioms have not changed. What's changed, what changes, for every iteration then, if that's not new about it?

[00:20:37] Speaker A: The axioms have changed. And it wasn't as axiomatic in the beginning; it was a little bit less formal. But you see the ideas are there. And so what has happened over time is that the formalism got worked out. And so then the next paper that really grabbed my attention was by Angus Leung and Nao Tsuchiya. What they did is they used IIT 3, back then, which was the first really complete mathematical apparatus that you could implement in Matlab or Python. And they used it on local field potentials, so just voltages, EEG-like signals, from fruit flies, Drosophila. And they had two kinds of recordings, with these multi-contact electrodes that we have now, in the fruit fly, in the mushroom body, which is the brain-like structure, and they put the flies either under general gas anesthesia or not. And it turns out that the insects have sleep-like states, and they also respond to certain anesthetics that we have. As we do, they stop moving and they seem to be unconscious. Now, that's a bit of a stretch, but this certainly mimics the behavior that humans have under these conditions. And what they showed is that IIT to some extent works again. So the prediction of IIT would have been that you find a lower phi value, which is what we can now compute. And that's exactly what they find happens during the anesthesia. If you make a lot of assumptions and cut a lot of corners with the actual data, of course, but it worked.

[00:21:58] Speaker B: Sorry, let me just interject, as you just dropped phi, and I just want to make sure that everyone knows that phi is the end scalar value of the amount, quantity, level of consciousness of a system. And maybe we'll get into some of the issues surrounding the calculation of phi and what it means. Actually, we will get into it, because I want to ask you about it. But when Alex says phi, that's the end result of all the math that goes into IIT. Right?

[00:22:28] Speaker A: Right. And sorry to interrupt; yeah, thank you for doing that. So what Giulio Tononi's lab did is they wrote Python code that anybody can use, it's public, it's open source, to compute phi values from, say, neural data. So it is quite feasible to do so. Now there's of course a lot of debate about what phi means and what it means to have a scalar for consciousness. And this is where I come back to what your original question was: after I realized all of that, that, look, there seems to be something there, there is evidence there. And so my view about science is not falsificationism; I believe people like Lakatos and Quine, who have pointed out that there are issues with that. I have more of a Bayesian kind of view, in terms of: you just accumulate evidence and then you weigh what is more likely than something else. And so there seemed to be an increased likelihood of there being something there.
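The open-source code he mentions is presumably the Tononi lab's PyPhi package. A minimal sketch of its documented usage follows, using the small example network from the PyPhi documentation; the exact API can differ between versions:

```python
import numpy as np
import pyphi  # the Tononi-lab package: pip install pyphi

# State-by-node transition probability matrix for the 3-node
# example network from the PyPhi docs: each row is a current
# network state, each column is P(that node is ON next).
tpm = np.array([
    [0, 0, 0],
    [0, 0, 1],
    [1, 0, 1],
    [1, 0, 0],
    [1, 1, 0],
    [1, 1, 1],
    [1, 1, 1],
    [1, 1, 0],
])
network = pyphi.Network(tpm, node_labels=("A", "B", "C"))

# Big Phi for the whole system in a particular state.
state = (1, 0, 0)
subsystem = pyphi.Subsystem(network, state, (0, 1, 2))
print(pyphi.compute.phi(subsystem))
```

The exhaustive partition search makes this computation blow up combinatorially, which is why, as he says, real neural data requires "a lot of assumptions" and corner-cutting.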
So that's when I then started toying with that algorithm myself, and with the mathematics myself. And as you said, that is what led me directly into thinking more about what we are doing: why is it working, and what is this thing, mathematics, and how does this relate to science and nature? And so that's what led me a little bit down this philosophical rabbit hole, in terms of thinking: is IIT interesting because it has this mathematics, which most theories of consciousness do not have, or not to the same degree?

[00:23:55] Speaker B: Just as an aside: I know you're not going to say negative things about other avenues of research regarding consciousness, the nature of consciousness, but what I really want to ask you is, do you see other, non-mathematical approaches to studying consciousness, throughout history and present day, as a waste of time? Is it just, like, a category error, down the wrong path, that's never going to produce a satisfactory explanation or law that we will be satisfied with scientifically? How do you view that, without getting yourself in trouble, perhaps?

[00:24:34] Speaker A: Yeah, no, thank you for the question. Actually, I've thought a lot about that recently. And it was because I got in touch with linguists, and they convinced me that in the field of embodied cognition there's an interesting subfield of embodied linguistics, if you want. And what they said really got me thinking: fundamentally, when we throw around concepts, in particular in science, what we do is we kind of think visually, all of us. It's all metaphor based. And so that was a big pill to swallow for me. But I struggled to think about physics, about anything, where I don't just have a little bit of a movie or a graphic in my head. So if I think about spacetime, I see all of these things where it's actually a two-dimensional surface that's folding and a ball is rolling, and I can't fully envision it otherwise. And that's clearly just a metaphor. But you can get stuck in this metaphor, and you can think, I understand what spacetime is intuitively. And so I refer to that. And then if you talk to other people that watch the same videos or animations, they will think very much like you. And I think the interesting thing is, of course that's flawed, but that seems to be what science might always be doing as a first step. And so once I had this in my head, now when I go to any scientific talk or watch a YouTube talk, I realize, whoa, this is actually what happens a lot. Especially if people talk about cutting-edge findings. Often they deliberately will say, oh, it's like... Let's say, it happened recently: somebody talked about sleep and what happens during sleep. And somebody said, oh, it's like the cleanup crew comes in and cleans up after the party happened. And I realized, yeah, that is what we have to do. We have to use these metaphors. Initially we kind of get what somebody's talking about, because we do not fully understand the molecular mechanisms here.

[00:26:15] Speaker B: Well, so they're necessary and they're useful, but they're also inherently limited. Like models. Like all models.

[00:26:21] Speaker A: Exactly. And so I looked into it; I got really interested in the history of physics, in finding out: why math? And David Kaiser has a great YouTube series on physics in the last century that was published during the pandemic. It's an excellent, excellent watch. And so a lot of dots connected for me.
So the interesting thing, for example, about Einstein is that he really started out with metaphors. He would be very deliberate in using these thought experiments. And you might think that, well, maybe people didn't appreciate that that much. But then I found out that when it comes to Heisenberg and Bohr, to some degree that was the case there as well. Especially Bohr: in the famous Copenhagen debate, they would often argue in metaphor more than in mathematics. And what happened with Einstein, a lot of people know, is that in special relativity somebody had to come in and show what the Lorentz transformation actually is in terms of formalism. And then in general relativity, it was very important to work out the tensors and the mechanics behind it. And it's an interesting history, in Heisenberg's case with quantum physics, that he seemed to kind of reinvent linear algebra in trying to describe things only formally. So it seems that the formalism was next. So metaphor first, formalism next. But then the formalism opened a new world. It's thanks to the formalism that we can even get a grasp on quantum physics or relativity, and then probe extensions and do precise measurements and have precise numerical predictions. And so, to me, it seems that the consciousness field had to go through that as well. We had to start out with conceptual theories that are metaphorical mostly and intuition based and intuition driven. But once we reach this point of being able to develop and employ some formalism, we can leave that behind. And then you gain something that you don't have in metaphor, which is, again, lack of ambiguity, reliability and precision and rigor, that if you just use words to describe what consciousness is, you just can't get there.

[00:28:19] Speaker B: I mean, we have to use words because we're limited beings. But what if... So it begins with intuition, which you translate into a metaphor to communicate the idea. And it's like you're chiseling away at something, right? And maybe, if you do it really well, you end up with a formalized mathematics that can test the predictions. What if your intuitions are wrong? What if the original metaphor is wrong? I mean, I suppose... Are there examples? I don't want to go down this road, but I was going to ask if there are examples where the metaphor to begin with ended up developing a formalized mathematics that made good predictions, but ended up being the wrong way forward.

[00:28:58] Speaker A: The first thing my mind goes to is what a lot of your work on this podcast has been about: a dominating metaphor in the cognitive sciences is computationalism. And there's some formalism coming out of that. But you've had several people on there pointing out that that's just a metaphor, and that there are limits, and that taking the metaphor — the map — for the landscape is maybe something that a lot of us are realizing, at the same time, has stifled progress.

[00:29:27] Speaker B: But it's interesting. Sorry. I'm sorry. But it's interesting that that fact — what Alfred North Whitehead called the fallacy of misplaced concreteness, mistaking the map for the territory — is actually more widespread than ever right now in computational neuroscience. So, yes, a lot of people are aware of it and realizing it, but as computational models have come to dominate the landscape, people are mistaking the map for
the landscape more and more, especially in the artificial intelligence world, where a lot of people think that computers are conscious, and, you know, all of that. So anyway, it's just interesting. I don't know if you have to go through that computationalist perspective to get beyond it or what, but it's really more widespread than I would imagine it would be. But there you go.

[00:30:17] Speaker A: I see it a little bit the other way around. Go ahead.

[00:30:22] Speaker B: That's because of the silo that you're in these days, I think. It could be. I mean, I'm in a silo too, but yeah, it could be.

[00:30:30] Speaker A: It could be. But I see the silos cracking a little bit. So what I find interesting, and I got obsessed with this a little bit, is that a lot of the pushback on IIT comes from a computationalist perspective. And so I realized that we have to go all the way back down to the foundations of the house to actually come to an agreement. And we already disagree on, for example, the philosophy of science. We also disagree on basic things like the role of computation. So I looked into the history of computationalism, and what I find interesting there — here's my interpretation — is that I think psychology always struggled a little bit to be seen as a rigorous science. And so the psychology departments are often placed a little bit in between the humanities and STEM; often the deans don't quite know where to place them. Now you might say, well, we started with Helmholtz, and wasn't that exact science? Wasn't he an experimentalist and a generalist? But you see in William James, who's maybe the founder of American psychology, that there was a very strong emphasis on philosophy. So, yes, there's almost more of a humanistic streak. And I would argue that Freud was then the problem. A lot of people know he wanted to become a neuroscientist. He was supposed to, if I remember correctly, as a graduate student, come up with something like a Golgi stain or a Nissl stain. And he failed; he tried to make a silver stain for neurons. And it seems almost like he started to hate us all so much that he just flipped the table and made everything literature. And so that is where falsificationism comes from. Popper is trying to react to that and is trying to say, that's not science. And then when people say, well, why is psychoanalysis not science, he's like, well, it can't be falsified, right? So this is where this all comes from. That's why physicists never hear about falsificationism, while in psychology, and in the other sciences like cognitive neuroscience, it's so prevalent: it goes back to that debate. And it also triggered behaviorism. So what seemed to be the saving grace in the 1930s was to say, okay, we're just going to cut everything out that can't be observed directly. And that goes back to Ernst Mach; it helped in quantum physics to say, let's stick to the observable. And that is still a dominating streak. That, though, made psychology difficult. If you look at some definitions from big psychological organizations, they still have in the definition that it is about the behavior of humans, as if it weren't about love and seeing the color red as red and blue as blue and so on. And so that goes back to this historical thread.
And I think what computationalism did, in the form of the cognitive revolution, as they call it, in the 60s — and I think this is not really doubtable — is that computers became the saving grace. Because you could point to a computer and you could say, look, it's got memory. I can't really observe it; all that I see is silicon and transistors flipping on and off. But it acts; it is the behavior, if you will, of memory. So I can study this in humans as well. And you can't accuse me of pseudoscience. You can't accuse me of grasping at straws, of something that is immeasurable, fundamentally, or abstract, non-existent, just a concept.

[00:33:32] Speaker B: And it's a function. It's replete with functions.

[00:33:35] Speaker A: Yeah, exactly. So at that point you get wedded to action and functions. And that, I think, goes back even further, too. If you look at the history of physics, the mechanistic worldview was very prevalent. People sometimes confuse this with materialism. But it's this whole idea that fundamentally you have the world made out of atoms, or atoms are made out of even smaller things like neutrons and electrons, and maybe they're made out of quarks, but those are just smaller and smaller ping pong balls, tiny little bundles of mass. And when we talk about force, what really happens is they move and then they bounce. This is what James Ladyman calls microbangings: the whole world is just made out of particles that bounce around. And that gives this idea of mechanism, that everything is about motion, which of course was the starting point of physics, and in Newton's case proved so successful that, yes, we can explain a whole lot with motion; physics, in a way, is the science of change, the science of motion. And so that all trickles in there — this focus on process, on change, on function, on transition. And that is what comes baked in the cake with computationalism, and where I think some friction then arises with consciousness.

[00:34:45] Speaker B: Yeah. So, okay, well, we started this little aside by me asking whether there has ever been an example where intuitions have led astray. And so we started talking about computationalism, but before that we were talking about...

[00:34:58] Speaker A: How...

[00:35:01] Speaker B: You were saying that, sure, you can begin with metaphor, as all things do, but as soon as you get a mathematical formalization, then you can drop the metaphors and move forward. And that's what the strength of mathematics is, in your view.

[00:35:15] Speaker A: In particular, I think it's formalism. So let me give you an example. We recently worked on this crazy new project where we opened up a review paper that we wrote about predictive coding. And we said anybody who's interested can participate, and we're going to write a literature review that really looks at everything that has been published on it, that summarizes it. And the goal was to come up with open questions and find experiments that could be done to solve them. And what was interesting to me is that at some point in this process, it seemed to me and some other people that we weren't really on the same page linguistically, or semantically. People would use terms rather loosely, in terms of expectation or prediction. And to me, as somebody who read Francis Crick, I was always a little bit scared of the homunculus problem, where you invoke a little mini-me inside your head that's doing the expecting and so on. So I tried to find out: what do we mean by that?
And so the suggestion became to write a glossary. So we take each of these words and we define them. And that became a really interesting experience, because this paper is more than 100 pages, with 60-something authors, and it worked, and it was quite the experience. And I think the result is something else. But most of the time, from my perspective, was spent on the glossary. I think if we counted the edits and the effort that went into it, there were whole wars fought over simple words like prediction, and how to define them. Now, the interesting thing that I learned is that at the end I was quite happy. I thought, okay, so finally we have a glossary for predictive coding where we don't just use a word like internal model, but we define what it is and how it relates to predictions and prediction errors and so on.

[00:36:56] Speaker B: But still in words, not in math.

[00:36:57] Speaker A: Exactly. So then I got interested in category theory, and that's a whole different topic. So integrated information theory has now been a little bit married with another approach by my friend Naotsugu Tsuchiya, where he's expanding psychophysics as well, using structural approaches — taking the structural nature of it and putting it in a bigger diagram. I don't want to talk too much about it now, but I got interested in this topic of category theory. Now, I will get bashed by the mathematicians for this, but an easy way to think about category theory is that it's very graph based. So you have arrows between things. And these could be functions; they're more generally called morphisms, some kind of relation. Things are in relation to each other. So you get something like a graph with arrows. And there's a lot of interest right now in mathematics in category theory, from what I understand. One reason is that it seems you can base all of mathematics on category theory. It's a foundational theory. So it's a rival to set theory, which is what we've used over the last hundred years or so to base all of mathematics on. There are some other reasons why it's interesting. It's very interesting in computer science; there are several programming languages that are based on it. And so there's a lot of research going on. And I saw this talk by David Spivak, who is leading a whole institute where they're working on category theory, with John Baez and other people, on how it can be used for people like you and me. And they came up with a piece of software. It's called an olog, an ontological log, where ontological is not the philosophical term here but the computer science term. The idea is that if you build a logic or some kind of system, you should be very precise and semi-formal. And the way it works is that you can take your theory and then you can cut it into — what we did, for example — a glossary where you have these terms. And now you're a little bit boxed in, ironically, by set theory. So a good olog would take every term that you have, put a box around it, and make it an element of a set. And what you then do is you connect it with arrows, showing how it relates to all of your other terms. And if you really want to be very rigorous, the way it works best is that these arrows, the relations, are just verbs, like is or describes. So what you get is a graph, a kind of a network that shows your whole theory, and you can read it.
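To make the olog idea concrete, here is a toy sketch in code, with hypothetical glossary terms rather than the actual ones from the predictive-coding review: every term gets a type, every relation is a verb-labeled arrow, and a defined-but-unconnected term can be caught mechanically — essentially the discovery he describes next.

```python
# Toy olog-style glossary: each term is typed ("term : type"), and
# relations are verb-labeled arrows between terms. These terms are
# hypothetical stand-ins, not the review's actual glossary.
terms = {
    "prediction": "signal",
    "prediction error": "signal",
    "sensory input": "signal",
    "internal model": "structure",
}
relations = [
    ("prediction", "anticipates", "sensory input"),
    ("sensory input", "is compared against", "prediction"),
    ("prediction error", "is the mismatch between", "prediction"),
]

# A well-formed glossary should leave no term disconnected from the
# graph; an isolated node is a definition nothing else refers to.
connected = {t for src, _, dst in relations for t in (src, dst)}
print(set(terms) - connected)  # {'internal model'}
```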
And so there are good examples on Wikipedia where you can take scientific concepts and make them semi-formal, semi-rigorous. It's very graphical as a language.

[00:39:24] Speaker B: It's semi-rigorous?

[00:39:25] Speaker A: Okay, well, it's not numerical. Right. It doesn't give you real numbers or natural numbers as an output, but it's structural, because it has formalism. It doesn't give you the ambiguity that language does, where, if I have a sentence like "I didn't say you stole my car," I could say: I DIDN'T say you stole my car. I didn't SAY you stole my car. I didn't say YOU stole my car. You don't have this problem that language has.

[00:39:51] Speaker B: There's no interpretation problem. Yeah, okay.

[00:39:53] Speaker A: Exactly. And so I started doing this for this glossary. I put in these words that we defined, and I put these relations in between. And in doing so, I realized that there was one concept that had no connection to anything else. So it was in the glossary, it's a key term, but it had no... So for these relations, I used the sentences we wrote, such as is or describes or predicts and so on. One thing I should also say — there's one more step I forgot. As you define a word, let's say prediction error, to make this olog really work, you use type theory, which is an extension of logic that started with Russell finding out that there are problems, like paradoxes, in set theory, and they can be resolved with type theory. And it's a very simple trick: whenever you define something, an element of a set, you have to say which set it belongs to. So, for example, when I say prediction error, I have to say what it is. You write a colon and you say what it is. So, prediction error: and I would say, a signal. And then I could define what a signal is, and there were other signals in there, and so on. So everything is typed, and then you have these relations. And so I found this one thing that was typed, but it had no relation to anything else. And that is where I realized we made a mistake. It's literally a distinction without a difference. So there's now a part of the model that you can also just cut out.

[00:41:05] Speaker B: What is it? Does it matter what it is?

[00:41:08] Speaker A: I think it might have been internal model. But there was one concept that is quite key for the theory, and it wasn't linked in our definitions. And so the epiphany that I had is that this little bit of what I call mathematics, a little bit of formalism, allowed us to see that, look, we actually got duped by our language. Nobody detected that this glossary was flawed, that there's a term in there that has no relations to any of the other terms, and yet we happily used it throughout the review. So this is where I see that formalism, and notation in particular, can be very helpful in science.

[00:41:42] Speaker B: Okay. You said you didn't want to talk about these things, but I want to read a quote from, actually, the review that you wrote with Nao, about qualia, using the integrated information approach and mathematically formalized approaches and structuralism: "Structuralists believe that it is structural relationships that determine qualia." Is it a good time now to unpack that? I fear, because we could talk forever about the background, the philosophy and the usefulness and stuff, but I kind of want to get the gist of where your head is and what you think is important in terms of studying consciousness within the formalized mathematical sense.
So what the heck is structuralism? Why is it important? And I don't know if you want to say what qualia is, because in the paper you guys use it to sort of encompass all of what we consider subjective experience: consciousness. It's a substitute term for those sorts of things; the feeling of what it is like is qualia, et cetera. But given that quote — structuralists believe that it is structural relationships that determine qualia — what is structuralism? Why is it important in a formalized mathematical account, et cetera?

[00:42:55] Speaker A: Yeah, thank you. It's a big topic, as you said, but the main simple idea is that your experience, your consciousness — and again, I equate that with experience — is non-arbitrary. That is how I can put it most simply. So there's an order to things. And the easiest way to understand structure is that there's a pattern or an order or a lawfulness behind everything. And that goes pretty far. For example, as you sit right here, you cannot see what's behind your back, so you experience visually what's in front of you, but it clearly has a limit left and right where it just drops off. You can't see 360 degrees around you. And of course there's more, which is that you see color in front of you, but it's not jumping around, moving around, unless something's actually moving. So stationary objects: if you have a red object in front of you, it just stays put right there, exactly in this position. And you can go even deeper and think about, what do I mean by position? And that is where structure comes in, where you would say it's actually a relation. It's that this thing is to the left of that, up from that, down from that, and so on. And then you could say, well, that's a certain distance; that's another relation. You can say, well, it's four thumbs up from this and four from that, and so on. So the fundamental insight here is that consciousness, even though it seems so ineffable — for a lot of people, I can't describe to you what love feels like — that might actually sell it short. There are things that we can find out about consciousness, in particular the structural relations. And that goes further. It becomes easiest to understand, for a lot of people, if you think about different experiences. So far I talked about within an experience: I'm saying that, if you open your eyes, what you see in front of you is structured, it has an order to it, it's not random noise. But between conscious experiences, it becomes even more clear. So if I take the color red and the color orange, how do they relate to each other? And orange and red to green? You would say that orange and red are more similar; they're closer together somehow; they have a closer relation than to the color green. And obviously this is how we construct a color space. We can measure these distances; we can place these colors further apart if we say they're less similar, and closer together if they're more similar. And then, lo and behold, we get geometry. We get a structure; we get mathematics out of that. Now, it's not necessarily the algebraic, numerical mathematics that we usually use, but it shows you that there is an order that we can discover. And obviously that is exciting to a scientist, given what we talked about early on: that Fechner tried to put numbers on perception by saying, each time you cross a threshold, I can go one up. So you have one pain, two pain, three pain, and it seems to jump discretely as I make things more painful.
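The color-space construction he describes — going from pairwise similarity judgments to a geometry — can be sketched with off-the-shelf multidimensional scaling. The dissimilarity numbers below are made up purely for illustration:

```python
import numpy as np
from sklearn.manifold import MDS

# Made-up pairwise dissimilarities between percepts
# (0 = identical, 1 = maximally different): red, orange, green.
colors = ["red", "orange", "green"]
dissimilarity = np.array([
    [0.0, 0.2, 0.9],   # red    vs red, orange, green
    [0.2, 0.0, 0.8],   # orange
    [0.9, 0.8, 0.0],   # green
])

# Embed the judgments in a 2-D space that respects the distances.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)
for name, (x, y) in zip(colors, coords):
    print(f"{name:7s} ({x:+.2f}, {y:+.2f})")
# Red and orange land close together, green far away: a geometry
# recovered purely from similarity relations.
```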
But now we're getting to a richer concept, where I'm saying, well, I can actually describe, for example, color in a three-dimensional space. And so it's a three-dimensional object that I can now use regular geometry to describe. And so it seems that we're getting more rigorous, more formal, about something that seems to be as ineffable and loose as consciousness.

[00:45:52] Speaker B: Oh, you guys used the phrase enriched psychophysics in that paper to describe this. Well, enriched structures, right? And I'd forgotten — you discuss this in your talks, I don't know if you do in the paper, but I think you do — that Fechner sort of realized, okay, well, this is an "outer psychophysics" that I'm doing, and knew that eventually we would need an "inner psychophysics" to describe how the brain relates to consciousness, et cetera. And so you guys are proposing that we need a new, enriched psychophysics, I assume pointed at the inner side, to describe these color spaces and these higher-dimensional kinds of structures, which then should isomorphically map onto our qualia.

[00:46:42] Speaker A: Exactly.

[00:46:42] Speaker B: That's a mouthful. Is that right?

[00:46:43] Speaker A: Yeah. Okay, but you got it. Actually, it really is just a couple of years ago that it fully hit me, and I think IIT should get credit where credit is due for this basic idea that we can have a geometrical description of your conscious experience. And let's put aside whether that is all; there's a lot of debate whether — well, if you describe all the geometric relations in my vision and so on, there still seems to be something about the redness of red that this can't capture. I want to put this aside; I think it does this too. It's just saying, well, let's just go this far, that we say that I can describe what you see geometrically. And I can describe music geometrically. For example, if I go up in tones, they go up, but then if I hit an octave, I get a closer similarity relationship. So it seems to become like a helix, where tones of the same octave are underneath each other, even though they spiral up. And there are other examples, like phase spaces and so on, where if we order things by similarity, we get a geometrical structure. And I think what IIT does, which I think is brilliant, and which Nao Tsuchiya is now expanding even more, is to say that if we just put numbers on perception, it kind of gets unclear what we want to look for in the brain. And maybe that is also why this project of the 90s to some degree stopped, or never really fully embraced psychophysics. But what we should be finding is that there should be something about the structure of the brain mechanisms — and I'm using this term deliberately, and I can explain in a moment — that should be isomorphic, or in some other structural relation, to this thing. Because how else could it be that my vision is ordered, if whatever — and I'm going to use this term very carefully — if whatever supports (some people would say represents) that visual perception is unordered?

[00:48:22] Speaker B: Yeah. Yeah, yeah, yeah.

[00:48:23] Speaker A: This order has to map onto some kind of order in my brain. Right. So that's the basic idea. And so this now seems to be an entirely different research project. And so I think these are two separate projects.
The one big project, which people like Johannes Kleiner are quite interested in — to some degree a lot of the work that Johannes and Nao Tsuchiya do — is: can we really get to a structured description of experience, phenomenology, qualia? And then the second one, for people like me that are more interested in the brain, would be: how does this map onto the brain? Can we use this to map onto the brain? And so then it seems what we end up with — and this is putting it all together — might be in the end an equation, where we can have on the left-hand side what describes the structure of brain activity. And as you know, there's been a lot of work on this recently with topological data analysis: high-order interactions, latent spaces, manifolds. These are just structures; information structures, people would call them. And then on the other hand, the qualia, or your phenomenology description: the visual space that you look at right now. And that relation in between could be an equals sign. It could be an isomorphism; it could be an adjunction, which is an even weaker relation. But the idea is, roughly speaking, these are two graphs, and I can transform one graph into another graph. The interesting thing for me is that it seems to me that at that point, it's E equals MC squared. It seems that at that point — and I think that's why some people say it would solve what people call the hard problem of consciousness, that, you know, you can't do it, you can't explain what love is based on your brain activity, it just doesn't seem to work — to me the answer seems to be, and obviously this is a pipe dream, this is vision, this is intuition that I'm voicing now: once we reach this point that we have an equation where I can put in the brain activity on the left side, and on the right-hand side I get the description of a mental state, that seems to be a law of nature. And so then this is a general, universal relationship; we found something like E equals MC squared. And if you still squabble with that, if you still say, that doesn't satisfy me: you have that same problem in physics, and people do, by the way. So if you describe the whole world with quantum field theory, which is very potent — it's the best theory we have in terms of precision; it gives you 13 decimal points of prediction that people at the particle accelerators in Switzerland and elsewhere then find; that's as precise as it can be — yet it's just math, in a way. It's just very complicated Excel sheets that predict what you should find. And so you get the same problem, that some people say, well, at that point I'm unsatisfied; it seems that there's more to the world than just this math. And I think there are two ways to go from there. One is to say, well, but that's all that you get epistemically; all you can know, all that science gives you, is the math, is the structure. And then if you want more, that's fine, but then it's outside of science. Science can only give you the math. And so, for me, it's interesting to say, well, why would you assume there's more? I would push back and say that I'm quite satisfied with the state of the world to say that, well, maybe fundamentally it's all relation. But I understand why some people would object. So let me say this real quick. I think that when we talk about mathematics, we use one word for two different things. And this is one thing I learned from Peter Freyd, who has a great lecture on YouTube. It's called An Anti-Philosophy of Mathematics.
And so what Freyd says in that lecture is that people think mathematics is a language, just a superior language to what we usually use. And there's a lot of evidence against that, in particular neuroscientific evidence. Stanislas Dehaene has done neuroimaging studies where you have people read math versus language, and it's different parts of the brain. So that is already a problem: your brain doesn't think it's a language; it doesn't treat it as a natural language. But I do agree that there seems to be something man made about mathematics. And this is, I think, where a lot of people get hung up and think: look, if it's us in the armchair doing that, it can't be nature. It's a tension that you see in Newton already. Like, why is it F equals ma? How does this fit a mechanistic worldview? What is that law of nature? Where is it coming from? I can't see it, I can't point to it. Yes: when it comes to notation, when it comes to proving things, when it comes to thinking about math, writing out math, that seems subjective. It seems to be mind dependent. It doesn't seem to be something mind independent. But I think somebody who holds that view has to admit there's something mind independent about this. And that is what Freyd would call the mathematical. And what that means is: what are these rules? Where do they come from? So yes, if I write down two plus three is the same as three plus two, that's me. These numerals are Hindu-Arabic; we agreed on a plus sign even later. So that's all man made. But there's something deeper in there, which is that if I say two plus three is the same as three plus two, what I actually do there is say I can switch things around. It seems trivial. But if I take a book or a piece of paper, let me grab one real quick, and I try to do the same thing, and I say, let's rotate it forward and then to the side, I have it in this orientation. But if I rotate it to the side and then forward, I end up in a different orientation, right? So there it seems that it matters very much what the order of operations is. And this seems trivial, but I think this is where a lot of scientific insight is gained, by thinking like children again and asking about these trivial things that we take for granted: why are they this way? And so Freyd would call that the mathematical. Some things commute, some things don't commute; that would be the fancy word for whether it matters if you move things around or not. And that seems to be mind independent. It seems more like we discover that. And there's evidence. For example, we found the square root of minus one, i. And it seemed just a ludicrous idea. It's like, okay, you can do that if you want in your formalism, but in the real world it doesn't matter; there can't be such a thing, right? And yet there's a lot of debate right now, including Nature papers, showing that the formalism of quantum physics collapses if you don't use complex numbers. So you need that weird thing, the thing we thought was us, that our human brains made up, that you can take the square root of a negative number. And yet nature tells us, no, you can't even have a laser pointer without that. So I think that points very heavily to this: while mathematics is a human activity, it points to, or describes, something that seems to be mind independent. On that view, it would be structure.
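The book-flipping demonstration above can be checked numerically. A minimal sketch: addition commutes, but composing two 90-degree rotations about different axes does not. The axis choices are arbitrary stand-ins for "forward" and "to the side".

```python
# Addition commutes; 3D rotations don't.
import numpy as np

print(2 + 3 == 3 + 2)  # True: order doesn't matter for addition

def rot_x(deg):  # rotate "forward", about the x-axis
    t = np.radians(deg)
    return np.array([[1, 0, 0],
                     [0, np.cos(t), -np.sin(t)],
                     [0, np.sin(t),  np.cos(t)]])

def rot_z(deg):  # rotate "to the side", about the z-axis
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t), 0],
                     [np.sin(t),  np.cos(t), 0],
                     [0, 0, 1]])

forward_then_side = rot_z(90) @ rot_x(90)
side_then_forward = rot_x(90) @ rot_z(90)

# The two orders leave the book in different orientations.
print(np.allclose(forward_then_side, side_then_forward))  # False
```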
So a lot of people that delve down to that level would say, yes, fundamentally, it's all structure. [00:54:33] Speaker B: One is tempted to... okay, so when you describe things mathematically, right, you have these relations of how things progress through time, F equals ma, E equals MC squared, these relations, right? And, maybe I won't say mathematics, I'll say the mathematical: one is tempted then to say, well, maybe the mathematical is actually what's real, since it seems to be what we discover. Yes, it's a product of our minds; we can't apprehend the mathematical without our minds. But every time we look, it's almost a discovery, not like a produced language, right? And there's this slippery slope: do we discover it, did we invent it, et cetera. But that's almost an idealism perspective on metaphysics, right? So does the mathematical lend itself to an idealism in metaphysics? Or how do you think about the mathematical? [00:55:39] Speaker A: Yeah, this is now very deep. So the word that I often think about is Neopythagorean. Pythagoras said all is number. He thought the world is just made out of math. Now, Max Tegmark has his famous book, and his paper is even better, Our Mathematical Universe, where he makes a similar point. And a lot of people would say it's Platonic; the mathematicians that believe that numbers are real, they're Platonists. But it's actually not Platonic, because Plato would say there's a real world and then there's the abstract world, right? And this is basically saying, no, all is relation. What really got me thinking this way were a lot of discussions with Naotsugu Tsuchiya, who also showed me that the Buddhist perspective goes this way. And then work by Tai-Danae Bradley, who's a mathematician, a young mathematician; I think she's at Google AI at this point. And she took a summer school where she learned about category theory, and she has these beautiful handwritten diagrams and so on, and she made it all publicly available. And she introduced the Yoneda lemma to me. And I remember when Nao Tsuchiya wrote a paper on the Yoneda lemma, I thought he had lost it. Back then I didn't think much of mathematics, actually. I really felt sad, because, and you'll probably agree with this, mathematics can be used as a ruse in science. [00:56:48] Speaker B: Yeah. Yes, of course; everything can, it turns out. [00:56:52] Speaker A: And I think I heard a song once that said, if your results are dubious, shroud them in higher order calculus. So there is something to it. And I had it too; I was very bad at mathematics, I should say. By the way, my love of mathematics is fresh. [00:57:06] Speaker B: Why were you sad? What were you sad about? [00:57:08] Speaker A: Yeah, so I thought, this is it. You might know about the Sokal affair, right? Continental philosophy seemed to be in this decline in terms of logical coherence and so on, and then people, physicists, published papers using math, making supposed philosophy, and it was all demonstrable bunk. There's a whole discipline out of that now that I highly recommend anybody look into. It's called bullshit studies, where they deliberately produce computer-generated bullshit. And about 40% of people do not recognize that these are illogical bullshit. [00:57:39] Speaker B: Is this different than Harry Frankfurt's concept of bullshit, the philosophical concept of it? Is it the same?
[00:57:44] Speaker A: Well, this is different. [00:57:46] Speaker B: It's not deliberate. [00:57:46] Speaker A: Okay, it's different, it's different. This is actually, embarrassingly enough, not made by psychologists, but by economists. They have these random word generators, and they generate these sentences, and then you have people rate what they think of them. And what's alarming is that a lot of people assign higher insight and truth ratings to these incoherent statements than to real statements. So a sentence like "a young kid needs a lot of care" is rated lower than a sentence like "there is no such thing as true truth." [00:58:15] Speaker B: Really? Lower on what, what scale? [00:58:17] Speaker A: What are we rating? In terms of depth of insight. So a paradox like saying there is no such thing as truth, which refutes itself (like, is this true? If you say this is true, then there is a truth, right?), is seen as deep, deep insight. The feeling of confusion that you get, because it's not logical... [00:58:35] Speaker B: Yeah, yeah, yeah, okay. [00:58:36] Speaker A: ...is taken as enlightenment, is seen as, whoa, that's deep. [00:58:39] Speaker B: Whereas, oh, we're all so susceptible to that though. Oh yeah, yeah, yeah, yeah. [00:58:43] Speaker A: In fact, if you look at these studies, scientists often tend to be more susceptible to that problem than regular people. That is what motivates my push for formalism. I'm like, I think that I'm susceptible to that. So that's why I create an olog and see: there's a formal mistake, my logic isn't coherent. And so, back to the Sokal affair. They published these fake papers, they were full of nonsense, and they got through peer review. And the alarming sign here was that a lot of people give up on math. Like: I was never good at it, I had an F, I was really bad. But there's another problem in terms of how math is taught and what we take math to be. That's why I think it's helpful to see calculating not as mathematics; the mathematical is what's actually interesting. But then a lot of people, I think, see equations in a talk and they just zone out. It's like, I don't have to read the equation, I don't have to check whether this is really the Gaussian distribution, I just take this as rigorous. Right? And so you can use this a little bit to hoodwink people, even. And that was my concern: that now consciousness science is grasping at the stars and using pure math and these higher order concepts because we've run out of things to do, because we feel this is... [00:59:53] Speaker B: When Nao presented the lemma, the Yoneda lemma, that's what made you... you're like, oh no, we're taking a turn toward the masking of the truth via higher order math. [01:00:06] Speaker A: Yes. I was really concerned. It created a sadness in me, because I thought, oh, in my lifetime we won't make progress on consciousness, because now we're just going to shroud things in these abstract concepts where, as a neuroscientist, there's nothing I can do. And I was mistaken. So I read what the Yoneda lemma is, and I got really enthralled. A lemma is a proof, a minor proof, which is ironic, because this is a very major insight. And it goes to the heart of categories, even. I've given a talk about it on my YouTube channel, and a lot of mathematicians took a lot of issues with it. So I have to be very careful that I phrase things carefully. [01:00:41] Speaker B: Okay, okay.
[01:00:42] Speaker A: But most of the critique was non technical. I had the talk reviewed by several mathematicians and they all said, this is fine; but I'm trying to explain things glibly, which is not the technical language of mathematics. But an easy way to understand the Yoneda lemma is this. In set theory, we start with objects, with elements of a set: number one, number two, and so on, or a red triangle and a blue square. And then we can, for example, have a set of shapes, say a gray triangle and a gray square and so on, and another set of colors: blue, red, and so on. So we have a color set and we have a geometric object set. And then we can create arrows between them, let's say a function, where I can say there's a red triangle, so I draw a line from red to the triangle, or vice versa. And this is the foundation of arithmetic and a lot of what we can do. Now, category theory is doing the same in some ways, but it puts the emphasis differently. It says: what's more interesting is not these objects that you start out with and then draw lines, like functions, between. But what do you mean by a function? What is that arrow? That's what category theory is interested in. And what the Yoneda lemma shows is that if you take any kind of mathematical object (could be a number, could be a triangle, whatever you think an object is) and you know all the incoming and outgoing arrows, so you know how it relates, in your axiomatic little mathematical world, to everything else (the structure, the structure of relations, exactly), then that object is uniquely defined. In other words, there's nothing else to say about this object to identify it than all of these relations. And you can already think about that: if you started with a triangle, you could then, for the relations, go back to Euclid: what a triangle is made out of, lines, and how lines relate to points, and so on. Right? So if you know all of these relations, the whole data, everything you can know about that object is in its relations. It's in the totality, it's called the hom-functor, the totality of these arrows, incoming and outgoing, that you know. That's a proven theorem of category theory. So you have to buy the axioms, right? But if you believe these axioms, this result holds. And it's also used in practice; computer scientists, you can imagine, find it quite interesting, and those people work with it. But what Tai-Danae Bradley muses about in her writing is that this has philosophical implications. So if you think about you as a person: what identifies you as a person? The Aristotelian way to think about this is, I'm a human, I am a father, and so on; I have these properties. But the other view is that it's all relational. I come from other mammals, I raise children. And so it's your relation to everything else that equally well defines what you are. [01:03:27] Speaker B: It's like a concept space, right? Like a... [01:03:31] Speaker A: Yes, it's like a giant graph of how you relate to everything else. It describes all your relations: what you ate for breakfast, last night, this morning, and so on. And once you have the totality of all of these relations, how you relate to anything else, all the data is there; it uniquely defines you.
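A toy, deliberately non-rigorous illustration of the flavor of that claim: represent each object only by its incoming and outgoing arrows, and check that in a small made-up web of relations no two objects share a profile. A real category also needs identities and composition, and the actual lemma says much more; this is only the bookkeeping idea.

```python
# Hedged sketch: know every arrow in and out of an object and, in this
# little world, you've pinned the object down. Arrows are invented.

arrows = {
    ("red", "triangle"), ("red", "square"),
    ("blue", "circle"), ("triangle", "circle"),
}

def profile(obj):
    """Everything this object points at, and everything pointing at it."""
    outgoing = frozenset(t for s, t in arrows if s == obj)
    incoming = frozenset(s for s, t in arrows if t == obj)
    return (incoming, outgoing)

objects = {x for pair in arrows for x in pair}
profiles = {obj: profile(obj) for obj in objects}

# No two distinct objects share a full relational profile here,
# so the relational data alone identifies each one.
assert len(set(profiles.values())) == len(objects)
print(profiles["triangle"])  # ({'red'}, {'circle'}): its whole identity
```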
There can't be something else in this graph that has the exact same relations as you. So if you think about who you were born to, who you're married to, at what point you had which friends, or whatever: if you would take all human relations, not even a Siamese twin, sorry, a conjoined twin, would be the same, because they have a different relation to you than you have to them. So if you take any... go ahead. [01:04:07] Speaker B: Is it a rigid graph, or can you do an eigenvector decomposition? Like, can you do transformations, or is it so high dimensional, and there are so many different hierarchical levels, you talk about hypergraphs, et cetera... is it a rigid structure, or can we translate it? I mean, "is my red the same as your red" is kind of along the same lines, right? [01:04:29] Speaker A: Yeah, and I can already see that I'm running into trouble with mathematics, because it's not really thought of as a graph; it's a category. It looks like a graph, you have these arrows, but they are always arrows. In a graph, an edge could just be a line, right? So you could say, is it a directed graph? Well, it looks like one, but it's a category. It's a different mathematical construct, but it looks like a graph, and I kind of think of it as a graph. So you're right. And this is where you find some overlap with Stephen Wolfram's thinking and with integrated information theory: this graphical description seems to be helpful. And obviously, if you do category theory in practice, what I said about the ologs, that's a graph that you end up with in the end. So there's clearly, visually and conceptually, an overlap. Now, what you just pointed out: is it just a graph of pairwise relations? Can I just use lines or arrows? That I find a really interesting question. IIT's answer is no. And once I had that epiphany, whoa, you can have a graph with a shared edge, a connection or relation among three or more objects, that changes everything. And that's where I found some mathematicians who opened my eyes to the fact that we might have totally missed out on that, in particular in neuroscience. Yes, we appreciate graphs, and there's a lot of graph theory in neuroscience, and now we actually even use these higher order graphs. Most people are not aware of it, but when they do topological data analysis, they use simplicial complexes; those are higher order graphs, technically speaking. So we kind of are moving in that direction. There's a lot of work in machine learning as well. But Carlos Zapata-Carratalá is a mathematician who opened my eyes. He's written several great papers on this, where he shows that once you move out of pairwise worlds into a world where three-way relations exist, some of them might be non-reducible. It's a true mathematical phenomenon of emergence. And that may actually be very important; it seems to rule the world, even though in mathematics there is some debate whether everything can be reduced to pairwise relations. I mean, most of our math is pairwise: multiplication, addition, subtraction are pairwise operations. Now you might think, I can do three times three times three in my head, but really what you do is use the associative law to do three times three and then multiply by three again. So you keep it in pairwise operations, or you just memorize the result. [01:06:45] Speaker B: You memorize it? Yeah.
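On the irreducibility point above, a standard toy case (not taken from Zapata-Carratalá's papers) is two fair bits and their XOR: every pair of variables looks completely independent, and only the triple reveals the constraint. Pairwise "goggles" would report no structure at all.

```python
# Genuinely third-order structure: z = x XOR y for fair bits x, y.
from itertools import product

# Joint distribution over (x, y, z) with z determined by x and y.
joint = {(x, y, x ^ y): 0.25 for x, y in product([0, 1], repeat=2)}

def marginal(dist, keep):
    out = {}
    for state, p in dist.items():
        key = tuple(state[i] for i in keep)
        out[key] = out.get(key, 0.0) + p
    return out

# Every pairwise marginal is the uniform distribution over 4 states:
# each pair of variables is statistically independent.
for keep in [(0, 1), (0, 2), (1, 2)]:
    assert all(abs(p - 0.25) < 1e-12 for p in marginal(joint, keep).values())

# Yet the triple is perfectly constrained: only 4 of 8 states ever occur.
print(len(joint))  # 4, not 8
```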
[01:06:46] Speaker A: If I say 17 times 47 times 65, you could probably do it, but you would do two at a time, and then another two. Right? So fundamentally, these are defined axiomatically as pairwise operations. And this seems to work for a whole lot of mathematics and physics, except... And here is the famous example: the combustion engine. In your car, if you have a combustion based car, you need oxygen, you need a little bit of fuel, and you need a spark, some kind of heat or energy, to come together at the exact precise time for an explosion to happen. And then you harvest that energy to drive your car. You do this many, many, many times a second. Now, the interesting thing is, if you would study that pairwise, if you only used the air and the fuel, you would never learn that an explosion could happen. If you only used the air and the spark, you would not learn that there could be an explosion, and so on. So if you take, and now I'm going to talk neuroscientifically, if you take the brain and all of these neural interactions, and you just look at pairwise graphs (often, if you look at these papers, there are just lines between areas or neurons), you're missing that if two neurons do something at the same time onto a third neuron, something else happens entirely. That's a Hebbian synapse. And we have Nature papers that show that something happens there that's totally nonlinear; it can't be reduced to its pairs. So if we put on pairwise goggles and try to understand the brain, it seems we're missing the most important part. This taps directly into the debate about AI, in terms of whether it's the hardware that determines if AI is conscious or not. A regular Turing machine is fundamentally pairwise: time is not an operand. Yes, you have these clock cycles, but what happens within a clock cycle is irrelevant. If you have an AND gate, you wait: are the signals coming in? They don't have to precisely coincide; they just have to arrive within a certain period. And you say, both are on, I'm going to flip the transistor, and I can refresh and do it again. It doesn't matter whether they come in with a delay or not; it gives you the same result. The AND gate, the Boolean logic, doesn't really care about time. But your brain clearly does, right? And this is, I think, why there are these debates about neuromorphic computers, SNNs, spiking neural networks. All of a sudden, time becomes an operand, and it seems to save energy, it seems to be more efficient. And IIT would say there might be a relation to consciousness there as well, because these higher order interactions are built right into the mathematics of IIT. And for me as a neuroscientist, all the way back to the beginning of our conversation: learning the math, which was hard because I'm really bad at math and I don't like it, I don't enjoy it at all, but learning that... wow, that's in the theory. [01:09:19] Speaker B: Don't you start to enjoy things as you get better at them, though? I mean, you know, because I... [01:09:23] Speaker A: It's horrible. [01:09:23] Speaker B: I'm sorry, I'm asking selfishly, because I, like you probably, see equations and think, oh, that's going to take a lot of time, and is it worth my effort to do so? But if I had a little glimmer of, oh, eventually I'll enjoy this, that would help lower my activation energy.
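Back up to the AND gate point a moment ago: a minimal sketch contrasting a clocked gate, which ignores when within a cycle its inputs arrive, with a coincidence-detector "neuron" that fires only if spikes land within a narrow window. The cycle length and window are arbitrary illustrative numbers.

```python
# Clocked Boolean logic vs. spike-timing sensitivity.

def clocked_and(t_a, t_b, cycle_ms=10.0):
    """Both inputs high somewhere in the same clock cycle -> output 1."""
    return int(t_a // cycle_ms == t_b // cycle_ms)

def coincidence_neuron(t_a, t_b, window_ms=2.0):
    """Fires only if the two spikes arrive within a few milliseconds."""
    return int(abs(t_a - t_b) <= window_ms)

# Inputs arriving 1 ms and 9 ms into the same cycle:
print(clocked_and(1.0, 9.0))         # 1: the gate ignores the 8 ms gap
print(coincidence_neuron(1.0, 9.0))  # 0: the neuron does not

# Near-synchronous inputs:
print(clocked_and(4.0, 5.0))         # 1
print(coincidence_neuron(4.0, 5.0))  # 1
```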
You know, every year it's like, I really need to understand these sets of equations, and the next year, man, I really need to understand these sets of equations, and it goes on. But you're doing that hard work, and it must be satisfying. But you're not enjoying it. [01:09:58] Speaker A: Not at all. It's horrible, quite frankly. I think without AI, I wouldn't even be capable of it. I literally spend hours and hours and hours. Take Shannon's definition of information, right? Why is there a logarithm? I argue with the AI until I get it, and then often two days later I forget again, because equations are so information dense, in a way, right? The structure is so intricate. But at the same time, what it does is show you that, no, I don't have to memorize this equation; really, what I have to understand is that there are a lot of assumptions in there that are questionable. Why do we take that as our definition of information? Are there different definitions of information? Yes, there are. So why don't we use them for the brain? This one, and even Shannon said so, isn't very good for understanding semantics. Then why are we using it? And I think this is why it's helpful to sit down and try to understand the equations, and maybe why some people say math is beautiful: it's on this almost philosophical, conceptual level, hidden behind the ugliness of the notation, that you see the mechanics. All of a sudden it's like the Flammarion engraving, where this guy is kneeling at the edge of the world and pulling away the curtain of the sky, and underneath, behind the sky, where the universe should be, he just sees these wheels and rotating machines and stars; people now paint it in psychedelic colors. That's more what it feels like to me: behind it all, you see the mathematical. And that's where the beauty comes in, where you see: well, if everything is relational, of course your consciousness should be related to the brain; of course that should be a mathematically describable relation, and so on. The whole thing clicks into place all of a sudden: yes, it makes sense that physics is fundamentally just math, and increasingly so, because we're describing the structure of the world. And again, you can then have a philosophical debate about whether that's all there is, or whether that's just what's epistemically accessible to us; and I think that's actually the minimum. [01:11:49] Speaker B: This goes back to: is the mathematical what's real? And everything that we perceive as physical, everything that we perceive as real, is just the messiness that is somehow adhering to the tracks of the mathematical foundations of the universe. [01:12:05] Speaker A: Yeah. So I'm torn on it, and I find this one of the most intriguing questions now that I'm as far along as I am, and I'm so excited that more and more people are coming on board. It mirrors a debate that Russell had, where he understood that in science we only find relations. When you say "the electron," the only thing you find is what happens to it if I do something to it, or what it does to other things. And then even deeper: what do you mean by causality? That opens another whole can of worms that's relevant for IIT: do we have a mathematics of that? A lot of people are now thinking that what we have doesn't seem complete. The math of causality that we have now is very powerful, but there are issues, and the same goes for free will and so on.
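On the "why is there a logarithm?" question Alex wrestles with above, one standard answer can be checked numerically: independent probabilities multiply, and the log is exactly what makes the corresponding information add. A minimal sketch:

```python
# Why the log in Shannon's definition: surprisal of independent events adds.
import math

def surprisal(p):
    return -math.log2(p)

p_coin, p_die = 0.5, 1 / 6

# Independent joint event: probabilities multiply...
p_joint = p_coin * p_die
# ...and, thanks to the log, surprisals add.
print(surprisal(p_joint))                    # ~3.585 bits
print(surprisal(p_coin) + surprisal(p_die))  # ~3.585 bits, same number

# Shannon entropy is just the average surprisal of a source.
def entropy(dist):
    return sum(p * surprisal(p) for p in dist if p > 0)

print(entropy([0.5, 0.5]))  # 1.0 bit: a fair coin
```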
But the deeper question, as you said, is: is there something intrinsic beyond extrinsic relations? I think I'm open to both. My intuition is that there's something intrinsic to my consciousness; there's clearly a first person perspective. And what gives me pause is that now we're back to metaphors and language. You pointed out before the role of intuition. And my concern, as a psychologist, is that intuition, and lots of work shows this, is based on my experience. Your intuition changes as you experience new things. And if you trace intuition down to the core, there may be a genetic component, but that's just the experience of your ancestry. So basically, intuition allows you to deal with the world. And my concern is: I have no intuition for the third person perspective. I've never seen it. I can't even imagine it. What do you even mean by perspective? Because anything I imagine as a perspective, and some people say a God's-eye perspective, well, it'd still be 180 degrees. I can't even imagine what a squirrel sees, 360 degrees, right? So I'm obsessed with people who have near death experiences or psychedelic experiences: like, did you see 360 degrees? Because even then it's not a third person perspective, right? Because, as Thomas Nagel said, it's a view from nowhere. It would still be centered where you physically are. So what that means is, I've never experienced it; I have no intuition for it. And I think a lot of this debate about "what do you mean, everything is structure" is that, yes, my direct experience is first person, and so is yours. We only ever have a first person experience. I'm a Cartesian, right? So I believe that's all you've got. You're in Plato's cave and you're seeing the shadows. Your brain constructs this first person experience and you're stuck with it. The rest is assumption. Do you really know that it's radioactivity? Have you ever seen it? What do you really know about causality? Right? And so I think the problem is, when we think about the relationship of our first person perspective, our consciousness, to mathematics, it's lopsided: we have a lot of intuition about the first person perspective (I know what my consciousness is like) and no intuition for the third person perspective. If I try to intuit what you feel like right now, I'm probably doing very poorly, but better than nothing, because I kind of know what it feels like to be a human and sit there and so on. And if we take two equations and look at them from a purely third person perspective, we have no intuition for them either; but because we have none on either side, it doesn't bother us. If I say the Pythagorean theorem, a squared plus b squared, it's just abstract symbols that point at some kind of principle. But it doesn't really bother me, because I don't really intuit it; maybe I have this picture in my head that people draw out. But if I take my consciousness and then map it onto a seemingly third person mathematical description, I have a lot of intuition here and none over there, and then it doesn't seem to work. I'm like, well, that can't describe everything. But I worry that that is largely based on the fact that I intuit a lot about the first person perspective and can't intuit about the third. I think this is what happens in quantum field theory, where a lot of people, when they learn, no, no, no, there are no little ping pong balls, it's excitations in a field... And you say, yeah, but what is that though, a field?
They're so disappointed when they ask, what are tensors? It's like a 3D or 4D Excel sheet. It's just numbers at every position in space. Like, that can't be the world. Yeah, but it gives us the best prediction. Literally, you couldn't have the technology you have without that. Right, right. [01:15:50] Speaker B: Well, an equation like E equals MC squared, right? So what I'm thinking of is, all right: the end goal of IIT, or any other formalized mathematical approach to consciousness, let's say IIT, or a structural approach to understanding consciousness, is that you end up with satisfying, very predictive mathematical equations. Let's say a set of equations, or one equation. Let's say it's simple, right? Let's say it's B equals neuron glial squared or whatever. Right. That still doesn't... So, E equals MC squared: we still don't really know the metaphysical, ontological nature of gravity, but we can describe it really, really well. It's the best way we can describe it, and we have to bottom out somewhere and be satisfied with that as the explanation. And I think what we tend to think is, oh God, we understand gravity because we can describe it in an equation. That's not really what we're saying. We, they, whomever, the experts, aren't saying anything about the metaphysical basis of it, necessarily, or they shouldn't be. So if we end up with that result in consciousness science, you're going to be satisfied, it's going to be beautiful, and it will still be an approximation, because science is endless. It will be a law, but it won't necessarily say anything about the metaphysical basis of consciousness. So we don't have to step on metaphysical toes necessarily, right? [01:17:17] Speaker A: Yes, I think that's right. But I do have stronger leanings, and my argument would be parsimony: Occam's razor. Quine has the same argument. If this describes the world so well, and that's all I need to describe it, let's say quantum field theory, it doesn't really make sense to also assume that there are pink unicorns we can't see that magically change particles. Right? And so the argument, for me, if you say, look, okay, we have it now... let's imagine, and again, this is bad, because we shouldn't run thought experiments about the future; we don't know what it's like, and people have been proven wrong and wrong again in doing that, because all we do when we imagine is test conceivability, and our conceivability might be fundamentally limited; we're just testing our intuition. But let's do it for a moment, and then say: well, I still feel there should be more. If you're describing my consciousness very well, and you can recreate on a computer screen what I'm seeing, and we're getting there very quickly, right? People can now, actually, to some degree, decode dream experiences. What you see, you put in a scanner, you train an AI, you recreate it with a generative AI; it comes very close, and it will certainly happen within a lifetime. But that's still not the same, though, right? Because I'm still looking at that from my first person perspective; maybe I'm color weak or colorblind and it looks different, and so on. So I get that, when you say, well, there's something intrinsic that you can't capture with these extrinsic relations.
My argument would be: okay, but if you want me to adopt more in my metaphysics, my ontological worldview, there should be more than just my intuition. You should have some piece of data or some logical argument that forces me, compels me, to accept that there's more. So I end up, carefully, and again, I'm a Bayesian, so I never believe I know the truth, saying: well, in terms of likelihood, the simple model that all is structure seems, on the horizon, quite plausible. It's not there yet, but I can see that it might work. But I agree with you that the danger is relativism. The danger is to say, well, science doesn't know the truth, or, as I said, "there is no truth," which contradicts itself, and then to say anything goes. The worst statement in this realm, I find, is "all models are wrong, but some models are useful," which is demonstrably wrong. I mean, there are some models that are one to one models; there's nothing wrong about them, right? So I always bring up the meter, which was defined as a model. Some people came together in Paris, and they took a piece of metal, they made it exactly one meter, and... [01:19:43] Speaker B: They said, that's a meter. That's a definition. That's not a model. [01:19:47] Speaker A: That's a model. Mathematically, that's a model. It's a concrete representation of something abstract, right? A realization is what mathematicians call a model. [01:19:55] Speaker B: That's how they define it. That's how they define a meter. That's not a model of a meter; that's how they define a meter. [01:20:00] Speaker A: Well, okay, you can take a model of something, right? And mathematicians say that's a realization; that's in model theory. And what you're saying is, I'm modeling something, and you're assuming you're making shortcuts. But you know, there can be a one to one model. Right? You could have a one to one model of the island of England. [01:20:19] Speaker B: If you take it... well, that would be England. But then that would be England; it wouldn't be a model of England. [01:20:23] Speaker A: It would be a copy, quark by quark, on a different planet, right? That's a one to one model. It's identical, isomorphic. [01:20:29] Speaker B: Oh, okay. Well, I think that some philosophers of science would have an issue with saying that that is a model, because by definition a model abstracts something away. [01:20:39] Speaker A: Yeah, but that's not the mathematical definition. And actually, I think there's a confusion here. In science, what is a model? People use "model" to mean theory. But a model is a realization of the theory. You have a conceptual theory, and then the model is usually a computational or mathematical formalism, compelled by logic from your theory, that can be tested. So it's a down arrow, not an up arrow. It's not abstracting up; it's concretizing down. [01:21:08] Speaker B: So in the limit, I guess, the famous example is: if you completely replicate everything about the cat, what you have is the cat. So is one way to say it that, in the limit, a perfect model is a one to one mapping, but we just never, we hardly ever, reach the limit? I reject your meter example, so you'll have to come up with a different example to convince me. [01:21:34] Speaker A: If you think about it though, Paul, it's a model. It's a model of a meter, right? The meter is an abstract concept, and that's our model of a meter.
[01:21:44] Speaker B: Okay, maybe we can quibble about definition versus model. Yeah, because that's how it's defined physically. [01:21:50] Speaker A: Well, now we define it physically: it's the distance that light travels in a certain amount of time. That is a definition. But this is a model; you just took a piece of metal. Anyway, my point is that the problem with these kinds of statements, these slogans, is that they lure you into some further thought that is not even implied, which is that all theories are wrong or something. Or relativism. [01:22:11] Speaker B: Yeah, you're worried about relativism. [01:22:13] Speaker A: And the way out, I think, is pragmatism. William James tried to say, well, some theories work better than others; you have to admit that. So not all theories are equal. And we all agree on that, because if I take an airplane to go somewhere, I hope that the theory of aerodynamics works better than some ancient Greek theories about how air works. Sure, yes. I think pragmatism has a problem, though, which is that you owe me an explanation of why some theories work better than others. That is going to be very difficult if you don't admit, as an absolute truth, that we're getting closer and closer, more and more structurally aligned, that there's some kind of structural overlap between my ideas and what really is. That's why it works better. I've yet to hear a really good argument for why some ideas could be better than others if there is no truth; it is very difficult to make that point. So then we're on the same page: yes, even as science progresses, we're learning more and more and our models get refined, but if you look very closely, the fundamental structure always remains. If you look at Newtonian physics, well, it's actually inside of quantum physics, it's inside of relativity, right? You can make Newtonian physics a special case of these grander theories, but they don't really replace the original theory. If you look at biology, the modern synthesis, using molecular biology and the insight of DNA to explain evolution at the level of genes rather than individuals, pushed by Richard Dawkins, the selfish gene and so on: that still has natural selection built in. So yeah, we're getting better and better at refining our theories, but we're not really having Kuhnian paradigm shifts. It seems that once a theory is really good, particularly when it's formalized and mathematical, it's already very close. As Piet Hein said, we err and err and err, but less and less and less. Yeah, it's probably still wrong, but we have to admit it's better than many alternatives. And then an argument could be made that at each point in time you should admit to radical doubt, that you really, fundamentally, don't know much; maybe you know you exist, and that's it. But some theories are better than others, and at each point in time we're trapped in the time we live in; those seem to be the best explanations. And then we shouldn't make superfluous assumptions; we shouldn't add something else, especially if there's no empirical or logical evidence to do so. And so, for me, that makes it enthralling and enticing to toy with this idea that all is structure.
It certainly does something. My problem with other theories of consciousness is that they make consciousness non-naturalistic, naturalism meaning you have one physical universe that is explained by physical laws, and then consciousness is something else entirely. And I'm willing to accept that, but you have to make a strong argument, because we kind of start out with naturalism: assuming there's one reality, it's coherent, it's causally closed. So unless I see strong counter evidence, I want to keep going with that. And that's the appeal of structuralism in IIT: it takes consciousness, something that seemingly doesn't fit into the physical world, and makes it jibe. It just puts it right in there. [01:25:22] Speaker B: In the brief email exchange we had before we started recording, in the literally two emails we exchanged, I sent you some topics we could talk about, and you said, okay, yeah, let's talk about that. And at some point... oh, you asked me if you could ask me questions, and I said, sure, but you know what's going to happen? I'll tell you what's going to happen. We're going to spend two hours and have covered a ton of stuff, but barely any of the questions we agreed on beforehand. And this is exactly what has happened. But there were a few things that you wanted to highlight and that I want to make sure we also cover. I already know we're just going to have to have you back on again; I have so many questions remaining here. But okay, you just brought it back to IIT, so we're totally switching gears here. We've been talking really deep, high level philosophical questions. My fault. But hey, your fault too. [01:26:25] Speaker A: It's your fault too. It's interesting. [01:26:27] Speaker B: Yeah, yeah, it's super interesting. But it turns out that a soccer team is conscious. Um, to some degree that's not the take-home, but that would be the highlight, right? That would be the headline. So let's talk a little bit about that, and maybe this will segue into some of my questions about phi, the integrated information theory measure of consciousness. You guys calculated phi based on some soccer team results. So I want you to tell me about that, and then I want to make sure we talk about your passion for open science before we go as well. So: a soccer team has some phi, some of what you guys call latent consciousness, but the phi measure is lower than any individual soccer player's phi value of consciousness; therefore, every soccer player has an individuated consciousness. That's a summary. What did I get wrong? And what did you guys do, why did you do it, and what should we take from it? [01:27:30] Speaker A: Yeah, you actually just nailed it perfectly. And thank you for saving me from drowning in the maelstrom of metaphysics. But I actually think these discussions are interesting, because what you just said seems ludicrous. And so it's very important to do what you just did, to highlight: no, this is not just philosophy; we can actually use it and analyze data and get results. But then the results have to be looked at against that metaphysical backdrop, because everything you said seems... if you had told me this three years ago, I would have said, what are you wasting your time on, Alex?
There was a YouTube competition on explaining IIT, and I found somebody in Brazil who's very good with animations, and we made some animations to explain some of the basics of IIT's mathematics. And I got approached by James Watson, who's a professor at Oregon State and works in complex dynamical systems. He said, I want to use that, I want to try that mathematics. And he had these data from FC Barcelona: they're tracking their players with GPS sensors, the ball as well, so you get the position of every player with millisecond precision across entire games, and they do it for all the games in the Spanish La Liga. So we got that data, somewhat anonymized; unfortunately, that limited what we could do. But the idea was to treat the soccer players like neurons, because IIT is substrate independent. It doesn't say that only your brain can be conscious; it says that if we have something that operates exactly like your brain does, it should also be conscious. Now, one thing we haven't talked about that I should throw in real quick: IIT is not so much worried about the action, the transitions, or the functions. It's very much interested in what we would call states. When you say F equals ma, technically that's all at the same time; no time passes. E equals MC squared: that is what we call a state. And Emmy Noether showed that function can only be explained with function, and so states should be explained with states. That's the little throw-in. So what we actually did is look at states in the game. We're not really interested in what the players are doing; what IIT does, which is more interesting, is look at where the players are standing, what they could do, and what they actually do. And that is a little more intricate, there's a lot of computational work involved, but it gets closer to what information is and how it ties to causality. So we're looking at what a player could do and what actually happens, and at the difference between these things. And so we said, okay, it's an old idea: if humans act like neurons, then maybe consciousness arises if there are enough of them. And we had to limit ourselves; that's one critique of IIT, that we cannot do this for very large numbers of units yet. We're limited to definitely fewer than 20 at this point, because there is a lot of math involved, and even the supercomputers can't quite crunch the numbers yet. So we took very small numbers: four players, two players most of the time. And then we said, look, they're active if they're not just walking around. If they accelerate more than average, they're probably actively engaged in the game; otherwise they're just walking around or standing, and that's inactive. So it's kind of like action potentials: these are zeros and ones, the players are active or inactive. And then we look at this web of causal interactions, including these higher order interactions, where two players might have an effect on a goalie that neither of them alone would have. And I suspect, based on these discussions I had about hypergraphs and higher order relations, that we might find something in the data that we wouldn't see if we just looked at it as pairwise Granger causality, correlations, transfer entropy. We didn't test for that specifically, although I've done some informal tests, and that is what seems to be the case.
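A hedged sketch of the preprocessing Alex describes, with invented variable names and made-up data standing in for the GPS traces (none of this is the paper's actual code): binarize each player by acceleration, above their own average meaning "active" (1), below meaning "inactive" (0), which yields neuron-like binary team states.

```python
# Turn continuous player acceleration into binary "neural" states.
import numpy as np

rng = np.random.default_rng(0)
n_players, n_samples = 4, 1000

# Fake per-player acceleration traces standing in for the GPS data.
accel = rng.gamma(shape=2.0, scale=1.0, size=(n_players, n_samples))

# 1 if a player accelerates more than their own average, else 0.
active = (accel > accel.mean(axis=1, keepdims=True)).astype(int)

# Each row of `states` is now a binary team state at one time point,
# like a snapshot of a small population of neurons.
states = active.T
print(states[:5])

# The phi analysis then asks how these states causally constrain one
# another, including joint effects two players have on a third.
```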
But what we found is that if we look at game outcome, and remember, the data was anonymized, so the best we could do was look at shots on goal (we didn't really know which shot succeeded or which player was which), we found a relatively weak but very, very significant correlation. In other words, if you compute the phi value of just a few players in the team, and we iterate and take an average, you can predict something about the game outcome, the performance. And then James took a second step: he computed these phi values across games and then compared the top three teams of La Liga at the end of the season against the rest. And it's a bimodal distribution: the top teams have, on average, much higher phi values than the rest of the teams. And what I like about it is that it shows you that, despite everything philosophical we talked about, if you're interested in sports betting, you might want to take a look at the paper. That is the beauty of science: in the end, this has real world implications, because you're using mathematics. [01:31:53] Speaker B: Is it implications, or is it a description? Because is there an assumption baked in there that phi is actually doing what it is purported to do, which is measure consciousness? Or is it that, based on the metrics you're using to assess integrated information, you come up with this scalar value, but it really has nothing to do with consciousness; it's just the set of axioms from which you derive all these equations and the structuralism, et cetera, and you come up with a number. So do we really need to worry that a team is conscious, for example? [01:32:21] Speaker A: I think that's why the discussion we had beforehand is helpful. And I haven't done the full due diligence yet, but the idea is the following. Yes, you're measuring integrated information; I think most would agree on that at this point, because there's a positive outcome, so we got something right there. But the interesting thing is: what do we mean by integration? What do we mean by information? We're not using Shannon information here, and for integration we're not using a simple product. What do you mean by integration? There's actually a different mathematical formalism that defines these terms. And if you ask yourself, okay, what do you mean by information, what do you mean by integration: the interesting thing IIT does is that it starts from your consciousness. This sounds crazy, but it's basically asking: what are the general, universal properties that all of your conscious experiences have, as opposed to what you experience in the moment, which is constantly changing? There are some things that are constant about your consciousness. And it derives these five axioms, or principles if you want: that there's informativeness in consciousness, that it is integrated, one whole, not split into parts, and so on, and it builds the mathematics out of that. So when you say at the end, well, okay, you computed something, but how does it relate to consciousness? I point back to the beginning, and I say that math was inspired by my consciousness; the axioms and the math come from what I distilled from my consciousness, the structure of my experience. There's the link right there. We started out, pie in the sky, trying to do math on my consciousness, which seemed ludicrous, and now we have this kind of finding, and you say, well, where's the connection?
And I can trace every step along the way back to where we started, with just meditating on my consciousness. So that is what's exciting. Now, does this mean that I can say anything about the consciousness of the soccer players, yes or no? So why do we call it latent consciousness? Well, because obviously a team is not conscious. We didn't try to show that a soccer team is conscious. But it's a test of the theory nonetheless, because, technically speaking, if we had found a really, really high phi value, higher than the phi value we assume human brains have, IIT would have said the team is conscious, and even more, that the players are unconscious: they lose their consciousness and give rise to a superconsciousness. So in a way it's not really a strong test, but the fact that we didn't find that is actually consistent with the theory. Right? We found a low phi value, and that's exactly what we should find. Now the critique, of course... [01:34:43] Speaker B: Almost like an expected negative result. Would you consider... okay, yeah. [01:34:46] Speaker A: Now, the critique is that we don't really know the phi value of each player, of the human brain, but we can assume it's more than six, or whatever we found for these players. So I do think these are baby steps. It's not a falsification of the theory, not even an attempt, but it's a demonstration of the power of coming up with a formal apparatus and linking it to consciousness by, for example, creating this axiomatic system that is grounded in consciousness. [01:35:10] Speaker B: So there are so many tweaks that you have to do, like in any analysis, right? You use the term binarize. We think we measure everything in the brain with action potentials, which we then binarize: it's either a spike or it's not a spike. And then for time series, like the velocity of the players, you have a continuous time signal, and you just take the median: ones are above, zeros are below. And so this is a huge abstraction, right? Is that a reliable thing to do? That's a huge step, and then we're going to use it as the basis for all of the calculations. And it's necessary to binarize to do this. So there are just so many of those little steps where you think, ah, I don't know about that. [01:35:53] Speaker A: Right, yeah. And this is where my heart starts to beat faster as an experimentalist. Because, let's say you've got an EEG signal or a local field potential signal and you're doing a Fourier transform: well, the math tells you that the signal has to be stationary. Is the signal really stationary? No, it's not. Right? And this is actually exciting, because here could be another example that goes deeper into the philosophy of mathematics again: real numbers. You can't actually computationally decide whether two real numbers are the same. We've got infinite decimals; how would you know? So what we do is approximate, right? We truncate the real numbers and turn them into fractions, and then we treat them as the same. And the crazy thing is, it works. I do not have to know pi to infinite decimals in order to build a dome. So nature isn't infinite; it's actually finite. It seems it had a starting point; we don't know if there's an endpoint.
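The dome point can be made numerical. A trivial sketch, with an arbitrary 30-meter dome: truncating pi to five decimals changes the computed circumference by well under a millimeter.

```python
# Truncated pi is plenty at building scale.
import math

diameter_m = 30.0           # a generous dome, chosen arbitrarily
pi_truncated = 3.14159      # five decimals, nothing more

exact = math.pi * diameter_m
approx = pi_truncated * diameter_m
print(abs(exact - approx) * 1000)  # error in millimeters: ~0.08 mm
```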
Quantum physics tells us that we can't really say anything coherent below a certain smallest piece of time and space; it seems discrete. And so approximation makes a lot of sense. And the question then, as an experimentalist, is: how much can I violate the theory? How much can I truncate and approximate, and it still works? And if it still works, it makes the theory even more powerful. So yeah, the theory uses real numbers, and no human being has ever measured a real number; it's possible that we don't have the time and space to write down a real number. But it still works if we approximate, and that, in a way, lends even more credence to the theory. And the other thing I find interesting, which IIT now does and which I don't quite see in other theories of consciousness at this point, is that you can play with it. You can say: what if I violate this principle? What if I don't binarize; does it still work if I don't binarize? Or what if I don't like this axiom? Like with Euclid: the fifth axiom of IIT, the exclusion axiom, is under heavy debate. When we dropped Euclid's fifth axiom, we found non-Euclidean geometry, and lo and behold, we can make relativity work; spacetime might be curved. Now we can do the same in IIT: we can drop the exclusion axiom and see what happens, and maybe find a non-IIT kind of consciousness theory. But the benefit is that we're still strictly sticking to the formal. We're not just using words and saying, well, maybe it's this area; if it turns on, or if it emerges in this area, we're right or wrong about these areas. We're much more precise: we still have to use these equations, or we modify these equations and see what comes out now. And I think that's what's exciting about a science reaching this step of maturation, beyond conceptualization to being more formal: it allows you to play on a different level. [01:38:26] Speaker B: So I used to do monkey neurophysiology. And the great thing about monkey brains, I have found out, and human brains, is that our neurons really spike a lot. So you can have really nice temporal resolution in terms of counting the spikes over time, and you get what's called a spike density function, where you can see the rate of the spiking over time. I found out mouse brains do not spike nearly as quickly. So what we end up doing is collecting the spikes and counting them in larger bins to make our histograms. And so we're kind of coarse graining the temporal scale here. And thinking about binarizing and calculating phi: I'm wondering about the limits of the coarse graining. Can you coarse grain, and use coarse grained measures, to calculate phi? I'm also thinking in terms of being scale free; scale-freeness is a property ubiquitous in living systems, right? And I'm wondering if you should see that kind of fractal structure, such that it shouldn't matter what sort of temporal resolution or binarization procedure you use: should you see the same sort of phi result whether you're coarse graining or not? I'm sorry, that was a lot of information and question. [01:39:45] Speaker A: That is a really interesting thought; I've never thought about that. That's a really interesting point. Again, the cool thing is we can try, right? Yeah. So IIT does talk about coarse graining. And it goes with what I said before: the mathematical is the least arbitrary.
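A minimal sketch of what a graining search like the one Alex goes on to describe could look like in practice: re-bin one spike train at several temporal resolutions, score each binarized version, and keep the graining where the score peaks. The "score" here is a crude stand-in (state variability), not IIT's phi, and the spike train is fake.

```python
# Sweep temporal grainings of a binarized spike train; keep the maximum.
import numpy as np

rng = np.random.default_rng(1)
spike_times_ms = np.sort(rng.uniform(0, 1000, size=200))  # fake spike train

def binarize(spike_times, bin_ms, t_max=1000.0):
    edges = np.arange(0, t_max + bin_ms, bin_ms)
    counts, _ = np.histogram(spike_times, bins=edges)
    return (counts > 0).astype(int)

def score(binary):
    # Stand-in measure; a real analysis would swap in a phi computation.
    return binary.std()

grainings_ms = [1, 5, 10, 25, 50, 100]
scores = {g: score(binarize(spike_times_ms, g)) for g in grainings_ms}
best = max(scores, key=scores.get)
print(scores)
print(f"keep the {best} ms graining, where the measure is maximal")
```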
So you shouldn't be arbitrarily binning your spikes. We all do, but why, right? And what IIT says is that you should iterate through all the grainings, and then find where the maximum is located, and that is the resolution you look for, both in time and space. So IIT is not assuming that spikes are what supports consciousness. That's why I said before, I'm not saying neural activity, I'm saying neural mechanism. IIT says the spikes themselves are actually totally irrelevant. If you take all of these neurons and separate the axons, and they all live in a Petri dish and fire the exact same sequence, there would be no consciousness. Right? So it's the connectivity that is very important as well. You have to look at the pattern of activation and inactivation. Also, doing nothing is causally as relevant as doing something; that is not intuitive. But if I gift you the nicest thing in the world and you don't say thank you, it has a causal effect on me. So not doing something that is expected, if there is a counterfactual that you could have done something, actually lends it causal power. So I need to know what is active, what is inactive, and then I need to know what can affect what. I have to create this possibility space of causal interactions, then I look at what happens, and then I can, in a way, compute the difference. But what do I mean by "what happens next"? Right: a nanosecond, a picosecond, a millisecond, and so on. Now, when it comes to consciousness, we don't have to go that crazy. Look, we know it's not a picosecond or a nanosecond; we know consciousness happens on the order of milliseconds. So we can constrain our search space. But yes: is it neurons? Is it columns? There, too, we have a notion. It can't be entire areas, it's not the entire brain. But yeah, maybe neurons are not enough; synapses are probably not enough; maybe neurons, maybe columns. So we can constrain our search space, we can do these computations, and we can find the maximum and say: that is our coarse graining. Now, your idea about fractality and scale-freeness, I'll have to read up; you're going to keep me up tonight. That is really interesting. I agree with you, that would be an amazing question. [01:41:56] Speaker B: Yeah, that'd be cool though. Yeah. Let me know if you think more about that. [01:42:00] Speaker A: Yeah. And real quick: we're working on the tiny, microscopic worm, C. elegans, that lives in the soil, that people have done a lot of genetics with. It has 302 neurons, exactly. And so that seems more feasible for IIT. And the interesting thing is, they don't spike. These neurons don't spike; it's actually an analog computer. So we are running into some of these questions. I'm just limited by time, but now I'm burning to try some of that out. [01:42:30] Speaker B: Oh, that's cool. Okay. So in any case: my son has a basketball game this Saturday. If they're down at halftime, and I'm not his coach, but I'll ask to give a halftime speech, and I'll say something like, all right, boys, let's get out there and increase our phi value. [01:42:47] Speaker A: Exactly. [01:42:48] Speaker B: Exactly. And then I'll just walk away, and hopefully they'll intuit what I mean. [01:42:53] Speaker A: High integration. [01:42:54] Speaker B: High integration, yeah, like a winner. So anyway, just to bring it full circle on the study that you did with the soccer players.
So you looked at distributions of phi values and wins, losses, et cetera, and you found that you're more likely to win the game if your team has a higher phi value. Essentially, it has a higher amount of latent consciousness is another way to. [01:43:14] Speaker A: Exactly. [01:43:14] Speaker B: To say that. [01:43:15] Speaker A: Yeah. [01:43:16] Speaker B: Okay, so I want to make sure that we talk about this. After we talk about open science, are you good to hang on just for a few more minutes after that? Okay. Because there'll be a little extra time and we'll kind of go through some rapid things. So, yeah. Like many people, you are very excited about this turn in neuroscience toward open science that's happened over the past few years, with large institutes like the Allen Institute and the International Brain Laboratory. I just talked with Tatiana Engel, who's a part of that, and I was, like, a board member of it. And everyone is shouting that now neuroscience is going to be a lot like physics has been in the past, and that is a great thing. So what is it about this open science that has you excited? [01:44:01] Speaker A: Exactly. You just nailed it. So I've been in touch with both the Allen Institute a lot and also the IBL. I'm actually trying to create a bridge; I haven't quite succeeded yet. But these are two vast projects where people come together, and I think the track record has been really encouraging, if not surprising, and successful. [01:44:24] Speaker B: Via publication metric, or track record via. Okay, that's different. Publication metric to me is correlated with, but not necessarily a perfect indicator of, productivity. [01:44:34] Speaker A: That's fair. Yeah, I agree with that. So, yeah, I think that's why it's not bad that we spend a lot of time talking about more than just the science. I think there are two things that I see. I've learned about the history of physics, and there have been crises in funding in physics. And I think a lot of what we're talking about is driven by two things. One is that the low-hanging fruit are gone. When we started neuroscience and fMRI was a new thing, I remember there were things you could do that now wouldn't raise an eyebrow. And similarly, once microelectrodes came around, you could step through entire areas, and a career could be built by saying, that's my area and I'm going to study that for the next couple of decades. And that's over. What that means is that more effort is required, especially for high-impact publications. And again, it's an interesting debate whether that's actual progress, but there are also more and more authors involved. And I think the reason it works is because of Google Scholar and other citation tools. In the old days, and sometimes it's still unfairly the case, we count your first-author papers or your last-author papers. So it's very important that you're last, or very important that you're first. And that becomes a problem if 12 people are on the paper. The new way of counting doesn't care where you are on the paper: when the paper gets cited, that counts for your h-index, that counts for your citations. That has broken it down. And so now it's okay to join big teams and get these big papers. Again, the physicists have led the way. And the other thing is funding.
And so, as funding dwindles, and I invite everybody to go and do their own research here, I found out that it's basically a geometric, maybe even exponential, decay since the 60s. If you look at inflation-corrected funding for research and development, R&D, in the United States as a percent of GDP, it's been a steady decline. There have been very few blips in between, and it's a trend that we'll have to deal with. And so one idea, again, is what physicists have done, because their experiments are getting more and more expensive: big satellites, big telescopes, particle accelerators. It is to bundle up and say, you know, we're all going to join forces. We build this expensive machine, and then all of us can use it; we're going to share it and make progress. We can't have a particle accelerator for every university in the United States. And I think that was the vision of Christof Koch with the Allen Institute and the OpenScope project. And for me, as somebody who was trained in the old model, the old paradigm, I was not sure this would work. I had a lot of objections: yeah, what about this, what about that? And I think the nice thing was to just do it and then see if it works. And so we were lucky enough to be one of the first teams that won the OpenScope competition, which was double-blinded and so on. And the nice thing was: they collect the data, we do the analysis, and so far it looks pretty good. So things are going pretty well. And now there's the next round of OpenScope, and if you allow me to advertise a little bit and make everybody aware, because I stayed in touch: it's completely open. What's happening there is that that review paper I talked about is now setting the ground for the experiments being run at the Allen Institute, and the data is published within a week. As soon as they preprocess it, they publish it. It's all on GitHub. So what should you do? How can you profit from that? How can you become an author on maybe one of these big papers? Well, you can just join and write analysis code. The way it works is that we write the code, we publish it and share it, and somebody else just builds on that. We have weekly meetings over Zoom, and those get posted literally the day after on YouTube, so you can follow up. There's an entire website where you can see everything that has been done so far and where you can just jump in. There's even a discussion board that gives you starting points. Anybody can join: no PhD required, no academic affiliation. Any person in the world with Internet access can join. What sets the limit is whether you can really write code and contribute to the discussion and the experiments. And of course, the data is amazing; it's what the Allen Institute can do that other people can't. Part of this project is, of course, two-photon imaging. Part of it is doing the exact same experiments with six Neuropixels probes. And there's even an experiment where single neurons are imaged at hundreds of synapses at the same time, near the soma and across the dendritic tree. You can get pretty much single glutamate vesicles, and you can see the output of the neuron in terms of calcium signals. So you can do single-cell computations; that can now be done with these high-speed two-photon techniques, and you get the data for free. It's literally just sitting there as we speak, waiting to be analyzed. So the publication track record so far has been extremely positive.
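Since the data is, as Alex says, just sitting there waiting to be analyzed, here is a minimal sketch of what opening one of these files can look like, assuming the data is in the Neurodata Without Borders (NWB) format he mentions shortly and that you have downloaded a file locally. The filename is hypothetical, and whether a given file contains a spike-sorted units table depends on the dataset.

```python
# pip install pynwb
from pynwb import NWBHDF5IO

# Hypothetical local filename; the real open datasets are indexed on GitHub
# and public archives, and you would download a file first.
with NWBHDF5IO("sub-001_ses-01.nwb", mode="r") as io:
    nwbfile = io.read()
    print(nwbfile.session_description)
    # Spike-sorted units, if this particular file contains them
    if nwbfile.units is not None:
        units = nwbfile.units.to_dataframe()
        print(f"{len(units)} units; unit 0 has "
              f"{len(units.iloc[0]['spike_times'])} spikes")
```

In practice, as described next, the published notebooks already wrap this kind of loading code for you, so you can start from a working example rather than from scratch.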
And my experience in these interactions: I thought, that's chaos, it can't work. It's working out extremely well. There's steady progress. As you work on something, you push your code to GitHub and share it with other people; in the morning you wake up and somebody else, in a different part of the world, might have worked on it and made some progress. It couldn't be any smoother and faster. And so that is where I see a lot of the future of neuroscience going. [01:49:23] Speaker B: It couldn't be any smoother and faster once you learn how to interact with the API and join up and get the data, et cetera. I was talking with Tatiana about this also: what is the learning curve, in terms of how painful is it to just get up and running? [01:49:40] Speaker A: Essentially, I would say nil. For example, I said I'm very bad at math. I'm also very bad at coding, but thanks to vibe coding, right. People publish Jupyter notebooks. I use Google Colab so I don't have to install Python, none of that; that used to be painful in the past. So I open a Colab notebook, I put the code in, I ask ChatGPT or Copilot to explain the code, and then I tell it what I want to fix. Now, having said this, I do know enough coding. Obviously, I've done a lot of MATLAB code writing. And so, trying not to be too extreme: it's just like the French I learned. I can understand a lot of French; I can't really speak it anymore. So I can read the code and I can catch mistakes, and I can see where it could be shorter. So that is, I think, still needed; you can't just vibe code and trust it. But it's all open source as well, so other people will look at your code, and the code gets reviewed as it gets used, with lots of sanity checks built in. So, no, I think that even if you know very little about coding and nothing about the data structure, you can use the code that's already on there that loads this Neurodata Without Borders data and hands it to you in a Google Colab format, and you can do it. A Google Colab notebook, for those that don't know, is literally a website where you click buttons and it runs the code and plots the figures, and you can expand on it, you can edit it. So luckily this is all happening at the same time. That's, I think, why there's this rush of energy for trying these new ways of doing neuroscience. [01:51:07] Speaker B: I have a personal hang-up, and it's like a knee-jerk reaction to these really super large institutions, and maybe you can help me get over it. I know it's irrational, but I always root for the underdog in sporting events; there's, you know, a David-versus-Goliath kind of thing. And the Allen Institute and the International Brain Laboratory are coming out with these thousand-author publications in glossy journals left and right, and I can't help but see Goliath in that. It's almost like I want to root for the small laboratory doing, you know, their own sort of thing. I know it's a fundamentally different thing, because in this case Goliath isn't saying, well, we have all the data and you can't have it. They're saying, here, please, everybody. And there are pains taken to make sure that it's accessible, with tutorials, et cetera, which is, like, the biggest issue with these large outfits.
And yet there's still part of me that sees it like, well, if you don't join, you're not part of the team, you can't advance, you won't get into high-tier journals, so you must join. And it's almost like a forced takeover of the academic publication rat-race cycle. Is there any merit to that, to my scared, knee-jerk, anti-dominance reaction there? [01:52:34] Speaker A: I share it. So yeah, obviously I didn't grow up in the United States, and the reason I came here was the unique funding model of the United States: small PIs, R01s, and so on. The problem is that it's not working anymore. It's working for some, but it's increasingly hard to get funding. The rates, we don't even know what the rates are anymore; success rates aren't published. I only know that I score pretty well and I seem to be far off. And so then the question is, what else can I do? The other thing is that my experience has been that the problem is not so much an institution; the problem is how democratic versus top-down it is. So the problem is less institutionalism than authoritarianism. The experience for me has been that it can be difficult for a single lab to gain even the intellectual independence that seemed to be promised. And there's a certain practicality baked into that: you need lab space, you need to pay for it, there has to be a certain amount of money coming in, and so on. That also forces you to not just do research into the blue, but to listen to other people about what can be funded and what they think can be funded. Now, the way I see it at the Allen Institute, my experience has been that there's very little top-down, which is great, and people like Jérôme Lecoq, who runs the program, deserve a lot of credit for trying to make it as democratic as possible. Now, there's always a little bit of hierarchical structure built in, and it's necessary because of the legal framework. There are certain things we have to be careful about, in terms of, well, how do we deal with authorship, how do we deal with copyright, where it's good if you have a general counsel nearby who can craft some legal text and so on. But the important thing for me, if I think back to Greek times, when we think about the birth of democracy, flawed as it was back then: the agora was the important thing, the marketplace where everybody can meet. And that can be a pretty big agora, but you need an agora, you need a place where you can meet. It's going to be very difficult, if not impossible, for some of us to work together on big neuroscience or consciousness projects if we don't have infrastructure or a place to meet. And so that's where I can see that institutions can actually be really helpful, as long as they are as hands-off as possible. I don't have experience with the IBL, but at the Allen Institute, with OpenScope, so far that has worked really well. Again, it's just us meeting over Zoom. There is, of course, structure in terms of what we'll talk about today and trying to keep things on time, but everybody has a voice. It's not like somebody raises their hand and wouldn't get called on. And everybody can post on the forum. And as I said, I don't think you need an academic email address. And so that's where I thought before: does this work, does this not work?
And to my surprise, this looseness, this almost anarchic system, really seems to work, and it has restored my belief in democracy. If you get many people in the room, you get the best outcome. [01:55:18] Speaker B: Okay. Because the current government does not make me feel better about democracy. And that's what I worry about, you know, like in the Senate or something: you've got to scratch backs and make sure Tanya's happy just to get your little part of the bill passed. And I see that as the danger, as a macrocosm. Okay. Anyway, you made a good case for it. One of the things you pointed me to, and that I will point to in the show notes: I'll point people to your YouTube channel, and we might be on your YouTube channel right now, and the papers that we discussed, and also a textbook, if you would like me to. Anyway, unless you don't want me to. So the textbook that you're writing: I actually read it really quickly because it's still in early stages, even though I'm sure you've taken a lot of time on it, because it's fundamentally a different style of textbook, where you really go through the philosophical underpinnings of why what you're going to say is important, and you have to align all those things. So much of what you mentioned today is described, in one form or another, in this textbook in the works. People can go to it and essentially watch you write the textbook as you write it. It's a very different kind of textbook, and I really enjoyed it so far. Some of the chapters are still blank, and I was kind of happy about that because it let me read it faster in preparation. But I'm looking forward to seeing how that comes out as well. Alex, thank you for going over time with me. I knew it would be fun, and it's been a lot of fun, so thanks. [01:56:48] Speaker A: Likewise. Thank you so much. Appreciate it. [01:56:58] Speaker B: Brain Inspired is powered by the Transmitter, an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives written by journalists and scientists. If you value Brain Inspired, support it through Patreon to access full-length episodes, join our Discord community, and even influence who I invite to the podcast. Go to BrainInspired.co to learn more. The music you hear is a little slow, jazzy blues performed by my friend Kyle Donovan. Thank you for your support. See you next time. [01:57:47] Speaker A: Sam.
