BI 076 Olaf Sporns: Network Neuroscience

July 04, 2020 01:45:57
Brain Inspired

Show Notes

Olaf and I discuss the explosion of network neuroscience, which uses network science tools to map the structure (connectome) and activity of the brain at various spatial and temporal scales. We talk about the possibility of bridging physical and functional connectivity via communication dynamics, and about the relation between network science and artificial neural networks and plenty more.

Notes:

View Full Transcript

Episode Transcript

[00:00:01] Speaker A: So what makes us human then is, I think, nothing magical. Nothing. I don't think it's a special cell type, a special brain region, or a special type of connectivity or topological feature. It's the sudden explosion of possibilities that occurred when our brain topology became capable of using our bodies and feeding itself information in new ways. So there's a network there, a larger network that's above and beyond what we can measure in individual brains. But that's the way I think about it. I was working on some of this stuff, you know, 20, 25 years ago, and believe me, nobody had any interest in it. I gave my first talk on small world networks in 1999, and exactly three people showed up. Wow. [00:00:56] Speaker B: This is Brain Inspired. What pops into your head when I say network? Do you think of an artificial neural network like a deep learning model, or do you think of real neurons and their connections in brains? Maybe the cities in your country connected by roads, maybe an ant colony. If you're Olaf Sporns, all these things pop into your head, because everything is a network. Olaf has been studying networks for many years now, and specifically networks of the brain, which happens to be the title of the book he wrote about a decade ago, Networks of the Brain. Olaf and his colleagues are responsible for giving us the word connectome, which is the wiring diagram of the brain at various spatial scales. That's a structural network, the connectome. But there are networks made of the activity patterns of our neurons as well, functional connectivity, and all the network dynamics in between. And in the past 10 years or so, the study of brains using network science has taken on the name network neuroscience. And that's what we discussed today.
We talk about where network neuroscience came from, where it is now, and where it's headed, and how Olaf thinks of brain networks relative to artificial neural networks like the current deep learning models. So hopefully this conversation serves as an introduction for you to learn more, which you can do through the show notes at BrainInspired Co podcast 76. Speaking of learning more, I was on a run yesterday. I was about halfway up the mountain, gasping for air, and in my oxygen deprived state, I realized something I should have been doing all along in this podcast, and I will start doing it before I air an episode. To those of you who would like it, I'll send an email with a relevant paper or paper abstract, or a link to a video, something like that, that will serve as a primer for the upcoming podcast episode. So if you use this podcast as a source of education, or you just want to get more out of it (I know it can get sort of deep and technical sometimes), hopefully you'll get this email from me. You can digest whatever I send in the email and then let your subconscious do its thing before you listen to the episode. So I put a sign up box right on the homepage at BrainInspired Co where, if you sign up there, you will receive these emails about upcoming episodes. I'll also do this on Patreon, of course, where sometimes before I even record an episode, I ask my Patreon supporters if they have specific questions for the guest that I am soon to speak with. Okay. Olaf was a pleasure to speak with, and it's exciting to think of where network neuroscience is headed in the near future, and it's good to know that people like Olaf are working on it. Thanks as always for listening, and enjoy. Olaf, you just informed me that you are new again just today to your now useless office. How does it feel to be back in your office? [00:04:21] Speaker A: Well, it's a little surreal. The department building is completely deserted and we're still shut down.
We're going to open up next week, presumably with some research activity, but for the moment everything is completely empty. I haven't been back here very much at all, maybe twice or so over the last two months, and it's odd to be back in your office, seeing it after so many days. [00:04:47] Speaker B: Do you miss it though? Does it feel like coming home? [00:04:49] Speaker A: In a sense, I do miss it. One thing I miss is that the work life balance is a little out of shape. I used to use my office a lot for work and then go home and not do a lot of work. Now I'm working from home, so now it's all mixed up. Yeah, I guess that's the new normal. And also I miss my books and my papers that are stacked up in my office. I don't have any access to those. I've lost that. So, you know, but we're managing well. [00:05:21] Speaker B: I'm glad I could draw you back into your office for an hour or so here. So. Well, welcome to the show. I want to say thanks for coming on, but also thank you for running a journal that's completely open access, the Network Neuroscience journal. So thanks. [00:05:37] Speaker A: Yes, I started that journal with many colleagues in the field in 2016, and I was an early adopter of the open access model with PLOS, going back almost 15 years now. And I strongly believe in the open access model for publication and sharing articles for free, making them immediately available to everyone. That's what we do. We've done this from day one. And it's a model that apparently is being adopted more and more. [00:06:08] Speaker B: So how's the journal going? [00:06:10] Speaker A: We're doing great. We have a steady flow of submissions. We're in an area that I think we're going to talk about more later today in this conversation, network neuroscience. It's a burgeoning, rapidly growing subfield of both neuroscience and network science.
And we get really great submissions, and we have an enthusiastic board of editors, and our reviewers are doing a great job, too. So we're doing well. And I'm hoping to expand the journal further in coming years. And I'm really enjoying working on it. [00:06:45] Speaker B: That's great. I mean, just reading the literature. You said it's burgeoning. It feels like it has burgeoned. But I'll ask you about that in a little bit here. Okay. So, network neuroscience. So I understand that the lofty goal of this sub discipline, network neuroscience, is to use complexity and network science to bridge all of the levels in neuroscience. So that's from the molecular networks within individual neurons, even up all the way to social networks between individual people. And as a collective, today, we're probably, I think, going to focus mostly on the level of neurons and the structures that they form, the connectome, and the dynamics and functional activity that they give rise to. So I guess I'll just start by asking you how network neuroscience conceptualizes brain function. [00:07:42] Speaker A: Good question. And I've come across people who've said to me, well, networks have been around for a long time, and so there's nothing new about it. However, there is, because while the term network has been used in the neuroscience literature for quite some time, there's a technical way in which we use the term in our little sub area, and that is: a network is a complex system that's been divided up into nodes and edges, elements and interconnections, and we represent it as a graph. That is a very technical meaning of the term network. And that approach does not have a long history yet. I remember starting doing this with some of my colleagues back in the 90s, when there was really no interest in this at all. And now it's grown tremendously, in part because network science has grown. Network science is a discipline that deals with networks in all contexts, from epidemiology.
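The technical definition Sporns gives here, a system divided into nodes and edges and represented as a graph, can be sketched in a few lines of Python. This is purely illustrative: the region labels and weights are hypothetical, and networkx is just one common tool for this kind of representation.

```python
import networkx as nx

# A toy structural network: nodes stand in for (hypothetical) brain
# regions, weighted edges for anatomical connections between them.
G = nx.Graph()
G.add_edge("V1", "V2", weight=0.9)
G.add_edge("V2", "PPC", weight=0.6)
G.add_edge("PPC", "PFC", weight=0.5)
G.add_edge("V1", "PPC", weight=0.3)

print(G.number_of_nodes(), G.number_of_edges())  # 4 4
print(G["V1"]["V2"]["weight"])                   # 0.9
```

The same structure serves either reading of an edge that comes up later in the conversation: an anatomical pathway in a structural connectome, or a statistical dependency in a functional network.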
We're living in a network science world right now, because some of my friends in the field are modeling the spread of COVID, and that's a network science application. But also in social systems and technological systems, the Internet, of course, then social networks, and of course biology: networks of proteins, networks of cells. The brain is an example par excellence. This is really a technically founded, technically precise undertaking of understanding a system like the brain from a perspective that is based on networks as collections of nodes and interactions. The nodes can be neurons, they can be brain regions, depending on the recording methodology or the scale we adopt. And the interactions can be synaptic connections, physical connections, pathways, or functional interactions, dependencies, statistical dependencies, et cetera, that we talk about in functional connectivity. That nexus of network science on one side and neuroscience on the other is fairly fresh, and it has taken off like a rocket ship, really. It's amazing to see for me because, like I mentioned, I was working on some of this stuff, you know, 20, 25 years ago, and believe me, nobody had any interest in it. I gave my first talk on small world networks in 1999, and exactly three people showed up. Wow. Yeah. The guy who invited me to give the talk and his two graduate students, and that was it. [00:10:25] Speaker B: You need a world to have a small world network. And that wasn't quite a world. [00:10:29] Speaker A: No. And if I had had any sense of risk aversion, I would have given up on it. I would have sort of said, well, that's not going to work, because there was really no interest. But we prevailed. And then a few years after that, the idea of the connectome came about, and mapping our nervous system in its entirety and at some scale, with all connections and all elements.
And that then became the starting point, 15 years ago, for where we are now, which is really a big field. [00:11:03] Speaker B: I mean, I want to ask about 100 questions right now. First of all, I'm going to have David Krakauer on the show soon, who's the president of the Santa Fe Institute. And I know he's been very steeped in the COVID-19 modeling from the network and complexity side. So, you know, that's what everybody's turned to. That's the most famous network right now, I suppose. [00:11:26] Speaker A: It sure is. In fact, I'm teaching a graduate course on networks. I just taught it in the spring. And in my first class, that's one of the applications that I put up on the screen and say, yeah, this is a network. It has a virus spreading. You know, we've had a few of these outbreaks now, with Ebola and with H1N1 about 10 years ago. And some of my colleagues, good friends in the field, they are modeling that stuff, literally trying to forecast in real time. It's very data driven. It's very much an application of network science. And I wish we had that kind of predictive, data driven modeling in neuroscience. We don't quite yet have the data intake. We don't quite yet have a viable computational framework to use to really make sort of predictive models of brain function. That would be fantastic to have, and maybe we'll get there one day. [00:12:22] Speaker B: Yeah. Well, just before we move on again, I usually save this kind of question for toward the end of the show, but since you brought up the three people showing up to your 1999 talk, do you think that that's an important character trait, to just keep going in the face of these setbacks? I don't know if that's an obstacle, but it is a bad sign. What would you call that experience?
[00:12:45] Speaker A: Yeah, I often talk to early career scientists, grad students, postdocs, about what they should specialize in, how they should shape their careers. And I'm a bad example, because I bet on things that didn't really have a high probability of success. Even computational neuroscience, which is now an ingredient of almost all neuroscience, was a very small subfield back in 1990 when I got my PhD. When I got my PhD, there was no fMRI. [00:13:24] Speaker B: Okay. [00:13:24] Speaker A: So nothing that I learned during my PhD, classes, courses, projects, whatever, really prepared me directly for many of the technical things that I'm dealing with today. So, you know, I kind of persisted, in part because I had a few good friends in the field who were equally persistent and who gave me the kind of social support that I needed. You know, Randy McIntosh, Giulio Tononi, Karl Friston, those guys helped me kind of, you know, keep pushing on in that direction. And eventually we made it to the point where it took off and became an activity that is now widely adopted. [00:14:09] Speaker B: But that's sort of a principle: that low probability, high risk, high reward. That seems to be a recipe for mostly failure and some success. And I don't know how you get to the success part of it. [00:14:24] Speaker A: Well, I mean, I was working on small world connectivity patterns in the 90s, and complexity. But I also had other lines of research that I was pushing simultaneously that were maybe not quite as fringy as that, or as arcane and obscure. I was working with robots for a while, trying to understand how the brain is embedded in its environment, how the interactions, sort of the dynamic interplay between behavior, movement in the real world, sensory sampling, and then brain activity, how that kind of plays out. And robotics was a test bed for that.
So I actually had a robot lab for a while and published in that field and went to meetings at the time quite a bit in an area that's called embodied cognition. It was not totally unconnected from my network and complex systems leanings, of course, because there is still that element of connectedness, right? The brain is connected to the world and to the environment. And being an active participant in that interplay, not just a passive sort of, you know, intake of information, processor kind of thing, but really shaping the information itself, that was an important lesson that I learned when I was working in that field. And I pursued that in parallel also when I moved here to Indiana University for a few years, I had a robot lab here. And then in 2006, 2007, 2008, connectivity suddenly took off and it sort of consumed my research program entirely. So that's all I'm doing now. The robots are gathering dust. [00:16:00] Speaker B: So work all the time. Keep your fingers dipped in a few different buckets along the way. Know when to stop and know when to grab onto the reins and let it ride, huh? [00:16:12] Speaker A: And also reach out to others. One of the most important pieces of career advice I like to give sometimes jokingly, is maximize your betweenness centrality. In other words, think of yourself in your social network as a scientist, as someone who builds bridges, okay? Someone who is in between fields, who makes the connection, let's say, between robotics and neuroscience, or between neuroscience and network science or complexity. Someone who is conversant on both sides. Someone who can bring to the table expertise that otherwise isn't available and makes that connection. It's often those connections that ultimately grow into the next big field or activity that blossoms from that. And you have to have a certain amount of self confidence, maybe and persistence. 
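Sporns' joke about maximizing your betweenness centrality has a precise network science meaning: betweenness counts how often a node lies on the shortest paths between other pairs of nodes, so a person who bridges two fields scores highest. A minimal sketch with networkx (the two "fields" and the bridge node are, of course, invented):

```python
import networkx as nx

# Two tightly knit "fields" (triangles) joined only through one person.
G = nx.Graph()
G.add_edges_from([("a1", "a2"), ("a2", "a3"), ("a1", "a3")])  # field A
G.add_edges_from([("b1", "b2"), ("b2", "b3"), ("b1", "b3")])  # field B
G.add_edges_from([("a1", "bridge"), ("bridge", "b1")])        # connector

bc = nx.betweenness_centrality(G)
# Every shortest path between the two fields runs through "bridge",
# so its betweenness exceeds everyone else's.
print(max(bc, key=bc.get))  # bridge
```

Note that "bridge" has only two connections, the fewest in the graph; betweenness rewards position, not raw connectedness, which is exactly the career point being made.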
And even though I never really reflected on it very much at the time, it turned out that I made some choices that apparently have paid off. [00:17:09] Speaker B: Well, sorry I've taken it so far off course already, but these days I know that it's networks all the way down for you. You think in terms of networks with virtually everything, like you were just saying. And so that was in 1999, you gave that three-attendee talk. And then 10, 11 years later, you published Networks of the Brain, which was, I don't know, the first book about brain networks, wasn't it? [00:17:40] Speaker A: In a sense, yes. I mean, there are always examples that reach back further. And, you know, I have a lot of respect for many senior colleagues in my field who have been thinking along similar lines. I've mentioned a few names already. And so I was building on that partially. But yeah, I wrote that book. I don't know how I did it. I wrote it in 10 or 11 months while I was doing administrative work. The lab never stopped. I was traveling. I remember I somehow managed, but I think in part because I had that plan in my head for a long time, of writing that book and making that connection between complexity science, networks, and brain function. And I wanted to get some of the key ideas that have animated my own work until now, I wanted to get them across. And so I was very happy to. It was a great exercise to write. It allowed me to be a scholar for a while and work on something entirely my own. And I'm so pleased with the fact that so many people have read it. And even today, and it's now a decade since it came out, I still come across people now and then, in places that I visit, who pull out the book and say, it really made a difference to me to read this, and it really got me started in my own way. And that's the best reward you can have. You know, somebody actually reads it and takes it to the next level.
It rarely happens with papers, right? Papers have a shorter lifetime in many cases. This book is now a decade old, and it's still apparently doing quite well. [00:19:25] Speaker B: I was going to say I have a bit of regret, actually, because I remember when the small world network stuff was just really taking off, even in the popular press. And I remember when your book came out and I thought, oh, I should really read that, because it seems like this network stuff is taking off. And then I didn't, but I recently did. And first of all, it's extremely well written. It's just a very easy read, and it gives, you know, a great overview of the field. And I know that network neuroscience has come a long way since then, but I still think it's a wonderful introduction to the topic. And I don't know if a second version is coming out, and if it's going to be 10 times as thick or what. But what I want to ask is, since then, we've come a long way. So what is the broad current picture in network neuroscience? [00:20:17] Speaker A: Well, first to your question about, is there a new book coming: I have plans to do something like that, but it keeps getting away from me. So let's hope I'll find a break and do it. And I think to your point about network neuroscience has come a long way. Honestly, that book, you know, the references kind of stop in 2009, 2010, and there has been so much more, and our perspective has changed a lot too. You mentioned small worldness, for instance. It was in many ways a concept that got network science started and restarted in the 90s, with that famous Watts-Strogatz paper that came out in Nature. But today, small worldness is almost a neglected topic; it doesn't matter as much anymore. In fact, now we kind of realize that so many networks are small worlds. [00:21:10] Speaker B: It's everywhere. [00:21:11] Speaker A: Yeah, it's everywhere. So in some ways, people have lost interest in it.
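For readers who haven't met the Watts-Strogatz model mentioned here: it starts from a ring lattice and randomly rewires a small fraction of edges, producing networks with high clustering (like a lattice) but short average path lengths (like a random graph). A quick sketch using networkx; the parameter values are arbitrary choices for illustration:

```python
import networkx as nx

# p = 0: a pure ring lattice (high clustering, long paths).
lattice = nx.watts_strogatz_graph(n=200, k=6, p=0.0, seed=1)
# p = 0.1: rewire ~10% of edges; clustering stays fairly high,
# but average path length collapses -- the small world regime.
small_world = nx.connected_watts_strogatz_graph(n=200, k=6, p=0.1, seed=1)

print(nx.average_clustering(lattice), nx.average_shortest_path_length(lattice))
print(nx.average_clustering(small_world), nx.average_shortest_path_length(small_world))
```

Running this shows the rewired graph's average shortest path length falling to a fraction of the lattice's while most of the clustering survives, which is the signature originally reported for many real networks, brains included.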
That's no longer the core of the field at all. We are now in a very different world. And so to your second question, where has it gone? Two things have happened that have driven, I think, the expansion of network neuroscience. One is that we have a lot more technology available to us today than we had even 10 years ago in terms of recording brain activity, neuronal activity and whole brain activity, and taking in that data and then doing data science on it, essentially: time series analysis, dimension reduction techniques, network science tools, et cetera, kind of digging structure and patterns out of those data. One of the big reasons why network neuroscience took a while to take off was because we had no data. Fifteen years ago, in 2005, when we wrote that connectome paper, which really was a manifesto that said we need that type of data and information to make sense of the brain, we didn't have any. And so now we are swimming in data. So that's driving. So there's a demand for tools and techniques to really make sense of the data. The data in itself is great to have, but you really want understanding of what the system is doing. For that, you do need, in part, network science methodology, because the brain is fundamentally a network. It is elements and interactions. And that's been driving, I think, the expansion of network neuroscience methodology into electrophysiological recordings, EEG, MEG, fMRI and diffusion imaging in whole human brains, but also model organisms: zebrafish, mouse, rat, C. elegans. Network language, terminology, tools, techniques are really pervasive in those types of investigations. And it's driven in part by the data. [00:23:21] Speaker B: This is an interesting conundrum, I think, because we run up against it all the time. There's this cycle: you get more data, and then you develop more theoretical tools to analyze the data.
And you have something like network science, where in theory you have the theoretical tools and you're waiting for the data, but then the data comes and you realize, oh my gosh, we don't have the theoretical tools. It's interesting that we cannot imagine what we would do if we had all the data. And I think this is just going to continue moving forward, but eventually we're going to have the activities of every single neuron, every single type of neuron, and it's going to go into a database. We should in theory be able to think about what we would do with that data, but we can't do that. And it's an interesting barrier, I find. [00:24:13] Speaker A: There are multiple questions there that I want to unpack a little bit. So your first point, just to make sure I'm not overstating: the tools that we currently have, even from network science, even some of the most advanced things, are not always perfect for what we want to do in neuroscience. And one of the ambiguities of network science is that there is a general framework, coming more from physics and statistical investigations of networks, but there's also that domain specific knowledge, right? It's data that comes from the brain, and we have to take that origin into account when we analyze the data and model the data. We can't just blind ourselves to that and say it's just a network. That's not quite the way I see it. So the demand for brain specific, brain appropriate, if you wish, tools and methodologies is still there. There are many questions that we cannot answer or even address with network science tools yet. And so that's an ongoing process. Secondly, that's a good question. What will we do when one day we have, let's say, a full account of all neurons, what I call all neurons all the time? [00:25:26] Speaker B: Okay, the structure and activity. [00:25:29] Speaker A: Right, exactly. So what would we do?
Just imagine, let's say humans, okay, let's take humans: 80 billion or so nerve cells, and I forget the exact numbers, but certainly trillions and trillions of spikes that occur in any given small period of time as we engage in spontaneous mental activity, but also behavior. So now what are you going to do? Okay, that is a tough challenge. [00:26:01] Speaker B: Not only that, because all the synapse formation and just all of the connections that are dynamic the entire time, which sounds crazy. [00:26:10] Speaker A: Well, there's dynamics. I'm glad you mentioned it. Even the structural connectivity isn't standing still. Twenty-five years ago, when I was doing wet lab work, that was one of the things I really wanted to image, with calcium dyes. I worked with rat cultures at the time, neural cultures taken from the rat hippocampus, to see the plasticity of neural connections across time, essentially taking videos. And yeah, absolutely. It's changing all the time. It's not standing still, it's not static, it's dynamic. The structure changes, and the activity on top of the structure changes even faster. And so this is a tough challenge. And I think that that challenge cannot be met entirely with, let's say, machine learning or some sort of throw-all-the-data-in-a-big-box approach, where you wait a long time, or maybe a short time, and out pops an answer that says, this is what's going on. I don't think we can quite take that black box approach. I do think, and I'm a strong proponent of theoretical neuroscience, that we do need some overarching mathematical principles. Perhaps we can at some point in the future even call them laws, that help us to understand our observations and structure them based on regularities that we believe exist in the world. Without that, it becomes an exercise of extracting regularities from high dimensional data.
And that's what we, for the most part, are doing now: looking for patterns. We're looking for stable, coherent assemblies of brain regions, let's say, or their interactions, or in terms of neurons, we're looking for population activity, we're looking for low dimensional manifolds within which we can describe and predict neural activity. And that's important too. But I think we're still lacking a theoretical framework that we can put over our observations and that also helps us to decide what it is we should measure. Right. There's lots of things that we may want to measure in the brain, but what are the important variables to track? There are some that perhaps we haven't even figured out yet, that we are missing entirely. [00:28:40] Speaker B: So it's sort of a fundamental thing, that we need some toys to play with before we can figure out what other parts of those toys we need to collect to make the thing. It's a terrible analogy, I apologize, but, you know, I sometimes think of laws, and even, you know, dimensionality reduction and manifolds and anything like that to reduce the parameters that we use to think about all of these things, just as shortcuts to the eventual simulation. So this is where complexity comes in, because it's so complex that to actually really recapitulate it you have to simulate the whole thing, and that takes infinity. [00:29:20] Speaker A: I would quibble with that just a little bit. I wouldn't advocate simulating the whole thing, in the sense that literally everything has to be included in a simulation to understand the real system. Because then what you end up doing is replicating the complicated system you're studying with another complicated system, and now you've gained nothing, really. You've gained something, because you can manipulate the simulation, perhaps, use it for forecasting and perturbations and so forth. But it is not quite understanding of the kind that I'm driving at.
And mind you, complexity, as I view it, does not necessarily mean an endless profusion and mass of facts and elements, you know; it's not complicatedness. Complexity has its own lawful behavior to it. It's unpredictable in many ways, but that doesn't mean that it's entirely random. Complexity, as I like to view it, sort of resides in between randomness, sort of complete disorganization, on one end of the spectrum, and complete regularity or simple replication on the other. Complexity is somewhere in the middle. So complexity, if you want to take it seriously, doesn't mean that we have to look at everything at once. It means that we need to identify those system variables, those things that we need to track that inform us about the state of the system and allow us to predict its future to some extent. And that is still an ongoing project in our field. I would say networks are one of those ingredients. And I think increasingly in neuroscience, dynamics is the second ingredient that people are certainly taking advantage of and paying attention to. And so the intersection of networks and dynamics is kind of where I feel like there's a lot of interesting things happening right now. That means dynamics on networks, but it also means dynamics of networks: how networks change across time, and how that change in turn leads to changes of the functional dynamics on top of those network structures. [00:31:40] Speaker B: Yeah, this is what, hopefully, we'll get into pretty deeply in just a little bit here, just to move us forward, because now I just want to perseverate on every point. And I don't advocate simulation, for sure, and I know that you don't either. But do you think that network neuroscience is going to provide the, quote, unquote, breakthrough that neuroscience has seemingly sought for so long and some people expect at some point to happen?
[00:32:12] Speaker A: Well, I think what it has done is offer a new perspective that says the brain is, in some ways, not that different from other complex systems. It is a complex system. It belongs to that family of natural biological entities that we study, like an ecology, like a metabolic network, like a protein network. There are neural networks and there are brain networks. There are some commonalities here, and there are some differences, domain specific differences, of course. So it takes away the aura of mystery to some extent, and says, here's a perspective that we can use. It's productive. It allows us to understand and explain phenomena better than we were able to before. When you say breakthrough, what does that mean? Right. I don't know if we knew what it meant. It's sort of an interesting conundrum. I'm not a philosopher, but if we knew what the critical question was that we needed to ask to understand how the brain works, then we would also have the answer, most likely at the same time. [00:33:16] Speaker B: Well, we can think of a breakthrough in physics, right? Like general relativity made us fundamentally understand the universe in a different way and be able to ask questions in a different way. And maybe that's along the lines of what breakthrough means. But of course we don't know what it would look like. [00:33:31] Speaker A: Yeah, I mean, I'm not a great predictor of the future, honestly. But, and I say this very informally, and I'm sure many listeners will disagree with this, I don't think neuroscience is anywhere close to that at this point. If you line up neuroscience and physics sort of on a common time axis, I think we are, you know, maybe somewhere in the 19th century right now. [00:33:57] Speaker B: Yeah, yeah. [00:33:58] Speaker A: Our tools and our approaches, we're still pretty naive. I talk about myself here. We're still pretty naive.
When we look at the brain, we're still trying to figure out basic things of how it's organized and how it's structured, and what are the important things to track. How does dynamic activity unfold? How does it relate to behavior? We're still, I think, very much at the beginning of that. [00:34:24] Speaker B: I agree with you. And that's frustrating. [00:34:27] Speaker A: It is frustrating, but it's also part of a historical process. And of course we want to push to the point where we have something like relativity or quantum theory or some other construct that really changes things and opens up new horizons and lets us see things in different ways. Those days will come. I don't know when. I do believe that having theory and investing in theory, and training students and postdocs to learn about theory, not just computational modeling, but actual theory, is important in making that possible for the future. So I think we're not there yet. I don't think network neuroscience in itself will give us the answer. I think I'm sane enough to not claim that it's the answer to everything. It isn't. But it's an important perspective, and one that has certainly given me a lot of insight, and I hope it has given other people insight too, and it will grow and perhaps link up with other parts of interdisciplinary complexity research in the future, and then maybe we'll get closer to that ultimate goal of understanding and sort of finding a theoretical framework that really fits. [00:35:44] Speaker B: Is there a flagship result or advancement that network neuroscience has made thus far? That when you're at a party and someone says, what have you done for us, you don't hold up small world networks, right? [00:35:59] Speaker A: Small worldness got us started. It was almost a historical artifact. In fact, it was around in the social sciences at least since, I think, the 70s. So it wasn't really discovered in the 90s.
It was kind of just popularized, in some ways enshrined, in this extremely simple and ingenious model that Watts and Strogatz published. Now, a lot has happened since. Where do I start? I think there have been lots of advances. First of all, the notion of taking connectivity and interaction and making that the central aspect. That is really not the traditional way in which neuroscience has been conducted. There are many historical precedents, of course, of researchers and scientists studying connections and studying anatomy, but it really came into its own in a new way through this application of network methodology. So the notion that, to put it somewhat jokingly, not all brain regions are created equal. Of course, we've talked in cognitive neuroscience forever, not forever, but certainly for decades, about parts of the brain that are multimodal or polymodal or association regions, regions that are engaged in more complex processes: planning, integration of sensory inputs across modalities, memory, et cetera. And other regions are more peripheral, let's say more engaged in sensory processing in a single modality. Those distinctions are not as clean as they appear, but nevertheless, that's been a framework that we've been working with in cognitive neuroscience forever. Now we suddenly have a way to get to that distinction using network science methodology. And that's the notion, let's say, of hubs: highly connected regions with diverse connections that cross many boundaries, that help to integrate information from many sources. The notion that there are those kinds of cortical hubs, subcortical hubs, even neuronal hubs. Hub neurons that are more heavily connected have more inputs and outputs. That certainly is something that people did not really consider that much before all this took off. We can now routinely identify those regions in neuroscience data sets, whether they are dynamic or whether they are anatomical.
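The hub idea can be sketched in a few lines. Here is a toy undirected graph (node names and edges invented for illustration, not brain data): nodes whose degree exceeds the network mean get flagged as hubs.

```python
from collections import defaultdict

# Invented edge list for a small undirected network.
edges = [
    ("A", "B"), ("A", "C"), ("A", "D"), ("A", "E"),
    ("B", "C"), ("D", "E"), ("E", "F"), ("F", "G"),
]

# Count each node's connections (its degree).
degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Flag nodes whose degree exceeds the network mean as "hubs".
mean_degree = sum(degree.values()) / len(degree)
hubs = sorted(n for n, d in degree.items() if d > mean_degree)
print(hubs)  # -> ['A', 'E']
```

Real analyses use richer hub criteria (participation in many modules, betweenness, and so on), but degree is the simplest entry point.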
The notion of network modules, or communities, that allow us to group neurons and brain regions according to their mutual affiliation. They have a similarity in their activity patterns; there are statistical dependencies. That's a technical advance that allows us to coarse-grain the system. I take a high-dimensional recording and map it down onto clusters of elements that are mutually more connected internally and less connected between. That's a standard approach now in many neuroscience applications. And it comes from network science and complexity science. A complex system has that coarse graining to it. That's one of its hallmarks. Herbert Simon decades ago talked about the near decomposability of complex systems. And he actually applied that to organizational structures and other such non-neuronal systems. That near decomposability, the fact that you can break a system down into components that are not totally separate, but are internally denser and more causally engaged internally as opposed to between, that's the hallmark of complexity. And we find it in brain networks everywhere. So these are all things that I think network science has contributed to brain studies. And we can now deploy these new ways of looking at the brain also in very concrete clinical applications. A lot of interest in human neuroscience is directed at understanding the origin of brain disorders. And those network science tools and techniques have made a difference in allowing us to look at features of the brain from a very different perspective: looking at individual variations among people with and without conditions, different developmental stages across the lifespan, in relation to genetic markers. This is all now made possible because of the network neuroscience approach. [00:40:57] Speaker B: Yeah. Oh, that's great. So there are a lot of complex systems, a lot of networks in the universe, if we step outside. So in a couple of minutes, I want to talk more about that.
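The "internally denser" signature of communities can be checked directly on a toy example. The graph and the module assignment below are invented for illustration: within-module edge density should exceed between-module density.

```python
# Invented six-node graph: two triangles joined by a single bridge.
edges = {("a", "b"), ("b", "c"), ("a", "c"),   # module 1: a, b, c
         ("d", "e"), ("e", "f"), ("d", "f"),   # module 2: d, e, f
         ("c", "d")}                           # one between-module bridge
module = {"a": 1, "b": 1, "c": 1, "d": 2, "e": 2, "f": 2}

# Count edges that stay inside a module versus those that cross.
within = sum(1 for u, v in edges if module[u] == module[v])
between = len(edges) - within

within_pairs = 6    # two modules of 3 nodes: 3 possible pairs each
between_pairs = 9   # 3 x 3 possible cross-module pairs
within_density = within / within_pairs
between_density = between / between_pairs
print(within_density, between_density)  # -> 1.0 vs roughly 0.11
```

Community-detection algorithms (modularity maximization, Infomap, and so on) essentially search for the partition that makes this contrast as strong as possible, rather than being handed it as here.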
We're going to dive deeper into the actual brain and talk about the different perspectives and contributions of network science. But stepping out of brains for a second, is there something unique about brains? You were just talking about the hub structure and the different structures that are hallmarks of complex systems. But is there something different about brains that just jumps out in the network sense relative to other known complex systems? [00:41:37] Speaker A: Well, there are many aspects of how nervous systems are structured and how they're built that are not shared by other complex networks out there. I mean, the big contrast I sometimes draw in my class is between social networks, let's say Facebook, Twitter, what have you, and the brain. One of the unique aspects of the brain, and it seems like a very trivial observation, is that it's a physical system. I have one right between my ears as I talk to you. It occupies, you know, whatever, about 1300 milliliters of volume. And I have to power it. I have to eat, I have to take in food so that I can keep making ATP and keep that thing running, because it takes up 20% of my energy budget, even though it's only 2% of my body. [00:42:25] Speaker B: And that's efficient. And that's sufficient. [00:42:27] Speaker A: Yeah. So those fundamental facts about the brain are, I mean, these are really fundamental facts, I'm really not joking here. I feel like that is where brain networks and other networks really diverge. In a social network, certain people on Twitter can have 80 million followers, apparently, and there's no cost attached to that. Those links can be made ad infinitum. But in the brain, it being a physical system, any physical connection, any synaptic connection, any axon that is made takes up volume. Every axon, every connection is, in a sense, a little cylinder that has a diameter, a radius, and a length, an extension.
And for it to be there, it has to nibble at a limited resource, and that's volume. That fundamental point was made, understood, and written down by Ramón y Cajal, our granddaddy, if you wish, the grandfather of neuroscience in general. He wrote about this, understood that fact, and thought that it was a driving factor in making neurons the way they look, driving the morphology of neurons and the diversity of morphological types that he saw. So the brain is physical. Any physical connection you want to make takes up volume and space. That sets up a competition, an economic contest, a trade-off between having the connection and making that investment in it versus not having it. And so our connectivity, the actual physical network that we carry between our two ears, is shaped by that. And I strongly believe it's shaped by that ongoing competition of, can I afford that connection? What is the value of that connection in terms of making the brain perform better, in terms of guiding adaptive behavior and promoting survival, versus energy consumption, energy needs? And so the brain has a very peculiar structure to it, in part because it has to negotiate that trade-off. It has to be functional, it has to be adaptive, it has to support the organism that it's residing in, but it also has to be economical. It has to be cheap. [00:45:00] Speaker B: That's not something that I ever really felt like I needed to think of when I was recording neurons in the frontal eye field, for instance, while monkeys made eye movements. You don't really think about that at all. And in fact, I think it's highly underappreciated in general, which is maybe what you're saying. I don't know if you're. [00:45:17] Speaker A: Yeah, yeah. I mean, I think it's become more appreciated in the last few years in neuroscience, and there have actually been a number of really important thinkers and theoreticians and people who have studied this over the years.
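A minimal sketch of the wiring-cost idea: model each connection as a straight segment whose cost is its Euclidean length, and compare two invented wiring plans over the same four points. The coordinates and plans below are made up for illustration, not anatomical data.

```python
import math

# Invented node positions (think of them as region centroids).
pos = {"A": (0, 0), "B": (1, 0), "C": (2, 0), "D": (10, 0)}

def wiring_cost(edges):
    # Total cost = summed Euclidean length of all connections.
    return sum(math.dist(pos[u], pos[v]) for u, v in edges)

# Two ways to keep the same four nodes connected.
local_plan = [("A", "B"), ("B", "C"), ("C", "D")]       # mostly short links
long_range_plan = [("A", "D"), ("B", "D"), ("C", "D")]  # mostly long links
print(wiring_cost(local_plan), wiring_cost(long_range_plan))  # -> 10.0 27.0
```

Both plans connect everything, but the long-range plan "spends" far more wiring, which is the economic trade-off being described: each link must earn its cost in function.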
So this is not an entirely new idea; obviously, people have worked on this. But it is something that sets apart the brain from, let's say, a social network or the Internet or other such constructs that perhaps we can conceptualize as complex networks out there. Because there is that cost element to it, and that link to evolution, which, as an evolutionary theorist once said, nothing in biology makes sense except in the light of evolution. And you can swap out biology and say neuroscience, and I sort of subscribe to that. We have to remind ourselves that we are historical artifacts, right? There's a particular history that unfolded, and here we are, and there are sort of non-accidental elements to that history. There are things that have to happen for intelligence to emerge. I do believe that. But our brains are also subject to severe energy, volume, space, and wiring constraints. So the process cannot go unchecked. And so in that sense, I suspect our brains are actually, along some dimensions, quite suboptimal. [00:46:42] Speaker B: That's interesting, because I was going to say it makes a connection to the free energy principle of Karl Friston, whom you mentioned earlier. In a sense, that is an energy minimization, efficiency maximization theory. And not that we need to talk about the free energy principle, but the brain and the Internet are basically the same thing, right? The Internet's just a big brain, right? It's conscious. Oh no. [00:47:09] Speaker A: Trying to pull my chain. No, no, no, no, no, no. I don't buy that at all. In fact, I don't buy, and I'm sure you were going to get there, I'm sorry, I'll take the question out of your mouth, that it's very similar to what we now see in artificial neural networks, deep learning, and all of those kinds of things that are out there now.
A very successful rebirth, in some ways, of the artificial neural network field that I remember was just emerging when I was a graduate student in the mid-80s, and was very exciting. And now it's blossoming. It's all around us, and it's of great interest to many people. It has many applications, many successful demonstrations of its power. Where I feel there's a divergence here is that none of the architectures that are implemented in deep learning networks are subject to a lot of these constraints I've just mentioned. You can make an N-squared, fully connected network. Sure. Make a network that's randomly connected. Sure. You can do that. But in the real world, in terms of a physical system that you want to build and put into a living organism, you cannot do it. It would be too wasteful in terms of volume and energy. So those brains, sort of the random brain, cannot exist, and the fully connected brain cannot exist. It's not even worth thinking about it as a potential null model or a potential sort of alternate reality, because it isn't possible to build. We are stuck with the architectures that we have, in a particular space of what you might call the space of all possible networks. And in that niche, that's where we are. And whatever intelligence and adaptive behavior we can squeeze out of that niche, that's what we've got. There are certain things we will never be able to do with the brains that we have. [00:49:16] Speaker B: Yeah, I was going to ask about this later, but let's go ahead and explore it a little bit further since you mentioned it. I mean, first of all, deep learning networks don't need to be constrained by any architectural constraints, right?
And therefore, you could say that some people do consider architecture, because in a convolutional neural network, let's say, you're constraining operations to parts of the spatial world and convolving the inputs. And in the latest work from, let's say, Jim DiCarlo's lab, I just talked to Jim DiCarlo, his models map onto hierarchical brain areas, which in some very loose sense is true to architecture, in maybe the loosest sense, though. I mean, do you think that the AI world needs to pay more attention to architecture? Is there something fundamental about our architecture, which has been shaped to give rise to whatever intelligence we have across all animal species? There's always some constraint. Is there something fundamental about that architectural constraint that is actually generative, you know, that is a cognitively good thing? Or could we just expand and make bigger and better fully connected models and somehow optimize even better? You just said earlier that we're in a great sense suboptimal, which I agree with, so we could optimize beyond us, right? Sure. That's about seven questions. [00:50:48] Speaker A: Sorry, that is seven questions. So how do I even begin to say this? First of all, just to make that clear, I'm really not an AI researcher. I'm an outsider to that field. I look in from the neural network angle sometimes, and I cheer on my colleagues in AI. I think what they're doing is really changing the way we interact with data, with the world at large. It's great stuff. It's just that the comparison between real brains of the kind that I study, real nervous systems, and AI seems flawed to me. So let's break it down. What is another analogy here that is similar? Let's take airplanes. Let's take a jet plane on one side and a bird on the other side. Okay. [00:51:37] Speaker B: Okay.
[00:51:38] Speaker A: Nobody's advocating that planes ought to be built exactly as birds are built. That would bring air travel to a halt even faster than COVID-19. Okay, that wouldn't work. So what's common between the two, between a jet plane and a bird, is that they're somehow using similar laws of aerodynamics, and they both fly, but in very different ways. And planes can do things that birds can never do, and that's fine. But birds also do things that planes never do. They can land and take off on a spot. Like many other people who are stuck at home currently, I look out my window a lot more, and I see birds doing stuff, and it's entertaining. And you appreciate how incredibly versatile and adaptive biological organisms can be, how they can navigate their environment in ways that a plane certainly would not. So planes are more like AI systems. They are built to accomplish particular tasks that a natural biological system perhaps cannot accomplish. And they do so much better. And they're powerful. They extend our abilities as humans, allow us to connect, allow us to get to places. This is good. On the other hand, if you are interested in bird flight, if that's your scientific interest, you probably want to study birds. [00:53:12] Speaker B: You don't want to study planes. [00:53:13] Speaker A: Yeah, not necessarily. But also, perhaps there is room, and this connects to my old interest in robotics, to build machines that are a little bit more like birds, that are more like insects or small flying creatures, that are much more able to navigate in an enclosed space. Perhaps they're slower, but they can gather sensory information, or whatever. So there's an active field of bionics and biological robotics, or biomimetics, as it used to be called. And that's technology. It's not building airplanes, but it's trying to build something that is more like a bird or an insect and learn from that.
And so if that's your research goal, if that's your application, then of course you would pay more attention to birds. [00:54:07] Speaker B: Would an apt analogy here be, in the AI world, to the airplane versus bird, our current computer vision, right? It's like saying, with airplane versus bird, we solved flight without understanding feathers. So with computer vision, we solved vision without understanding the brain. Is that the apt analogy? Because that's a ridiculous statement, right? [00:54:33] Speaker A: It's vision, but it isn't. I mean, it's vision for technological purposes. It's pattern recognition. Maybe it's automatic surveillance, whatever the application may be, or it's recognizing features in medical images. These are important applications, and I don't think it makes sense to really re-engineer a human brain to accomplish those tasks. Let's do what we can do with machine vision and AI and deep learning applications in that problem space. Vision, biological vision, is not just that, right? For one thing, and again I'm connecting to my old area of robotics, which has taught me a lot, I think of vision as an active process. I move my eyes around all the time. I'm not aware of it that much. But the image that I have, seemingly stable in my head, is actually moving across my retina at a fairly high speed. And I have to do that, because otherwise I wouldn't be able to build any representation of my external world. And so biological systems are not like that; the eye is not the camera that sits on top of my computer right now looking at me. Biological visual systems are active. They engage in their own motor activity to generate and create information and sample information from their environment, which then in turn is fed to the brain, which is faced with a constantly changing input pattern that it builds up into this seemingly stable mental image that I have, that I can operate on.
And so, most machine vision systems aren't designed that way, and for good reason. They are not supposed to solve those kinds of problems. And there are other technological solutions for getting images than a retina, right? [00:56:36] Speaker B: I mention it because it's a flawed analogy. The bird-airplane is a flawed analogy, and I'm wondering if it mapped on perfectly. So we're thinking along the same sorts of lines anyway. Yeah, so I did ask seven questions there. I'll just remind you, because I totally got us off track. So one of the questions, well, you mentioned that you have to have the brain inside your skull. Now, a robot or a device does have to have, let's say, an AI system in a physical space as well. So that provides some constraints, not necessarily. [00:57:09] Speaker A: Some of the robots that we built early on, even 20 years ago, 25 years ago, actually had a remote link to a workstation or a machine that was not carried on board, but to which sensory input would be sent. And then that's where the brain was, and then outputs would be sent to the effectors of the robot. So I think you can to some extent circumvent that with a machine that's operating in the real world. But where the machine, where such a robot has constraints to deal with, is the nature of the sensory input, its own motor capabilities, and its own time course that goes with that, of moving around in an environment. That cannot happen at a nanosecond scale; that has to happen at a similar time scale as, let's say, biological organisms do it. So those constraints apply. But you can actually off-board the processing and have a deep net sitting somewhere, probably, that does all of that. [00:58:14] Speaker B: Sort of. [00:58:16] Speaker A: I'm sort of trying to advocate kind of a pluralism here, right? There is AI on one side. It has its own agenda and its own goals and aims, and that's perfectly good.
And then there's neuroscience, which has its own agenda too, and goals and aims. They don't necessarily match up perfectly, I don't think. So I would not advocate that AI must pay attention to how the brain works. I think it can be very mutually informative. We can learn from each other a lot. But I think that some of the biological constraints that brains operate under that we've mentioned already, like space and energy and so forth, and history, having millions of years, billions of years of history behind us, those don't apply to AI. And so it's a flawed comparison between the two. But there will be a gray zone in between where perhaps biomimetic robotics, some engineering applications, will look more organism- and brain-like and perhaps also draw on AI advances. So there's going to be a spectrum of things here. [00:59:22] Speaker B: So, I mean, you had mentioned, and I reminded us that you'd mentioned, that the brain is suboptimal. And then I asked, well, okay, so if you take away the constraints, could we potentially exceed the brain's level of optimality? And I'm not sure if optimality is the right word, because there's a value, there's a judgment, in the word optimality. But I'll go ahead and just ask the question anyway: could we exceed the quote-unquote optimality, then, since physical hardware, not wetware, isn't restricted to those constraints? [00:59:57] Speaker A: Yeah. We have actually tried to address the issue in a simulation context here in my lab the last few years, coming up with a concept that we call network morphospace, which is a space of morphologies, or topologies in this case, of networks, where our brain sits in a particular part of that space. And so around it are possible brains that perhaps are wired somewhat differently and configured a little bit differently. And now the question is, are those better, quote-unquote better, along some dimension of performance? Okay.
And the dimensions of performance that we picked to actually study have to do with communication. It seems that one important aspect of brain function is that neurons can communicate with each other, and through that communication process can influence each other's activity patterns and sort of probabilistic response functions and so forth. And in the brain at large, what we have to accomplish that job are axonal pathways, or fiber tracts, that connect, let's say, remote brain regions with each other. They're laid out in a certain pattern. We more or less have the same pattern, all of us. This has to do with development and evolution once again, and genetics. And the question arises: is that pattern optimally configured to facilitate communication across the brain, like a perfectly well-laid-out highway system where all cities can communicate with each other along very direct paths? Or is there a way to improve upon it by rewiring, or by adding connections, or perhaps even by subtracting connections? What does that look like? So we can't do the experiment in real life. Obviously, we can't do that yet, and that yet will be a long time coming. What we can do, I will say, is study evolutionary relationships among different species. That's something that we also have done and are doing in the lab here: to look at different mammalian species, for example, and compare connection patterns to see if there are any evolutionary trends. But we can't run an experiment on this. What we can do is try to do a computational experiment. We can implement a brain network in a computer, we can simulate its activity or communication patterns, and then we can modify the structure of that network, the underlying anatomy, and we can study: is it working any better now? Okay. And so that's a theoretical exercise to sort of navigate that morphospace and see: are we on top of a hill?
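A miniature, invented version of that computational experiment: score a small toy network by global efficiency (the mean of inverse shortest-path lengths, a common communication-related measure), then test random single-edge rewirings to see how often a step in "morphospace" improves the score. This is only a sketch of the idea, not the lab's actual model or data.

```python
import random
from collections import deque

def efficiency(edges, nodes):
    # Global efficiency: mean of 1/shortest_path_length over node pairs.
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    total = 0.0
    for src in nodes:
        # Breadth-first search gives shortest path lengths from src.
        dist = {src: 0}
        queue = deque([src])
        while queue:
            cur = queue.popleft()
            for nxt in adj[cur]:
                if nxt not in dist:
                    dist[nxt] = dist[cur] + 1
                    queue.append(nxt)
        total += sum(1 / d for d in dist.values() if d > 0)
    n = len(nodes)
    return total / (n * (n - 1))

nodes = list(range(6))
edges = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (2, 5)}  # ring + one chord

rng = random.Random(0)
base = efficiency(edges, nodes)
improvements = 0
for _ in range(100):
    # Try swapping one existing edge for one random new edge.
    old = rng.choice(sorted(edges))
    u, v = rng.randrange(6), rng.randrange(6)
    if u == v or (u, v) in edges or (v, u) in edges:
        continue
    trial = (edges - {old}) | {(u, v)}
    if efficiency(trial, nodes) > base:
        improvements += 1
print(base, improvements)
```

Counting how many random rewirings beat the baseline is a crude way of asking the "are we on a hill?" question for this one metric; the real work explores far larger networks and more realistic communication measures.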
A place where we are optimal, and any step we take in any direction takes us to a worse place? Or are we on an incline, where it's really hard to climb up the hill? So those are the kinds of things that we have been doing, have been trying to do. Is communication the right metric to use? I'm not sure. [01:03:06] Speaker B: But in that method, we are on a hill, essentially. Correct? [01:03:11] Speaker A: It is the case that it's very hard to find a close rewiring that is doing any better than what we have. But here's a tricky aspect of this question, and that is: how do you define communication? Okay. Yeah, and this gets us into the whole interesting scenario of what we call communication dynamics, which is very much a problem that resides at the intersection of networks on one side and dynamics on the other. Think of a brain network as a fairly sparse network of connections among neurons. In fact, if we run the numbers over all neurons in the brain, I think roughly about one in a million of all neuron pairs have a direct connection with each other. The rest are not directly connected. So if there's any chance of them interacting at all, they need to do so through intermediate steps. So what does communication look like in such a network? Is it unfolding along a preferred route? For instance, the shortest path is often used in network science as a way of navigating a network, to go from place A to place B in the fewest number of steps. Right? If you want to send a package from here to some other city, you hope that FedEx uses something close to the shortest path. Otherwise that package will rumble around for years. [01:04:47] Speaker B: I'm still waiting on the shoes that I ordered. [01:04:49] Speaker A: Yeah. So, now, in this context I just mentioned, shipping, or communication of goods, in a real-world network like this.
Of course, FedEx does not just send out packages at random, hoping that they arrive at their destination at some point. That's a diffusion process. Okay. Very wasteful. But what's good about it is you don't need any information to do it. You just broadcast. You send out your packages, and some of them arrive, and some will take a million years to arrive. That's, however, not the way most people think about brain communication. They think there's a shortest path that's being used as a direct route between two brain regions, two neurons, if they need to connect to each other. That sets up an important problem, because the shortest path can't be discerned, can't be plotted, can't be found without some global knowledge of the network. When you come to a new city and you step into a subway station, and you look for your route, I want to get to the other end of the city, you look at the plan, you look at the subway routes, the way they are laid out, and you have a way of plotting how to go from A to B in, hopefully, a relatively short number of steps. [01:06:04] Speaker B: So your brain would need a perfect model of its own structure. [01:06:08] Speaker A: But neurons don't have that knowledge, and brain regions don't have that knowledge. There's no global map built into the brain that guides communication and kind of routes communication patterns. So the shortest path concept, in my view, is a little suspect when we apply it to the brain. At least there needs to be a discussion as to how a brain network might access a shortest path. And that's a non-trivial problem if you don't have that global, top-down knowledge of how your connections are laid out. So it's a big open area right now. I'm sort of excited about it, because I think communication is a key neuroscience question.
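The contrast between routing and diffusion can be sketched on a toy graph (an invented six-node ring with one shortcut): breadth-first search finds the shortest path, which requires global knowledge of the wiring, while a blind random walk needs no knowledge at all but takes far more hops on average.

```python
import random
from collections import deque

# Invented graph: a six-node ring with one shortcut (2-5).
adj = {0: [1, 5], 1: [0, 2], 2: [1, 3, 5], 3: [2, 4], 4: [3, 5], 5: [4, 0, 2]}

def bfs_distance(src, dst):
    # Shortest path length; implicitly uses global knowledge of the graph.
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, d = queue.popleft()
        if node == dst:
            return d
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))

def random_walk_steps(src, dst, rng):
    # Blind diffusion: hop to a uniformly random neighbor until arrival.
    node, steps = src, 0
    while node != dst:
        node = rng.choice(adj[node])
        steps += 1
    return steps

rng = random.Random(0)
shortest = bfs_distance(0, 3)
avg_walk = sum(random_walk_steps(0, 3, rng) for _ in range(200)) / 200
print(shortest, avg_walk)  # the walk average is well above the shortest path
```

The gap between the two numbers is exactly the puzzle raised above: real neural signaling presumably sits somewhere between perfectly informed routing and completely blind broadcast.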
How brain networks, how neurons, how elements in a brain network communicate with each other is an open question, and I feel like it's been understudied. We have been focused on recording from individual neurons or brain regions. But remember, those recordings do not directly capture communication patterns. They only capture the outcome of those communication patterns. The fact that neurons change their firing rates or response properties, the fact that brain regions' BOLD signals go up or go down, that is the consequence of interactions that themselves are very hard to track. [01:07:32] Speaker B: So let's just jump in and dive deeper on communication dynamics, then. There's a really nice review that you and your colleagues have written recently about this, talking about how it might be the key to bridging the connectome, the structural parts of the brain, with what's called functional connectivity. And I'm not sure if we've even defined functional connectivity yet, but maybe you can make the distinction between those two. [01:07:57] Speaker A: Yeah, yeah, that's a really important distinction to make, and it's often forgotten even by practitioners in the field. On the one side, the connectome, the way we originally proposed it 15 years ago, was meant to be about structure. It was meant to be a wiring diagram. It was meant to be a complete list of all the connections and the elements and how they're connected. Sort of a top-down map. It's a subway map, really, of how things are connected, at a given level of scale. Single neurons seemed intractable back then, and still are fairly intractable, I would say. But whole brain, you know, brain regions, that's tractable now. And that was the level that we aimed at as a first shot. So that's anatomy. Okay. But you know, to some people in neuroscience, anatomy is boring. [01:08:45] Speaker B: Boring. That's right. [01:08:45] Speaker A: It just sits there, right?
You know, that's not really what the brain is doing. [01:08:50] Speaker B: Not to you, though. You're an anatomy kind of guy, right? [01:08:53] Speaker A: I'm joking. Okay. I remember a time, 20 years ago, I was interested in anatomy always, but anatomy was not a hot area in neuroscience during that time, a couple of decades ago. I think it's gotten a lot more attention again, and I'm really glad about it, because it is the foundation of our field. Look at Cajal, okay? His incredible insights, many of them, came from him considering the morphology of neurons and how they're connected. So that's anatomy. On the other side, we have functional connectivity. So what is that? Oh, boy. Okay, now we're getting into a discussion here. So, abstractly speaking, if you have two elements in a complex system, and let's say these two elements engage in activity of some kind, voltage going up, voltage going down, spikes happening, what have you, what you can now do is construct a measure of statistical dependence between them. How much does the state of one element tell you about the state of the other? That's another way of saying it. If one element goes up, does the other one reliably go up as well? Or does it go down, or does it do whatever and I can't predict what it's doing? There are many different ways of measuring this, cross-correlation being the simplest one. You just take two time courses and you cross-correlate. And if they're highly correlated, either positively or negatively, then you have shared information between them. And you can do information-theoretic measures, you can do other stuff. So it's a very simple statement about statistical dependence. It does not, generally speaking, imply causal interaction, and should not. Okay. Functional connectivity is purely an observational construct that says two things seem to be statistically related or not.
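Functional connectivity as plain statistical dependence, in miniature: a Pearson correlation between invented "activity" time series for three hypothetical regions. A high correlation indicates shared variance, with no causal claim attached, which is exactly the point being made.

```python
import math

def pearson(x, y):
    # Pearson correlation coefficient between two equal-length series.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented activity traces for three hypothetical regions.
region_a = [0.1, 0.5, 0.9, 0.4, 0.2, 0.8]
region_b = [0.2, 0.6, 1.0, 0.5, 0.3, 0.9]   # tracks region_a closely
region_c = [0.9, 0.1, 0.3, 0.8, 0.7, 0.2]   # roughly opposite pattern

print(pearson(region_a, region_b))  # near +1: strong functional connection
print(pearson(region_a, region_c))  # negative: anticorrelated
```

In practice this is computed for every region pair, giving the full functional connectivity matrix discussed next.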
You cannot infer from functional connectivity, typically speaking, that the two elements are also causing each other to be statistically related. [01:11:01] Speaker B: You can't even infer that they're structurally connected, correct? [01:11:04] Speaker A: No, you cannot. Structural connections are much, much sparser than statistical dependencies. I can usually get a non-zero value for any observational pair. I can take any two neurons in the brain and define some coefficient of how much relationship there is in terms of their spike trains or their BOLD activity patterns. I can do that. But I also know, as I mentioned, that at the neuronal level, only one in a million of those neuronal pairs will actually have a structural connection, a direct connection. This is where it gets tricky, though. Let's stay at the whole-brain level for the moment. Let's say we have 200, 300, 400 brain regions, a good number that we work with typically in our day-to-day work these days. The structural connectivity, in terms of the pathways, the white matter pathways that connect these remote brain regions, may have a density of 5, 10, 15, maybe 20 percent. So it's much denser than among individual neurons, but it's certainly not a fully connected network. At the level of functional connectivity, if I do something like cross-correlation or mutual information of activity traces at the whole-brain level, that's always a full matrix, because all pairs of brain regions have some level of relatedness, some level of similarity, even if it's near zero, in how their activity levels vary across time. [01:12:28] Speaker B: So you have to threshold it. [01:12:30] Speaker A: You either threshold, or you try to model, and we've done this in years past, the functional connectivity between brain regions that are not structurally connected, and understand it as a consequence of multiple indirect paths. Because remember, for two sites to influence each other, it doesn't have to be direct.
It can also be A connects to B connects to C. What happens a lot, especially at the whole brain level with the kinds of signals we get in MRI, is that there will ultimately be a correlation between A and C, even though A and C are not directly structurally connected. Because there is what we call transitivity: correlations kind of propagate outward along the chain, and ultimately, because the outer ends of such a chain tend to share some variance, they tend to be connected in that functional sense. Now, that is a really simple fact. Unfortunately, there has recently been a lot of criticism of functional connectivity, which I think is slightly misguided, because I don't think functional connectivity really aims at establishing or portraying causality. It should not. [01:13:48] Speaker B: Anyways, you have terms like Granger causality and things like that. [01:13:52] Speaker A: Granger causality is a slightly different construct, related to transfer entropy and other information theoretic measures, that tries to infer a particular type of causality which says the future state or evolution of an element is better predicted by taking into account the past states of another element. So you have an element A and you want to predict its next state, you know, a second or a minute down the road. You can use its own history to do that, but you do better if you take into account the past states of another element somewhere, and that improves your prediction. In that sense, you say that that other element is causally engaged in molding or shaping the future states of A, and in that sense I'm going to ascribe a causal influence. That is a variant, I think, of functional connectivity that is still based on observation, still based on time series analysis, still based on temporal precedence cues that we often use to infer causality. But it is not a direct portrait of causal interaction either.
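The Granger idea, "B helps predict A's future beyond what A's own past provides", can be illustrated with a toy linear example. The signals and coefficients below are entirely hypothetical, and real analyses use dedicated tools (e.g. statsmodels' Granger tests) with proper lag selection and statistics; this is only the core prediction comparison.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 2000

# Toy system: B is white noise, and B drives A with a one-step lag.
b = rng.standard_normal(T)
a = np.zeros(T)
for t in range(1, T):
    a[t] = 0.4 * a[t - 1] + 0.6 * b[t - 1] + 0.1 * rng.standard_normal()

def pred_error(y, X):
    """Mean squared residual of an ordinary least-squares fit of y on X."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.mean((y - X @ coef) ** 2)

past_a, past_b = a[:-1, None], b[:-1, None]
future_a, future_b = a[1:], b[1:]

# Does B's past improve prediction of A's future beyond A's own past?
gc_b_to_a = np.log(pred_error(future_a, past_a) /
                   pred_error(future_a, np.hstack([past_a, past_b])))
# Reverse direction: A's past should add ~nothing to predicting B.
gc_a_to_b = np.log(pred_error(future_b, past_b) /
                   pred_error(future_b, np.hstack([past_b, past_a])))
```

The asymmetry (`gc_b_to_a` large, `gc_a_to_b` near zero) is what distinguishes this temporal-precedence construct from symmetric correlation, though, as Sporns says, it is still inference from observation, not a direct portrait of causation.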
In fact, causality is very hard to get to. It's a word that rolls off the tongue very nicely, has a lot of appeal to it, but boy, it's definitely not easy to get to, especially when you're in an observational context and you really can't intervene or make perturbations. We can't do that in human brain imaging very easily, at least. Of course, there are ways of trying to use models to infer causal, so-called effective connectivity, which is, I think, a very interesting way of doing it. And some of my colleagues, my old colleague Karl Friston in the lead, have done this for many years. And it turns out that that's also not totally straightforward. It's actually a hard process, because you have to identify a generative model of your data that is parsimonious, as simple as possible, that essentially generates data matching what you have observed, on the basis of a structural and physiological model that's built into it. And that's a difficult process of model selection and inference that takes up a lot of computing and thinking and so forth. Great way of doing things. Unfortunately, it doesn't scale very well. If you go to more than a dozen or so elements or brain regions, the model space becomes so large that it explodes. Yeah, it just cannot be handled effectively anymore. And so that process of inferring a causal interaction from observations is very difficult, notoriously difficult, especially in the case of a system that's so high dimensional, so fine grained and so interconnected as the brain. And we have to just be honest about it, it's not an easy process. Functional connectivity does not give us that. But I will say it's not nothing. It does have lawful relationships to structural connectivity.
Some of them are fairly robust, and it tells us something about the similarity of firing patterns or activity patterns across the brain. It has given rise in our field to a whole new way of breaking down the brain into systems that are internally coherent, because they share time course similarity, and externally diverse and different. And there's a huge literature showing how these systems relate to activations, cognitive activations when people do tasks, to anatomy, and even to other data domains like genetics and development. So it's been very productive in many ways, but it can't really aspire to be a causal framework in itself. [01:18:09] Speaker B: David Hume is just laughing in his grave right now. Well, I don't know if that covers functional connectivity well enough, but I want to bring in the communication dynamics story. So on the one hand you have structural connectivity, on the other hand you have the functional connectivity. And sort of between these two is communication dynamics. How does it sit between structural and functional connectivity? [01:18:36] Speaker A: I see communication dynamics as kind of the missing link that allows us to bridge between, on the one side, fairly static, although also changing across time, anatomy, sparse connectivity that's structural and physical and real, and on the other side, these statistical descriptions based on dependencies across time courses, which are non-causal. Underneath it all, and partly invisible to us at this point, is that process of signaling and communication. Neurons in our heads right now are firing furiously. And as a result of that activity, impulses are being sent along axons that impact on their targets, changing their status, their state, their firing pattern, subtly or very dramatically. And these communication events, of elements in our brains currently communicating with each other at furious speed and at a very high rate.
That is something that, I would contend, we cannot directly observe right now. What we can observe is, we can record from neurons, yes. We can record spikes, yes. But the spikes are recorded locally. They are sort of what a point source is doing at this point in time. But we don't really see the interaction. Even if we have perfect knowledge of all neurons all the time, we still don't see which connections are active or. [01:20:06] Speaker B: Inactive. Sort of the pathway that the information flows. [01:20:10] Speaker A: So I'm really thinking of it as a flow, as a rapid fire exchange of directed signals that run down physical synaptic pathways, axons, etc., that we can't directly observe. That to me is the causal substrate. Those are the causes of the firing patterns that we then observe, of the ups and downs of the BOLD signal, of the changes in activity and activity rate and spike timings that we see in single neurons. [01:20:43] Speaker B: The information flow. [01:20:45] Speaker A: The information flow, right. And so, you know, there's a gap here, I think, methodologically, in terms of technology not being able to really visualize this very well yet. It typically requires a process of inference to infer those interactions. This gets back to the previous point about causality and how hard that is to infer if we don't see it directly. And it is a missing link. I have never actually talked with Karl about this, and I hope I'm not doing damage to his framework here, but I do think of it in some ways as one way of conceptualizing effective connectivity, because it is the blow by blow account of which neurons, which brain regions, causally affect each other through that communication process. It's going to be a very dynamic construct. It's not going to be something that is static over any period of time.
It's going to be sort of like, you know, think of it like a bunch of flashes of light that occur almost instantaneously, change a target's behavior, and then that target in turn sends out a signal or not, and that propagates through the network like a wave or a cascade. I feel like that is where, technologically, we don't have very good tools to see that directly. And methodologically the inference is very hard to do, but it is the level that I feel is most closely related to the effective connectivity that we would really like to have. And so it's a gap in our knowledge and understanding. And I think that would close the gap between the sort of static, sparse anatomy on one side and the statistical dependencies, which are non-causal, the functional connectivity, on the other side. [01:22:38] Speaker B: But you also think of it as. I'm not sure if this is in the same vein as you were just talking about causally, but at the communication dynamics level, being able to gate information flow. Is that on a causal level as well? And either allow information to flow easier or gate it. And so it in itself can serve as a way for the brain to integrate and segregate information. Is that right? [01:23:10] Speaker A: Yes, absolutely. Very good point you're making here. So we should not think of, I don't think of these communication patterns as something where, you know, all communication channels are open all the time. That can't be. That doesn't sound right to me. I do think that, just as you said, there is a way for the brain itself, perhaps for modulatory systems, et cetera, to open and close communication channels selectively. Perhaps this is in the end the way that a navigation problem is solved, like the problem we mentioned earlier about accessing the shortest path.
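The wave-or-cascade picture described above can be sketched with a toy simulation (entirely hypothetical network and parameters, not a model from the episode): a signal starts at one node of a sparse directed network and spreads step by step along structural connections.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical sparse directed network of 50 nodes (~5% connection density).
n = 50
adj = (rng.random((n, n)) < 0.05).astype(int)
np.fill_diagonal(adj, 0)

# One initial "flash" at node 0; at each step, every target of a currently
# active node becomes active, so the cascade spreads like a wave.
active = np.zeros(n, dtype=bool)
active[0] = True
history = [active.copy()]
for _ in range(10):
    incoming = adj[active].any(axis=0)   # nodes receiving input from active nodes
    active = active | incoming
    history.append(active.copy())

reached = int(active.sum())              # how many nodes the cascade touched
```

Gating, in this picture, would amount to selectively zeroing entries of `adj` over time, which is exactly the kind of selective opening and closing of channels discussed next.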
Maybe there's a way for paths to open and close in a way that allows information to flow one way and not the other, and carve out sort of a structure where certain communication patterns are more privileged, more frequent, more abundant, while others are shut out completely. This is something I'm hoping to undertake in the next months and years: maybe to get away a little bit from these communication models that treat the brain as if it's a gas, where everything is happening all at once, and more toward a system where certain brain regions, certain elements of the system, really aren't meant to communicate, while others are communicating much more frequently, much more readily. And what are the mechanisms, what are the network mechanisms, that allow that to happen? And so that's actually one of our next ideas for projects down the road, a few months or so, to get back to communication and actually look exactly at this. [01:24:56] Speaker B: Like you said, you can't measure the communication dynamics directly, so you have to build models. Can you just talk briefly about how you build models? I mean, you give kind of a two step process. [01:25:11] Speaker A: I mean, one thing, you know, for instance, right now we're working with spiking neuron data, data where neurons have been recorded from in a setting where we have access to that individual spiking activity. So how do we infer that communication process? We do things like transfer entropy, which is sort of a more general version of what you mentioned earlier, Granger causality: a way of inferring causal interactions based on criteria such as whether another neuron's spike train adds information about the future of a neuron that. [01:25:52] Speaker B: You're looking at. Increases or decreases entropy, in other words. [01:25:55] Speaker A: Yeah, exactly.
So that results in networks where we have pairwise interactions, directed interactions actually, that are inferences, presumably, of causal dependencies. We get an arrow that points in one direction between any pair of neurons, and the arrow has a weight. Sometimes it's zero: there's no evidence of any causal interaction at all. And sometimes it's bigger than zero, and so we have some evidence, based on the spike trains, that there is a graded causal influence going from one direction to the other. So that's one way that we can try to infer those processes. But there are big problems here, because transfer entropy is an information theoretic measure, and it requires quite a lot of data to actually stabilize. And so we can't get to the blow by blow, millisecond to millisecond account of who's communicating. We have to take a lot of data and sort of smoosh it together and say, on average, how do neurons interact? Same conundrum in whole brain fMRI recordings, where we have time courses sampled usually at excruciatingly low rates, like once a second if we're really good, and often two or three seconds apart. And those are noisy measurements and have issues to do with the imaging process itself. And we typically infer interactions based on many minutes of data, many dozens of observations, hundreds of observations sometimes, that all get put together into a single matrix of functional connectivity. Okay. But what we don't yet have is a more fine grained account of what happens this second, the next second, the next second. We are working on this now with my colleagues here at IU. We're about to put some papers out there that try to address that issue in fMRI recordings specifically. But it's still a sort of unsolved problem. We don't have that blow by blow account of communication. We have an overall picture of dependencies, and then sometimes a way of inferring directed interactions from spike trains.
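A toy version of the transfer entropy computation described here, for binary spike trains with a history length of one, might look like the following. This is a deliberately simplified plug-in estimator on simulated spikes (my illustration, not the lab's pipeline); real analyses use longer histories, bias correction, and far more careful estimation.

```python
import numpy as np
from collections import Counter

def transfer_entropy(src, tgt):
    """Plug-in transfer entropy (bits) from binary train `src` to `tgt`
    with history length 1: TE = I(tgt_{t+1}; src_t | tgt_t)."""
    triples = list(zip(tgt[1:], tgt[:-1], src[:-1]))  # (future, tgt past, src past)
    n = len(triples)
    p_xyz = Counter(triples)
    p_yz = Counter((y, z) for _, y, z in triples)
    p_xy = Counter((x, y) for x, y, _ in triples)
    p_y = Counter(y for _, y, _ in triples)
    te = 0.0
    for (x, y, z), c in p_xyz.items():
        # weighted log ratio of p(x | y, z) to p(x | y)
        te += (c / n) * np.log2((c / p_yz[(y, z)]) / (p_xy[(x, y)] / p_y[y]))
    return te

# Simulated spike trains: tgt follows src with a one-step lag, plus noise.
rng = np.random.default_rng(3)
T = 5000
src = (rng.random(T) < 0.3).astype(int)
noise = (rng.random(T) < 0.05).astype(int)
tgt = np.zeros(T, dtype=int)
tgt[1:] = src[:-1] | noise[1:]

te_forward = transfer_entropy(src, tgt)   # clearly positive: src drives tgt
te_reverse = transfer_entropy(tgt, src)   # near zero: no influence back
```

The data-hunger Sporns mentions shows up directly here: the counts in those `Counter` tables only stabilize with long recordings, which is why the millisecond-by-millisecond account stays out of reach.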
But we're not directly observing or even inferring that fast dynamic that must occur. [01:28:29] Speaker B: Must occur. [01:28:31] Speaker A: But it's difficult to see. And so this is, to me, in the last remaining, who knows how many years I've got left, as I'm nearing my expiration date, I'm hoping to make some progress along those lines and then exit the stage and give it to the younger generation. [01:28:49] Speaker B: Raise your arms in triumph as you exit. [01:28:51] Speaker A: Exactly. [01:28:53] Speaker B: Well, Olaf, let's see. There's a handful of questions I still have for you in these remaining moments. Is there anything that we need to add to communication dynamics to wrap up? [01:29:03] Speaker A: No, I think we've touched upon it. It's an open topic, right? And one that, when we wrote the review a couple of years ago, one of the impulses to do that was to kind of raise the question and bring the topic out there, because I feel it's been underappreciated. [01:29:20] Speaker B: Yeah, well, that's. I mean, network neuroscience comes around. There's the structural stuff, and we all know about functional connectivity. And it's like, oh, now we have this to deal with. There's always a new set of problems to deal with. [01:29:34] Speaker A: It keeps us busy. It gets boring if there isn't something new now and then. [01:29:40] Speaker B: So going back to the brain versus other complex networks. So we have the connectome for a lot of different species now. And you kind of go through them in your talks sometimes. I don't think I've heard you talk about the human brain relative to other species. Is there anything that we know about the human brain because we're so special? Is there anything in the network neuroscience world that jumps out as unique about human brains relative to other species at this stage? [01:30:13] Speaker A: Yeah, it's a good question.
A couple of years ago, I went to a meeting, the title of which, I think, was What Makes Us Human? They were bringing in people from different neurobiological perspectives: genetics, evolution, AI, but also connectivity and the brain. And I said to the organizer, I'm very embarrassed, because I really don't. I got nothing. Okay. Apart from the fact that, of course, it has its own topology and it looks different just when you look at it, compared to, let's say, even a non-human primate brain or another mammalian species' brain. The very global things that we've so far been focused on in our field, things like hub structure or communities, or even issues about communication, they seem to be playing out fairly similarly across brains of different species. [01:31:07] Speaker B: Modularity. [01:31:08] Speaker A: Yeah, you can find modular organization all the way down to invertebrate brains. We've worked with colleagues in Drosophila for a little bit. We are going to get a lot more Drosophila data very soon, if it hasn't already arrived. And I suspect a lot of what's driving those organizational principles has to do with very general requirements of what brains are supposed to do: guide behavior, integrate sensory inputs, often from many sources, have access to past information through memory, and integrate all this in real time to guide motor behavior out there, whether it's gesticulating with your arms or speech or what have you. So there's sort of a common design specification that brains kind of have to fulfill. And so perhaps that is what's driving it: at a very global level, brains have some common features, such as modular organization, some prevalence of hub structure, some regions or some parts of the brain being typically deeper in and more diversely connected for integration of information purposes. So the glass is either half full or half empty. Okay, you can say, well, you found.
So in other words, you're telling me you found nothing. Okay, the half empty perspective. Or you say, wow, you've hit upon a universal principle. Okay, you found something that's actually widely shared across different species. And sometimes I'm on one side, sometimes on the other side. I think so far, specifically human topological features are not that evident to me. It may be because we haven't dug deeply enough; data haven't been of high enough quality or high enough resolution. But so far, certainly across primates, certainly across mammals, I would say the set of things that are shared is much larger than the set of things that are unique, it looks to me. And so that's where we are. So if you ask me what makes the human brain special in terms of its network features, I don't have a lot of answers for you. [01:33:31] Speaker B: So I think maybe the title of this episode will be Olaf Sporns: Humans. [01:33:37] Speaker A: Eh, no. That's not to say that, because remember, it's not just brain topology, it's also how the nervous system is connected and how we are connected to our environment and our world. So actually, when I gave that talk at this meeting about what makes us human, my takeaway message was that it's a mistake to look for specific human features, something totally, wonderfully enabling in our connectivity that makes us so highly intelligent, as apparently we are. Sure. [01:34:11] Speaker B: His hands are waving in the air. [01:34:13] Speaker A: I'm not so sure about that, you know. But one thing that is somewhat uniquely human is that we have found ways to transmit knowledge across generations. We have a way of building culture. We have social interactions that are, I would say, unparalleled in the animal kingdom, okay, in terms of their richness and pervasiveness. And that perhaps taps into some specific brain systems that have evolved, that allow us to act on the world in the way that we do.
So language, for instance. I'm not a language person, I know nothing about language. But people tell me it's a fairly recent invention from an evolutionary perspective. Now, we clearly have systems in the brain that are associated with language and, perhaps you might say, specialized for linguistic processes. An interesting question is, did these systems arise because language was selected for, or is that something that was there to begin with, and then language, our linguistic capacities that evolved in our social world, in our environment, kind of got a hold of those systems and kind of made them work the way they do? So to me, the big difference, what makes us humans, is that we manipulate our environment in ways that are actually threatening the planet, and we can transmit knowledge and therefore accumulate and build. And that is not the case for other animal species. Some non-human primates can do a little bit of that, maybe some birds can, but there isn't nearly that profusion. Again, at some level it has to have a brain. There has to be something in the brain that allows us to do that. But I suspect that the answer ultimately is not just in the brain, but also in the way that we are able to use our sensory motor capacities to extend our cognition outward. I'm a great fan of Andy Clark's extended cognition framework. Andy was here at IU for a few years and we got to be good friends and talked a lot. And I really like his perspective on how cognition is not just happening in the brain: we have found ways to externalize it to some extent and to use our environment to extend our capacities for representation and transmission of information. Tremendously interesting. Language, through writing, through cultural artifacts, social practices. That's what makes us human. I think that's what accounts for our capacity and our ability to do good things and do terrible things. [01:37:03] Speaker B: And you think of that as a network, I imagine.
[01:37:06] Speaker A: Yeah, I think there is another level of network here. When I wrote Networks of the Brain, here's a little tidbit for you: I chose the title deliberately as Networks of the Brain because my plan, my first outline that I sent to the publisher, was actually going to have a complete second part that deals with this issue of how brains themselves make networks and utilize networks that exist outside of them. And then my energy, I think, ran out at some point, and I had only one chapter at the end, chapter 14 I think it is, that sort of touches upon that. But it is a very important set of ideas, I think. This is again a very different thing from AI, right? AI systems, really deep learning systems, are not embodied. They are fed millions of elements of data, but they are not gathering the data themselves. And they have no social transmission or anything like that. They have no bodies. [01:38:08] Speaker B: They just say, it's a flower. [01:38:11] Speaker A: Yeah, they're classifiers. And again, very important, very powerful in many ways. We've talked about this earlier. But so what makes us human then is, I think, nothing magical. I don't think it's a special cell type, a special brain region, a special type of connectivity or topological feature. It's the sudden explosion of possibilities that occurred when our brain topology became capable of using our bodies and feeding itself information in new ways. So there's a network there, a larger network, that's above and beyond what we can measure in individual brains. But I think that's the way I think about it. So it's humbling to some extent. For instance, I do not believe that connectomics, if it's taken to the limit and we get all neurons and all connections, will at some point give us a magical answer or insight. It is a fundamental ingredient. It is necessary. It has given rise to change in our field. I think it's turning 15 years, actually, this year.
And it just became a word in the English language last year: the Oxford English Dictionary made connectome officially a new word in English. [01:39:28] Speaker B: Congratulations. [01:39:30] Speaker A: Thank you very much. I feel, you know, I've run my course, I can happily, you know, retire, because I contributed a word to the English language. But anyway, so, I mean, I'm not expecting any magic to come from connectomics. I do believe it is fundamental, though, just as genomics, you know, is fundamental for understanding biological systems. But there's no magic answer there. Indeed, it's complicated, is the answer. Right. And so coming back to what makes us human, I think there's no reductionist answer to that. There's no, like, this is the connection that makes it, or this is the cell type, or that's the gene or whatever. No, I don't believe that at all. [01:40:13] Speaker B: By the way, speaking of connectome, I had Kanaka Rajan on the show and she used the word exposome. I mean, everything is an ome now. And that's like what you've been exposed to. I'm like, is that a word? And she said yes, because all ome words are words now. So. Thanks, Olaf. [01:40:30] Speaker A: Yeah, yeah, this is true. I mean, there are many omes out there, but many of them don't make it in the sense of, you know, really taking off as concepts. Connectome did make it, and so that's one that I think will stay with us. [01:40:43] Speaker B: Well, I've already taken you over time, and so hopefully maybe one day I'll have you back, because I wanted to ask all about rich club features and how they underlie our consciousness, although they're all over every brain, just like every other feature seems to be. And there's nothing special about humans. [01:41:00] Speaker A: We haven't mentioned consciousness until this late point in the interview. This is interesting. Yeah, well, let's talk about consciousness some other time.
This is another topic that I have avoided really working on, to be honest, over my career a little bit, because, and this will shock you, I don't think ultimately it is all that interesting. But maybe that's left for another conversation, I suppose. [01:41:28] Speaker B: You're killing my audience and me with this. Leaving off, let me ask you one thing before we go. In learning about the brain and network neuroscience, is there something that you used to believe that you now, looking back, think, oh, that was naive, in my younger days, my inexperienced days? [01:41:48] Speaker A: Well, I think, you know, for as long as I can think, almost going back to certainly my undergraduate days and even before then, I was always fascinated by sort of the complexity of biological systems. I didn't have the vocabulary back then, or the tools or the insights that I have now, but it was always something that struck me, where, you know, biological systems were somewhat different from other physical systems that we might study. The complexity of it, the resilience, the interesting sort of structure and dynamics aspect, the historical aspect, the evolutionary aspects, where these systems come from. And really, to this day, I keep being surprised about how this complexity plays out in ways that are unanticipated in the brain. There are certainly things now that I think are important that I didn't think were important 20 years ago. This whole interplay of structure and function is sort of a fundamental dialectic, almost, using a philosophical term here, of how our brains operate. The fact that there's a physical infrastructure, neurons, connections, synapses, molecules, et cetera, and then on top of that, these incredibly rich dynamics that unfold in ways that are bewilderingly complex. And these things interact: the dynamics change the structure, the structure changes the dynamics. All of that's coupled to our bodies and our environments.
I certainly didn't think of the system in this manner when I first got started. And in some ways, the complexity that we're facing is daunting. But I have some hope that as we face up to it and directly engage with it and use it as a framework for studying the brain, we will ultimately discern laws, functional relationships, regularities, principles that are going to allow us to write down some fundamental working principles of how brains operate. We're not there yet, but I think we have a better chance of getting there now than 30 years ago. [01:43:59] Speaker B: Well, there's only one way forward. And thank you, Olaf, for helping to move us forward much faster, much more efficiently than we otherwise would. And so we're about to hang up, and I know you're in your office. When we're done here, you're going to swivel your chair around, kick your feet up, stare at the books on your shelf, take a deep breath in and relax. Maybe turn your printer back on and. [01:44:22] Speaker A: Then maybe go home. I'll put my face mask back on, leave my office and leave the building, and go back home and work from there. Yeah. This is the new time right now. So, yeah, it was good to talk to you. [01:44:37] Speaker B: Good to talk to you. Thank you so much for your time. [01:44:40] Speaker A: Thank you as well. [01:44:55] Speaker B: Brain Inspired is a production of me and you. I don't do advertisements. You can support the show through Patreon for a trifling amount and get access to the full versions of all the episodes, plus bonus episodes that focus more on the cultural side but still have science. Go to braininspired.co and find the red Patreon button there. To get in touch with me, email paul@braininspired.co. The music you hear is by The New Year. Thank you for your support. See you next time.
