Episode Transcript
[00:00:04] Speaker A: Maybe this interdisciplinary process, whatever we're calling neuro AI, can help dislodge some of these stale debates and arguments and help us get to a new conceptualization of what's really going on in brains.
[00:00:17] Speaker B: There are some people that think neuro AI is about using AI to understand neuroscience, and then there are people that think neuro AI is about using insights and principles from neuroscience to improve AI. We're really more interested in the convergence and the reciprocal aspects of neuro AI.
[00:00:35] Speaker A: You can like ignite it essentially and then the connections are such that the nonlinearities all line up and you get self reinforcing, self supporting, self sustaining activity.
That's the basis, that's the base computational unit.
[00:00:52] Speaker B: And I turned out to be the only student that had the right answer.
[00:00:55] Speaker C: Is that because you used a for loop and just did it like the careful, slow way?
[00:01:00] Speaker A: Yes.
[00:01:08] Speaker C: This is brain inspired, powered by the transmitter. Well, well, if you're a longtime listener, you may recognize that I added back in Chopin that is at the demand of one of my guests today, Joe Monaco. Joe berated me, you could say, for removing it a long time ago from the introduction. So he took it upon himself to play it, record it and send me the recording. And so it's back. I hope you're happy, Joe.
Joe and Grace Huang, the other voice you just heard, will introduce themselves in a moment, but I will introduce them now as co-organizers of a recent workshop that I participated in, the 2024 BRAIN NeuroAI Workshop. You may have heard of the BRAIN Initiative, but in case not, it is a huge funding effort across many agencies, one of which is the National Institutes of Health, or the NIH, where this recent workshop was held. The BRAIN Initiative began in 2013 under the Obama administration with the goal to support developing technologies and implementing those technologies to help understand the human brain so that we can cure brain-based diseases.
So the BRAIN Initiative just became a little over a decade old now with many successes under its belt, like the recent whole-brain connectomes you may have heard of and discovering the vast array of cell types and many others. I'm not going to list them here, but I'll point to a reference for you to learn more. So now the question is how to move forward, and one area they're curious about that perhaps has a lot of potential to support their mission is the recent convergence of neuroscience and AI, or what has been recently coined as neuro AI, for better or worse, as we discussed. So the workshop was designed to explore how NeuroAI might contribute moving forward and to hear from NeuroAI folks, the people doing the NeuroAI research, how they envision the field moving forward. You'll hear more about that in a moment. That's one reason I invited Grace and Joe on. Another reason is because they co-wrote a position paper a while back that is, among other things, an impressive synthesis of lots of concepts in the cognitive sciences and neurosciences and history. So we talk about that. But it also proposes a specific level of abstraction and scale in brain processes that may serve as what Joe calls a base layer for computation. So the paper is called Neurodynamical Computing at the Information Boundaries of Intelligent Systems. All right, so you'll learn more about that in this episode as well. Okay, I don't want to yammer on here, so let's get you to Grace and Joe. There are lots of show notes in this one to workshop-related stuff and to many of the papers that Joe and Grace reference. Those are at BrainInspired.co/podcast/200. Right, I forgot it's the 200th episode.
That is awesome and amazing. And what a fitting way to bring in 200, talking about a NeuroAI workshop, among other things. Patreon supporters, you are the best. Let's have a live chat before the Christmas holidays if you're up for it. So I'll be in touch about that. Go to BrainInspired.co to learn how to support the show on Patreon and join in on fun stuff like that and get full episodes all the time. All right, here are Grace and Joe.
Yeah, we're starting. We're starting.
You guys just blew my mind.
So this is the beginning.
I've been interacting with both of you. I had no idea that you were a married couple because my interactions with you were very professional, because I just came back from this NeuroAI BRAIN Initiative workshop, and then 30 seconds into us speaking to each other, one of you said, you do know that we're a couple, right?
And nope. No, I did not. But I do know. So first of all, hi, Joe. Hi, Grace. Thanks for being on the podcast. Hi, Paul.
[00:05:42] Speaker A: Thanks for having us.
[00:05:44] Speaker C: So, and this is a little bit different than the way I normally do things, but could you just like very briefly state your name and occupation? Or not your name, but your occupation?
Grace, we'll start with you.
[00:05:57] Speaker B: I'm Grace Huang. I am a program director at the NIH at the National Institute of Neurological Disorders and Stroke, and I support the BRAIN Initiative full time.
[00:06:11] Speaker C: And Joe?
[00:06:14] Speaker A: I am Joe Monaco. I am a scientific program manager, and I am a contractor for the Office of the BRAIN Director at the NIH, so we are housed under NINDS.
So I'm there with Grace, but I work with the BRAIN Director and I work with all of our internal BRAIN teams.
[00:06:31] Speaker C: Okay. And I just worked with both of you, among many other people who worked with both of you, because you both put in an absurd amount of work to organize this recent BRAIN Initiative NeuroAI Workshop.
But now I don't know where. I don't know where to start, because your partnership has gone back a long time. And, Grace, I had recently learned that it was an interesting way that you guys met originally and came to form an intellectual partnership, so maybe we could just start there.
[00:07:07] Speaker B: Well, we met originally in 2004 at Brandeis University when we both were taking a computational and theoretical neuroscience class under Larry Abbott, back when he was there.
[00:07:20] Speaker C: Yeah, that's kind of a famous class. I feel like a lot of people matriculated through that class and have fond memories of it.
[00:07:31] Speaker B: Indeed. Joe and I started our first collaboration. This was when I had just gone into computational neuroscience and was learning how to use MATLAB for the very first time and how to do a homework problem. And I was the only student that had a different answer than everybody else because I had not figured out how to unroll my loops. So I was literally writing all these loops.
[00:07:54] Speaker C: Nothing wrong with a for loop, although it's better to not have them if possible.
[00:07:57] Speaker B: Yeah, that's right.
[00:07:59] Speaker A: And that plays into the story.
[00:08:00] Speaker B: And because I was the only person who wrote a for loop, I had a different answer and I thought I had the wrong answer. Even the TA had a different answer than me. So I sat in the computational annex for days trying to find the bug in my code. And then at 1 or 2 in the morning, before the homework was due, in comes a student who very quickly writes his code and is about to leave with his correct answer. And I asked him, can you just look at my code? And he looked at it and he said, there is nothing wrong with your code. And an hour later, he found a bug in his own code that every other student, all of them super programmers, had made. And I turned out to be the only student that had the right answer.
[00:08:41] Speaker C: Is that because you used a for loop and just did it like the careful, slow way?
[00:08:46] Speaker A: Yes, because all of the other computer nerds in the class, they know about vectorization and vectorize.
[00:08:53] Speaker C: Vectorize, yeah.
[00:08:54] Speaker A: And MATLAB is very slow when you write manual for loops. And so everyone got the same wrong answer and they thought it was the right answer.
But looking through Grace's code, it's like she sequenced the order of operations absolutely correctly. And then I was able to figure out, when you vectorize it, it actually changes the order. And we were doing classical conditioning, so the order in which you update the parameters of the synapses really mattered.
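A minimal sketch of the issue being described, in Python rather than MATLAB and not the actual homework problem: a delta-rule, Rescorla-Wagner-style weight update applied sequentially in a for loop versus in one vectorized batch. Vectorizing computes every error from the same pre-update weights, so it changes the order of operations and can change the answer.

```python
# Illustrative sketch (not the original homework): a delta-rule / Rescorla-Wagner-style
# weight update applied two ways. The "vectorized" version computes every update from
# the same pre-update weights, while the loop version lets each update see the weights
# already modified earlier in the sweep, so the two can diverge.
import numpy as np

rng = np.random.default_rng(0)
n_synapses = 5
stimuli = rng.integers(0, 2, size=(20, n_synapses)).astype(float)  # CS presence per trial
rewards = rng.integers(0, 2, size=20).astype(float)                # US presence per trial
lr = 0.3

def update_with_loop(w, x_seq, r_seq):
    """Sequential (for-loop) updates: each trial sees the weights left by the previous trial."""
    w = w.copy()
    for x, r in zip(x_seq, r_seq):
        prediction = w @ x
        w += lr * (r - prediction) * x
    return w

def update_vectorized(w, x_seq, r_seq):
    """Vectorized batch update: every trial's error is computed from the same initial weights."""
    predictions = x_seq @ w                      # all predictions from the un-updated weights
    errors = r_seq - predictions
    return w + lr * (errors[:, None] * x_seq).sum(axis=0)

w0 = np.zeros(n_synapses)
print("loop      :", update_with_loop(w0, stimuli, rewards))
print("vectorized:", update_vectorized(w0, stimuli, rewards))
# The two results differ because sequential updating respects the trial-by-trial
# order of operations, which is exactly what matters in classical conditioning models.
```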
[00:09:21] Speaker C: And you guys were already married at that time, right? No.
[00:09:25] Speaker B: No, no, no, no.
[00:09:27] Speaker C: But you were just saying that you.
[00:09:29] Speaker B: That was when we met.
[00:09:30] Speaker C: Yeah.
[00:09:30] Speaker B: Yeah, we're married. We've been married for 10 years, so it took us a long time to get married.
[00:09:36] Speaker A: In 2014. We had to get our PhDs first.
[00:09:40] Speaker C: Backing up even further. Grace, I understand that the only reason you went into computational neuroscience was because you developed mouse allergies. Is that correct?
[00:09:49] Speaker B: Yes.
I initially went to Brandeis University to study biophysics and structural biology. I was building a single photon molecule microscope trying to study the translocation of HIV protein. And that required that I work with protein chemicals and rotate through a mouse lab. And I had really bad allergies, where I was breaking out in hives and had an EpiPen.
[00:10:16] Speaker C: Wow.
[00:10:17] Speaker B: And it was not. It was not a sustainable lifestyle. So in my third year of graduate school, I changed labs into a computational memory lab, where I joined Michael Kahana back when he was at Brandeis University.
[00:10:30] Speaker C: I mean, the interesting thing about that is I've only physically been near Joe a couple times now, but I think I developed an allergy.
[00:10:39] Speaker A: No, I don't know how to take that.
[00:10:42] Speaker C: No. Okay, so then how did you guys. Because what you just co-organized was a NeuroAI workshop. So then how did you guys end up coming together in that stead?
[00:10:54] Speaker A: Well, so, out of the two of us, I'm the one who was a neuroscientist from the beginning. I was a computational neuroscientist and a theorist. I did my PhD on grid cells and place cells and modeling how they might be related through remapping. It's a very important computational transformation in hippocampal studies.
But I kind of brought that through to expanding outwards to think more about the complexity of behavior throughout my postdoc. So I joined Jim Knierim's lab at Johns Hopkins, where he had a wealth of these experimental data of freely moving but on-track rats basically navigating in a clockwise circle a number of times. And so you can very closely track the behavior, where's the position of the head and body. And then you can track the emergence of place field activity over time. And so that was kind of the basis of me getting into complex neurobehavioral analyses and thinking about, from an organismal perspective, what's actually going on here. It's like, do we have all these just internal computational representations? How does that translate into what this interesting little animal is doing on a moment to moment basis? And so that's kind of where I started thinking more deeply about complex temporal dynamics and behavior. And eventually Grace was kind of off doing other things in other fields. She's kind of a polyglot of science and technology.
[00:12:28] Speaker B: And then, yeah, basically after I graduated with my PhD, I went to the MITRE Corporation and I was developing optical biosensors to detect pathogens from exhaled breath. And I did some work in government at IARPA as a SETA contractor and also at DARPA as a SETA contractor. And it wasn't until 2015, when I went back into quote unquote academia, I joined Johns Hopkins University Applied Physics Lab as a program manager to run their applied neuroscience program. Initially I was an assistant program manager, and that was the first time since 2005 that I started to pay attention to neuroscience again. And it was so exciting because neuroscience had just accelerated in those years. So I started going to Society for Neuroscience meetings. But I wasn't following Joe's research at all. And it wasn't until the 2017 Society for Neuroscience poster session that I learned about Joe's discovery of phaser cells. This came out of his research with, actually, Kechen Zhang and Tad Blair. I was so intrigued by phaser cells because unlike hippocampal place cells that map.
[00:13:39] Speaker A: Phase, they're mapped asymmetrically to the traversal of a place field. Very famously in hippocampus, the pyramidal neurons there, the place cells, will start firing at a later phase of the theta rhythm. And then as the animal moves through the place field, each spike will become earlier and earlier within the theta cycle across subsequent theta cycles. And so if you plot the distance through the field against the theta phase of spikes, you get this kind of nearly monotonic decrease in the phase, this advancement. So that's called phase precession, and that's a very robust finding. It's thought to be related to sequence learning and very important things that CA3 is doing in hippocampus. That's the highly recurrent subregion.
But with Tad Blair, I had a collaboration where we were looking one synapse downstream into the subcortex. We were looking at lateral septum, and so the septal nuclei are very interesting. People don't record from them very often. But we were looking at, you know, what other phase codes there might be. It's a theta-rhythmic brain area. And so we found, or well, I went looking for a different kind of code, one that wasn't locked to a particular trajectory, but one that was locked to space. So I was looking purely for spatial information in the timing code relative to theta oscillations and found it. And so that's kind of. I coined the name phaser cells. There's a couple other models out there called phaser.
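A toy illustration of the two theta-phase codes just described, with made-up numbers rather than any real data or Joe's actual analysis: place-cell phase precession, where spike phase advances as the animal traverses the place field, versus a phaser-like cell whose phase is tied to spatial position itself.

```python
# A toy illustration (made-up numbers, not real data) contrasting two theta-phase codes:
# place-cell phase precession, where spike phase advances across a place-field traversal,
# and a schematic phaser-like cell, where phase is a fixed function of absolute position.
import numpy as np

rng = np.random.default_rng(1)

# One pass through a place field spanning 40-60% of a linear track.
positions = np.linspace(0.40, 0.60, 25)
frac = (positions - 0.40) / 0.20  # fraction of the way through the field

# Phase precession: spikes start late in the theta cycle (~350 deg) at field entry
# and advance toward early phases (~10 deg) at field exit, plus jitter.
precession_phases = 350.0 - 340.0 * frac + rng.normal(0, 10, size=frac.size)

# Phaser-like coding (schematic): the same position always yields roughly the same phase,
# independent of how the animal got there.
phaser_phases = 360.0 * positions + rng.normal(0, 10, size=positions.size)

slope = np.polyfit(frac, precession_phases, 1)[0]
print(f"precession: phase vs. distance slope = {slope:.0f} deg per traversal (negative = precession)")
print(f"phaser-like: phase near position 0.5 = {np.interp(0.5, positions, phaser_phases):.0f} deg")
```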
[00:15:09] Speaker B: But yeah, and this blew me away because, you know, you were able to directly map phase to place. And I was working at the Applied Physics Lab in the Intelligent Systems Center, surrounded by roboticists. So my immediate thought was, wow, if we could use this internal phase code to do self-localization and mapping, wouldn't that be cool?
And on that very same day there was this other paper that came out, titled Swarmalators, and these are swarming oscillators that can sync and swarm using an internal phase variable, which came out of Kevin O'Keeffe and Steven Strogatz's lab. And so the two things got me thinking, wow, maybe there's a there there for using Joe's discovery of phaser cells in controlling multi-agent robotics. And so that was how our collaboration reinitiated back in 2017, and then AI became pervasive, and so we kind of now just begrudgingly call it neuro AI because people tend to know what that means. There are some people that think neuro AI is about using AI to understand neuroscience. And then there are people that think neuro AI is about using insights and principles from neuroscience to improve AI, and AI could be hardware, software, or a combination of the two. So we kind of left it open ended because, you know, we told people, for the purpose of the NeuroAI workshop, we're really more interested in the convergence and the reciprocal aspects of neuro AI and not in the feed-forward, "using AI for any science" approach.
[00:16:47] Speaker A: And I wouldn't say begrudgingly, I would say this is where the opportunity is. Right. So, you know, in this interview we both have to be clear about when we're speaking from the perspective of the BRAIN Initiative and when we're speaking from our own scientific perspective. I mean, from the perspective of the BRAIN Initiative, there's a lot of interesting opportunity here. And so, you know, in my opening remarks I showed, you know, a figure from Brad Aimone's review paper of, like, the four or five main threads of how AI has evolved from different ways of computing through learning with data.
The question is, how does all this come together, and what do more brain-like forms of this type of computing by learning with data look like going forward?
And there's been a few major inspirations from neuroscience, from the brain, over the decades since good old fashioned AI in the 50s, where you had these symbolic approaches, Newell and Simon coming out of cognitive science. And then we're all kind of familiar with the history, the back and forth there. The AI winters. Connectionism rose in the 80s, you know, with the advent of neural networks and back propagation for updating weights, and it's only in the last 10 or 15 years that the scalability came into play with hardware that enabled the amazing advances that we've seen in the last 10, 15 years and what we now call, you know, AI computing or AI technology. And so the convergence right here is really ripe. And I think we should not be arguing about definitions necessarily, because people in cognitive science and neuroscience and artificial intelligence are very good at arguing about terminology and definitions. We could bring in the consciousness researchers if you really want to go.
[00:18:45] Speaker C: Leave the philosophers out because they're the best at arguing about semantics.
[00:18:50] Speaker A: Right? Well, we do need people worrying about this. But I think it's at this early stage in this kind of like this exciting convergent period, there's a lot of decades of thought and research going into all these different threads and they've all hit kind of fairly related roadblocks. It seems cognitive science didn't become that fully encompassing research program that started in the 60s and 70s that Miller anticipated.
Neuroscience has gotten wrapped around certain ideas, attractor dynamics and, you know, population geometry. And we're trying to figure out if this is the right way to go or not and how to incorporate large scale data, but maybe coming together we can solve all these problems simultaneously.
[00:19:34] Speaker B: I just want to defend my personal position a little bit about being begrudging with the word AI, because this would be the second time that the word neuro AI was put in a program or in a concept that I'm working on. When I was at the National Science Foundation, back from 2020 to 2023, I created a program, a topic that was part of the Emerging Frontiers in Research and Innovation program, called Brain-Inspired Dynamics for Engineering Energy-Efficient Circuits and AI. The original name of that program did not have AI in it, but because AI was so hot, my former NSF leadership said AI's got to be in the title. And so that was kind of where I was coming from, Joe, is, you know, we had to use the word AI because it's gotten so in vogue these days.
[00:20:31] Speaker A: Well, I think it's useful to use the words that people are using. Right. And it's a new term. I think it's not fully well defined. And that's kind of what this workshop was about. Let's bring together. So Grace and I have been going to neuro AI conferences and workshops, a lot of them over the last year, as I know you have as well, Paul.
And you see a lot of themes emerging. And so BRAIN is interested. Well, let's explore what potential roles look like. Is there a piece of this where BRAIN can help, where it fits within BRAIN's mission to go forward? And so we wanted broad input from the community for the workshop. I think we got it, about helping us identify what those opportunities are to be considered further.
[00:21:19] Speaker C: Yeah.
What I wonder is, so I hear when you have a new hot term like neuro AI, it makes me think of.
I'm not sure what hats you want to put on if you respond to this, but speaking scientifically or from the NIH, but you have to put the word mechanistic in all of your papers now because mechanisms became the hot thing and computational neuroscience is dominated by mechanisms. And I've heard the sentiment I'm about to express expressed among grad students just in the past few weeks, of like, everything now is neuro AI, and if you want funding for anything, you just call it neuro AI.
And therefore you don't really have to worry about how what you're doing fits into neuro AI. And if it's ill defined or not defined at all, then.
So here's the worry is that like, all right, so then there's going to be this surge of grants and everyone's going to use the hot term neuro AI and that's going to bolster their chances of getting funded. And I'm not sure if there's a solution to that because that, that's just the name of the game. But I don't know if you have a response to that.
[00:22:36] Speaker A: It's always something like that. You can't control the community, you can't control people. Essentially you have to communicate in a way where people need to be clear about what they're doing when they're applying for funding.
And we have obviously the NIH and NSF and other funding agencies have scientific review processes in place to discern who's maybe following hype and over claiming or overusing words versus where the real advances are. And that's obviously not a perfect system, right?
[00:23:11] Speaker C: Well, of course.
[00:23:12] Speaker B: And I'll say, from my experience when I was at the NSF for the BRAID topic, we had very strict solicitation-specific criteria that would filter out people who were just using the buzzwords to try to get in.
I think having strong review sections would avoid these kinds of problems.
I do want to say that there is a true inflection point here, in that the technologies that have been enabled by the BRAIN Initiative are allowing us to observe circuits in animals across many different spatial and temporal scales.
There is a real opportunity. I think just putting the word neuro AI on any project is going to be easy to filter out.
[00:23:59] Speaker C: Perhaps. So, yeah, I mean, I think that little light bulb went off in my head when Grace used the term begrudgingly, because of that sense that I get from this handful of people, it's sort of an eye roll, like, oh, that must be neuro AI because that's just a hot term or something. So I'm wondering, man, I already feel the backlash against this emerging field.
[00:24:25] Speaker A: So I originally wanted to call this workshop the BRAIN NeuroAI and Theory Workshop, because I wanted to bring neuro AI. I wanted that convergence point to be focused on advances in theories, theoretical frameworks and theory driven models, because I think there's so much there, and, in my own scientific opinion, personally speaking, that's where I think a lot of the obstacles have been. I mean, there are a few unifying theoretical frameworks in neuroscience, and you've discussed them on this podcast over the years with a lot of people who have made major contributions to those theories.
But I don't see that Kuhnian kind of process of the field testing confirmatory hypotheses, falsifying different theories and making progress. And so I think maybe this interdisciplinary process, whatever we're calling neuro AI, can help dislodge some of these stale debates and arguments and help us get to a new conceptualization of what's really going on in brains, which, by the way, are fundamentally embodied and inherently integrated as biological systems, which is different from AI.
[00:25:39] Speaker B: And I also think being a little loose with our definition is okay because we were able to bring in the neuromorphic community.
[00:25:46] Speaker C: Yeah, it was a huge. I mean, maybe. Let's talk about just the reason that the workshop existed and just how you managed to, how you decided how to frame it, how to organize it. And because neuromorphics was a larger part of the workshop than I had anticipated it might be if one just said we're going to have a neuro AI workshop. And that's all I had to go on.
[00:26:11] Speaker B: So for me, coming from the NSF to the NIH, I was shocked that there was very little investment in neuromorphics at the NIH. If you go into NIH RePORTER and you type in the word neuromorphic, you'll probably get 60 returns and under 20 million dollars in investment since the late 1980s, whenever NIH RePORTER started compiling information.
[00:26:38] Speaker C: Is that because neuromorphic is inherently slow? Is that the reason?
[00:26:43] Speaker B: Well, it has been slow, as we heard from the last two days. But I think the other part of it is just that the neuromorphic engineers and the neurotech and biomedical engineers go to different meetings. They don't really talk to each other. And so for me it was really important to bring, to close the loop between neuromorphic and neuroscience so that we can better benefit brain health and not just brain, but health in general.
[00:27:08] Speaker C: Yeah.
[00:27:08] Speaker B: And that was very critical for me. And I actually co-hosted a workshop with my colleagues at the NSF, with co-funding from NIBIB and NINDS, in late October in Baltimore. It was a workshop called Neuromorphic Principles in Biomedicine and Healthcare.
And it was important to capture the health focus for neuromorphic and neurotech at this workshop. And that was why the second day was designed to be more like a tutorial. You know, the first session of the second day was intended to teach everybody what neuromorphic means.
[00:27:48] Speaker C: Yeah.
[00:27:48] Speaker B: Both the large scale digital computing style of neuromorphic as well as the small scale mixed-signal analog computing and neuromorphic sensing, for which we had Jacqueline Deverry as well as Ralph Etienne-Cummings, who really, I think, helped teach the audience the differences between the different kinds of neuromorphic technologies and how they may or may not be useful in healthcare. Then the second session was to really bring it home and have clinicians talk about the value of neuromorphic technology.
[00:28:24] Speaker C: I see. Yeah. Just as an interjection. I mean I've done some conversing with people, like I just said, you know, reflecting about the workshop and one of the things I said to another academic was I was surprised that there was, you know, so much neuromorphics in the workshop and this person said, what's neuromorphic. I was like, whoa. And they were a neuroscientist.
[00:28:49] Speaker A: It's surprisingly not very well known.
And that's one of the exciting opportunities and why we wanted to have healthy representation from neuromorphic approaches.
[00:28:59] Speaker C: Wait, because we should. Joe, maybe just say what neuromorphics is, because I realize there's thousands of neuroscientists, I guess, who don't know what neuromorphics is or.
[00:29:07] Speaker A: Grace, let's introduce it.
So this question was asked by an audience member at the workshop, what is the definition of neuromorphic? And nobody wanted to take that on.
Well, I admonished folks not to get into debates about definitions.
[00:29:26] Speaker C: Yeah, yeah. But I just mean, like, broadly, what are we talking about?
[00:29:28] Speaker A: But I'm saying. So Kwabena Boahen, who is also a leader in neuromorphic computing, he's put forward that it's computing that scales, it's scalable computing. So in order to have fundamentally scalable computing, you need to be more brain-like, you need to have memory on compute. And so the closer you get to the brain, which is very fundamentally a memory-on-compute system, then you break or bend some of the scaling laws that make it difficult to scale up conventional CPUs, GPUs, on CMOS processes.
But neuromorphic, I would say it kind of comes to the Feynman quote that we see everywhere, Right? And there were two good Feynman quotes that came out at the workshop, but the one that you see everywhere is paraphrasing, what I cannot build, I cannot understand, essentially.
And so the neuromorphic engineering community is folks who have been trying to build it. I mean, they've been from the synapse level to the cellular level.
[00:30:34] Speaker B: The original lineage, the Carver Mead lineage of neuromorphic engineers, were people that were trying to emulate these channels on a chip and create spikes and characteristics that are comparable to what you would measure off of a cat pyramidal cell. That was the original Misha Mahowald 1991 paper that I think first popularized neuromorphic. And that's what Ralph referred to as old school neuromorphic. But since then there's been all sorts of development, which is why it's so hard to define neuromorphic. It's kind of. It's a word that, you know, it means different things to different people, and it could even mean principles.
[00:31:18] Speaker C: Right.
[00:31:18] Speaker B: You want to, you know, as Kai said, you want to have your device operate with the statistics of the signals from the brain. Right. So some people even think of neuromorphic as principles.
[00:31:30] Speaker A: Whether it could be physical principles? Well, yeah, physical and material principles as well.
[00:31:34] Speaker B: Guiding principles. Whether or not you actually use neuromorphic hardware, it's okay if you don't. So there are people in the neuromorphic community that think of neuromorphic as hardware, and there are others that think of it as design principles.
So it's hard to define neuromorphic, but for me it's brain-like: it usually operates on spikes, but not always. Most importantly, it's energy efficient, six orders of magnitude more energy efficient, and it's adaptive to the user. It's a system that can evolve with the user. And those are the four things I think really stood out and were discussed at the panel. And people went as far as saying that because it's adaptive, it's less hackable, because you don't actually need a computer to run it. The on-chip device is self-contained.
[00:32:26] Speaker A: Well, I'd say it's important to distinguish: not all neuromorphic hardware is learning hardware. A lot of the test bed systems that have been developed. So Intel had a program called Loihi. There were two generations of these Loihi chips. It's basically an academic research partnership program; you kind of applied and you could get some of these chips to devise models for and test and run them. And those were, you know, you could have a large number of spiking neurons, essentially like integrate-and-fire neurons, and you could just run that physically on these chips. But it was very difficult to implement spike-based learning rules like STDP on those chips. So there's other kinds of chips which are more amenable to implementing different types of learning rules. But that's something the field is still working on and trying to figure out. And it's one of the big questions going forward: how do you have these be adaptive in a safe way?
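For listeners who haven't met these terms, here is a generic sketch in plain NumPy, not Intel's Loihi toolchain or any vendor API, of the two ingredients Joe mentions: a leaky integrate-and-fire neuron and a pair-based STDP weight update between a pre- and postsynaptic spike train.

```python
# A generic illustration (not Intel's Loihi API) of the two ingredients mentioned here:
# a leaky integrate-and-fire (LIF) neuron and a pair-based STDP weight update.
import numpy as np

dt, tau_m, v_rest, v_thresh, v_reset = 1.0, 20.0, 0.0, 1.0, 0.0  # ms and arbitrary voltage units

def lif_run(input_current, v0=0.0):
    """Simulate one LIF neuron; returns its spike times (in steps)."""
    v, spikes = v0, []
    for t, i_in in enumerate(input_current):
        v += dt * (-(v - v_rest) + i_in) / tau_m   # leaky integration of the input current
        if v >= v_thresh:                          # threshold crossing emits a spike
            spikes.append(t)
            v = v_reset                            # reset after the spike
    return spikes

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: potentiate when pre fires before post, depress otherwise."""
    for tp in t_pre:
        for tq in t_post:
            delta = tq - tp
            if delta > 0:
                w += a_plus * np.exp(-delta / tau)
            elif delta < 0:
                w -= a_minus * np.exp(delta / tau)
    return float(np.clip(w, 0.0, 1.0))

rng = np.random.default_rng(0)
pre_spikes = lif_run(rng.uniform(0.0, 2.5, size=200))
post_spikes = lif_run(rng.uniform(0.0, 2.5, size=200))
print("updated synaptic weight:", stdp_update(0.5, pre_spikes, post_spikes))
```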
[00:33:25] Speaker B: That's a really good point. There are some chips that are designed for inference only and others that are designed for learning.
[00:33:31] Speaker C: So I totally see how neuromorphics and it sounds like the whole thing was focused on neuromorphics, which is not the case at all. But this is just a way into talking more broadly about the workshop.
I totally understand why neuromorphics then would be something to put a large focus on.
But then the question is like, okay, so what Joe was talking about earlier about needing to bring in theory and get all these people sort of under the same roof to sort of bring together these otherwise kind of disparate ideas to work on the problems, one could wonder, well, what is that? What is what you guys were just describing? What does that have to do with anything related to theory? Right. So did we make progress on that or is that an ongoing challenge?
[00:34:22] Speaker A: It's an ongoing challenge. I mean, so the workshop was to bring, like Grace said earlier, there hasn't been enough conversation between these fields. And so that was very intentional on our part, to bring together these different communities to start having that conversation. Even within neuromorphic engineering, the folks who want to use large scale neuromorphic computing systems to basically run large models of the brain, they need large scale brain data to build those kinds of models, connectomes and cell types and that kind of thing.
You can use that as a test and evaluation system, a modeling and simulation system, to confirm or falsify theories about how certain aspects of these networks work.
But then there's the small scale side of it.
Energy efficient analog or mixed signal devices that can be distributed to the edge to do brain like neural, like intelligent computing in a wider array of applications. And that's more towards the translational end. But I think it covers the full spectrum.
[00:35:26] Speaker B: I want to go back to the theory. I don't think we talked a lot about it. So Brad Aimone actually came to the NIH and gave a Wednesday Afternoon Lecture Series talk on October 23rd, where he talked about how large scale neuromorphic computing could inform new theories, how you can make observations at scale that you just don't see in smaller circuits.
[00:35:49] Speaker A: It's much more difficult and particular to the types of models that you're building, how to scale them up. And so I think there's opportunities here to break some of those scale barriers within computational neuroscience and theory driven modeling and take it to the scale where the things that we care about happening in brains can actually be studied in a principled way.
[00:36:13] Speaker C: Okay. We've kind of been focusing on neuromorphics and then we delved into theory. But I didn't say in the beginning. The backdrop of this is that the BRAIN Initiative is 10 years old now. So part of the driving force, and correct me if I'm wrong with putting this workshop together, was to figure out where the future roadmap should lead or what avenues are explorable and should be explored moving forward. Is that a fair assessment?
[00:36:41] Speaker A: Well, we wouldn't use the word should. Right. Because we want to get. All right, exactly. We want to get the shoulds from the community. What are the different. We have the neuromorphics. We've got the people doing metrics and benchmarks, we've got the people thinking about natural intelligence capabilities. How do they all come together? What are the main priorities and opportunities they see? And we want insight about that.
[00:37:08] Speaker C: You got some shoulds.
At least during my last session there, there were some shoulds from the audience which was fun. So good.
[00:37:17] Speaker A: Right? I mean, so from the BRAIN Initiative's perspective, we want to see, okay, get all these pieces together in this jumbled puzzle and then figure out which of those pieces makes sense for the BRAIN Initiative, potentially, to contribute, to lend the type of work that we do in the BRAIN Initiative.
[00:37:38] Speaker B: Another thing I would say as a co-organizer is we invited a lot of other funding agencies to this meeting to be part of the workshop, because a lot of these problems aren't necessarily BRAIN's problems.
Coming up with the next super-efficient computing system is a great idea and it's great for humanity, it's great for reducing the carbon footprint. But that probably lies in a different agency's mission space. However, the knowledge the BRAIN Initiative collects and continues to generate is useful to that longer term mission.
And I was really hoping that would be clear from the funders panel.
I don't know if that, if we hit the mark there, but this is really a collaborative effort where lots of agencies are interested for different reasons.
[00:38:28] Speaker C: Yeah, I want to kind of complete the circle on just what the workshop was about and how you decided what kinds of topics to bring in. So when I think of neuro AI, I think of, like. It's not traditional, it's new, but it's not new, but it's, like, kind of testing, like, using artificial intelligence models, neural network models, as proxies for brain processes and then asking whether you get something brain-like representationally out of those systems. Like the sort of early work with convolutional neural networks and modeling the visual object recognition system on those networks. There's been a lot since then, there's been a lot of recurrent network work like that. And then someone like Andreas Tolias, who works on what are called foundation models, was represented. So that sort of side of it was also represented. But maybe you guys could speak to what you wanted, what other kinds of things that you wanted to bring into the workshop.
[00:39:36] Speaker A: Our personal scientific opinion and perspective at that time, which came out of the symposium that Grace and I co-organized for the 2020 BRAIN Investigators Meeting.
And that was a great panel. We had Konrad Kording and Xaq Pitkow and Nathaniel Daw, Kanaka Rajan, and, I'm forgetting, Brad Pfeiffer, on that panel. We called it dynamical systems neuroscience and machine learning. That's what we were bringing together. But it was a prototype for neuro AI, thinking these ideas through.
But the perspective that Grace and I wrote coming out of that was that we can't.
A lot of the problems seem to be from this purely computational perspective. And that's the perspective that's kind of grounded in this almost traditional brain-as-computer metaphor that has pervaded all of these fields. It's pervaded cognitive science, AI, neuroscience. Neuroscientists talk constantly about neural encoding, decoding, representations, talk about representations as these computational constructs. And that once you have a representation and it has some, to the experimenter, explainable relationship with what we think is going on in the animal. Like, we put it in the task. So, okay, so it needs to solve that. Go left at this point after seeing this cube, and we see correlations to the right kinds of things, and it's like, okay, we found the computational representation. That is the explanation. We're done here. So I think the "we're done here" part of that has been the barrier, because you're not actually done there, because there's still behavior that needs to be in the loop. Behavior is this moment to moment dynamical coupling between brain and body, between body and environment. And so that's why we wanted to expand outward and bring in ideas from embodied cognition, the 4E literature.
And not to sign onto that, but to say, hey, there's something here about these massive distributed feedback loops through the environment that are a key part of what's going on in cognition in animals.
So that's where we took this.
[00:41:57] Speaker B: And day one was designed to be all computational intelligence, whereas day two was more the embodied neuromorphic translation.
[00:42:05] Speaker C: So that was our. Even during day one, I was impressed with how many people were reflecting on the importance of embodiment. I mean, it came up a lot.
[00:42:14] Speaker B: That was not planned. It was a surprise.
[00:42:18] Speaker A: I may have planted that.
[00:42:20] Speaker C: Did you plant that seed? Is that what.
[00:42:23] Speaker A: Well, for instance, in the same issue in which our paper came out, there was another paper called Deep Intelligence by Ali Minai, who's an electrical engineering professor at the University of Cincinnati.
And so I was aware of that work. And he's done computational neuroscience for a long time now, since I started in the field in hippocampus. So I was aware of him.
[00:42:46] Speaker C: But he had a very evolutionary perspective on it as well.
[00:42:50] Speaker A: He has a very holistic perspective on biology. Biological organisms are inherently integrated. They're integrated through evolution, phylogenetically; they're integrated through development, ontogenetically; they're integrated through learning and aging and experience.
And that's, you know, you have to keep coming back to that because that is kind of. Well, at least that was his perspective that he was taking in the paper and in the talk that we invited him to give. Comparing natural intelligence with AI. And there's so many important distinctions that you can make, but I think that's one of the key ones.
[00:43:28] Speaker C: And so we can't call. It just occurred to me that holistic neuroscience would be a great term except that it would be associated with holistic medicine. I think which.
The word holistic has some positive and negative connotations.
[00:43:44] Speaker A: Right. It gets into the impulse for reductionism and the kind of counter movement of, you know, of looking at downward causation and emergence.
[00:43:56] Speaker C: Well, I just meant the science of holistic medicine is sometimes questionable. So to be a holistic neuroscience, someone might see that and think oh it's woo woo, or you know, something like that. But on the whole it's a pretty good phrase.
[00:44:09] Speaker A: I think it's woo woo if you do ignore the internal computational representations. You can't ignore them. That's why across two days we had the focus on. Yeah, so, personal opinion, I was calling it kind of the mainstream neuro AI: let's figure out how to map these task-constrained AI models to what we see in the ventral visual stream.
And there's been like you said, a lot coming out of that. People are looking at dorsal stream and people are looking at motor system and other areas.
[00:44:44] Speaker C: Cognitive maps. Yeah, cognitive maps. Yeah, you name it.
[00:44:48] Speaker A: Well, cognitive maps are maybe the clearest example of actual high level cognitive encoding in the brain. At least that's my personal opinion as a hippocampal researcher. Hippocampal chauvinism.
[00:45:01] Speaker C: Yeah. But I mean just applying sort of neuro AI models to account for cognitive. Cognitive functions. Cognitive maps has been a big one.
[00:45:12] Speaker A: Absolutely. No, I think it's important. I mean, it ties into the dimensionality reduction, the task-based low-dimensional manifolds. Yeah. We're recording hundreds of thousands or millions of neurons now. There's no way to visualize that. If you just throw everything into a UMAP, you get some interesting colored splotches on your screen, but it doesn't tell you how to interpret what's happening.
[00:45:35] Speaker C: Oh my gosh, you're speaking to what I'm. I make some really pretty naturalistic behavior neural UMAP graphs right now and gosh they're pretty, but they're not the solution. They're not. I'm not done.
Right.
[00:45:53] Speaker A: Well, this ties into the whole interpretability, explainability and mechanism discussion: how do we get at what the important factors are that are driving that high dimensional neural activity?
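For concreteness, a minimal sketch of the kind of analysis being discussed, assuming the umap-learn package and purely synthetic stand-in data: embedding a large neural-activity matrix into two dimensions, which gives you something pretty to color by a behavioral variable but not, by itself, an explanation.

```python
# Minimal sketch of the analysis being discussed: embed a neural-activity matrix with
# UMAP (via the umap-learn package, assumed installed) and relate it to a behavioral
# variable. The embedding itself is just "colored splotches"; interpretation has to
# come from the task variables and models you bring to it.
import numpy as np
import umap  # pip install umap-learn

rng = np.random.default_rng(0)
n_timepoints, n_neurons = 2000, 300
activity = rng.poisson(lam=2.0, size=(n_timepoints, n_neurons)).astype(float)  # stand-in for binned spike counts
behavior = rng.uniform(0, 1, size=n_timepoints)                                # e.g., position on a track

embedding = umap.UMAP(n_components=2, random_state=0).fit_transform(activity)
print(embedding.shape)  # (2000, 2): pretty to plot colored by `behavior`, but not an explanation by itself
```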
[00:46:06] Speaker C: Oh, wait, so now you're just jumping the gun and going right into. I mean, maybe that's the way that we should do it, is talk about some of the topics in the paper and then bounce back and forth. But I don't want to also not come back at least to.
Well, actually, let's hold off on going. I know that they're all related is.
[00:46:25] Speaker A: The thing I'm saying.
[00:46:25] Speaker C: You need both, right? Yeah.
Okay, so then, I'm not sure what we've missed about the workshop, but I wanted to get your general reflections on and how it went.
[00:46:37] Speaker A: Yeah, I mean, so from the point of view of the workshop, we were both incredibly thrilled and pleased with the discussions that we had. And thank you, Paul, for stepping in as a discussant on the first day and for helping with the wrap up on day two. Yeah, we've been reviewing the recordings, and when you're in the middle of it, you don't get to really listen and experience it. But it's really great conversations and discussions and questions that we had.
[00:47:03] Speaker C: But at one point in my first discussant panel thing, I got to yell at Terry Sejnowski and I thought, like, oh, why am I saying something negative to Terry Sejnowski about what he said? As I was saying it, I was having this moment like, oh, you're not in a position to even talk about this with him. But it was fun. It was fun because he's like a hero, intellectual kind of hero to many people, including myself. And so you're in that situation, and this is maybe very meta, so I apologize, but you're talking to your heroes sometimes and you realize: are they a colleague or are they a hero? And it's just kind of a. Not surreal, but an interesting feeling.
[00:47:48] Speaker A: Well, this is why we brought everyone together. We want the leaders of the field who have been around and just driving things forward for as long as anyone can remember. Sorry.
And his colleagues had won the Nobel Prize just a few weeks before the workshop.
And if you talk to people around that and you go back and read those papers from '84 or '86, Terry's a co-author on all of that.
[00:48:16] Speaker C: I was wondering how he felt about that. I'm not sure if it's the right place to discuss that, but I imagine a lot of People have wondered, does he feel like he was missing from that?
[00:48:26] Speaker B: Well, the day the Nobel Prize was announced, the Telluride neuromorphic community overtly wondered why Terry was left off.
[00:48:35] Speaker C: Yeah, no, I saw a few things.
[00:48:36] Speaker B: Like, that was shared among all of us who attended Telluride this year. So it was good to be able to acknowledge Terry's contribution at the workshop through multiple talks.
And I think you were fine, Paul, you were.
[00:48:52] Speaker C: Oh no, but he corrected me. He corrected me, which was wonderful because he correctly corrected me. And then I was like, all right, I'm not going to get into a back and forth with this person.
[00:49:03] Speaker B: I have to say, every time I felt something didn't go well, I went back and listened to the recording and I was like, oh, it went way better than I thought. So I think the workshop was really a great success and I learned things that I didn't expect to learn.
[00:49:19] Speaker C: Well, I came away thinking that it felt like a win, a great success. And so I'm not sure if you guys want to elaborate more on how you or reflect more on how you felt about how it went and maybe even what may have been missing that will happen next time or how reflecting on what happened here affects how you think about moving forward.
[00:49:43] Speaker B: I feel like the ethical, neuroethical conversation was a really important one, because neuro AI is going to bring about a lot of new challenges, and Karen Rommelfanger's talk was really insightful. And I think, you know, if we were to have another one of these workshops in the future. You know, I felt like we didn't give people a chance to ask her questions because we ran out of time, you know, so maybe a little more. Actually, to be honest, though, I don't think we could do neuro AI ethics justice in a one hour session.
So there ought to be more conversations about ethics as well as regulatory questions.
[00:50:27] Speaker A: We can't speak directly about the future, but clearly BRAIN, in putting on this workshop, is interested in this space, and we agree. We think it was a great success as far as our goals of hearing from everyone.
[00:50:42] Speaker B: Paul, do you think there was a scientific community that was not represented at the workshop that should have been, well.
[00:50:51] Speaker C: Oh geez, putting me on the spot here. I mean, in some sense. So I was rereading your position paper, and maybe this is a segue into that, because, yes, there's a lot of embodiment, and I'm trying to reflect now myself, because as I'm reading through this paper, the title of your position piece, Neurodynamical Computing at the Information Boundaries of Intelligent Systems. And so I'm reading through this thing again and it's so rich and dense and makes the case for embodiment and the importance of environment, body, brain, continuous-cycle interactions.
And then I'm reading, I'm like, oh my God, click on every reference. And I'm like, half the time I'm like, oh, good, that's good, I've read that. And then the other half I'm like, ah, I gotta add it to my reading list, you know. But so in some sense, and the irony is, my original job at the end was to synthesize the workshop.
And to be honest, I didn't really know how. I had an idea of how I was going to do it, but I didn't have, like, a set plan. And it ended up being more of a moderated discussion, which was great. And then a lot of interaction from the audience as well. But the reason why I said synthesize, and I think Joe might have mentioned that term earlier, is because your position piece synthesizes a ton of stuff, with the goal of using so much historical perspective and what's maybe missing these days in AI to synthesize what you call a base layer of computation.
You're going to correct me on this, I don't have the exact quote, but a base layer of neural computation, I.
[00:52:48] Speaker A: Think that's what we called it. Yeah.
[00:52:50] Speaker C: Okay, well, I know it's called a base layer, but I. Yeah, and so you asked me what I thought might have been missing. It might have been that sort of bringing together of the historical contexts and why these things are important. And then an interesting thing happened with two people, Blake Richards and Xaq Pitkow. Xaq, at the very end, said a great goal for us would be to record the connection strength of every synapse. And that's such a reductionistic approach that is in line with modern reductionist neuroscience.
And it kind of flies in the face of what you guys argue for in this position piece a little bit. And I thought it was odd that there is still kind of this reductionist assumption underlying all these things: to measure more, and here's the level that we need to measure at. And modern dynamical systems theory, manifolds, looking at larger populations, and that lower dimensional structure, is somewhat antithetical to that story. So if anything, I thought maybe, like, that the whole, even manifold, and more talk about levels and going across levels. What's the right level of abstraction? Why is it the right level of abstraction? And that's more on the theoretical side. If anything, I thought there could have been more of that, I think. Man, that was long winded. I apologize.
[00:54:22] Speaker A: That's more of a comment than a question.
No, I totally agree. So here I have to be very careful to differentiate my perspective, opinion on the field.
Grace and I wrote this paper a couple of years ago now, and this did kind of come out of a wide range of frustrations, which is why I went deep historically, and I brought in ideas from philosophy of mind, philosophy of computation.
Let's think about what is computation? What does computation mean in the brain?
What does information even mean in the brain? Because one of these other things that pervades all these fields, and I do give, like, a historical capsule at the top of the paper, cognitive science, neuroscience, AI, is this kind of. They have the same conception of what information is. And it's Shannon information, which, as we know from the '48 paper and the later one, is about communication, where you have a transmitter and a receiver and you have a shared alphabet.
And that may not be the right metaphor or framework for understanding information in the brain, especially how brains construct semantically meaningful structures and processes and dynamics.
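For reference, the standard Shannon quantities being invoked here, in their textbook form rather than any notation from Joe and Grace's paper:

```latex
% Textbook Shannon quantities referenced in this exchange (standard definitions,
% not notation taken from the paper itself):
\begin{align}
  H(X) &= -\sum_{x \in \mathcal{X}} p(x) \log_2 p(x)
    && \text{entropy of a source over alphabet } \mathcal{X} \\
  I(X;Y) &= \sum_{x,y} p(x,y) \log_2 \frac{p(x,y)}{p(x)\,p(y)}
    && \text{mutual information between transmitter and receiver}
\end{align}
% Both are averages over probability distributions: purely syntactic measures of
% reduced uncertainty, silent about what the symbols mean to the organism.
```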
[00:55:42] Speaker C: Well, Shannon was even aware of this conflation of information with. With meaning. And there's a very short. I could put it in the show notes. He actually wrote up a very short piece saying, look, people, this might not apply to your field because everyone was applying Shannon information to their fields. So he was warning against his own work being applied too broadly and misconstrued.
[00:56:05] Speaker A: Yeah, I think I've seen that and it kind of reminds me. But also, Tony Zador in the opening keynote brought up the concept of the hardware lottery. So in a sense, Shannon's information theory is kind of a theory lottery.
It provided everyone a readily accessible tool for like, oh, yeah, information. This is a really important concept. How do we measure it and grab hold of it? It's like, okay, well, here you go. You just run these sums and averages over this kind of distribution.
It's purely a statistical process. It's purely syntactic.
And I think it was Shannon, maybe in that piece, who said, this is purely syntactic. This does not get at the semantics of what this actually means. And if we think about. So this paper, neurodynamical computing at information boundaries, it's because there's different types of information and they are transformed across the boundary of an organism. And so you have, you know, the.
[00:57:11] Speaker C: But now you're not using information in the Shannon sense, you're using it.
[00:57:14] Speaker A: Right, right. So what is different about information in a biological cognitive organism? It's that organisms construct a boundary. It's, you know, it's our skin, but it's also, you know, our exteroceptive senses, and we have ways of taking in information. You have the whole universe of sensory input coming in, but then you also have the internally generated universe of goals and drives.
Right. So organisms are in constant conversation with the environment. We depend on the environment for energy. That's why foraging is such a fundamental problem for animals. Animals, fundamentally, what defines them is that they move, and they move in order to forage and find food and energy.
[00:57:57] Speaker C: And then shelter so they can find more food. So they can move more. So they can find more.
[00:58:02] Speaker A: Yeah, well, it turns out ecologically there's a lot of niches that open up if you can move through the environment. Right. Otherwise you're a coral or you're a sea squirt attached to a rock somewhere, or you're a filter feeder.
[00:58:17] Speaker C: People will take umbrage to the idea that those don't move, by the way, some people will, even plants. But yes, I get the point.
[00:58:25] Speaker A: There are sessile animals. So I did not mean to offend the sessile organism community.
But movement is fundamental to all of this.
And so it's that dynamical coupling at those informational boundaries that allows the goals to basically stream against the incoming sensory inputs along that kind of hierarchy, of both perception ascending and the hierarchy of drives and movement and behavior control descending.
So that was our perspective. You can think of it as related to the predictive processing framework, like Karl Friston's unifying theory, where he sees the brain as a distributed, internally generated feedback model, and you're canceling out prediction errors as they ascend with top-down expectancies. And then there's trade-offs that are governed by his conception of free energy.
But that unifying theoretical framework has had trouble gaining traction about making direct predictions about what people should be doing in neuroscience. What type of experiment should you design to figure out, oh, this particular function operates this way within the predictive processing framework in a way that's distinguishable from some other framework.
I'm just trying to step back and not put a name on things. But fundamentally, organisms construct meaning through kind of managing, basically, the entropy at this interface. And so maybe that's prediction error, maybe it's some other quantity. But you need to manage entropy. Fundamentally, that's what you're doing thermodynamically; we're far from equilibrium. That's the whole game.
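One standard way to write down the kind of quantity being gestured at here is the variational free energy of the predictive processing literature; this is the textbook decomposition, not a claim about the specific proposal in Joe and Grace's paper:

```latex
% One standard formulation of variational free energy (textbook form): for
% observations o, hidden states s, generative model p, and recognition density q,
\begin{align}
  F &= \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right] \\
    &= \underbrace{D_{\mathrm{KL}}\!\left[q(s) \,\|\, p(s)\right]}_{\text{complexity}}
       \;-\; \underbrace{\mathbb{E}_{q(s)}\!\left[\ln p(o \mid s)\right]}_{\text{accuracy}}
    \;\ge\; -\ln p(o).
\end{align}
% Minimizing F keeps surprise, -ln p(o), bounded, which is one formal version of
% "managing entropy at the interface" between organism and environment.
```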
[01:00:15] Speaker C: And that's the control theory aspect of it.
Is that where that comes in as well? So what I was going to say originally is injecting meaning back into neuroscience is not the default. And the default to neuroscience is this reductionist brain is a computer. And then we can go from sensation back to, and we can disregard goals and meaning and purpose.
That's kind of been the default position of most neuroscience for a long time. Although in the paper and, and elsewhere it's pointed out like that early cyberneticist movement was more about control.
So I guess that's why I'm asking, is that where the control theory aspect comes in?
[01:01:04] Speaker A: So that ties into. So in this paper, which again was our perspective, we found the most, I guess, simpatico framework out there to be what's called perceptual control theory. And so this is an alternative branch of cybernetics, essentially from the 50s and 60s, that was kind of brought home, or initiated, by Bill Powers in the 60s and 70s.
[01:01:28] Speaker C: Yeah, but the whole. So like Henry Yin, whom you cite in the paper and who's, you know, he's been on this podcast, like many of the references in your paper would be.
[01:01:37] Speaker A: You influenced us too, Paul.
[01:01:39] Speaker C: Well, just by having, maybe by having people. Yeah.
[01:01:41] Speaker A: Through the podcast.
[01:01:42] Speaker C: Thanks. That's awesome.
Yeah, it's awesome to see so many references in a paper and think, oh, that person's been on the podcast, that's the person's brother. So I just mentioned Henry Yin, and he was saying one of the problems with early cybernetics research, and control theory in general actually, is that the reference signal of a machine is external to the machine, whereas we have internal reference signals, and we're trying to control our perceptions to match those reference signals. And that's a fundamental difference, and that's what neuroscience is missing. And so I don't even remember my question. But that is what you speak to in the paper as well, right?
[01:02:28] Speaker A: Yeah. So Henry's written a couple book chapters with this perspective and they're, they're kind of bomb throwing chapters in a way.
And it's, I think it's helpful to have strong opinions out there. Right. Because it really makes you think, okay, this kind of sounds interesting, it's provocative, but where does it go wrong? Or does it go wrong? So that's kind of reading that, and reading. I'm not a motor systems person, but I have passing familiarity with the motor control paradigms, those theories coming out of the 90s and 2000s: optimal feedback control, conceptions of motor commands, the theories and models about efference copy and corollary discharge systems. All the traditional motor control frameworks were based on building up more and more detailed and refined internal models, basically forward models, to predict the consequences of action and movement, then using that to evaluate different commands and behaviors, and then putting that in this much larger, much more complicated control loop. And PCT, or perceptual control theory, is appealing because in a way it kind of reverses that. It says, no, all that matters is that you're making comparisons at each level in this hierarchy, because you're making direct perceptual comparisons. And so if you have a direct perceptual goal at the highest level, then those reference points come down, they're compared to the ascending perceptual input, and then the descending reference to the next level gives you what you need, and then.
[01:04:27] Speaker C: So eventually filters out through your muscle actuators and your movements in the world.
[01:04:34] Speaker A: But it's not eventual because it's all simultaneous.
Yeah, everything.
It's a staged flow of control signals, right? Of sensory and control signals across a hierarchy.
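[Editor's note: a minimal sketch, in Python, of the hierarchical perceptual-control idea being described: two levels that each compare a perceived variable to a reference, with the higher level's output serving as the lower level's reference, and everything updating in the same time step. The toy body dynamics, gains, and variable names are illustrative assumptions, not the model from the paper or from Powers.]

# Two-level perceptual control sketch: each level compares a perception to a
# reference; the higher level's output is the lower level's reference.
# All levels and the "body" update together on every tick.
dt = 0.01
x, v = 0.0, 0.0            # position and velocity of a simple "body"
goal_position = 1.0        # top-level reference (an internally generated goal)
k_high, k_low = 2.0, 10.0  # illustrative loop gains

for step in range(2000):
    # higher level: perceives position, outputs a velocity reference
    velocity_reference = k_high * (goal_position - x)
    # lower level: perceives velocity, outputs a "muscle" command
    command = k_low * (velocity_reference - v)
    # environment / body dynamics close the loop
    v += dt * command
    x += dt * v

print(f"final position: {x:.3f} (goal: {goal_position})")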
[01:04:48] Speaker C: I meant individual in terms of time, where one signal flow starts and it takes time to propagate. Not that there's a central organizer that says go, and then it all starts from nothingness. Because, yes, part of what you push for also is this consideration that perception and action cycles are continuous flows. Right.
[01:05:11] Speaker A: And so that phrasing kind of ties into this conception of everything as being linear input, output.
[01:05:18] Speaker C: Right?
[01:05:18] Speaker A: So if you have a cycle, it's like, okay, first you're at this step, and then you're at the next step. So you're at sensation, then you're at cognition, then you're at motor commands, and then you're at behavior. And then that changes the pose, the orientation of the organism with respect to the environment, which changes the sensory inputs. And now you're back at the beginning of the cycle.
And so if you have a very complex computational forward model in that control loop, then you have to imagine that the delay of computation is now a delay in your control loop. And from a control theory perspective, from a control engineering perspective, the more delays you have, the weaker your control is.
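[Editor's note: a small Python illustration of the point about delays: the same proportional feedback loop run with different amounts of pure delay in the feedback path. With no delay it settles cleanly; with more delay it overshoots and eventually goes unstable. The gains, step size, and delay values are illustrative.]

# "More delay in the loop means weaker control": a proportional controller
# tracking a set point, with its feedback measurement delayed by a fixed
# number of steps. Parameters are illustrative.
from collections import deque

def run(delay_steps, gain=3.0, dt=0.05, n_steps=400):
    x, target = 0.0, 1.0
    buffer = deque([0.0] * (delay_steps + 1), maxlen=delay_steps + 1)
    worst_overshoot = 0.0
    for _ in range(n_steps):
        buffer.append(x)                      # newest measurement goes in...
        delayed_x = buffer[0]                 # ...the oldest one reaches the controller
        command = gain * (target - delayed_x)
        x += dt * command
        worst_overshoot = max(worst_overshoot, x - target)
    return x, worst_overshoot

for delay in (0, 5, 15):
    final, overshoot = run(delay)
    print(f"delay={delay:2d} steps  final={final:.3f}  overshoot={overshoot:.3f}")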
Yet one of the, I think the most dispositive or fascinating properties of animal behavior is that it's really good. It's really resilient.
Animals can accomplish their goals.
[01:06:16] Speaker C: Not always, but they're very good at it.
[01:06:19] Speaker A: Right. But compared to the variability of the observed behavior, the robustness of accomplishing goals far outstrips the variability of the actual movement. It seems like, wait a second, how is any of this possible? How is it possible for a rat to make its way through a very complicated burrow with basically no light and only a small set of sparse cues?
But it can navigate that burrow really well.
And so it comes down to, well, maybe you don't have all this complex computation going on in the loop. And so this is a conversation that I've tried to have over the years with people working in motor systems, and they think either I have the wrong idea, or Henry has the wrong idea, or actually all of their theories already encompass this idea, so don't worry about it. But it seems to me, and this is my personal scientific opinion, that this is an open question of forward versus inverse models, of ascending prediction error comparisons versus perceptual reference point comparisons up and down the hierarchy.
And that's some of the discussion that I wanted to open up at the workshop. So from BRAIN's perspective, I'm not going to impose my views on this, but I see that that's an important conversation, and I think it will open up a lot of potential opportunities for driving theory forward. And data is a part of it. So the reductionism is obviously the molecularly characterized cell type atlases of whole brains, the very fine-grained connectomics data sets like the FlyWire data set that was just released, which was BRAIN supported on a number of grants. And there's more to come. We launched the BRAIN CONNECTS program last year, and we'll start seeing data from those projects in the next few years.
Lots of exciting stuff to come. But that's obviously from a very reductive approach to neuroscience. Break things down so we can see everything that's there.
But I think that does need to be in the loop with this more holistic way of thinking. And so I think that's where there was a lot of talk about digital twins, multiscale biophysical modeling and then thinking about different ways of putting this in the loop with behavioral neuroscience and different ways of understanding.
[01:08:56] Speaker B: That was definitely a surprise for me at the Neuro AI Workshop.
[01:09:01] Speaker C: What's that?
[01:09:01] Speaker B: That there were so many talks about digital twins. Oh, even in sessions three and four, it seems like the community is really ready and really wants digital twinning in their respective research areas.
[01:09:14] Speaker C: What's a digital twin and why do we want one?
[01:09:18] Speaker B: Well, that's another definition question.
[01:09:21] Speaker C: Well, you don't have to define it.
[01:09:23] Speaker B: But roughly, there actually is a definition that the National Academies of Sciences, Engineering, and Medicine put out. I have it in front of me. I'll read it to you. A digital twin is a set of virtual information constructs that mimics the structure, context, and behavior of a natural, engineered, or social system (or system of systems), is dynamically updated with data from its physical twin, has a predictive capability, and informs decisions that realize value.
The bidirectional interaction between the virtual and the physical is central to the digital twin. That's it, that's the official definition.
And I think people have their own definitions.
[01:10:12] Speaker C: Yeah, I'm sorry, I asked for the definition. Just kidding.
[01:10:16] Speaker B: And they were using their own definitions, which are subsets of this official definition. And this actually was a question that came in by email. So I was surprised to hear so much digital twin talk, and I think that's potentially a new, exciting area that the NeuroAI workshop participants can continue to engage in.
[01:10:41] Speaker A: I think there's an important continuum there as well. So we heard some of that discussion in the first session of the workshop between neural foundation models on one side and digital twins on the other.
So these are both ways of using large-scale neural and behavioral data, but they have different goals. So a neural foundation model is kind of like foundation models in AI, where you want to have a base model from which you can generalize to downstream tasks and application-specific domains, or to answer particular questions.
And so digital twins kind of parsing. The definition that Grace just gave is really more focused on using lots and lots of data to make very clear predictions about a particular individual system. And so it's kind of individualized, or in the health context, it can be personalized.
[01:11:43] Speaker C: And you can test hypotheses about the natural system using the digital twin.
[01:11:48] Speaker A: Right. And the idea is it evolves with the system you're studying. So if you have a digital twin of, let's say, a mouse, and the mouse is in a particular task, you can be running the digital twin model in silico in parallel with the actual experiments. And now you've got, you know, the title of Patrick Mineault's talk, closing the loop with virtual neuroscience. So closed-loop neuroscience: we've got an in silico ghost, or simulation, essentially, of the actual animal in the experiment. And then you can do very fine-grained, real-time predictions and modulation. And the idea is that that should be a very powerful approach.
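[Editor's note: a toy Python sketch of the "dynamically updated with data from its physical twin" clause in the definition above: a simulated physical process with a drift rate the twin does not know, and an in silico twin whose state and parameter get nudged by each new measurement so its predictions stay aligned. This is a generic data-assimilation loop with made-up numbers, not any platform discussed at the workshop.]

# Toy "digital twin" loop: the physical process evolves with an unknown rate;
# the twin predicts each measurement, then is corrected by the observed data
# (the bidirectional virtual-physical update in the definition above).
import numpy as np

rng = np.random.default_rng(1)
dt = 0.01

true_rate = 0.8                    # parameter of the "physical" system
physical_state = 1.0
twin_state, twin_rate = 1.0, 0.5   # the twin starts with a wrong parameter

for cycle in range(2000):
    # physical system evolves (with noise) and is measured
    physical_state += true_rate * dt + rng.normal(scale=0.005)
    measurement = physical_state

    # twin predicts the measurement, then assimilates it
    prediction = twin_state + twin_rate * dt
    error = measurement - prediction
    twin_state = prediction + 0.5 * error   # correct the twin's state
    twin_rate += 20.0 * error * dt          # slowly correct the twin's parameter

print(f"twin rate estimate: {twin_rate:.2f} (true rate: {true_rate})")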
[01:12:26] Speaker C: Yeah. I mean, you can imagine all sorts of things, like tracking changes through development and across the lifetime. Not necessarily in humans, because that's a really long longitudinal study, but you can do it on a faster time scale with something like mice.
[01:12:41] Speaker B: Well, it actually came up during Kai Miller's presentation that he, as a functional neurosurgeon, would like to have digital twinning in the future.
[01:12:50] Speaker C: Sure. Yeah. But I just mean the particular idea of tracking over the lifetime. But if he had that in his surgical suite, then he could test things very quickly and then decide whether or not to implement some surgical technique.
[01:13:04] Speaker B: Exactly.
That was really a conversation I wasn't expecting from session four, but it was very insightful.
[01:13:19] Speaker C: Those are the fun things about workshops, when something like that surprises you.
[01:13:22] Speaker B: Yeah. And also the combination of Chris Rozell and Kai Miller on the same discussion.
Both are very clinically savvy, and having them take opposing views on neuromorphics in session four was exciting. And when Joe brought up the concept of a hardware lottery in the context of Shannon information theory, it kind of reminded me how neurotech certainly could also suffer from the hardware lottery, given how hard it is to get devices approved.
And so we're often just stuck with what's approved and not what's necessarily the best.
[01:14:01] Speaker C: I'm aware of our time, and I want to make sure that we talk about the. So one of the interesting and maybe surprising things, because the paper does so much, is that you end up arguing for a base layer of neural computation. Right. Like we talked about before. So, and you don't have to define it, but what is a base layer of neural computation, roughly? And then why do we need one? Why do we need to determine what the base layer is?
[01:14:31] Speaker A: Well, okay. Well, what I call the base. This is not a term that I coined; it's just that I referred to it as the base layer in this paper. But that kind of came out of thinking. You know, I was reading some philosophy, philosophy of computation.
[01:14:43] Speaker B: What?
[01:14:44] Speaker A: You know, what are the different types of computation? How do you do computation in physical systems and dynamical systems? Right.
A system of ODEs, ordinary differential equations, and you just evolve them forward with, you know, Runge-Kutta or whatever your algorithm is.
Can that do computation? Can continuous dynamics compute?
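[Editor's note: a minimal Python example of "just evolve the ODEs forward with Runge-Kutta": two rate units with mutual inhibition, integrated with a classic fourth-order Runge-Kutta step. Whichever unit receives the stronger input ends up active and suppresses the other, which is one simple sense in which continuous dynamics can be said to compute a decision. The equations and parameters are illustrative, not from the paper.]

# Two mutually inhibiting rate units integrated with classic RK4. The unit
# with the stronger input wins the competition: a tiny "decision" made by
# continuous dynamics. All parameters are illustrative.
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-4.0 * u))

def derivatives(state, inputs, w=2.0):
    # each unit is driven by its input and inhibited by the other unit
    x, y = state
    dx = -x + sigmoid(inputs[0] - w * y)
    dy = -y + sigmoid(inputs[1] - w * x)
    return np.array([dx, dy])

def rk4_step(state, inputs, dt=0.05):
    # classic fourth-order Runge-Kutta update
    k1 = derivatives(state, inputs)
    k2 = derivatives(state + 0.5 * dt * k1, inputs)
    k3 = derivatives(state + 0.5 * dt * k2, inputs)
    k4 = derivatives(state + dt * k3, inputs)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([0.1, 0.1])
inputs = np.array([1.1, 1.0])   # unit 0 gets slightly stronger evidence
for _ in range(400):
    state = rk4_step(state, inputs)

print(f"steady state: unit0={state[0]:.2f}, unit1={state[1]:.2f}")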
So there's a lot of really interesting questions around computation. And brains are a particular kind of physical, dynamical, material, chemical system.
So I think just again, falling into the default concept of the neurocentric framework for understanding brains is maybe leading us astray. Obviously, throughout biology, cells are super important. So neurons are super important to brains. But also, neurons are not the only cells in brains, as we know. There's all sorts of glia: astrocytes, oligodendrocytes, and microglia, which have important roles in structural plasticity.
Looking beyond that, digital computing has gotten us into thinking about computational systems as, okay, there has to be a transistor, some unitary element that's the lowest-level thing. And so if you think about a silicon disk with chips that have been burned into it by a photolithographic process, all it is is material carved into silicon and other types of materials.
And so that's the base layer, like the transistors in the CPUs that we use in all of our computers and phones right now.
That's the base layer of computation for digital computing on conventional CMOS processes.
So are brains just like that? Are cells like transistors? Is that it?
[01:16:42] Speaker C: No, cells are binary event action potential generators. Right. I think McCulloch and Pitts were right, and nothing has changed.
So we should still consider them that. Right?
[01:16:54] Speaker A: No, I mean. So McCulloch and Pitts. I mean, that was fundamentally wrong.
[01:16:57] Speaker C: Right.
[01:16:57] Speaker A: It's not a binary signal. It's an event.
A spike is an all or none event.
And so some people say, oh, okay, so it's spike timing. It's like we just need a highly precise, you know, what's the timestamp on that spike versus that spike? And then that'll tell us everything. Well, no, because you don't need an absolute time stamp. You need to know what the role of that one spike is in this dynamical system. Because that spike is propagating to downstream neurons. And at some point you hit recurrent connections and it feeds back and then you go up a layer to the next higher level of cortex or whatnot.
Everything is causal, dynamical, interconnected. Yeah, so you can't just say, oh, it's a one or a zero. Obviously, that inspired von Neumann, and it's an amazing insight, because that's why we have digital computing technology now. But it's not how brains work, as Walter Freeman pointed out, and as.
[01:17:48] Speaker B: We also heard from Yiota, there's all this dendritic computing that's happening at the dendrites. Right. And so there's just, I think, so much more richness that we are now aware of that wasn't available to McCulloch and Pitts.
[01:18:02] Speaker C: That's true. But now, Grace, you just went down a level physically, from the point neuron to dendritic computing, which would make Yiota very happy.
But you guys want to go up a level and talk about the role of oscillations in this dynamical coupling. And I also found myself wondering, I don't know how engineers think about oscillation.
[01:18:27] Speaker B: The other thing is traveling waves that we really didn't talk much about at this workshop.
[01:18:31] Speaker C: That's true. Which I'm surprised Terry didn't bring up because he likes to talk about traveling waves.
[01:18:36] Speaker B: Yes. And he actually sees oscillations and traveling waves to be one and the same for many parts.
[01:18:42] Speaker C: How would they not be? I mean, an oscillation has to have some spatial.
[01:18:47] Speaker A: Well, oscillations repeat. You could have a wave that travels that's not being generated by an oscillating generation process.
[01:18:56] Speaker C: Okay, yeah, yeah, that's fine. You can separate them. But that's like having one wave in the ocean, which is an oscillation.
Once one wave leaves, the other wave has to. I don't know, they are intertwined in general, but I could see that you could have just one, right.
[01:19:11] Speaker A: Space and time are coupled in the brain, right? And so fast oscillations. I mean, there's a mathematical formalism called hierarchy theory, which basically says that if you have oscillations at a fast frequency, you can only maintain coherence over a small amount of space. If you have oscillations at a slow frequency, you expand the region of space over which you can maintain coherence with that clock, with that slower oscillation.
And at least mammalian brains have a really well-preserved set of neural oscillations, in different parts of the brain, at different times, that interplay with each other in different ways, at base frequencies that sit at these really interesting incommensurate ratios with each other. So it's almost like nature needed to find half a dozen different frequency bands that didn't interfere with each other, or minimally interfered with each other, because then you can have a theta and you can have a gamma, and you can nest seven of those gamma cycles in one theta cycle. And then that becomes an interesting packet of coordination. So it's not the spike timing. It doesn't matter that, oh, neuron A fired at time t0 within theta cycle, you know, whatever, X.
It's not that absolute index of time that matters. It's the fact that, oh, you got this, you know, this packet of activity that's carved out by the sequence of gamma oscillations, or gamma cycles, within this theta cycle. And that theta cycle is within this larger set of slower rhythms.
And it's hard to ignore these laws, almost. So there's this relationship between space and time, we have these conserved oscillations, and they do govern the timing of spikes, the activity of neurons. And so there's this feedback loop that kind of goes up a level to a collective behavior like an oscillation, and then, through ephaptic effects or through just other modulations, they entrain and feed back into causal mechanisms at the cellular level.
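[Editor's note: a small Python sketch of the nesting being described: a roughly 7 Hz theta rhythm whose phase gates the amplitude of a roughly 49 Hz gamma rhythm, giving about seven gamma cycles per theta cycle, plus a crude check that gamma amplitude depends on theta phase. The frequencies and coupling rule are illustrative; a real analysis would bandpass-filter the recorded signal and take a Hilbert transform rather than reuse the known components.]

# Theta-gamma nesting sketch: ~7 gamma cycles ride on each theta cycle, with
# gamma amplitude strongest at the theta peak. A crude phase-binned average
# shows the phase-amplitude coupling built into the toy signal.
import numpy as np

fs = 1000.0                              # sampling rate (Hz)
t = np.arange(0, 5, 1 / fs)              # five seconds of signal
theta_freq, gamma_freq = 7.0, 49.0       # ~7 gamma cycles per theta cycle

theta_phase = 2 * np.pi * theta_freq * t
gamma_envelope = 0.5 * (1 + np.cos(theta_phase))              # gated by theta phase
gamma_component = gamma_envelope * np.cos(2 * np.pi * gamma_freq * t)
lfp_like_signal = np.cos(theta_phase) + gamma_component       # composite trace

# bin the gamma component's magnitude by theta phase (the shortcut here is
# using the clean gamma component instead of extracting it from the composite)
phase = np.mod(theta_phase, 2 * np.pi)
edges = np.linspace(0, 2 * np.pi, 9)
mean_amp = [np.abs(gamma_component[(phase >= lo) & (phase < hi)]).mean()
            for lo, hi in zip(edges[:-1], edges[1:])]
print("mean gamma amplitude per theta-phase bin:", np.round(mean_amp, 2))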
[01:21:35] Speaker C: So then I want to ask what the base layer is like, what the proposal is for the base layer. But maybe even before that I could sort of list off the three requirements that you posit for a base layer to be useful computing layer. And by the way, this is a.
I should also say that the proposal is a non-reductionist mechanistic account of neural computation, which is an interesting thing in itself. Okay, so the requirements that you state for a base layer, and I'm reading directly from the paper here, are: one, that it encompasses a macroscale hierarchical control structure over which it implements comparator, error, and output functions. So that's the control theory part of it. Two, that it adaptively controls access to internal and external information flows generated by physical embodiment and situated embedding in a causal environment. So that's the sort of almost ecological psychology interaction between these continuous flows.
And then three, support discrete neurodynamical states and adaptive high dimensional state transitions across timescales of neural circuit feedback.
And then you list some specific kinds of timescales. And I suppose that links into the reason why oscillations are important. These nested structures of oscillations, the spiking information carried within those oscillations, and how they're interacting across different timescales and structures and flows.
[01:23:15] Speaker A: All right, mouthful, but wow, that's an ambitious framework.
[01:23:19] Speaker C: It's super ambitious. It is super ambitious. This is like a 30 year brain initiative. That's the other thing is reading this paper, it's like, oh, this is like a whole textbook or a whole four special issues in some journal condensed into one thing. I mean, and yeah, it's ambitious.
[01:23:38] Speaker A: Well, I should make clear this document has nothing to do with the Brain Initiative views, perspectives, priorities, plans or any of that.
[01:23:46] Speaker C: But you come out thinking, oh my God, where do you begin? How do you start? There's so much to do.
[01:23:53] Speaker A: Well, okay, I wrote this a couple of years ago and I'm not sure I even remember the three criteria that you just listed. I'm looking at the paragraph now. But this all came out of, again, kind of a frustration, and just kind of wondering, well, what if the whole neurocentric paradigm is wrong? Rafa Yuste had a review or perspective paper from a number of years ago saying a network-centric paradigm is what we.
[01:24:22] Speaker C: Right. He went from the neuron doctrine saying that's old, didn't work. Now we're in a population doctrine era he was advocating for, which is kind of where the field is right now. Like okay, it's all population dynamics and manifolds and so.
[01:24:35] Speaker A: Well, and John Hopfield just got the Nobel Prize. And so, yeah, there has been movement in that direction. Fundamentally, there's a lot of technological determinism here. The better our tools get, and the BRAIN Initiative is certainly behind a lot of that, the more neurons we're going to record at better fidelity and higher throughput, and the more you can see. And so we're getting beyond single neurons because we can record millions of them now, but we still need to understand what's happening. And so now the focus is on low-dimensional representations. Right, but what's a low-dimensional representation? Essentially it's an attractor. It's a small subset of a high-dimensional space that's been carved out. And you can say, okay, effectively, for this million neurons I'm looking at, the state of the system is somewhere on this two, three, maybe four dimensional, but low-dimensional, manifold, and you can basically understand it. If you can map the axes, the dimensions of this low-dimensional manifold, onto task requirements and constraints, then it's like, oh, that's explainable. I know what's happening in this neural system.
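[Editor's note: a minimal Python sketch of the "many neurons, few dimensions" picture: 200 simulated neurons driven by two latent task variables plus noise, with PCA showing that a couple of components capture most of the variance. A toy illustration of the manifold idea, not an analysis from the paper.]

# "Many neurons, few dimensions": 200 simulated neurons driven by 2 latent
# task variables plus noise. PCA (via SVD) finds that a couple of components
# explain most of the population variance.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_timepoints, n_latents = 200, 1000, 2

latents = np.column_stack([np.sin(np.linspace(0, 8 * np.pi, n_timepoints)),
                           np.cos(np.linspace(0, 6 * np.pi, n_timepoints))])
mixing = rng.normal(size=(n_latents, n_neurons))
activity = latents @ mixing + 0.3 * rng.normal(size=(n_timepoints, n_neurons))

centered = activity - activity.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
explained = singular_values**2 / np.sum(singular_values**2)
print("variance explained by first 3 PCs:", np.round(explained[:3], 3))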
But then there's questions about, well, okay, the whole brain isn't a single attractor. It's not one giant Hopfield network.
[01:25:55] Speaker C: I don't know, some people might disagree, but okay, go ahead.
[01:25:58] Speaker A: But I'm saying, well, I think there are attractor-like dynamics everywhere, but it's complex and heterogeneous, modular to a degree, and governed transiently, dynamically, by lots of things at different levels of organization, including things like oscillations and things like traveling waves. So you have coherent organization in time, coherent organization in space. A lot of it's quasi-hierarchically organized, because of the flexibility. People will think about natural intelligence and its agility, its flexibility. Animals are optimizing multiple objectives simultaneously. How do you do that? Well, you do that by flicking on and off different sub-networks at different scales, adaptively, in the right way. It's like, I'm trying to do three things at once: I want to get food in an hour, I'm trying to wrap up the current sentence I'm speaking, et cetera. You've got these multiple goals in mind. How do you do that? You need to activate different attractors in different ways, in a complementary pattern, to achieve the goals of the organism.
So that's kind of where the spatiotemporal organization comes in, but it's governing kind of a heterarchy, maybe, of attractors or quasi-attractors. And so that's where I went in this paper, thinking about something. I kind of regret the terminology, but I called them tokens, or causal tokens. I was just trying to think of, like, how do we think of an attractor? Not as this, like, oh my gosh, there's this huge, you know, task manifold, the animal goes into a maze.
[01:27:37] Speaker C: You know, the manifold needs to come up, and then it needs to go along the manifold. Yeah.
[01:27:41] Speaker A: And then all the activity projects onto that manifold and you can figure it all out. And then it's like, you know, if there's a go/no-go, then, okay, the selection vector rotates through it, and then boom, the behavior happens. And you have kind of like David Sussillo's work, going back to that 2013 paper, which I think is a great idea, and so there's something there. But the interpretation of what that means: what is the selection vector versus the command vector, or whatever the other space was there? How do we think about communication subspaces? How do they come on and off adaptively in service of goals?
So my idea was that causal tokens are kind of like these little quasi-attractors, and they can exist at different scales. Quasi-attractors because, like, you don't get stuck there. The system doesn't get stuck there. You always need instability, you need destabilization. You know, if you fall into an equilibrium state and you can't get out, you're dead.
But, you know, as we know, cognition keeps moving. It's always, always moving. So you're always finding little attractors and then being bumped out of them. You know, there's a competitive process, maybe.
But it's just trying to think of, like, what is the base unit of computation, if that's what's happening.
[01:28:52] Speaker C: So what is the base unit of computation that you're advocating for, the base layer?
[01:28:57] Speaker A: Well, I kind of went back to Hebb, Hebb and Karl Lashley.
[01:29:03] Speaker A: And I should say, this was a great review and perspective put together by Drew Maurer and Lynn Nadel from a few years ago. It's cited in the paper, where they really reconceptualize what Hebb was talking about. Hebb and his mentor Karl Lashley really thought deeply for many decades about what it meant for networks of neurons to be connected to each other. What are they doing?
Looking at persistent activity. Well, they're self-reinforcing patterns of activity. So basically that's the causal base layer that I was interested in. If you have a supraneuronal group, a group of multiple neurons, the thing is there's a loop of reverberant activity going through synapses within that interconnected loop at some level, and it's self-stabilizing. You can, like, ignite it essentially, and then the connections are such that the nonlinearities all line up and you get self-reinforcing, self-supporting, self-sustaining activity.
That's the basis, that's the base computational unit that I was speculatively putting forward in this paper.
And then the nice thing about that is you can sprout loops off of it, right? There's always another connection. You can always reconsolidate those connections in a different way. That's maybe structural plasticity: maybe there's a side loop that's subthreshold, but then one thing happens experientially for the animal, and that connection gets twisted up a little bit, and now the nonlinearities line up in a different way and the loop expands. And so now you're at a higher-scale causal token, or whatever I was calling it. This quasi-attractor now means something because it's incorporated this new correlation from the environment. And so maybe that's a control signal or control parameter that was updated within the hierarchy, or something like that. But it's a self-sustaining bit of activity, governed by all the spatiotemporal structure that we were talking about with oscillations.
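[Editor's note: a minimal Python sketch of the "ignition" idea: a handful of mutually excitatory rate units with a saturating nonlinearity. A brief input pulse pushes the loop over its threshold, and the recurrent connections then keep the activity going after the pulse ends. The weights, threshold, and time constant are illustrative assumptions, not the paper's model.]

# "Ignition" sketch: five mutually excitatory units with a saturating
# nonlinearity. Before the pulse they sit near zero; a brief kick ignites the
# loop, and the recurrent drive sustains the activity after the kick ends.
import numpy as np

n = 5
W = 0.4 * (np.ones((n, n)) - np.eye(n))   # each unit excites the other four
rates = np.zeros(n)
dt, tau = 1.0, 10.0

def activation(x, threshold=0.5, gain=8.0):
    # sigmoidal nonlinearity: near zero below threshold, saturating near one above
    return 1.0 / (1.0 + np.exp(-gain * (x - threshold)))

for step in range(300):
    pulse = 1.5 if 20 <= step < 30 else 0.0   # brief external kick
    drive = W @ rates + pulse
    rates += (dt / tau) * (-rates + activation(drive))
    if step in (15, 25, 35, 299):
        label = "on " if pulse else "off"
        print(f"step {step:3d}  pulse {label}  mean rate = {rates.mean():.2f}")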
[01:31:04] Speaker C: It's fun to see you light up like that when you're describing it. You look excited and sounded excited talking about it.
[01:31:09] Speaker A: I think you reactivated my neurodynamical state when I was writing this paper.
[01:31:15] Speaker C: Yeah, yeah.
So then. All right, we have just a few more minutes. Grace, it looked like you might have wanted to jump in there also or. No.
[01:31:23] Speaker B: Well, I just wonder how if any of this could be measured experimentally.
[01:31:27] Speaker C: Oh, good God. We don't have another two hours. Or were you planting? Was that a plant question?
[01:31:36] Speaker B: It's a quick question for Joe, because the idea of measuring every synapse came up yesterday.
[01:31:42] Speaker C: I mean, well, you could do that. You could do that. That's very straightforward, assuming you had the right technology. Right. But yeah, that's a great question, Grace.
[01:31:51] Speaker A: Well, you brought it up earlier, Paul, as well. And so this does tie in directly. Right. So I'm not saying my idea here that I just went through is absolutely right. I mean, it's just kind of where I went is like this seems like the most likely useful framework for thinking about it.
But if this is true, then the actual particular value, the precise synaptic weight of any given synapse is almost immaterial.
[01:32:17] Speaker C: Okay, that's why you're bringing that up. Yeah, because I questioned that too. I actually pushed back in that discussion.
What did I say? Something about if you just measuring something doesn't give you the theoretical blah, blah, blah. I can't remember what I said, but then I got kind of pushed back for saying that and I was like, oh, I didn't realize that what I was saying was even controversial. But okay. Yeah, so thanks for bringing that up again.
[01:32:41] Speaker A: Right, yeah. So in the paper we say that if this hypothesis were true, that what matters is these self-sustaining little clumps of neurons that can expand outward adaptively, then this is completely antithetical to what we see in AI models based on artificial neural networks, where, when you distribute an AI model or a transformer LLM, it's a binary blob full of very precise weights. And then the whole game is to see how far you can quantize those weights down and still preserve the functionality, so you can put these things on phones and, you know, home computers and all of that. So everything that matters is the weights. It's all in the weights and nothing else. The biases too. But if this hypothesis is right, then that doesn't matter, and to understand the brain we don't actually need to go around and measure every synapse, because they're wildly fluctuating anyway. It's highly volatile. Right.
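[Editor's note: a small Python sketch of the quantization point: uniformly quantize a stand-in weight matrix to fewer bits and measure how much a layer's outputs change. The exercise only makes sense if the precise weights carry the function, which is exactly the contrast being drawn. This is generic uniform quantization with made-up sizes, not how any particular model is actually shipped.]

# Uniform weight quantization sketch: how much does a layer's output change
# when its weights are stored with fewer bits? The premise is that the exact
# weights are what carry the computation.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(256, 256))   # stand-in weight matrix
x = rng.normal(size=(32, 256))               # a batch of inputs

def quantize(weights, n_bits):
    # snap each weight to the nearest of 2**n_bits evenly spaced levels
    levels = 2**n_bits - 1
    w_min, w_max = weights.min(), weights.max()
    step = (w_max - w_min) / levels
    return np.round((weights - w_min) / step) * step + w_min

reference = x @ W.T
for bits in (8, 4, 2):
    Wq = quantize(W, bits)
    rel_error = np.linalg.norm(x @ Wq.T - reference) / np.linalg.norm(reference)
    print(f"{bits}-bit weights: relative output error = {rel_error:.4f}")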
[01:33:37] Speaker C: Some people would argue against that because I've received pushback saying that exact same thing, that actually they're largely quite stable. And I guess we won't know until we measure every single synaptic strength.
[01:33:51] Speaker A: Well, you have to deconfound the effect: if you do have these self-sustaining quasi-attractors, then you're going to have synaptic loops which self-sustain and do maintain strong correlations over time, correlations that persist in the relative synaptic weights. You would expect that.
But it's not the weights that matter. It's only lining up the right set of non linearities so that that group fires in the way that it does and interacts with other tokens or other quasi attractors.
[01:34:23] Speaker B: Sounds like dynamics is what matters.
[01:34:26] Speaker A: It's largely dynamics, it's switching. Right, right.
[01:34:31] Speaker C: In a hierarchical and heterarchical fashion.
[01:34:36] Speaker A: Or just heterarchical. I think it encompasses all of it.
[01:34:39] Speaker C: Encompasses it. Guys, I have to go here in a minute again. This is one of those papers that I'm going to revisit and then feel guilty that I'm not reading every reference. And that stack that grows ever so larger of what we're supposed to be reading all the time.
But it is just so rich and I'm glad to point people to it. Well, congrats again on running a great workshop and I think a successful workshop and I really hope that you guys get some rest. Get some rest. And I know you have to sort of take in everything now and then reflect, but maybe hopefully that's a little bit more relaxing a process and then you can take a little vacation.
[01:35:22] Speaker B: Yes. And thank you so much, Paul, for coming to the various pre coordination meetings. I mean, I was so impressed at how hard everyone worked. Yeah, we got multiple abstracts, multiple versions of presentations. It was amazing that we had everybody share their files on the NIH box and you could see how people were changing their presentations in response to each other and it was just. Thank you so much to you and everyone else who really made this a great workshop.
[01:35:50] Speaker C: Oh, that's great.
[01:35:51] Speaker A: Thanks for having us.
[01:35:52] Speaker C: Yeah, thank you.
Brain Inspired is powered by the Transmitter, an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives written by journalists and scientists. If you value Brain Inspired, support it through Patreon to access full-length episodes, join our Discord community, and even influence who I invite to the podcast. Go to BrainInspired Co to learn more. The music you're hearing is Little Wing, performed by Kyle Donovan. Thank you for your support. See you next time.