[00:00:03] Speaker A: I mean, there is this, still unexplained in current neuroscience, enormous increase of cortical capacity in humans. And the obvious explanation would be language.
We don't even understand how the neurons work together to create grammatical sentences and so on. And this is something that we can find out, and this is important. But that doesn't mean that we now can build an automaton which does exactly the same thing, and that this automaton is then the same as us, so to speak.
If you have the building blocks from the brain, if you can build it like a brain, then we should at least be able to get away from these absolutely huge and wasteful, essentially dumb, huge models.
[00:01:10] Speaker B: This is Brain Inspired, powered by The Transmitter. Hi, everyone, it's Paul. Welcome to Brain Inspired. Gabriele Scheler co-founded the Carl Correns Foundation for Mathematical Biology. In fact, Carl Correns was her great-grandfather. Great, great, great. I can't remember. You'll hear from her. And he was one of the early pioneers in genetics. Gabriele is a computational neuroscientist whose goal is to build models of cellular computation, and much of her focus is on neurons. We discuss her theoretical work building a new kind of single neuron model. Like Dmitri Chklovskii a few episodes ago, she believes that we've been stuck with essentially the same family of models for a neuron for a long time, despite minor variations on those models. The model Gabriele is working on, for example, respects the computations going on not only externally via spiking, which is the traditional way models are built and the only game in town forever, but she also wants to respect the computations going on within the cell itself, within the cell membrane, and then even down within the nucleus. So this is in line with previous guests on Brain Inspired like Randy Gallistel, David Glanzman, and Hessameddin Akhlaghpour, who argue that we need to pay attention to how neurons are computing various things internally and how that affects our cognition. Gabriele also believes the new neuron model she's developing will improve AI drastically, simplifying the models by providing them with smarter neurons, essentially. So we eventually get to talking about that work on the single neuron, but we also discuss things like the importance of neuromodulation and her interest in wanting to understand how we think via our internal verbal monologue.
So connecting the language that we hear essentially in our minds with the process of thinking. We talk about her lifelong interest in language in general, what she thinks about large language models, why she decided to start her own foundation to fund her science, and what that experience has been like so far. Gabriele has been working on these topics for many years, and as you'll hear in a moment, she was there when computational neuroscience was just starting to pop up in a few places, when it was a nascent field, unlike its current ubiquity in neuroscience. You can find the links to Gabriele's work in the show
notes at braininspired.co, podcast 208. By the way, you may be listening to this in Montreal right now, if you're at Cosyne. I wish I was there with you. However, my name is on a poster there, so you should swing by and say hi to Aidan and Eric, my colleagues in crime, and check out our research. I hope you're having a good time at Cosyne, I hope that you are well, and I hope you enjoy my discussion with Gabriele.
So I vaguely remember when I became interested in the neurosciences, the cognitive sciences. Like many people, I wanted to understand consciousness, you know, subjective experience. But you have a slightly different interest that drove you into it. You wanted to understand how we think and how we speak. Is that correct?
[00:04:43] Speaker A: That's right. I mean, I wanted to understand how we think in the sense of this verbal thought, or inner monologue, that some people produce more or less of. I produced a lot of it. I was very introspective, and I wondered what is going on in my brain that I have these sort of conversations with myself.
[00:05:05] Speaker B: So you don't... okay, so you don't equate thinking with that internal monologue. That's just one facet of thinking that you're interested in.
[00:05:13] Speaker A: Exactly. I wouldn't say there's no other way of thinking. There are certainly preverbal parts; they're very important. And some people suggest that they actually think very explicitly in visual images. That plays a role too. So there are differences. But there is no doubt that all of us at times use internal speech or inner monologues. And certainly also when we speak, we all speak and explain something that is going on in our head.
[00:05:45] Speaker B: So I don't want to just get into sex differences right off the bat, but is it true? I mean, are there proportionately more males who think visually and females who think in language? Am I way off base there? Okay, yeah. But you have a high rate of internal thinking. But you used "preverbal" to categorize other thought. Is there post-verbal?
[00:06:13] Speaker A: No, I think it is more that when we speak, or when we acquire the ability for inner monologue at some point in our lives, often only around the age of six or so, even though we can usually speak by the age of three. So that is also a process where we start to internalize it. But we use it for thinking. We use it when we try to solve a problem. We begin to speak and say, oh, I should probably do this, or something like that.
I think verbal thought is really very important and very central for our experience as humans.
It is not some additional extra that you can leave out. I had a small conversation with Yann LeCun about it, because he thought cats are fine as a model. They are not. If artificial intelligence, as I would define it, is an attempt at modeling human intelligence, then no way. Because it is really the case that children who, in the past, could not hear, when you don't stimulate them sufficiently early with language, really have a deficiency in their intellect in some way. And that's why people have learned to do this very early.
It is important, I think, for our internal organization of our thoughts. The ability to resort to symbols and then do symbol manipulation.
Which is, I think, also what Piaget and Vygotsky and people like that have found: that you need object manipulation skills in order to learn grammatical sentences.
Because that is what you are doing. You are manipulating symbols, attaching them together, taking them apart, building an actual sentence like you would build something out of stones. And so all these abilities come together there. And I don't think it's so mysterious anymore. You asked me upfront whether I think that we could understand human language, and how. From my perspective, yes, I would say we can. We are close. We can understand it pretty well. It's not much more difficult than odor recognition, for instance, which flies can do. They get all these odorants, and then we understand in their brain how it goes through the different neuronal areas, like the antennal lobe and the mushroom bodies, where you have this sparse coding. We can understand how, out of this odorant environment, they build sort of their own world of odors, so to speak, which then influences their behavior. Of course, language is more complicated, but it has this mechanical aspect.
[00:09:43] Speaker B: Do you ever have this experience, though? I have this experience over and over where I will be thinking about something and a sentence about that thing will pop into my head. And I will think, that is so stupid that I'm using language to think about this. It doesn't buy me anything. And in fact, sometimes it gets in the way. I'm restricted by it. Do you ever have that experience?
[00:10:07] Speaker A: Well, actually, interestingly, for me language has been... I've used it differently. It's more empowering, I would say. It leads me sometimes from one thought to the next, or a certain thought pops up which I cannot name, but as I think about it, I can begin to put it into words for myself. Plus, the good thing is, once I have articulated it in words, and I don't even know how many people have this, I can better remember it.
[00:10:41] Speaker B: Oh, yeah, sure. It's almost like collapsing the wave function, right? So it's like not a real thought until you put it into words. And then, because of that symbolic, abstract nature of it, you can kind of look at it anew from afar, in a new light. And I guess that's why, when people say writing things down helps them think about it, it's because it concretizes some idea that you had that was vague, and then that can, I think, even change your own thinking?
[00:11:09] Speaker A: Yes, it helps with the memory, also your internal memory. That was the point I was making: that I believe as humans, we have organized our brains, especially our cortex, very much according to language and symbolic principles. And that is probably very different. I mean, there is this, still unexplained in current neuroscience, enormous increase of cortical capacity in humans. And the obvious explanation would be language, because that's what we developed. And as we developed language, our cortices got bigger and bigger and bigger, as if we suddenly can use all that memory which a cat cannot use. You have to access it. And in order to access it, it's probably structured and ordered, which I think symbols help with.
And it's like a hash table or so into your memory.
[00:12:11] Speaker B: Yeah.
[00:12:12] Speaker A: And that's, of course, since you raised the topic: current AI doesn't do that.
[00:12:18] Speaker B: What do you mean? Elaborate on that.
[00:12:20] Speaker A: Current AI does not build a structure built of symbols which references into complex theories or thoughts.
[00:12:34] Speaker B: Okay, we'll come back to the AI and large language models, which I know you're interested in.
One more random thought about language. I was asking my... so my son is 10, my daughter's 12, and we were driving around, and I'm frequently bothered when I see advertisements, for example. What is the name of this phenomenon where you cannot not read something when you see it? It's like language capture. What is the name of that?
[00:13:07] Speaker A: I don't... sorry, I don't know the word either. I only know that it's interesting if you're bilingual like me. I cannot shut this off in German, but I can shut it off in English.
[00:13:17] Speaker B: Oh, what do you mean?
[00:13:18] Speaker A: I can actually watch an English movie and decide not to listen, not to understand the words, just listen to the sounds.
[00:13:26] Speaker B: Oh, so that's verbal. I was thinking. I was thinking visual. But you're saying it happens verbally too, as people speak.
[00:13:33] Speaker A: I can decide to just let them speak and sort of not listen in to what they are saying.
[00:13:39] Speaker B: Just like the entirety of my listening experience in any language. Come on.
[00:13:43] Speaker A: Yeah, that is... I can choose to understand or simply listen to the sounds, but I cannot do this in German. It's probably embedded too deeply in my brain.
[00:13:54] Speaker B: Okay, so you had that early interest in understanding how we think, specifically in our language capacity. But have you kept the same sort of worldview? What I wanted to ask you is, well, what is thinking? Because my conception of thinking has changed over time. Have you kept the same sort of conception about how to go about thinking about thinking?
[00:14:16] Speaker A: Yeah, well, actually, no. My original idea was simply, I want to understand it from a scientific point of view. And to me, that is mechanical and mathematical.
And what has changed a bit is, after a couple of years, in my 30s or so, I wondered about the spiritual side of human experience. That was already the time when machine translation was on the horizon. And of course, you could use a synthesizer to create a violin sound and all these things; all this artificial was already in the air. And so people asked themselves, me too. And I came to the conclusion that even though we can rebuild all these experiences, and we can mechanically analyze them, and that is also very useful for us in many ways, especially in medicine, we do not capture sort of the essence of what is going on. Something else. And I said, you can link it to subatomic physics or whatever it is. I mean, a mechanical account from the physical point of view, that's Newtonian.
So that is really a 17th-century view.
But in terms of, as I said, language and thought, we don't even have that. We don't even have a mechanical, Newtonian account of what's going on. But now I think it's two different things. And if we have that, that's fine. That is a scientific point of view. And if somebody has an illness, if something is broken, if somebody has an aphasia or a dementia or whatever, that is important. But it is not the essence of who we are, how we communicate.
All this spiritual sort of extra, I think, is not captured. That is not all, so to speak. When I was younger, I wasn't even thinking about whether there could be a dichotomy between these. I thought once you've explained it in a scientific way, that's what it is. There's nothing else.
[00:16:35] Speaker B: So. But that has not really changed your approach necessarily, just how you think about it.
[00:16:41] Speaker A: Exactly. For the science, I think it makes no difference. It makes no difference, as I said, because we are so, so far behind.
We don't even understand how the neurons work together to create grammatical sentences and so on. And this is something that we can find out, and this is important. But that doesn't mean that we now can build an automaton which does exactly the same thing, and that this automaton is then the same as us, so to speak. It will remain an automaton which is mimicking certain aspects of our thought process. That is the difference, you see? And as I said, this is in contrast to Hinton and people like that, who seem to think that once they have an automaton, once they have a mechanical device which produces pretty much what a human produces, then the human is not different from this device.
[00:17:45] Speaker B: Yeah, I even came across Hinton the other day saying that it's already conscious, and...
Oh, come on, guy.
[00:17:55] Speaker A: Yeah, but I mean, what is understandable to me is when you go very deeply into it. There is this depersonalization effect, I think it's called in psychology, when you suddenly have the idea that everybody around you is some kind of automaton, so to speak.
[00:18:15] Speaker B: Yeah, yeah.
[00:18:16] Speaker A: And I think Hinton must have that problem. Well, if you can explain everything, then you don't understand that other people are actually still people and not, let's say, automata or so. So I think it is a psychological problem if you mix these things up.
[00:18:34] Speaker B: Yeah. So, I mean, I like that you maintain that distinction, because our modern scientific world is very mechanistic. It's like the machine metaphor: as soon as we've explained something in those mechanistic scientific terms, that's all there is. But you go beyond that. You're content with the idea that that is the best way to explain it scientifically.
[00:18:57] Speaker A: Exactly.
[00:18:58] Speaker B: But it leaves out.
[00:18:59] Speaker A: But science is not everything, so to speak. Exactly. But science can be very, very useful and helpful in many ways, no question. Much better to know mechanically what is going on than to simply know nothing at all.
[00:19:12] Speaker B: All right, so I was going to bring this up later, but let's go ahead and talk about it, because you have taken an alternative kind of path thus far. Or maybe it didn't start off alternative, but at some point... would you say that you left academia? I don't know how to phrase this exactly.
[00:19:29] Speaker A: We could say... well, I see it as an academic, nonprofit institution.
[00:19:34] Speaker B: Okay.
[00:19:34] Speaker A: But I was in a way more or less trying to recreate what I understood by academia, and also, I would say, how it used to be, because I think the world has changed in that respect.
When I was a student there was a lot more freedom and independence.
[00:19:54] Speaker B: When was this? Tell the listeners when this was and where it was.
[00:19:57] Speaker A: Oh, that was in Munich. And actually I had by chance obtained a job at a computer science company, Digital Equipment. They don't exist anymore. At the time they had money, and so I got a freelance position trying to do natural language processing. And the guy who hired me was himself a linguist, and so we understood each other well. For a year and a half they would just give me a computer and Lisp and Prolog and let me do something with it. At some point he actually showed it to his boss, because he thought what I had achieved was quite good. And he was very nice, but the boss never came back to him. And so with that experience I then went to an old professor in logic, he was actually a physicist, whom I liked very much, and asked him whether I could do my dissertation on this. Up to that point I hadn't used a computer. They gave me a computer and I taught myself Lisp and Prolog, and also some language processing, and then I pieced things together.
[00:21:09] Speaker B: What year about was that?
[00:21:10] Speaker A: It was a very, very nice experience. I was actually paid for that.
[00:21:16] Speaker B: About what year was this?
[00:21:17] Speaker A: Oh, that was in between 1986 and 1989.
[00:21:22] Speaker B: Okay, so computers were still pretty early on.
[00:21:25] Speaker A: It was early on, and I was happy about it. And I can actually say something in terms of feminism at this point, because I really loved the computer experience. I also had this experience at that time in Munich, in logic, in the math classes, that people wouldn't listen to me. I would say something, and then nobody listened. And then some man said the same thing: oh yeah, that's very interesting. And it annoyed me. And I had this computer and I thought, ah, I have a computer, this computer doesn't care, right?
The computer has no conception of whether what I'm telling it, what I'm trying to make it do, comes from a female or a male, you know. And it was very, very liberating.
[00:22:13] Speaker B: So eventually you went to the United States, which is what we're eventually going to get to. And that's interesting, what you just said, that your mom had a PhD in biology, because I wanted to ask about her sort of approach. Right? Because you have this: it's mechanisms in science. But the old biologists were accused of just stamp collecting, right? Just collecting the data without a theoretical background. So it made me curious if maybe your mom...
[00:22:43] Speaker A: Yeah, well, of course there's the Carl Correns aspect: my great-grandfather was also a biologist, and I used him for this foundation. I asked the others what they thought, and it's okay, because it's just a family relation. But then it occurred to me it's not just a family relation. There's something else here. At the time when Mendel's laws were set up, my great-grandfather wrote this paper where he actually added a law that was never Mendel's law. So he really pushed it along.
[00:23:19] Speaker B: This is Carl Corens.
[00:23:21] Speaker A: He was much attacked, said my mother, by the Bergson people. So there was this Henri Bergson, and they had this life force. So biology is different from physics because there's a life force, and because of the life force everything's different.
[00:23:37] Speaker B: So the élan vital, right? That was much maligned eventually, but it's a little bit misunderstood from Bergson's point of view. But you said the Bergson people took it on.
[00:23:50] Speaker A: Exactly. That often happens, like with Marxists or Freudians or so: they take certain aspects and make them very big. And one of these was that biology is separate from the material world, from physics, and also from mathematics. You can't use mathematics for biology because it's messy, it has a life force, it is a living organism, and so on. And so you have to study it completely differently, and it is not part of science in the same way. And he took a lot of flak from these people, because genetics had then only just appeared as a discipline, and it was the first, you could say, mathematical discipline in biology. And my great-grandfather had a different outlook on that. Of course he studied plants. We could say in a way he loved plants, but he took them very seriously as objects, as physical objects, as physical objects which could be understood in a rational way. You didn't have to appeal to some supernatural forces or so to understand how things work. For instance, he was, I think, the first who pointed out that the chloroplasts which do the photosynthesis probably were earlier bacteria which were incorporated.
[00:25:18] Speaker B: Oh, he posited that theory.
[00:25:21] Speaker A: And it's actually true, I think. And you see things like that. So that was just an outflow from this mechanical, physical approach to plants.
And so from this aspect, I thought his name was a good fit for the foundation for mathematical biology.
[00:25:41] Speaker B: Yeah, so you had this linguistics background and you're into the computer science and the math. So what was the turn into neuroscience?
[00:25:49] Speaker A: Yeah, that was exactly the point. That was when I was doing all this high-level stuff. And anyway, linguistics, as I said, was degrading into natural language processing. And at the time I thought Google would take it all; I had no interest in this anymore.
[00:26:04] Speaker B: What does that mean, degrading?
[00:26:06] Speaker A: Well, natural language processing is the question of how to do information processing in a natural language on a machine. And my question has always been how do we do language in a human brain?
[00:26:20] Speaker B: But you don't think we can learn about the human brain by building machines?
[00:26:26] Speaker A: Yes, that's one of the best ways. Yeah.
[00:26:30] Speaker B: So then what's the problem with natural language?
[00:26:31] Speaker A: Oh no, they wanted to use language to communicate and deal with machines. And that is not what language is made for. You have to change it. You have to use a stupid kind of language for it. You have to take many things out, all the jokes, all the fun, all the poetry. You know, the machine won't understand it and therefore it's not a topic anymore.
But for us humans, it's all a part of it.
[00:26:59] Speaker B: Yeah, and confabulation and lying and anything creative, essentially.
[00:27:04] Speaker A: Yeah, yeah.
[00:27:07] Speaker B: This is a total aside, but do large language models generate neologisms? Because of the nature of language, it's always evolving and changing. Like, a word... it's called semantic drift, I think, I don't know what it's called, but the meaning of a word changes over time because people elect to use it differently, or to use it in a funny way, and then sometimes that catches on and sometimes it doesn't. So it's always changing. And I was talking with a friend the other day about whether large language models can or will... sort of what effect they'll have on that drift. It almost seems like they will crystallize language into one thing.
[00:27:49] Speaker A: I think what happens is that you have an exchange between brains, and this exchange uses this route of language. So you don't need a chip in your head in order to show the content of your brain to somebody else; you can use language.
And if this happens between different people over time, then you will always have access to the different experience that these people have, extralinguistic, in other areas of their lives. And so as you use this communication method, the substrate which interprets it... my substrate interprets what you are saying, and yours tries to interpret what I'm saying, to match it to your own thought patterns. And this happens all the time between people. And I think this is the explanation why semantics changes, and also why it's such an interesting and, I think, such a joyful topic. Now if you have a machine in between, like, let's say, an ELIZA machine or any kind of QA machine, you can of course communicate with this machine. And if you know it's a neural network sitting there doing certain things, that may also change the communication process. That's what many people are now very concerned about: that if we have lots of AI-generated linguistic content that younger generations are exposed to, without sort of understanding it, this may affect their thought processes in negative ways. And I think that's quite true. Personally, using language for communication with a machine is something I never wanted. I know computational linguistics, of course; I was a computational linguist myself for three years at the Heidelberg Institute for Computational Linguistics. That was one of the goals. People would say: I want a microphone, I want to talk into it, and the machine should give me everything that I want.
[00:30:10] Speaker B: And I always thought, no, what's the danger there?
[00:30:15] Speaker A: Also because it's not suitable.
Language is suited to human brains. It's produced by human brains, it's understood by them, and so on. Many communicative issues shape our language. Now, if I communicate with a machine, then... it's almost as if, of course it's different, but you have a child and you have an adult, and you talk to the child and you talk to the adult. You have to adjust yourself; you have to talk differently to the child. And now you have a machine, and you have to adjust yourself to the machine.
[00:30:57] Speaker B: Oh, right.
[00:30:58] Speaker A: You have to assume the machine has again, a very different level of understanding.
[00:31:03] Speaker B: Well, I worry, and I don't know if this is related to what you're saying, but it doesn't matter how we treat the machine, right? Because it's just a machine. And because you speak differently in different contexts. When I'm talking to a machine, if I say please, I think it's ridiculous, right? So I'm not necessarily going to be as polite to a machine. But I worry that that then affects, especially with younger people growing up, how they could translate that style of communication to the real world, like...
[00:31:36] Speaker A: Social media, how people treat each other, many, many such aspects. I always get angry at these things very quickly. That's why I don't use them anymore, the machines, because I would have to have a lot of patience explaining something to them. And since I know it's a machine, I don't have patience, because I'm not friendly.
I think we have programming languages and so on, which are made for machines. They are not really made for humans, but they are very well made for machines, and that's how we can communicate.
[00:32:08] Speaker B: Okay, well, let's go back then. Eventually you started this Carl Correns Foundation. And what I want to know is why you did that, and how you did it, and how I could do it if I wanted to.
[00:32:20] Speaker A: Well, the reason I did it was I had this biocomputation group at Stanford for about 10 years. It was actually started by a student, and then she went away and asked, does somebody want to go on with it? Yes, I said, I'm ready to go on with it. And so I invited people to give talks in the Bay Area, doctoral students or postdocs.
And they usually came and gave a talk, and some of them were excellent. I mean, I read the paper up front. Somebody actually asked me, how do you manage to always get such wonderful lectures? And I said, well, you know, I read the papers, right? I read the papers, and when I like the paper, then I invite the person. And so we had really wonderful people giving talks, and very interesting research. But when I asked them, yeah, how's it going with, you know, basic science, academia? No, no, no. Pretty much always, nobody wanted to do academia. Nobody wanted to do basic science. Everyone was going out. And as I said, it occurred to me: imagine this with, say, Paul Dirac, Emmy Noether, Werner Heisenberg. They give their talks on physics, and we are thinking, how are you going to go on? Oh, well, we're going to do a startup and, you know, rent out garages or so. And they might be successful, they're smart people. But it's a waste, right?
[00:33:52] Speaker B: Well, depending on your goals, of course.
[00:33:55] Speaker A: For their goals, maybe. No, I mean, I'm talking about society at large.
[00:33:59] Speaker B: Yeah, well, I don't know. I mean, I'm not sure. I mean, a lot of people who go and do a startup have the intentions to change the world.
[00:34:08] Speaker A: Yeah, but what I meant was that academia and basic science were simply not a valid alternative for them. And I could understand them, and I would not even have contradicted them.
[00:34:23] Speaker B: So they were getting their PhDs, et cetera, in order to move to industry.
[00:34:28] Speaker A: And then never go back to this place again, because they had their experiences. That's what I mean. It used to be, in the earlier world: okay, in academia you don't make much money, but you have your freedom, you have your joy, you enjoy your work. If for half a year nothing comes to your mind, then you do nothing.
[00:34:47] Speaker B: You know, that doesn't exist anymore.
[00:34:49] Speaker A: That was academia. And then you write a wonderful paper, maybe. That was academia, and nobody bothers you.
[00:34:56] Speaker B: So you had time to reflect and think and work things out. And that's different these days, you would...
[00:35:02] Speaker A: say? Oh, of course, yeah, it's very different. There's always this run for funding, funding, funding. You can't do anything without having funding. That was the second observation. Then I talked to the people who actually were professors at Stanford, people where you might think, oh, they're at the top of their profession or something. What would they do if they could? And pretty much everybody said that the kind of stuff they're doing now they wouldn't do; they would do something else, but they can't get funding for that.
[00:35:33] Speaker B: Right.
[00:35:34] Speaker A: From their own research.
[00:35:35] Speaker B: You have to pretend to be from their own research.
[00:35:37] Speaker A: They would go in a different direction now, in order to advance, in order to make an impact on something, but they can't, because they can't get funding for that. They get funding for something else, and they have to do something else. And as I said, the older they were, the more cynical they had become about the value of all this. And that was also sad, you know, because this was not some tiny place in Romania where people are very sad that they cannot do what they want to do. This was a place where we would expect them to be able to actually do the kind of science which they believe in and which they want.
[00:36:20] Speaker B: So you were feeling that also in your own career or what?
[00:36:24] Speaker A: Yeah, no, I felt it all the time, but I thought it was just me.
[00:36:29] Speaker B: Oh, no, no, it's Everybody. Oh, it's 99%.
[00:36:34] Speaker A: Yeah. That's the part that shocked me. I was in Munich, and I thought it's because I'm in Munich. Okay. But then I come to Stanford and it's still like that. Universal. And I thought, there is something wrong here. Right.
As I said, if you go to, I don't know, the University of Banja Luka in Bosnia, they don't have that much money. Okay. So you expect, if you talk to somebody: oh, once I get to Harvard, then I will be able to do what I want. That's not the case.
[00:37:04] Speaker B: Right? Yeah, yeah. Actually, I was just telling a student in the lab the other day, I said, you're not going to like this. It's going to be obnoxious, but it's true that there is no destination.
Like you never get there. Right. Because you're always moving. But along similar lines, someone in my lab the other day asked me about my research: oh, what are you going to publish from this? And I just sort of bristled, because that shouldn't be the question. The question is, what are you finding that's interesting? You shouldn't have to think, when you're doing your research, well, what is publishable? What can I publish from this, instead of what can I do to further advance my understanding of this?
[00:37:46] Speaker A: I know what you mean. That's often the problem, and I've solved it for myself. For me it's even worse. Or maybe that's actually a good thing.
It's one thing to have results, but it's a different thing to go deeply enough in order to be able to publish them. And in that process, sometimes you learn something. Of course, that is the good part. And I simply use a blog that I have and some gray literature for things which are not ready to be published.
[00:38:16] Speaker B: I don't think I heard that term before. Gray literature. Is that like preprints, you mean?
[00:38:20] Speaker A: Yeah.
[00:38:21] Speaker B: Is that great?
[00:38:21] Speaker A: Exactly. Preprints. And for instance, on ResearchGate, you can just put in a paper and you get a DOI.
And maybe if four years later you think, this thing I can now use for an actual publication, it's still there.
[00:38:37] Speaker B: Oh, so there's no hoops and hurdles to jump through to put it up on ResearchGate? No, you just put up whatever you want?
[00:38:45] Speaker A: More or less, yeah. I did it on language. I wrote a piece on language, on the biology of language, simply because I wanted to put it together for myself, and I put it on ResearchGate, and I have no idea whether we'll ever publish it. But when the LLMs came out, I thought it was very necessary to remind myself of all the many things we know about language in the brain.
We know many, many things. And this LLM was in the process of sweeping it all aside.
And so I thought, I need to write this up, need to put this together. In just 10 days I sat down and put it together, you know, looked at the current literature and so on.
We have this N400, for instance. I don't know if you are familiar with event-related potentials. That's.
[00:39:35] Speaker B: It's when you take EEG signals, and over many, many trials you can align those signals to an event, and then you average them, and then the shape of the waveform. Different shapes get different names.
[00:39:49] Speaker A: Exactly.
[00:39:50] Speaker B: Relative to like when it happens.
[00:39:52] Speaker A: Not so many.
[00:39:53] Speaker B: Yeah, yeah. Not so many. Yeah, there's a handful.
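The trial-averaging procedure Speaker B describes can be sketched in a few lines. This is a minimal illustration with synthetic data; the sampling rate, trial count, noise level, and the shape of the injected component are all made-up assumptions, not anything from the conversation:

```python
import numpy as np

# Sketch of event-related potential (ERP) extraction: average many
# event-aligned EEG trials so random noise cancels and the stereotyped
# waveform (here, an N400-like negativity) remains.
rng = np.random.default_rng(0)

fs = 1000                       # sampling rate in Hz (assumed)
t = np.arange(-100, 800) / fs   # -100 ms to +800 ms around the event
n_trials = 200

# Synthetic trials: a small negativity peaking near 400 ms, buried in noise
n400 = -2.0 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
trials = n400 + rng.normal(scale=5.0, size=(n_trials, t.size))

erp = trials.mean(axis=0)       # averaging aligned trials reveals the component

peak_ms = t[np.argmin(erp)] * 1000
print(f"ERP minimum near {peak_ms:.0f} ms")
```

With 200 trials the per-sample noise shrinks by a factor of about 14, so the averaged trace dips clearly around 400 ms even though no single trial shows it.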
[00:39:56] Speaker A: Yeah. In the end, the N400 is after 400 milliseconds, it's a negativity. And you get it with all kinds of language effects. You get it when you make a joke, because something doesn't align and.
[00:40:09] Speaker B: You laugh and oh, is that the unexpected? Like, yeah, surprise error, mismatch.
[00:40:15] Speaker A: And you get it also for garden path sentences. So a sentence which starts off grammatically but doesn't end grammatically, whoosh, goes like this. Right. And it takes a lot to generate this EEG signal. This means that essentially your whole brain is involved in interpreting this ungrammatical sentence. Which is interesting, because sometimes people think it's very localized.
[00:40:41] Speaker B: Right.
[00:40:41] Speaker A: But it's not. It's actually very profound.
[00:40:46] Speaker B: Yeah. Well, it doesn't have to be whole brain, but it has to be more than 10 neurons to be.
[00:40:50] Speaker A: Yeah, yeah, exactly. Well, a large part of the cortex, maybe more focused.
[00:40:55] Speaker B: Yeah. But what, so why, why do you, why'd you mention the N400? Like why that would be important in terms of understanding.
[00:41:03] Speaker A: Because this is a sign of how we interpret language and what we do with ungrammatical sentences. Ungrammatical sentences have a very strong effect on us.
[00:41:17] Speaker B: Yeah. And they can be funny.
[00:41:21] Speaker A: That's true. The N400. I just read it. You can also use this for prices. If somebody quotes you a very high price, the N400 goes up. The price is unexpected. Right.
[00:41:36] Speaker B: So what would you take from that? Because you write over and over in your papers, like, well, you know, it depends on the level of abstraction, that's important. Right? But. So you don't think, like, well, I think we need to build a system that has an N400, or, you know, for language, in a way.
[00:41:54] Speaker A: Yes. When we build our language model, which interprets, for instance, feature landscapes in terms of symbols and so on, and now we give it an unexpected series of symbols, or a series which first makes sense and then some symbol that doesn't make sense at all, I do want in my model to see what is happening here which could cause this N400. And an LLM doesn't do that, because it doesn't even model the brain.
[00:42:25] Speaker B: But there might be some readout of expected versus unexpected, because it's based on probabilities. Right. And then it eventually collapses, and the word, or the token, with the highest probability gets placed next.
[00:42:38] Speaker A: Yeah, yeah, but that's different. This is what they use in order to build sentences. Here I really mean a violation of an expectation. Yeah, well, of course, the LLM produces it all the time. If it gives you useful answers and then counts the number of Rs in strawberry and fails, you should get an N400.
But the LLM doesn't. It should, but it doesn't.
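The "readout of expected versus unexpected" Speaker B mentions is often quantified as surprisal, the negative log probability a model assigns to a continuation. A minimal sketch with invented probabilities (not from any real model):

```python
import math

# Surprisal = -log2(p): unexpected continuations get high values,
# loosely analogous to the N400 tracking violated expectations.
# The candidate tokens and probabilities below are made up.
next_token_probs = {
    "mat": 0.60,        # "the cat sat on the ..." -> expected
    "roof": 0.25,       # plausible
    "equation": 0.001,  # expectation violation
}

def surprisal_bits(p):
    """Information content of a continuation with probability p."""
    return -math.log2(p)

for token, p in next_token_probs.items():
    print(f"{token:>9}: {surprisal_bits(p):6.2f} bits")
```

The expected word costs under one bit; the violating word costs roughly ten, which is the kind of graded "unexpectedness" signal one could in principle read out of a probabilistic model.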
[00:43:03] Speaker B: Yeah, yeah. Well, I mean, that also just speaks to the lack of any dynamics. Right. In a model in general, of course.
[00:43:11] Speaker A: Somebody could argue that these signals in the brain are irrelevant or peripheral and don't mean anything, and we can build artificial intelligence in a completely different way and still be very happy with the results. That is engineering. That is okay from my perspective. Somebody wants to engineer a language machine, I have nothing against it, but that's not for me.
[00:43:41] Speaker B: But you think that there are better ways to do it, to engineer artificial intelligence, and that's why you're.
[00:43:47] Speaker A: Well, there's two things. First of all, I have nothing against people engineering it. It's just something that doesn't interest me, because I see myself as a natural scientist, as somebody who's looking at the natural world. And natural language, or language, is part of the human natural world.
[00:44:08] Speaker B: Okay. Yeah. So one of the thoughts that I have recurringly about artificial intelligence is like, okay, it's fine, but first of all, it's just an unfortunate name, and I think the originator of that name thought it was an unfortunate name also. I think John McCarthy coined it and then hated it, or. I don't know, it's always been contentious right from the earliest days. But I just think, like, okay, that's fine, it's engineering. It's not really what I think of as intelligence, or I need to rejigger what I think of as intelligence. And all of a sudden people are claiming that they're working on intelligence, when I think that's actually missing the mark of what I'm interested in. I don't know how to really articulate it, but it's something different. It's not what I'm interested in understanding as far as intelligence.
[00:44:58] Speaker A: Yeah. Well, look, I was also distant from that for a considerable time, for the same reason. But you see, it occurred to me that of course it can help us focus. When we do computational neuroscience models, and we do many of them, and they capture different functionalities, different ideas, wouldn't it be useful from time to time to say, let's see whether we can build a functional model which actually does some task or so, on the basis of our knowledge about the brain. So let's not just do simulation models which use experiments and then show in the model how you can explain the experiment, but let's try to build a functional model on a higher level and see whether we can use this to achieve some goal, to let them do some tasks that are useful. Why shouldn't we do this on the basis of our computational neuroscience experience?
[00:46:05] Speaker B: Because we're so far behind in computational neuroscience.
[00:46:08] Speaker A: I don't think so. That's actually the contention. That is exactly why I had this idea about the spin-off: I think the time is right.
[00:46:16] Speaker B: Yeah. Okay, so let's go back to the Carl Correns Foundation, because I want to. Because there's a lot of science that I want to talk to you about also, and the things that you have worked on. But.
And so let's get to this spin-off that you're developing or have developed. But go back to. I want to know how you started the Carl Correns Foundation, because that was.
[00:46:35] Speaker A: I asked friends and colleagues whether they wanted to be on the board. I set up a nonprofit foundation under California law, tax deductible. Took some time, took a little bit of money, mostly time. And then you had to, you had.
[00:46:48] Speaker B: To make the decision to step out of academia too, right? Yes.
[00:46:53] Speaker A: Well, I since see it as a part of academia. There's so many. I mean, look, the Allen Institute, by the way. I almost ended up at OpenAI at the time, because that was a nonprofit. And as I was doing this, people told me, you know, you know about OpenAI, that's a nonprofit too. That's just like your idea, to do exactly the same thing.
Why don't you work with them? And I looked at them and I already then had the idea. Somehow I don't really trust them. I think that their legal structure seemed a bit strange because.
So I missed out. I admit it would have been nice, but knowing me, I probably would have left way too early and so on.
And besides, I really don't like their approach. But of course we have a small endowment for the Carl Correns Foundation from our family. And of course that could have been larger. Right. I could have siphoned off some money from the OpenAI side, in terms of my own work, or in terms of, how do you call it, stock or so.
[00:48:03] Speaker B: Oh yeah, yeah.
[00:48:05] Speaker A: But the idea was right, there are people like that. There's the Allen Institute, of course. They all have much larger endowments. Much larger.
[00:48:13] Speaker B: Well, how do you keep going, though? Do people donate? Like, how do you. Because there's a fundraising aspect to it.
[00:48:19] Speaker A: We have the endowment that generates a bit of money, and some people donate, and we have essentially just one scholar per year.
[00:48:27] Speaker B: So what does that. So you pay. Is it like an intern almost?
[00:48:30] Speaker A: Yeah, well, you could say that, something like that. Well, it depends. We had different people. We had a master's student. This year, I hope, there's a PhD student who's doing work on cortical microcolumns, and he wants to contribute to our work and so on. So the possibilities are there. If there's more money, we can put it in the endowment if we don't know what to do with it right away. This year I would finance another scholar, because we have two things for the spin-off. One is the cortical microcolumn, making it useful for language. And the other thing is a neuron model that we are building. So I could use another one. I haven't got the second one yet.
[00:49:19] Speaker B: So if I'm a PhD student, and I'm, whatever, in my second or third year or something at my institution, and I come across the Carl Correns Foundation, what do I do? Is that sort of a separate thing that I apply to, and then you send me money to do my work related to what you're wanting? How does that work?
[00:49:38] Speaker A: Yeah, usually, I mean, because we have so little, we don't have calls or anything. And I always think about the best system for grants, or let's put it this way, different possibilities for grant funding. And I think it's good if several exist next to each other, and one is what the Dartmouth used.
And I think there is this magnite or something, some genius grant kind of thing.
[00:50:06] Speaker B: MacArthur, is it? MacArthur?
[00:50:08] Speaker A: Oh, yeah, probably, yes.
What they do is they look for people who are already doing work which they like, and then they give them extra money.
Okay, so, you know, grant funding without competition and so on. And I don't think that model is unjust or something, even if the other model is more democratic, because in the other model not everybody who applies gets money, and the people who apply and don't get it, they have the unpaid work, quite a lot of it. And that's why I think this second model, the democratic one, spreading it to everybody, there's way too much of that, and way too little of the other kind of money. Of course it sometimes happens. I remember somebody from Stanford, she went to Harvard, and some rich family approached her and gave her money. So that's of course also nice. But I think it's best to have all these different possibilities next to each other. Anyway, from our perspective, so far we have asked people whether they want to.
[00:51:19] Speaker B: Oh, so do you search for people, then? Okay, so they don't come to you, you go to them.
[00:51:26] Speaker A: Yeah, yeah.
[00:51:27] Speaker B: Okay.
[00:51:28] Speaker A: We're not making any calls for grants or so, because it would be ridiculous with the little money we have.
[00:51:35] Speaker B: So that's why you're creating the spin off, right? To generate more.
[00:51:39] Speaker A: The hope is that we could generate more money and build our endowment. That's my hope at the present time. If the endowment grows, then we have a regular income for the foundation, and that puts us on a safe basis. Then we don't always have to hope that some donation comes along.
[00:51:59] Speaker B: But are you also seeking money from angel investors and stuff?
[00:52:04] Speaker A: Sure, yeah.
[00:52:05] Speaker B: Yeah. Okay. But. Okay. It just seems like a lot of.
[00:52:09] Speaker A: For the startup, clearly. I already have this problem. I have two developers that I will be talking to next week, and they both made a very good impression on me. But they're both people who cannot or would not work for free. So we have to see whether one of them may want to join the company.
[00:52:30] Speaker B: So what do you have like a pitch deck or something when you.
[00:52:33] Speaker A: Yeah, they always ask me for the pitch deck. I have something similar, but not really, because I don't do this VC thing. It doesn't fit with my approach. I am.
[00:52:47] Speaker B: Well, Right, but you're in this situation where you almost have to adapt and.
[00:52:50] Speaker A: Yeah, no, the goal is actually to come out with a platform, or a simple version of the platform, in the near future.
[00:52:59] Speaker B: So what is the. What do you mean, what's the platform?
[00:53:00] Speaker A: The platform will offer algorithms and tools from the computational neuroscience world in order to solve practical problems. And we come with one or two.
[00:53:11] Speaker B: Demos and do you feel like you.
[00:53:15] Speaker A: And then I want to see what traction we get.
[00:53:19] Speaker B: So are you going to be like competing against benchmarks in current AI, that sort of thing?
[00:53:23] Speaker A: No, it's more like, when you look at what AI is built on, you will find that essentially everything is from the early 90s.
And in that time there was a lot of creativity around a lot of different ideas and so on and that dried up. People did other things.
And what came later, deep learning, the transformer architecture, then what's called adversarial training and these generative adversarial networks, is very derivative. These things are not novel. And so I think, you and I, we understand some parts of how the brain works, and some of these observations could be put in a platform for people to use outside of our community, but inside the software developer and AI community, and see whether people become creative with it, whether something will come of it. Of course I have more ideas, which would happen if we had more money. But I've decided the best thing is to gain some traction with the platform before addressing angel investors again. So far the problem with angel investors was, they proclaim they don't understand it.
That may be me.
They have their minds full of this AI thing.
[00:54:59] Speaker B: Right? Yeah. You need to work on that pitch deck.
[00:55:03] Speaker A: Yeah, but as I said, this is not. You see, this thing they now call AI, the reason it's interesting is because of huge amounts of human data.
And I think the reason why everybody poured so much money into it is because this data is power. And it won't be restricted to public data, or there's the problem of copyrighted data. It will certainly be private data, data that maybe they shouldn't have, but they have collected it: Google, Meta and so on. It gets to the point where they will use health data.
Of course they would say it's not individualized, but that is also a hard sell. They may have your health data and this is something I feel strongly about because I know that at least German health insurance companies sell our data to pharmaceutical industry.
Of course, not identifiable, they say, but still they make money this way.
So you pay your insurance, and your insurance takes your data and sells it to the pharmaceutical company. And I had a person here in Munich, and she wanted to give a talk about it. Because they draw these conclusions about death. They can tell you at what point you are going to die, on the basis of how often you went to the doctor or something.
They are pretty good. You think they can't, but. They cannot do it individually, but overall they're pretty good.
[00:56:37] Speaker B: Yeah, I don't like that.
[00:56:40] Speaker A: This is all these. All these questions about data analysis rolled up into this AI And I don't want the AI, but I do want the data.
And so what we do is something entirely different.
It is.
It will in the future use the data that you want to use.
[00:57:06] Speaker B: What does that mean?
[00:57:07] Speaker A: Oh, the user will provide the data.
When you use an LLM, all the data is already in it.
[00:57:16] Speaker B: Right. But as a user, I might not know which data to put in.
[00:57:21] Speaker A: Yeah, yeah, yeah. It depends on your problem. So one has to see. I admit that goes too far, because I'd rather have the basics ready, the algorithms and so on, before discussing all the practical problems. It's not there yet. But since you asked about the Carl Correns Foundation, I can tell you, yes, we didn't have enough donations to grow.
[00:57:51] Speaker B: Okay.
[00:57:53] Speaker A: And I don't see. I think we have to earn them with our own work.
[00:57:56] Speaker B: Yeah, fun.
Okay. So one of the reasons why I wanted to have you on, Gabriela, is because of your interests. So you come at it from a very theoretical standpoint. I know that you did some experimental work and thought maybe that's not for you. But because you're coming at it from a theoretical standpoint, you remind me of people like Steve Grossberg. Right. Who basically dabbles in a lot of different things, always coming from that theoretical approach. And you have approached a variety of questions. So I want to talk a little bit. So, for example, you know, we won't talk in depth about your work on this, but you highlight that neuromodulators have been basically ignored in computational neuroscience.
[00:58:47] Speaker A: That's right. Yeah.
[00:58:49] Speaker B: And so you have ideas about those. One of the things that I do want to discuss more in depth, because it crosses paths with a lot of previous guests I've had on the podcast, people like Randy Gallistel, David Glanzman, Hessameddin Akhlaghpour, who I'll bring up later for a different reason. It's this single neuron model that you're working on. The people that I've mentioned have made the argument that we need something more permanent to store our memories, that memories are not stored in the synapses, in the synaptic clefts, for example, that we need internal cellular processes, something more stable, to allow storage, essentially. And you have this model, you've taken that on, right? I can ask you this in a second, but you seem to more or less agree that the synapse is not where it's at in that regard. And so you posit this model of a neuron where there are external processes, internal processes, and then core processes in the nucleus. So tell me a little bit. And presumably this is one of the things that's going to be available in the spin-off as well?
[01:00:11] Speaker A: Yes, exactly.
[01:00:12] Speaker B: Okay.
[01:00:12] Speaker A: We will have conventional neural models. And that's exactly what we're working on right now: we want to have one of these novel models. And the reason why I believe these are important. Again, the question is, why do we need such a model? And the answer is: in computer science, if you have complex building blocks, then the system as such can be much simpler.
[01:00:41] Speaker B: Elaborate on that.
[01:00:42] Speaker A: Yeah. If I take a neuron which is nothing but an activation function, and synapses which can change their weights, it is no surprise that I end up with neural networks the size of a city, because each element is so small and has so little capability. And I gave as an example already: deep learning imposes a structure on a normal neural network with a hidden layer, and by imposing that structure, a lot of problems become manageable. Mathematically, you could express it with just one hidden layer; that would work. The deep network is more restricted. But if you try to train a three-layer network on the kind of problems that you can solve with deep networks, you probably can't do it, because it's a much, much larger network that you would get. So this is already a step away from the universality of the network towards a more special structure. And this more special structure makes many problems easier to solve, because the structure is smaller. And now I believe, when you take, instead of the neuron as an activation function, a neuron that is a more complex building block with storage inside, which has the possibility of internal memory, then when I begin to stack or combine these models, I will probably be able to have useful functionality with much smaller systems than I have now, even though I lose in generality, in universality. So this is why it's tricky: you have to make the right abstractions. We already know it works in the brain, therefore it should work. But if you take a building block which is no good, so to speak, you probably can't build the system at all. And that's what I say: there's the joy, there's the fun of it, there's interest in it, which I probably wouldn't have if I had to worry all the time about who's going to fund me next.
Well, this thing, they don't like it. And how can I position it so that some funding agency feels compelled to give me money for this? Isn't there the danger that they say, what does she want, all the time? And I don't want to hear it, and so on. And I work over my grant proposals all the time and I shout at my family. You know what I mean? That's not worth it.
[01:03:28] Speaker B: Yeah, well, all right. So, you know, neuroscience for a long time has sort of agreed, writ large, on the idea that all the information is in the spiking, and we don't need to worry about the ion channels, right? When we're explaining how cognition works, all we need to do is explain the pattern of the spikes and the way the populations dynamically unfold in a low-dimensional space, etc. All through the spiking. And we don't want to worry about the details of all the stuff happening inside the neuron. All of that stuff is for homeostasis and staying alive and maintaining the ability to spike. But you see all of that internal richness as computation. How do you see it?
[01:04:24] Speaker A: I see it as the problem of finding the right abstraction. I could easily say I don't want neuromodulation because it makes things more complicated, but I know there is neuromodulation, and it's very important. And so I ask myself: what is the indispensable function which this offers to my neuronal cell? Why has biology kept it and kept it and kept it? Because it can do some very cool and smart stuff with it. And I want to have this in my model, because I want to use this particular thing, for instance, neuromodulation. As everybody knows, by activating neuromodulators you can change the ion channel composition of the neuron, essentially changing their open probabilities via the G proteins and the internal signaling. So you can alter the ion channel membrane expression and therefore alter the function of the neuron, its activation function, for instance, on a time basis of seconds to minutes.
And so this is obvious.
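The idea that a neuromodulator slowly reshapes a neuron's input-output function can be sketched as a toy model. This is purely illustrative and not the foundation's actual model; the sigmoid form, the mapping from neuromodulator level to gain and threshold, and all parameter values are assumptions:

```python
import math

# Toy sketch: a neuromodulator level rescales a neuron's activation
# function, so the same synaptic input maps to different output rates
# on a slow (seconds-to-minutes) timescale.

def activation(inp, gain=1.0, threshold=1.0):
    """Sigmoidal rate function; gain and threshold stand in for the
    effective ion-channel composition of the membrane."""
    return 1.0 / (1.0 + math.exp(-gain * (inp - threshold)))

def modulated_params(neuromod_level):
    """Assumed mapping: higher neuromodulator level raises gain and
    lowers threshold (e.g. via G-protein signaling changing channel
    open probabilities). Purely illustrative numbers."""
    gain = 1.0 + 2.0 * neuromod_level
    threshold = 1.0 - 0.5 * neuromod_level
    return gain, threshold

inp = 1.2  # fixed synaptic drive
for level in (0.0, 0.5, 1.0):
    g, th = modulated_params(level)
    print(f"neuromod={level:.1f} -> rate={activation(inp, g, th):.3f}")
```

The point of the sketch is just that the "same" neuron becomes a different function of its input as the modulator level changes, which a fixed-activation-function point neuron cannot express.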
There is the alternative, the horror on the other side. You remember Henry Markram and the IBM Blue Gene and the Blue Brain Project and so on, which ended up in EBRAINS. I remember when he first got the money, and I remember that people made remarks that he wouldn't use it properly. And they built huge neuronal cells with all the biophysical and biochemical detail they could think of, and then they didn't know what to do with it.
So that's the danger on the other side, that you build in all those neuromodulators, ion channels, internal signaling and genetic networks, and then you say: what is this thing for?
[01:06:22] Speaker B: Yeah, right. Okay. So then your approach differs.
I thought one of the interesting things, in the paper that I read where you talk about this, is that you posit: here's the level of abstraction. Right? You say, all right, these unknown variables that are occurring internally in the cell, I'm going to call these parameters.
And so instead of modeling the number of vesicles and how the ligands bind and communicate, etc., instead of modeling all that detail, you're saying, well, that can come later. Right now I'm going to call all those things parameters and then figure out what those parameters, what those variables, need to be doing.
[01:07:06] Speaker A: This is one way. As you know, I've written papers before that were purely about internal signaling, and they easily had some hundred or so proteins. But the reality is we have some 15,000 or so which may play a role. And so there are always these decisions to be made: where do I draw the line, or so. And this is correct; this is what I said would be the wrong approach. When you say, I have 15,000 relevant proteins, then in order to understand internal signaling and get the parameters at the external membrane which I need, I need to build a dynamical system with 15,000 variables. And at that point, maybe you even get the money for it, because the funding is always very crazy. But.
[01:08:00] Speaker B: Well, yeah, but in that case you can like point directly and say, we know these exist and therefore they're important, and I'm going to do something with those, which is different. What you're doing is much more abstract. Saying doesn't really matter what exists. There's a computation that has to occur and we'll figure that out later.
[01:08:16] Speaker A: Yeah. As I said, the simple thing is of course to have an individual learning rate. So you assume that the internal signaling somehow figures out the level of plasticity that the cell undergoes. You could say that the cell is at zero plasticity at some point, and then there is some internal signaling going on, and this raises the plasticity level of the cell. So that would manifest on the external membrane level simply as a learning rate.
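The per-cell plasticity idea can be sketched minimally: the same Hebbian-style update, gated by each cell's own learning rate, so a cell whose internal signaling sets plasticity to zero keeps its weights fixed. The update rule, shapes, and values are illustrative assumptions, not her model:

```python
import numpy as np

# Minimal sketch: internal signaling is abstracted into one number
# per cell (its plasticity level), which scales a Hebbian update.
rng = np.random.default_rng(1)

n_cells, n_inputs = 3, 4
weights = rng.normal(size=(n_cells, n_inputs))
plasticity = np.array([0.0, 0.1, 0.5])   # per-cell learning rates

x = rng.normal(size=n_inputs)            # presynaptic input
y = weights @ x                          # postsynaptic activity

# Hebbian update, gated by each cell's own plasticity level
dw = plasticity[:, None] * np.outer(y, x)
new_weights = weights + dw

print("weight change per cell:", np.abs(dw).sum(axis=1))
```

Cell 0, with zero plasticity, is unchanged; the others learn at their own rates. The "parameter" here is exactly the kind of internally determined quantity she proposes to model without spelling out the 15,000 proteins behind it.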
[01:08:48] Speaker B: I just had Mitya Chklovskii on the podcast, and he views every single neuron as a controller, using what's called data-driven control. The cell is trying to see how its output affects its input, and then it changes what it's doing based on that mismatch. And one of the things I'm curious about, that I want to ask you, is: how does the cell know when to be plastic? What is that internal reference signal that the cell is aiming for? Right. How is it so smart? How does it know what to do?
[01:09:23] Speaker A: Well, to a certain degree, I think there must be a lot of accumulation of evidence going on. Similar as in decisions: you have to accumulate evidence from several sources until you have sort of a critical amount, which means that you're going to persist. But when you mentioned Chklovskii, I think you were right. That was one of the few novel neuron models which I've seen in the past, and I was very happy that somebody also, like me, said: we have essentially just one type of model in all of computational neuroscience.
[01:10:04] Speaker B: Oh, the point neuron, the abstract point neuron.
[01:10:08] Speaker A: It is always a model of.
Essentially we have a model which tries to explain when the neuron spikes.
Yeah, yeah.
[01:10:18] Speaker B: And that's the thing to explain. Yeah, that's kind of fun.
[01:10:21] Speaker A: That is. As a matter of fact, in our model we want to have a rate model.
We want to have a rate which changes over 100 milliseconds.
[01:10:32] Speaker B: So the rate is just the average amount of spiking over some time.
[01:10:35] Speaker A: So in that 100 milliseconds, it either spikes once or twice or whatever. And so within a second, it spikes one time or 10 times. So it has its frequency. Because that is one of the earliest things I found about neurons: they have different intrinsic frequencies and stick to them very much. This was back in 2006, at a time when people reported on neurons always in averages. And suddenly it occurred to me that this can't be right. And I tried to get the data from the experimentalists, and they said, yes, of course, they're not all alike, they're different. And I look at it and I see: look, it's log-normal, like a power law.
[01:11:18] Speaker B: This is your log normal.
[01:11:19] Speaker A: Yes. And the experimentalists, they didn't care what.
[01:11:23] Speaker B: Describe what that is.
[01:11:25] Speaker A: Well, the thing is, in theory, what people did is they initialized all neurons in the same way. You had homogeneous neurons. They were all initialized as if they were all alike.
[01:11:37] Speaker B: And these are like integrate and fire type neurons.
[01:11:40] Speaker A: And the parameters, they were all alike. They all had. So for instance, they all had 10.
[01:11:45] Speaker B: Hertz, for instance, all of them on average. Okay.
[01:11:49] Speaker A: Yeah. And every single one.
[01:11:51] Speaker B: And then they would do cognition, then they would do cognition by modulating within some range.
[01:11:57] Speaker A: Yes, it was again, it was an ideological thing about emergence from interaction of identical particles, and therefore the neurons had to be all identical.
From the ideology.
And the experimentalists knew that, but the theorists didn't care.
[01:12:17] Speaker B: So what, what is log normal and why is it important?
[01:12:20] Speaker A: Well, it turned out, when I build a neural network with all these different kinds of frequencies, with the high-frequency neurons and low-frequency neurons and so on, then, as a matter of fact (I have not mathematically been able to prove or show this), I get all these effects, like I did with the symbolic paper.
I get the effect that the information content concentrates in only a few neurons, because the neurons which have the highest firing rates also have the most connections.
[01:13:03] Speaker B: So now we're. You're talking about your mutual information work as well, right?
[01:13:08] Speaker A: Yes, that derived from that. Because I have all these different neurons in my model.
[01:13:13] Speaker B: Okay, all right.
[01:13:14] Speaker A: They are not initialized in the same way. They're initialized over a range. And I have a log-normal, in the sense that I have many low-firing neurons and a few high-firing ones.
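The distribution she describes, many low-firing neurons and a few high-firing ones, can be sketched in a few lines. The log-normal parameters below are arbitrary placeholders, not fitted to any recording:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: draw intrinsic firing rates from a log-normal
# distribution, so that most neurons fire slowly and a few fire fast.
# The parameters (mean=0, sigma=1 of the underlying normal) are arbitrary
# placeholders, not fitted to any recording.
n_neurons = 10_000
rates = rng.lognormal(mean=0.0, sigma=1.0, size=n_neurons)  # rates in Hz

# Heavy tail: the top 10% of neurons account for a large share of all spikes.
top10 = np.sort(rates)[-n_neurons // 10:]
share_top10 = top10.sum() / rates.sum()

print(f"median rate: {np.median(rates):.2f} Hz")
print(f"mean rate:   {rates.mean():.2f} Hz")
print(f"top 10% of neurons emit {share_top10:.0%} of all spikes")
```

Because the log-normal is skewed, the mean sits well above the median, and a small minority of neurons dominates the total spike count, which is the hub-like structure discussed below.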
[01:13:26] Speaker B: And why is that important? Because it allows the system to be scale free. What is the importance of that distribution?
[01:13:34] Speaker A: Good question.
I can't really say too much about it, really. I always focused on the idea that what it means is that I have hub-and-spoke neurons. Exactly: that I have a structure of important and less important neurons.
[01:13:51] Speaker B: Like small world, kind of.
[01:13:53] Speaker A: Then it was my idea that the important neurons need to speak to each other without regard to the less important neurons, and such ideas. But in this paper, I actually wrote down that I hope somebody with more mathematics could prove just how many patterns you could really store in such a network. And I would like to know whether the storage capacity, for instance, is comparable to an associative network.
Associative memory network.
It's not for me to calculate these things. It would be great if somebody did that, because at this point I don't really know whether it is actually much better in terms of the number of patterns you can store in a memory, or whether it's just comparable, with really no improvement at all.
I don't think it is worse because I've looked at the numbers and they seem higher.
For associative networks, you know, the usual thing is: you take a vector and you store it, and you store the next vector, and the next. And if they're orthogonal or so, then there are so-and-so many vectors that you can store per size of the network. Okay, that has been calculated, and I don't know how one would calculate it in this case.
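The classical calculation she refers to, the storage capacity of an associative (Hopfield-style) memory, can be sketched like this. The network size and pattern count are arbitrary, chosen well below the classical capacity limit of roughly 0.14 patterns per neuron:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy associative (Hopfield-style) memory: store P random +/-1 patterns of
# size N with the Hebbian outer-product rule, then verify each stored
# pattern is recovered. P = 3 is far below the classical ~0.14 * N limit.
N, P = 200, 3
patterns = rng.choice([-1, 1], size=(P, N))

W = (patterns.T @ patterns).astype(float) / N  # Hebbian weights
np.fill_diagonal(W, 0.0)                       # no self-connections

def recall(state, steps=5):
    """Synchronous sign-updates; stored patterns should be fixed points."""
    for _ in range(steps):
        state = np.sign(W @ state)
    return state

# Overlap of 1.0 means a stored pattern is perfectly recovered.
overlaps = [float(recall(p.astype(float)) @ p) / N for p in patterns]
print("recall overlaps:", overlaps)
```

Pushing P toward and past ~0.14 * N is where recall starts to fail, which is the capacity question she would like someone to work out for heterogeneous, log-normal networks.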
[01:15:24] Speaker B: What does vertical and horizontal functions mean in your single neuron model?
[01:15:31] Speaker A: I already wonder whether it was a good choice of words, because internal and external is in a way sufficient; it's much more intuitive. The idea was: the neuron has its connections to other neurons, and there are calculations going on in a network. You can call it a horizontal network for calculating information. And then, on the other hand, the neuron has, as I said, 15,000 or up to 70,000 different proteins that interact with each other in an internal signaling network. There is the metabolic network, of course, which overlaps with it. So there are many, many proteins, and they have certain rules for how they interact and how they run as a dynamical system. And it has any amount of complexity, comparable to a neural network. Other people have had similar ideas. I saw a neuron model where somebody said: all this dendritic integration, we just throw a multilayer perceptron into the neuron, and then the neuron gets inputs, does a multilayer-perceptron kind of thing, and gives an output. So here we have essentially a neural network inside the neuron. Okay, if I model the protein network as a neural network, then I have a neural network inside. That would be the vertical.
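The idea she mentions, replacing dendritic integration with a small multilayer perceptron inside each neuron, might look roughly like this in code. Every size and weight here is a made-up placeholder, not taken from any published model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch of the "network inside the neuron" idea: one neuron whose
# input-output function is a small multilayer perceptron, standing in for
# dendritic integration plus internal signaling. All sizes and weights
# are made-up placeholders.
n_synapses, n_hidden = 64, 16
W1 = rng.normal(scale=0.3, size=(n_hidden, n_synapses))
W2 = rng.normal(scale=0.3, size=n_hidden)

def neuron_output(synaptic_input):
    hidden = np.tanh(W1 @ synaptic_input)         # nonlinear "dendritic" stage
    return 1.0 / (1.0 + np.exp(-(W2 @ hidden)))   # output rate in (0, 1)

rate = neuron_output(rng.normal(size=n_synapses))
print(f"output rate of the MLP-neuron: {rate:.3f}")
```

The point is that a single such unit is already a far richer input-output function than a weighted sum plus threshold, before any network of such units is built.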
[01:16:58] Speaker B: Wouldn't it be more like a graph neural network, though? Or maybe that's too technical, I don't know.
[01:17:07] Speaker A: But that would be the vertical. So I have the neuron, and I have the vertical: I have a whole complex signaling network here, a dynamical system. I can model it with the help of ordinary differential equations, which is nice, but also, as you know, very complex, because as soon as my concentrations are not very precise, this network may give any kind of result. Once it's more than three or four equations, the errors become larger and larger. And one would say that neural networks have a lot of success because they don't have these problems; they're not built from differential equations. As soon as you have a couple of wrong concentrations in the network, the results are completely useless. And then the neuron interacts with 10,000 or 100,000 other neurons, making a computation, even though inside of itself it has approximately the same complexity in terms of calculating its own activity, its metabolic needs, its ability to read out DNA information.
When do you need which DNA information? When do you want to have more AMPA receptors? When do I need additional dopamine receptors? And so on. You always have to talk to the DNA.
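As a rough illustration of the ODE approach she describes, here is a minimal, made-up three-species signaling cascade integrated with forward Euler. Real intracellular networks couple thousands of such equations, which is where her point about error accumulation bites; this toy system is deliberately small and stable:

```python
import numpy as np

# Made-up three-species signaling cascade as coupled ODEs: a stimulus
# activates A, A activates B, B activates C, and each species decays.
# Real intracellular networks couple thousands of such equations.
def step(state, dt=0.01, stimulus=1.0):
    a, b, c = state
    da = stimulus - 0.5 * a   # A: driven by the stimulus, decays
    db = a - 0.5 * b          # B: driven by A, decays
    dc = b - 0.5 * c          # C: driven by B, decays
    return state + dt * np.array([da, db, dc])

state = np.zeros(3)
for _ in range(5000):          # forward-Euler integration to t = 50
    state = step(state)

# Analytic steady state: A = 2, B = 4, C = 8 (each drive / decay rate).
print("steady-state concentrations:", np.round(state, 2))
```

With three equations the steady state is easy to check by hand; with thousands of interacting species and imprecise rate constants, small errors in concentrations propagate through the couplings, which is exactly the difficulty she raises.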
[01:18:45] Speaker B: That's a lot to keep up with. Is that when epigenetics becomes important? Signaling at the nucleotide level for that?
[01:18:53] Speaker A: Exactly, that's all in between. So there's no loss of complexity in this vertical cellular environment. This is amazing.
[01:19:05] Speaker B: Well, that's the thing: I don't want to keep track of the thousands and thousands of proteins, right? I want to just be able to say, okay, at some level I'm just going to model this as a dynamical system. And I think that's comforting to someone like me, who doesn't want to talk about the ion channels when I'm talking about working memory, right? Or trying to explain working memory. I mean, I know that those things are important. And if it is the case that we do need that internal cellular machinery, that it is an essential part of the story for our cognitive processes, I want to be able to model it at a level of abstraction that I'm comfortable with.
[01:19:42] Speaker A: That is the point. I personally am fascinated with this world, and I would love to go into details and read up on every single protein, because it's fascinating. But I agree with you. And that's the advantage of trying to do something functional in computational neuroscience: you have to tell yourself that just studying what BDNF does in all circumstances is not going to help the function of the model.
[01:20:13] Speaker B: Right.
[01:20:14] Speaker A: As you said, we then need to look for the proper amount of abstraction.
[01:20:19] Speaker B: But aren't you tempted, though? Because you're sort of opening that book. Aren't you tempted to read all the words in the book then? Or are you like me, and you want to keep it somewhat abstract?
[01:20:32] Speaker A: Unfortunately, no. I am that kind of person. I like lots of details. That's why I did linguistics, in a way, and even started to learn different languages. So there's lots of detailed information, and I didn't mind that. Of course, my goal was to understand how languages operate, but I didn't mind learning a couple of languages.
[01:20:52] Speaker B: Yeah, but that's the thing: we want to understand how languages operate, and now you're going to have to study the nucleus of a single neuron.
[01:21:01] Speaker A: Yeah, it's interesting.
[01:21:01] Speaker B: It seems like a stretch.
[01:21:03] Speaker A: Yeah. Yeah, that is really interesting. Yeah. I had a conversation with a botanist who is actually a specialist on Mendel, and maybe we're doing an interview or so, but I don't know. And he said, well, it sounds strange, but if you think of it, it's logical. If you want to understand how thinking works, you need to understand the nucleus of the cell, because that's not.
[01:21:28] Speaker B: That's not obvious.
[01:21:29] Speaker A: That's what we use. That's what he said.
[01:21:32] Speaker B: Yeah, but we also use atoms. And we don't have to understand atoms.
[01:21:35] Speaker A: To understand thinking. But our thinking uses exactly these things. And the idea that the electrophysiological events are sufficient for the, let's say, cognitive content of what's going on in the brain, that is already reduced to absurdity as soon as you, let's say, smoke.
[01:22:00] Speaker B: Some cannabis, which we'll do together right.
[01:22:04] Speaker A: After this interview. Just as an example: it works on your brain, but it works on those CB receptors. It goes to the G proteins, the G proteins affect your ion channels, it goes into the cell, all kinds of places, certain parts of the brain. And your processing is different. And don't tell me that a neural network with electrophysiology alone would be sufficient.
[01:22:27] Speaker B: Well, okay, right. But the way that you just described that, I want to redescribe it by saying that smoking the weed alters the shape of the manifold, right? That I'm operating under. And whatever those constituent parts are that give rise to the emergent property of a manifold (G-protein-coupled receptors, cannabinoid receptors), fine, you can talk about that. But at my level of wanting to understand these things, I can just talk about the dynamical-system aspect of it.
[01:22:57] Speaker A: I have my doubts about that. I mean, think of something more drastic. If I call you names now, it could happen that your adrenaline goes up, and that comes from way, way down by your kidney, right?
[01:23:09] Speaker B: Do it. Call me names. Let's do it.
[01:23:11] Speaker A: No, it wouldn't work right now because you're laughing. But you know what I mean. You can be in a situation where somebody talks to you in a very angry way, and your adrenaline goes up and your whole body goes into fight-or-flight mode because of the tone of voice, or just the words that you heard. Even if you just read them. Then it's clear it's only the language.
[01:23:37] Speaker B: But now your account of that, that's exactly what will eventually go into the media. And then we'll have the anger molecule, right? Like dopamine is the happy molecule or whatever. Adrenaline.
[01:23:50] Speaker A: No, adrenaline is very important in the brain, very similar to dopamine. And it certainly activates all kinds of neurons in your brain and also changes the operation of the ion channels.
[01:24:03] Speaker B: And yes, I think it'd be fun.
[01:24:05] Speaker A: If you don't watch out, for like 20 or 30 minutes afterwards your cognition changes, because you're still angry at an email somebody wrote you.
[01:24:18] Speaker B: Yeah, I had to give it.
[01:24:19] Speaker A: I want to make it clear that this sort of talk about electrophysiology and some dimensions which change or so is too far away from the properties language really has or what our brain really operates with.
[01:24:39] Speaker B: So how would this... Let's say you are successful in your endeavors to model neurons this way, right? So some neurons are important, with high mutual information.
You need these vertical and horizontal accounts, this internal machinery. How would that change artificial intelligence? Would you know, or do you care?
[01:25:01] Speaker A: Yeah, that's what I said. Once we have complex building blocks, we can build simple systems, and the building blocks are adequate for the task that we want, namely human cognition. I told you that I distinguish this from engineering and just building things. The original idea of artificial intelligence was a small field of computer science. It was only about building systems that have human-like intelligence. They've changed the meaning all the time. But if you just build a system that does a certain task, that's engineering; these days it's also called artificial intelligence. But the original artificial intelligence, which I stick to, was actually building human-like models, building blocks like humans. Yes. And if you want to do that, and if you have the building blocks from the brain, if you can build it like a brain, then we should at least be able to get away from these absolutely huge and wasteful, and essentially dumb, huge models.
[01:26:09] Speaker B: It's funny like when I.
The rote response or the very typical response when I say like often. So this is. This podcast is called Brain Inspired. And often I, you know, point to the fact that modern AI doesn't pay any attention to brains and what a shame that is and stuff. And often I have neuroscientists on and, and I say, well, what do you think of the modern AI? And without fail they have to say, well, of course I'm very impressed with the abilities of modern language models and stuff, but I, I sense that if I asked you that you would not begin.
[01:26:43] Speaker A: I'm not impressed in the least. I know very well how they work. And as I said, there are my friends who are building something in Austria, and long before the Chinese came out with DeepSeek, they already showed that you can shrink these models enormously. And then these people came along, and so on. It's pure engineering.
And actually, you know, language as such is not so complicated.
And the number of words we use.
[01:27:10] Speaker B: Thank you for saying that.
I feel. Yeah, thank you for saying that.
[01:27:16] Speaker A: And as I said, the number of words we use in everyday discourse, that's like some 10,000, 20,000 words.
But we are able to communicate quite a lot in spite of that.
That is the interesting part. And what interests me is not so much the tool of communication that we use, but what is behind it. So with the simple words I use, I'm still able to cause you to come up with similar things and ideas that relate to it, which are interesting to me.
[01:27:57] Speaker B: Right, right.
[01:27:59] Speaker A: And this is in spite of it. We have so few words, but in spite of that, because we have these complex memories and experiences and so on, language is a way to access them.
[01:28:13] Speaker B: I mean, you Germans, you have many more words than that, actually. Neologisms: perhaps the German language is.
[01:28:21] Speaker A: Yeah. Full of them. It's true.
[01:28:22] Speaker B: Yeah. Best example of that. Yeah.
[01:28:24] Speaker A: Everything is always a word, yeah.
Many new expressions can be used as a word right away.
[01:28:33] Speaker B: So, okay, so that's AI, right? Okay. What I originally asked you there was how this sort of model would change AI. Would it just make everything more efficient? Or there's your pro-symbolic approach to AI, I suppose.
[01:28:48] Speaker A: Yes, yes. There are also people, there's a whole movement, the neurosymbolic one, where they are trying to put these things together. But I already shrank in horror, because there was a lecture about how to put logical reasoning into large language models.
[01:29:09] Speaker B: Okay.
[01:29:10] Speaker A: And I thought: jam it in there? No, no, no. Just the other way around: how to employ large language models to give you the kind of information you need for your reasoning.
That would be my, my type of question.
[01:29:24] Speaker B: Wait, okay, explain this to me more. So you don't want to put reasoning ability into the model. You want to use the model to.
[01:29:33] Speaker A: To give the knowledge that I need for my reasoning.
[01:29:35] Speaker B: Oh, okay. Just use it as a tool, you mean? Not.
[01:29:38] Speaker A: There's still the old problem of Reagan. You know, there was this old problem in AI, which of course was always: where do we get our common-sense information?
[01:29:49] Speaker B: Right, right.
[01:29:50] Speaker A: And these people have made good progress. And then there was always the question: how do I immediately know that I don't know something?
And there the LLMs are not so great.
[01:30:05] Speaker B: Oh, okay, yeah, well, they're not great. There's a cottage industry of showing what they're not great at.
[01:30:10] Speaker A: Yeah, they make it up. They like to make it up when they don't know something.
[01:30:16] Speaker B: Right. Do you use LLMs?
[01:30:18] Speaker A: I don't know. I don't remember the examples, and I don't know whether it's a good example. It's like asking you: did you ever see Nixon? And you immediately know: yes, you met him, or no.
[01:30:28] Speaker B: Right, right.
[01:30:29] Speaker A: Yeah, because it's Nixon somehow. Let's say Reagan Nixon.
[01:30:33] Speaker B: These are old references, Gabriella.
Say Obama or something.
[01:30:39] Speaker A: Exactly. Or Clinton. Did you ever meet Bill Clinton? And you would probably be able to answer that.
[01:30:44] Speaker B: Yeah, yeah. Charlie Chaplin. You're gonna.
[01:30:46] Speaker A: Yeah, okay.
Yes, I know. But it's this ability to immediately be able to answer. You don't have to think for a long time about whether you ever met him. You know the negative.
[01:30:59] Speaker B: Right.
[01:31:02] Speaker A: So, and the LLMs don't know their negatives.
Okay, not very well.
[01:31:08] Speaker B: Add it to the list. Yeah. But it seems like, you know, any criticism of AI eventually gets fixed, right?
[01:31:15] Speaker A: I mean, that's what they say. But look, this is something simple you cannot fix: the fact that there is always a high percentage of errors in it, or that they have these hallucinations. Because if you train a neural network and you train it perfectly, then you have overfitted it, you've overtrained it, and you have no generalization. In order to get generalization, you have to leave something off, and even the training accuracy has to be below 100%.
[01:31:44] Speaker B: But there's that dip. This is where big data comes in, right? And lots of training, where generalization decreases eventually, but then it re-increases. Uri Hasson talks about this phenomenon as direct fit, where we actually are overfitting, and that's how we generalize, because we're interpolating. We've fit so much that it encompasses everything we need, and we don't need to intentionally understand the...
[01:32:18] Speaker A: Areas where we believe we have no lack of data. But in very many areas we do have a lack of data and always will.
They automatically assume that you have tons of data, but you don't always have them.
[01:32:35] Speaker B: Well, you don't have maybe the right data.
[01:32:37] Speaker A: Yeah, that too. But in general, I would say right now it's low-hanging fruit. You see all the results from the areas where lots of data exist.
[01:32:49] Speaker B: Right, right.
Which is the stupid.
[01:32:53] Speaker A: I still remember, in the beginning, teaching the GPT tool about Saterländisch, that is a Frisian language in the north of Germany. So I asked him about it. He didn't know it, and so I explained to him that it was an Upper Bavarian dialect, and so on. And the system, you know, happily responded and told me: yes, now I know what it is, it is an Upper Bavarian dialect. I don't like that about these systems. The first time it's fun, but the second time it's not fun anymore.
[01:33:29] Speaker B: That's funny, that you called the system "him" and "he."
[01:33:31] Speaker A: Yeah, yeah, that's true. I should call it "it." But it is a German computer.
[01:33:38] Speaker B: Yeah. Right after this I'll go start my nonprofit and see how far I can take it. Yeah.
Okay. Gabriella, thank you so much for joining me and continued success with your work.
[01:33:49] Speaker A: Yes, thank you.
[01:33:56] Speaker B: Brain Inspired is powered by the Transmitter, an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advanced research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. If you value Brain Inspired, support it through Patreon to access full-length episodes, join our Discord community, and even influence who I invite to the podcast. Go to braininspired.co to learn more. The music you're hearing is Little Wing, performed by Kyle Donovan. Thank you for your support. See you next time.