Speaker 1 00:00:03 It seems to me a particular computation or mental, uh, ability or experience is grounded by, or supervenes on, to use another term of art, certain neuronal circuits. Some neurons do something, other neurons do other things, and what they do is identified with different functions. I also, uh, meditate on whether a brain state that explains behavior has to incorporate the, the state of every channel and every membrane of every neuron. If it does, the game's over. Where it works best is when there's multiple competing hypotheses and you can conceive of an experiment, you know, where the outcome resolves a set of questions more decisively, and it's exhausting. And, and, you know,
Speaker 2 00:00:58 Yes, it is. It's true.
Speaker 0 00:01:04 This is brain inspired.
Speaker 2 00:01:18 That was the voice of Jeffrey D. Schall, a familiar voice to me because I was a postdoctoral researcher in his lab at Vanderbilt University. I am Paul. Hello, everyone. Jeff has recently picked up and moved his lab to York University. And he has for years been studying the neural and computational decision mechanisms that guide, control, and monitor behavior. That's straight from his website, and the vast majority of his research centers around making decisions with saccadic eye movements in non-human primates and connecting the neural instantiation of those processes with mathematical models. A saccade, by the way, is the rapid kind of eye movement we make all the time to look at things, as opposed to the smooth pursuit eye movements we make when we track objects moving in space. When I was in Jeff's lab, I worked on the neural basis of, and a model of, how we make decisions and choices and how we can withhold our responses at the last second, as we're preparing to make the response, something called response inhibition. That's just one of multiple tracks of research from his lab.
Speaker 2 00:02:28 So the circuitry involved in how we move our eyes is well known, which makes studying cognition in the realm of eye movements a seemingly straightforward process. But not so, my friends. It turns out there are many confounds and twists and turns even in this well-known system, so that mapping cognitive or psychological functions and sub-functions onto the activity of single neurons and populations of neurons within circuits is an intricate affair. One of the reasons I wanted to have Jeff on the podcast is because he has maintained a few guiding principles throughout his career to help clarify how to ask the right questions and how to know whether the answers are reliable, so much so that every year in his lab we reread the same set of papers that outline these principles. We talk about two of those principles today. One is called linking propositions, from Davida Teller in the 1980s, which is a systematic guide for how to understand the relationship between neural activity and psychological functions.
Speaker 2 00:03:31 And the other is called strong inference, from John Platt in the 1960s, which is a systematic recipe for how to most productively and efficiently do science. We discuss these concepts in terms of the many projects Jeff has ongoing, and partly in reference to two review papers Jeff wrote, which go way deeper into the world of decision making with examples from Jeff's work and eye-movement-related research in general. We also discuss, uh, how the game may or may not have changed over the years as we can record more and more neurons simultaneously and relate those recordings to the large deep learning models we often discuss on Brain Inspired. We talk a little free will, among other things. And Jeff takes some guest meta-science-type questions. So I encourage you to read all four of the papers that I just mentioned, uh, which you can find in the show notes at braininspired.co/podcast/140. Thank you for listening. Support Brain Inspired on Patreon if you value it, or consider taking my online course about this emerging neuro-AI world. Learn
more at braininspired.co. Okay, enjoy Jeff. Jeff, we were just talking about how you don't age, uh, and you mentioned your knees, but I remember you telling me that, uh, you at least could dunk a basketball. How long has it been since you dunked a basketball?
Speaker 1 00:04:52 Oh, years and years, years and years, but I enjoyed, uh, coaching my, uh, son when he was in middle school and showing those boys how to run suicides. Oh
Speaker 2 00:05:02 My gosh. Yeah. Suicides make me, uh, hurt just thinking about them, actually. He, he's not still playing basketball, is he?
Speaker 1 00:05:10 No, no. He actually is a police officer in Nashville. Now he's a shooting instructor.
Speaker 2 00:05:16 Speaking of Nashville, uh, you were in Nashville for over 30 years, right?
Speaker 1 00:05:22 That's right.
Speaker 2 00:05:23 So at Vanderbilt University, where I was your underling, your, your postdoc there, where you taught me many a thing. Um, but you have recently moved to York University. Uh, and I know that it was quite a process. I, I guess COVID has, has, uh, had something to do with that. So what is your new title? I'm not sure I even know your, your new title at York, and the nature of your, of, of your job. Can you tell me about it?
Speaker 1 00:05:50 Well, uh, I'm a professor in the biology department, in the faculty of science, and appointed as the director of a center for visual neurophysiology. And what that means is that I'm, uh, involved in helping plan, um, and, and, and equip, and I guess staff a, uh, facility where non-human primate visual neurophysiology experiments will be done. So this is in the context of a large grant that York got from, uh, CFREF is the acronym in Canada. It's a 30-million-dollar grant or so, and York University committed additional funds to, uh, fund new faculty positions and build a new building. And so the ground floor of that building will be a vivarium, uh, in which, uh, five researchers will do neurophysiology experiments of various sorts on vision and action.
Speaker 2 00:06:50 Uh, speaking of 30 years ago, you've been recording, uh, neurons for quite some time. And, you know, you've told the story in many a lab meeting that I've been a part of, of how, relative to these days, things have changed and how you used to, um, sort of manually do things that are now automated. I'm wondering if you could just tell a story of how you used to go in and, you know, put, uh, holes in boards and such to, uh, make experiments. Can you tell a story about, like, what that was like, and, and then we'll compare it to these days?
Speaker 1 00:07:26 Yeah, well, I mean, in the, in the, uh, during the postdoc period with Peter Schiller at MIT, we made our own electrodes, glass-coated platinum-iridium. So you had to etch the wire, make it the right pointiness, not too pointy, not too dull, and then, uh, apply the glass coating with a particular device. And these things required, uh, manual dexterity and skill, which only arrives through practice. So you ruin a lot. Yeah. So once the electrode was made for the day, then you could, uh, insert it in the, uh, micromanipulator, the, the, uh, device that was attached to the chamber on the monkey's head to advance it into his brain. But it was one electrode at a time, one contact, hopefully isolating one neuron at a time.
Speaker 1 00:08:18 And the isolation process was fiddling with knobs. The, the, you know, we get voltage as a function of time on various scales, and at the scale of a, a neuronal spike, it's on the scale of, let's say, three or four or five milliseconds. So that's the trace of voltage by time on the oscilloscope screen we looked at. And there were other electronic devices, variously known as Schmitt triggers and, and other kinds of spike isolation devices, that allow you to set thresholds in voltage and time to generate a TTL pulse only when the voltage-by-time waveform satisfied the criteria that you set. And you kept an eye on those because over the course of the session it would drift and change, and one could chase the electrode, I mean, chase the neuron, with the window discriminator and with the micromanipulator, moving in and out, one neuron at a time. If you got one neuron a day, you felt real good.
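[Editor's note: for listeners who haven't seen one, a window discriminator is just a pair of hand-set criteria on the voltage trace. A minimal Python sketch of that logic follows, with invented numbers and variable names; it is only an illustration of the idea, not any particular device or the lab's actual setup.]

```python
import numpy as np

def window_discriminate(v, fs, thresh, win_t, win_lo, win_hi):
    """Toy window discriminator.

    v       : voltage trace (volts)
    fs      : sampling rate (Hz)
    thresh  : trigger threshold (volts)
    win_t   : time after the threshold crossing at which the window is tested (s)
    win_lo, win_hi : voltage window the waveform must pass through at that time

    Returns sample indices of accepted "spikes" (where a TTL pulse would fire).
    """
    lag = int(round(win_t * fs))
    # upward threshold crossings
    crossings = np.flatnonzero((v[1:] >= thresh) & (v[:-1] < thresh))
    events = []
    for i in crossings:
        j = i + lag
        if j < v.size and win_lo <= v[j] <= win_hi:
            events.append(i)            # waveform satisfied both criteria
    return np.asarray(events)

# Example on a synthetic trace: background noise plus one fake spike at sample 100
fs = 40_000.0
v = 5e-6 * np.random.randn(400)
v[100:140] += 60e-6 * np.exp(-np.arange(40) / 8.0)
print(window_discriminate(v, fs, thresh=30e-6, win_t=0.3e-3,
                          win_lo=5e-6, win_hi=40e-6))
```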
Speaker 2 00:09:20 Oh, that's what, uh, my PhD advisor Marc Sommer always, always touted as well. One neuron a day, that's a, that's a great day. That's a little, a little different from these days, which now you, now you can't chase neurons, because you have so many, uh, electrodes, uh, so many leads on each electrode going down. It's just massive populations of neurons that you're recording, and you don't really get to choose so much.
Speaker 1 00:09:45 Well, that's right. So one of the, I don't know if you'd say revolutions, it's a technical evolution for sure, is the ability to put more than one contact in the brain in more than one brain area, with either linear electrode arrays or Utah arrays or a variety of others, chronically implanted or, or placed day after day. One of the concerns, of course, is whether the quality of the isolation of the many neurons on many electrodes is, is comparable to what we used to do, mm-hmm, <affirmative> where we could focus on one spike at a time. And, um, well, there's different points of view about this, of course, and many people are working hard on algorithms that efficiently isolate spikes according to rigorous criteria. And of course they never work perfectly. You're never sure what you've missed and what you've hit, uh, but you can't sort spikes infinitely. And so other people have adopted the point of view that maybe it's not that important, that it's just the spiking of the signal and other frequency bands that provide a perfectly adequate signal to do things with, like drive robot arms, let's say. Mm-hmm <affirmative>.
Speaker 2 00:11:01 I mean, I remember feeling that consternation, because I grew up, you know, recording one neuron at a time, maybe two a day, that was a great day. But then, you know, in your lab, we started using these multi-contact electrodes where, you know, you'd have to <laugh>, I, I constantly felt like I wanted to better isolate the neurons, but there's just nothing that you can do about it. So, um, I don't know, how do you feel? Like, uh, you know, back in the day, were, were they the good old days, recording single neurons, uh, because you know that, you know, you could isolate them? Because there's a, there's a give and take also, because recording one neuron at a time, you know, you're lowering the electrode, you're having your animal do a task, and you're listening for a neuron that sounds interesting, right, relative to the task, which also you can't do anymore. Um, was, was that the good old days? Are the good old days gone, or is this a better era?
Speaker 1 00:11:57 Well, this is a more informative era, for sure. We can, we can answer questions that were really impossible or hard to answer before. So for example, the, uh, recent work we've been focusing on in the lab involves placing linear electrode arrays, like what you used in the frontal eye field, mm-hmm <affirmative>, but placing them in areas where they can pass through the cortical layers, uh, perpendicularly, mm-hmm <affirmative>, areas like the supplementary eye field or, or parts of V4 on the prelunate gyrus. And so we can assess the properties of cells across layers in a manner like Hubel did and others, you know, in the sixties and seventies, but this information is, is rarely available in other cortical areas. V1's got more than anywhere else, I think, but in the frontal lobe, there's hardly anything now.
Speaker 2 00:12:48 I, I just had Matthew Larkum on the podcast, and he makes the argument that we need to be paying more attention to where the dendrites are, not so much the cell bodies. Right. So, you know, thinking about recording across layers of cortex, how do you think about that? Right. Because, you know, you have sources and sinks from a few years ago, and, and you're listening, you know, for the action potentials, which is the output of the neuron, but, um, thinking about the dendrites and where signals are coming in relative to that, how does that affect your thinking about what you're doing?
Speaker 1 00:13:18 Right. That's a great question and, and a great connection. So one of the observations he, he is known for, and others have made, is the description of these, um, calcium spikes that are emerging in the apical dendrites of layer 5 pyramidal cells. Um, so I've been fortunate to be in a collaboration with a colleague at, at Florida International University. His, uh, name is Jorge Riera. So in this collaboration, uh, one of the, uh, products is a biophysical model of a layer 5 pyramidal cell that includes these calcium spikes. So we, we were thinking about that concretely. And one of the approaches, the validity of this has not been, um, well, I don't know exactly how valid this is quite yet, we haven't published a paper, had any reviews, but one of the approaches is looking at sources and sinks relative to a spike, let's say, recorded in layer 5. We can say we're in layer 5 because of the linear electrode array and other information that lets us align it. So for a spike in layer 5, we, we can synchronize the current source density on the spike, do spike-triggered current source density. And when we've done that for, uh, spikes in different layers, we see various kinds of patterns that relate in interesting ways to the possibility that a current sink at the top could be those calcium spikes.
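[Editor's note: the spike-triggered current source density idea can be sketched in a few lines. CSD is commonly estimated as the negative second spatial derivative of the laminar LFP across contacts, then averaged in windows aligned on a reference neuron's spike times. The code below is a generic illustration with assumed array geometry and names, not the lab's actual pipeline.]

```python
import numpy as np

def csd(lfp, spacing_um=100.0, sigma=0.3):
    """Second-spatial-derivative CSD estimate.
    lfp : (n_contacts, n_samples) laminar LFP in volts
    Returns an array of shape (n_contacts - 2, n_samples)."""
    h = spacing_um * 1e-6                       # contact spacing in meters
    # CSD ~ -sigma * d2(phi)/dz2, finite-difference approximation across depth
    return -sigma * (lfp[2:] - 2 * lfp[1:-1] + lfp[:-2]) / h**2

def spike_triggered_csd(lfp, spike_samples, fs, pre_s=0.01, post_s=0.02):
    """Average the CSD in windows aligned on the spike times of one reference cell."""
    c = csd(lfp)
    pre, post = int(pre_s * fs), int(post_s * fs)
    snippets = [c[:, s - pre:s + post]
                for s in spike_samples
                if s - pre >= 0 and s + post <= c.shape[1]]
    return np.mean(snippets, axis=0)            # (n_contacts - 2, pre + post)

# Fake data: 16 contacts, 10 s at 1 kHz, 50 random "spike" times
fs = 1000.0
lfp = 1e-4 * np.random.randn(16, 10_000)
spikes = np.random.randint(100, 9_900, size=50)
print(spike_triggered_csd(lfp, spikes, fs).shape)   # (14, 30)
```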
Speaker 2 00:14:50 Very good. Um, <laugh> so in preparation to speak with you, every year in lab meetings, at the beginning of one of the semesters, right, every year we would go through, uh, a set of papers that you kind of hold dear. And, you know, I, I reviewed those papers again, and we're gonna talk about some of the concepts from those papers. But it, it, it was really interesting going back, thinking about where we are today with, like, these massive recordings and also the quote-unquote machine learning or deep learning approach. And I, I want to get your reflections on some of these <laugh>, uh, ideas, one of which is linking propositions. So I also went back and read your 2004 paper, uh, on building a bridge between brain and behavior. Um, and by the way, one of the things that is fun for me is just how, maybe not that I didn't realize, but maybe I was just less educated, uh, of course, than I am now, but, um, how much more I appreciate how steeped you are in the history of philosophy and all of the related issues, um, related to mind and psychology versus brains, which I didn't appreciate as much back then. So, um, just a belated congratulations, uh, and admiration to you for that.
Speaker 1 00:16:06 Thank you, Paul.
Speaker 2 00:16:08 So you, you wrote this in 2004 and you talk about linking propositions, and I'm gonna ask you to explain what linking propositions are in a second. And then in 2019, 15 years later, um, you revisit these ideas, updating with everything that we've learned about saccade production, response preparation, decision making, visual attention. So I, I, I want to get your thoughts from back then and how you're thinking today about linking propositions, um, and where we are and where we're going with them. So what is, what is a linking proposition?
Speaker 1 00:16:41 Well, linking proposition is a, is a term of art that I didn't, uh, formulate. It comes, uh, through Davida Teller, from a, a, a vision scientist named Brindley. The, the concept is that there are certain identifiable, let's say, psychological functions, cognitive functions, perceptual abilities. So which neurons enable that ability? And the neurons that enable it are the, are the bridge locus, the place where the linking proposition holds. So if we wanna study visual decision making, let's say, it's unlikely the olfactory bulb has much to do with that. So part of the identification of the, of, of a, of a linking proposition is ruling out the neurons or the circuits that can't be involved. And this of course is a process of elimination, where you falsify certain hypotheses, which brings us to the strong inference approach, which was one of those papers. So the, so as I just said, it puts in more concrete terms what it seems to me everyone believes in some sense, that a particular computation or mental ability or experience is grounded by, or supervenes on, to use another term of art, certain neuronal circuits. Some neurons do some things, other neurons do other things.
Speaker 1 00:18:09 And what they do is identified with different functions. So I, I don't regard it as a very controversial concept at all, but its value is in grounding it and, and slowing the thinking down. Many, many authors will write, and I've written this too before, you'd say this neuron represents this, right? Well, what does "represent" mean, right? <laugh> In what sense is it representing? And so there's more to say and more to unpack in that concept. And the concept of linking propositions provides a path to help you think these things through,
Speaker 2 00:18:48 Even so.
Speaker 1 00:18:51 It just structures, uh, a set of logical inferences that have to do with, you know, if the neurons do this thing, then that mental state exists. If the neurons are disabled, the mental state doesn't exist. If the mental state exists, according to another measure, the neurons better be active, and, and so on. It, it, it, it makes us slow down and think about what we mean. And then the bridge locus concept reminds us that, that we're not sure what level of description is the adequate one. Mm. So in that paper, I also, uh, meditate on whether a brain state that explains behavior has to incorporate the, the state of every channel and every membrane of every neuron, mm-hmm <affirmative>. If it does, the game's over, cuz we can't even, we can't measure it, and we couldn't keep track of it if we could. Probably it doesn't, though. And this is this concept of functionalism. I mean, we're, uh, we're running a computer program that lets me see your image and you hear me through the internet. I don't know if you're running a Windows machine or a, or a, or a Mac, mm-hmm <affirmative>, and it doesn't matter. The hardware matters fundamentally, I can't put my CPU in your computer nor yours in mine, if we're not, you know, using the same hardware, but somehow software is different. So the same software can run on different hardware.
Speaker 2 00:20:22 You didn't use the term multiple realizability in that paper, I think, but that's essentially what we're talking about. I don't know if that was a term yet.
Speaker 1 00:20:29 Well, it, it is a concept I was familiar with, and, and I think in that paper, near the end, I'm, I'm, I'm meditating on, how was it? <laugh> I'm sorry, I haven't looked at that paper in a while. <laugh> So, toward the close, there's a related problem in all this: if all behavior is caused by neurons discharging and glands secreting, and there's nothing else, well, that's a very deterministic position. And according to many people, then there goes free will. You know, how can, how can my wants be anything I control if they control me? Mm-hmm <affirmative>
Speaker 1 00:21:12 And yet in the law and in personal relationships, we do hold each other accountable, and we do excuse each other. So the reasons for actions matter, at least in social discourse. And so one of the challenges is reconciling intentional reasons with neural causes, and multiple realizability, according to many philosophers, is, is, is that crack in the window that allows planning of alternative futures to mean something. To think about, do I wanna live in Durango, Colorado, or Pagosa Springs? <laugh> Until you committed to one or the other, those were both lively, viable possibilities,
Speaker 2 00:21:58 Decision making, and specifically saccadic decision making, is kind of in this sweet spot, right? So thinking about a, a bridge locus and linking propositions for, let's say, um, you know, like a motor neuron that innervates the muscles, right, well, that's pretty clear. But, and in that paper and in your more recent paper, you still, I believe, worry that higher cognitive function is not necessarily amenable to this approach. Right?
Speaker 1 00:22:25 Well, that's part of what was going on with this Trends in Neurosciences paper, thinking about what we can know and where our uncertainties are in terms of, of, of, of bridge loci for, you know, let's say the stochastic accumulator decision-making kind of framework. So, um, the work that you were involved in at Vanderbilt, that, that several of us had been working on, sparked from the, the race model of the, of the stop signal task, the countermanding task. Well, the race model that Gordon Logan formulated explains how behavior in this particular task arises. And it's an abstract model, as you know, a go and a stop process that have random finish times and they don't interact, the end. That's the model. And the mathematics of finish times lets you estimate a quantity that you couldn't otherwise see, called stop signal reaction time. Well, for the first, you know, 15 years of its existence in the literature, you know, it was, it was a number you could get from behavior, and it changed in kids with ADHD and other disorders, but what it was neurally was entirely unknown. So then Doug Hanes, a long time ago, ran into the paper and suggested we do it with monkeys, and, and it turned out to be very useful and informative. So we found neurons doing just what they needed to do to be implementing that race model. But now we've got this level of description of an abstract math model and we've got neurons, and, and we need to communicate across those levels more deliberately, which leads to the interactive race model, uh, part of which you accomplished in the, in that iScience paper.
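[Editor's note: for listeners new to it, the independent race model can be written down in a few lines: a go process and a stop process race with finish times that do not interact, and stop signal reaction time (SSRT) is estimated from the go RT distribution and the stop-signal delay. The sketch below uses arbitrary parameter values and the simple mean method, purely as an illustration of the logic.]

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_stop_trials(n, ssd, go_mu=400, go_sd=80, ssrt_true=120):
    """Independent race: go and stop finish times do not interact."""
    go_ft = rng.normal(go_mu, go_sd, n)   # go finish times (ms)
    stop_ft = ssd + ssrt_true             # stop finish time (ms), deterministic here
    return go_ft < stop_ft                # True = go won = failure to stop

ssd = 280                                  # an arbitrary stop-signal delay (ms)
p_respond = simulate_stop_trials(50_000, ssd).mean()
go_rt = rng.normal(400, 80, 50_000)        # RTs on no-stop-signal trials

# Mean method: SSRT ≈ mean(no-stop RT) - SSD, valid near P(respond) = 0.5
print(f"P(respond | SSD={ssd}) = {p_respond:.2f}")       # ~0.50
print(f"estimated SSRT ≈ {go_rt.mean() - ssd:.0f} ms")   # ~120 ms, the value built in
```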
Speaker 2 00:24:17 You mentioned Gordon Logan. I'm just kind of curious where he is, where he is in thinking about neurons. Does he care about neurons these days, or is he still... Because it was interesting in lab meetings, um, to have you, who, although you appreciate psychology and, you know, the math psych models, are, um, you know, a hardcore neurophysiologist, and then Gordon on the other side being sort of hardcore, I don't care about the neurons, this is the way the model works. And it was hard to, uh, move forward on the psychology of these things in, in meetings. So do you know where he is, uh, these days on that?
Speaker 1 00:24:51 <laugh> Well, we're still collaborating, uh, with Tom Palmeri, mm-hmm <affirmative>, and, and, and, and a, and a group of really talented postdocs. He's still animated by the questions. We're working on one project right now that, uh, was, was launched with another postdoc several years ago, Bram Zandbelt, you knew Bram. Bram recognized that we should call it e pluribus unum. And it was, uh, not a model, a simulation of how ensembles of ramping accumulators can make reaction time distributions that are realistic. Well, we're, we've extended that more recently to a choice version. So there's two ensembles of ramping accumulators, and we can now instantiate speed-accuracy trade-offs and try and understand how these, these ensembles of accumulators, um, work together. They're not interacting yet, but a number of interesting ideas have emerged about how speed-accuracy trade-off could be governed that are, are beyond just changing the threshold, which is the standard psychology model way. Out of this has also arisen, uh, some new insights into the judgments of confidence that one can probe after having made one of these choices.
Speaker 1 00:26:21 So in doing this work, uh, Gordon and Tom and I have different views about what we're doing. So, well, it comes to the use of the word simulation. So we've debated whether e pluribus unum is a simulation or a model, and the reviewers treated it like a model that could be fit to behavior and explain something. And in my view, that's not what it is. What we're doing is simulating the essential aspects of a particular group of neurons and then evaluating that performance. One of the things this new modeling is doing, because we have choices now, is instantiate choices across the speed-accuracy trade-off, and then, you know, simulate distributions of correct and error RTs, and then fit those with one of the psychological models, like the linear ballistic accumulator. And so it's been an interesting thing to explore, you know, as above, so below: are the parameters of the, of the psychological LBA, fit to the performance of the supposed neural instantiation, do they map onto each other very nicely and accurately? Um, so we're still all engaged, but our unique perspectives, coming from our careers, lead us to this rich, uh, ultimately synergistic outcome.
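[Editor's note: a toy version of the ensemble-of-accumulators idea described above: many noisy ramping units per response option, with a response emitted when the pooled activity of one ensemble reaches threshold first. The resulting correct and error RTs are the kind of output one might then fit with a psychological model such as the LBA. All names and parameters here are invented for illustration.]

```python
import numpy as np

rng = np.random.default_rng(1)

def ensemble_rt(n_units=20, drift=0.25, noise=1.5, threshold=40.0, t_max=1000):
    """Time (ms) at which the mean of an ensemble of noisy ramping
    accumulators first crosses threshold (1 ms steps)."""
    x = np.zeros(n_units)
    for t in range(1, t_max + 1):
        x += drift + noise * rng.standard_normal(n_units)
        if x.mean() >= threshold:
            return float(t)
    return np.nan                              # no response before the deadline

def choice_trial(drift_target=0.25, drift_distractor=0.20):
    """Two racing ensembles: the choice is correct if the target ensemble wins."""
    rt_t = ensemble_rt(drift=drift_target)
    rt_d = ensemble_rt(drift=drift_distractor)
    return min(rt_t, rt_d), rt_t <= rt_d       # (RT, correct?)

trials = [choice_trial() for _ in range(500)]
rts = np.array([rt for rt, _ in trials])
acc = np.mean([ok for _, ok in trials])
print(f"mean RT ≈ {np.nanmean(rts):.0f} ms, accuracy ≈ {acc:.2f}")
```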
Speaker 2 00:27:51 Yeah. I was gonna ask you about this, um, later, but I'll, I'll just ask now, because, because of this kind of, you know, collaboration with psychology at large, I, I suppose. You just used the word rich, right? You know, how important is it for, let's say, a neurophysiologist to, you know, um, get that perspective from the other side, quote unquote? I mean, it's been a very productive collaboration, you know, specifically with Gordon and Tom, and I know, you know, there's lots of others, but since we're talking about Gordon and Tom right now. Yeah. Um, but it also sometimes causes a little friction, I remember as well. Uh, but I, I suppose we need that friction to make progress.
Speaker 1 00:28:30 Yeah, I think so. I mean, um, the, the friction happens just when either of our collective assumptions are violated or, or, or compromised, and we have to think, you know, why are we saying this? Mm-hmm <affirmative>. How do, why do we think we know this? And out of each of these, um, as you said, sometimes, uh, uh, uh, fractious and even heated conversations, cuz we care, comes new insights that would not have been arrived at unless we'd engaged like that. I mean, why should a neuroscientist know about psychology? Certainly cuz that's what the brain does. I mean, one could study the brain just cuz it's a cool organ in and of itself. Yeah. It's beautiful anatomically and in a, in a sort of inner-workings sense, and the investigation of other organisms and other nervous systems is, is, is really interesting, really also enriching.
Speaker 1 00:29:26 But if we're interested in human behavior and, you know, dealing with, um, disorders of human behavior and cognition and emotion and so on, if we wanna relate neuroscience or neurophysiology, let's say, to the human condition, we need to say what the human condition is accurately and use words carefully. And the problem is, the, the scientific terms of art, like decision or attention, we try to use those in a scientifically rigorous sense. Yeah. But they are words we use commonly every day when we go home to our kids. So again, back to the linking proposition idea and, and, and, and a math psych approach, the goal is to expose the assumptions in the use of the words and eventually, ultimately, to kind of replace the word with its kind of more functional, even mathematical or neuro-mathematical, uh, underpinning
Speaker 2 00:30:27 Going back to the, you know, the idea of linking propositions, another thing I was struck with revisiting the whole decision-making, quote-unquote decision-making, literature specific to saccadic <laugh> eye-movement-related decision making and choosing is just how thorny, um, every step is, right, and how detailed and rigorous one must be to study these things. And in some sense, going back to what I was saying earlier about the right level of cognitive process to study, that links up with the idea of a bridge locus and linking proposition, which seems most amenable to the neuron doctrine, right, of, uh, Horace Barlow, back when single neurons were considered to be doing cognitive functions. Um, but even this one little step, the saccadic system in terms of visual attention and, and choosing targets, uh, becomes really thorny with the linking proposition. And, and, you know, you, uh, <laugh>, again, something I admire, you have that rigor and that attention that is required, um, to go down this road. But even in something that is, you know, maybe less higher cognitive, right, uh, like decision making in a two-alternative, uh, forced choice task, even then it sort of explodes and there are so many different issues. So how far do you think the, the concept of linking propositions can take us in terms of, quote-unquote, higher cognitive functions, emotions, and, you know, this deliberation process, et cetera?
Speaker 1 00:32:07 Right. Well, I wanna make sure I'm hearing the question. Well, the question is, how does it, how does the linking proposition framework translate to, uh, more complex mental states and behaviors,
Speaker 2 00:32:21 Which you, which you mentioned in 2004, uh, that, that we may not be able to get there. Right.
Speaker 1 00:32:27 Right. Well, we sure are not gonna get there without having gone through the effort of figuring it out for simpler systems. I mean, we worked out the hydrogen atom before we did any others, right? You gotta, you know, you, you, you, you walk before you run. But certainly, so, so there's, there's a, there's a, a bifurcation here. On the one hand, we can talk about, uh, higher-order function, language, social cognition, that kind of stuff. Yeah. That's one way to go. The other thing is the single neuron: recording that neuron, is that neuron the bridge locus? Mm-hmm <affirmative>. Surely no. I mean, that neuron's embedded in a circuit, and now, okay, well, how big is the circuit? Where is the circuit? You know, what are the boundaries of the circuit? Which neurons are part of it?
Speaker 1 00:33:16 I mean, obviously now there's gotta be anatomical connections, but I can draw a path from the olfactory bulb to the visual cortex too. It's a roundabout path. Right? Right. Well, that's not the circuit we mean, we think, right? So again, the, the, the, the framework of the approach, to, to, to identify the questions that need answering, I think is gonna be useful all the way through. Then also, I mean, this kind of refers to the calcium spikes, too. Neurons have properties that we didn't anticipate, things Horace Barlow didn't know, like calcium spikes in apical dendrites; active dendrites in the first place took a while to understand, right. And even in the brain stem, like the models of saccade production, eye movement generation, that David Robinson, you know, of, of Johns Hopkins, developed, those models are, are, are powerful and effective, cuz they translate into the clinic effectively and help with diagnosis.
Speaker 1 00:34:13 But what's been discovered is that there are properties of the membranes of certain neurons in that brain stem circuit such that, were it not for that ion channel, it wouldn't work the way it needs to work. So, you know, okay, well, that's the bridge locus too, and we are talking about channels. And it's a good thing we are, because there's certain drugs that can be given that act on that channel, that treat eye movement disorders. So, you know, we don't wanna be hamstrung by these concepts either. Now, for something like social cognition, for example, I guess we can use that as an example, or, or, um, you know, more complex decision making about interpersonal relationships and stuff like that, it's still a brain doing it, and maybe the olfactory bulb's more involved now. Right. Cause, you know, how someone smells matters. Right. But, um, it's not clear to me that it's a qualitatively different problem. It's just that we know less about it. And maybe, and if we're talking about language, like what we are doing, it's, it's a uniquely human capability, which means there's certain data we may never get. Right. But that's sort of an ethical thing. Scientifically, the phenomena underlying the data we would like to get are happening in our brains too.
Speaker 2 00:35:40 Do you think that the single neuron doctrine set neuroscience and/or psychology back, or do you think it was kind of a necessary stepping stone? Because now people talk about the population doctrine. Right.
Speaker 1 00:35:52 Right. Well, I don't think the single neuron doctrine, well, even the word doctrine was kind of self-congratulating <laugh>, but, you know, that's the data we had. We had spikes of neurons, and as we said, we could only get one at a time, and it was sure fruitful. We discovered they're tuned for orientation and motion direction. And if you show a monkey a rivalrous stimulus, like Nikos and I did, well, some neurons discharge when, you know, the, the motion is the thing the monkey says he sees. So single neurons are pretty smart, but again, they're embedded in networks.
Speaker 2 00:36:30 Right. But in, like, a place, you know, in a cortical area like frontal eye field or superior colliculus, you can still record single neurons. And, you know, you have these distinct types of responses that they give. So some are, like, visual, respond to visual stimuli, we'll just talk frontal eye field for a second. Some, uh, respond, respond just before a, an eye movement. Right. It's like a movement neuron. And then there are some that are in between. Um, so you can kind of make a story out of recording these single neurons in an area like frontal eye field. But then you get into an area like supplementary eye field or other parts of the cortex, right, where it's less clear or there's more variety in the types of, uh, responses of neurons. So in some sense, you know, the frontal eye field is a good area to be in if you want to make these linking propositions. <laugh> right. Yeah. Cause you can tell, you can make progress, um, in that way.
Speaker 1 00:37:21 Well, it's true. But, but partly because the questions were well enough framed and there was a background of knowledge and so on and so forth. So you're right. When you move to an area like supplementary eye field where you've recorded too, it isn't quite as clear, but the same kind of deliberate approach that says, let each neuron tell its story
Speaker 2 00:37:42 Mm-hmm <affirmative>
Speaker 1 00:37:43 And develop, well, mathematical models of alternative functional processes that they could be engaged in, or, pardon me, representing, um, has allowed us to sort things out. And, and so one paper that, that, uh, Amir Sajad is the first author on, in Nature Neuroscience, describes the laminar organization of supplementary eye field neurons, uh, in monkeys doing the stop signal task. They're not doing stopping, they're not doing reactive inhibition, but there's a lot of neurons active when monkeys make errors and when they're gonna get their juice, mm-hmm <affirmative>, or when they're not gonna get their juice. And so those neurons can be distinguished functionally, like, when do they discharge, and how does the variability in discharge rate relate to other parameters? But importantly, they're also different in their distribution across the layers. So if they're different in function and they're different in layer, they're certainly different in connection and morphology. So now we're at that circuit and neuron level. We have another manuscript that has been accepted at Nature Communications, growing out of the same dataset, describing three other kinds of neurons that, that you would recognize, you'd see the profile and say, I saw that neuron before
Speaker 2 00:39:05 Uhhuh I'm sure. Yeah.
Speaker 1 00:39:06 Right. But now there's sort of some other explanations for it, some other possibilities for interpreting it. And so there's the next step. Now, from the population coding idea, of course, lots of, lots of labs are happy to put many electrodes in and then, and then combine the activity as a whole through a dynamical systems approach, information theory, other, you know, other kinds of things. Mm-hmm <affirmative>. Now, much of that, in, in, in my understanding, has been for the purpose of, uh, brain-machine interfaces, you know, making a robot move out of motor cortex. Right? So now it's an engineering problem, and it doesn't really matter how the brain works. It matters how my robot works and how I connect my robot to the brain.
Speaker 2 00:39:53 It's a decoding problem.
Speaker 1 00:39:55 Yeah. Yeah. So more power to 'em. I mean, this is, this is important. If they can make progress and help people, fantastic. But I don't think we should deceive ourselves into thinking that's how the brain works, because structure and function are so intimately connected that if you ignore the layer in which a neuron's recorded, for example, then you, you're missing a big part of the, of, of the essential neuroscience story.
Speaker 2 00:40:28 Okay. So this, this brings me to, uh, deep learning, right, and these really large, mm-hmm <affirmative>, deep learning models that have become all the rage and that we discuss a lot on, on this podcast. But it's interesting, I have a, a slide in the course that, that I created, all about this neuro-AI landscape. And it shows the old way of doing things, where you have a hypothesis and then you might build a model. Uh, and then the new way of doing things is you build a model and you train the model, and then, you know, and then you compare the model to your data. But what I realized, embarrassingly, going back and reading the strong inference John Platt paper, is that I need to update the slide, because you don't make a hypothesis, you need to make multiple alternative hypotheses. Right. <laugh> yeah. So, um, I don't know if you wanna discuss, um, you know, well, maybe you could just discuss, uh, describe what strong inference is, because it's a pretty simple thing. And, and then I want to ask you about what your thoughts are on this alternative approach of just creating these really large models, training them, and then comparing them to brains, and whether that is amenable to a strong inference approach.
Speaker 1 00:41:39 My understanding of strong inference is that it it's, it's basically eighth grade science, the way we were taught.
Speaker 2 00:41:45 Right. <laugh> which no one does
Speaker 1 00:41:49 <laugh> Right, right. You ask a question that you can answer, and the answer is yes or no, a very Sherlock Holmes kind of approach. So that, whatever the answer is, you have some, uh, confidence that the state of the world is such that it's A and not B. And then if it's A, it could be A-prime or A-double-prime. So now we do the next experiment. But it, it requires grounding the hypothesis in, uh, kind of a, a rigorous network of statements and concepts and facts and math and so on that allow you to articulate something meaningful. Now, let's be clear, it's something that works when you know something well enough to ask that
Speaker 2 00:42:37 Question, the right questions. Yeah.
Speaker 1 00:42:39 Yeah. Lots of aspects of brain science are still exploratory. So it would be premature to, to be too rigorous in your hypothesis testing until you know enough about what's going on there. So, you know, kind of just looking and seeing what's going on, there's still plenty of room for that. But where it works best is when there's multiple competing hypotheses and you can conceive of an experiment that divides, you know, where the outcome resolves a set of questions more decisively
Speaker 2 00:43:14 In a sort of Popperian falsification process, I suppose.
Speaker 1 00:43:17 Well, that seems to be the most rigorous, doesn't it? And it's exhausting and, and <laugh>, you know,
Speaker 2 00:43:25 Yes, it is. That's true.
Speaker 1 00:43:26 And it, it's rare that such papers get into glossy journals, for some reason. Mm. And that, that's a driving force, a social influence that we all have to acknowledge. But, um, again, those social influences are not how rigorous scientific progress happens. I mean, just cuz the church said he shouldn't, Galileo did see those moons <laugh>, and that's that,
Speaker 2 00:43:57 What do you think of the, you know, you, you know, training a deep learning model and then comparing it, in a, in a sense you're not really even asking a question. Do you see, is there room, uh, you know, within the machine learning kind of modeling approach and, um, comparing it to brain data, is there room for strong inference using that approach? Or is this something that is less than ideal in your eyes?
Speaker 1 00:44:21 Well, it's a great activity, cuz the, the, the, the, the network can do things for us. Maybe some of the things it's doing for us we should think more carefully about whether we want done, like facial recognition that misidentifies certain categories of people more likely than others. So now it's a social problem. Right,
Speaker 2 00:44:42 Right.
Speaker 1 00:44:43 But the scientists have to be responsible for that. If we stay closer to this world of, like, how does the brain work, understanding how, how, how intelligent systems, networks, work, there's things to learn from the, the, the kind of, uh, uh, machine learning neural networks. I mean, I think everyone should appreciate that the beginning of kind of modern-ish neuroscience in the 1950s is arising at the same time the computer's being invented, mm-hmm <affirmative>, and Turing, and, uh, a lot of people, they're, they're all the same ideas. So that they should be considered separate seems artificial too. Now, with the machine learning networks, there's also, uh, because they're so powerful and because they're so complex, often the, the, the person selling the service cannot explain how they work. Right. And that's becoming an increasing problem. As you know, I've, I've been involved in another kind of activity at the interface of law and neuroscience.
Speaker 1 00:45:51 Yeah. And so we invite the students to think about situations where an artificial intelligence system, like in a, in a hospital setting, for example, leads to a bad outcome, and the patient wants to sue the hospital. When, when the doctor cuts off the wrong leg, we know that the doctor made a mistake, and why and how, we see how the system did it. Right. If the AI system, if no one knows how it works, it's hard to assign blame. And so I'm, I'm familiar with a, a new, or it's new to me, growing interest in, uh, in the phrase explainable AI, so that we understand how it works well enough that we can trust it, and when it goes wrong, we know why and what to fix.
Speaker 2 00:46:38 But, so, so thinking about it in terms of models of the brain, right, there's this problem of model mimicry, mm-hmm <affirmative>, um, that has, well, I won't say plagued, because the, the problem is that multiple different kinds of models can explain psychological behavior, essentially. And a lot of what your research program has been about is using neural data to decide which is the better model, because there's this problem of model mimicry. And so, you know, we were talking about the race, um, model, where it's very simple, there's a, a, a go accumulator and a stop accumulator and they're racing, right? And that's two units. And then, you know, you can add more units for choice and things like that. But then these really large deep learning models, it seems like model mimicry, uh, would become more of an issue, because lots of different deep learning models can be trained to do the, the same thing.
Speaker 2 00:47:34 So then, to adjudicate between them, to say something about how the brain is doing it, you know, and, and there are people like Jim DiCarlo who, you know, set up, like, a convolutional neural network, and then the layers of the network seem to map onto activity in the hierarchical layers of our visual cortex. On the other hand, you could probably make 30 different models, uh, of the same ilk that would also explain a lot of the variance. So how much, you know, how much of a problem do you think model mimicry is in this deep learning approach? And by the way, before I forget to tell you this, it was funny, um, I had someone in, uh, a Discord server that I run for, um, the podcast supporters, who said he was, uh, using a <laugh> recurrent neural network with one unit, like one, you know, recurrent unit. Yeah. And he said it looked like what was happening was the unit was just accumulating to a choice. And I was like, oh, okay. You just built a, uh
Speaker 1 00:48:35 <laugh>. Yeah, yeah.
Speaker 2 00:48:36 You just built a model like I used to work on, but in a recurrent neural network, quote unquote, you know, in those deep learning terms. Anyway, I wanted to make sure you heard that.
Speaker 1 00:48:45 Thank you. Well, my, my instinct is to say, if, if we're talking about object recognition, let's say, sure, but keep it in, in, in the DiCarlo lab framework, mm-hmm <affirmative>, and we can tell cats from dogs, and now the network can tell cats from dogs. Now, your brain and my brain are not identical.
Speaker 2 00:49:04 Right? Right.
Speaker 1 00:49:05 We, we both have V1 and V2 and so on and so forth, but at different places in the network, they're gonna be radically different, because your dog and cat growing up are different from my dog and cat. So at some point there's differences. And yet at the level of, is it a dog or a cat, categorization, we both satisfy the goal of the task. So this is one way I've thought about it: you know, if you, if you can build N convolutional neural networks and they all tell dog from cat, mm-hmm <affirmative>, starting with pixels, you know, so there's the V1-ish thing, and at the end it's, that's a cat or a dog, the stuff in between can have as much variability as can be the case, but there's going to be some aspects that are similar across all systems. For example, I think, I mean, I don't know that this is true in all the, all the published neural networks, but I, I think I've understood that the input level is more granular, higher resolution, then you get the lines and features, and then you get components and surfaces, and then you get objects, mm-hmm <affirmative>. So that flow seems to be the way to do it. I don't know, has anyone built a system that doesn't have that sequence, or could have any sequence? I don't know.
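[Editor's note: the pixels-to-features-to-objects flow being described is the generic shape of object-recognition convolutional networks. A minimal, generic PyTorch sketch is below; it is not any of the DiCarlo-lab models, just an illustration of how spatial resolution falls and feature complexity rises stage by stage. The layer sizes are arbitrary.]

```python
import torch
import torch.nn as nn

# Tiny cats-vs-dogs classifier: pixels -> edge-like features -> parts -> object readout
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),   # local, edge-like features
    nn.MaxPool2d(2),                                          # 64x64 -> 32x32
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),  # textures / parts
    nn.MaxPool2d(2),                                          # 32x32 -> 16x16
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),  # larger receptive fields
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 2),                                         # cat vs. dog readout
)

x = torch.randn(1, 3, 64, 64)   # one fake RGB image
print(model(x).shape)           # torch.Size([1, 2])
```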
Speaker 2 00:50:29 Good question. That, that's all ventral stream as well, and dorsal stream is a different, uh, beast itself. Although people are building these hierarchical networks that are, I, I'm unfamiliar, though I should be, with, you know, the, the authors and studies and such, but there is progress being made in the dorsal stream as well, which is the, the how, or, or where, pathway.
Speaker 1 00:50:51 Yeah. Well, and the motor cortex too. And so, yeah, sure. I mean, we think we understand that neurons are just nodes in networks, where they influence each other through exciting and inhibiting, and there's lateral inhibition and there's feedforward and there's feedback and there's recurrence. Well, that can be instantiated lots of different ways, and then it's a common function.
Speaker 2 00:51:13 Are students in your lab these days, is anyone wanting to use these kind of deep learning approaches because, um, in my world, like everyone's using deep learning. Right. So, um,
Speaker 1 00:51:23 Yeah. Yeah. I forbid them.
Speaker 2 00:51:26 Yeah. So that, okay, that's what I'm getting at. Oh, come on.
Speaker 1 00:51:32 Different problems right now. This, this problem space, of course, is one that's very active in Toronto. And, uh, many of the York faculty are interested and active in this area. So it's, it's one of the reasons that it was fun to move to York, where this kind of exploration is so vivid and, and active.
Speaker 2 00:51:55 So I'm gonna harp on the deep learning aspect just a little bit more here, uh, because there, there's been this wave, right, with the quote deep learning revolution, of popularity in using these approaches to do other things, but also to, to study brain areas. And you've had, uh, a long career, and so you've seen lots of waves of popularity of various brain areas, various cognitive functions to study. Right now, cognitive maps in the hippocampus seem very hot. It's hard to tell from where I, I sit, I know everyone has a different perspective on these things. Mm-hmm <affirmative>. So in your judgment, do you think that this little deep learning wave, is it here to stay, or do you think it's gonna, uh, pass by and move along?
Speaker 1 00:52:37 Well, uh, I haven't paid enough attention to know. I mean, it is a, it, it does feel faddish, of course,
Speaker 2 00:52:44 Faddish, that's the right word. Yeah, yeah, yeah. But it hasn't affected your, your work so much, right? It hasn't? No,
Speaker 1 00:52:50 It doesn't. I don't, I don't read that literature to get inspiration on how to think about things. Mm-hmm <affirmative>. But, so on the one hand, it's, it's incredibly useful and profitable, so they're not going away. And the, the problem of, uh, understanding when, when a convolutional neural network goes wrong in a bank or a hospital or on a military device or something like that, yeah, that's serious. So understanding how they work, I don't think that problem can go away, and it's, it's not clear to me that if you answer it for this network doing this thing, that you'll not have to start all over again for a different network doing a different thing, with credit cards now, or, I don't know. So that's that. Now, will they help us understand the brain? Well, as, as sort of intuition pumps about how you'd organize a ventral stream?
Speaker 1 00:53:51 I mean, my reading of the DiCarlo and others' work is that it sort of endorsed this idea, satisfied us, that starting with this granular, more pixelated representation of an image, that gets features, and then they get bigger receptive fields that integrate more information, that are shaping, you know, coming to surfaces and shapes, and finally to objects, objects that you have to learn. So, you know, greebles, nobody knew about greebles until Mike Tarr and Isabel Gauthier invented them. Now you can have greeble experts, you know? Right. So the learning element is a key part of this as well. So it feels to me like we've, we've had sort of an insight into how you make an object recognition system in a primate brain. What, I mean, what else do you wanna know with them? I mean, the, you know, the varieties of networks that can, well, I don't know. I mean, I know at MIT they enjoy these contests of networks, you know, the network that is the best recognizer, you know, categorizer or whatever. Whether a network that categorizes as well as people is a network like the human brain, I'm not sure that's guaranteed, and it's not self-evident to me that that's as useful an activity as exploring the human or the primate brain directly. Mm. But it may be, again, as we've said, I don't live in this world.
Speaker 2 00:55:26 Right. Right. Well, this is why I wanted your perspective on it as well.
Speaker 1 00:55:29 Yeah.
Speaker 2 00:55:30 But Jeff, in terms of just fads, let's say, and not just, you know, the deep learning fad, did you get better throughout your career at recognizing when something is just a passing phase and what seems to be more important and will stick around?
Speaker 1 00:55:46 Well, in my own work, I feel like I, I'm confident that I'm addressing the best questions I can address, given where I come from and what I've done and resources and so on. I mean, there's other, other really important questions other people are addressing. Like, for example, years ago, I mean, it's still the case, but years ago, when oscillations became a fad.
Speaker 2 00:56:09 Yeah.
Speaker 1 00:56:10 I remember, I don't remember what year it was, but all of a sudden, at, at the Society for Neuroscience meeting, multiple labs were reporting oscillations. Yeah. Last year they weren't working on that, but now they are, because, you know. So scientists are faddish like everybody else. And again, it's sort of this social, uh, the social currency of getting the glossy journal paper and, and, and, you know, being perceived as working on the problem. I haven't been motivated to chase the hot problem, cuz I feel like I'm, I'm happy working on these hard problems that seem relevant and fundamental.
Speaker 2 00:56:51 There's a lot to do. There's still a lot to do.
Speaker 1 00:56:54 Yeah. Yeah. So it's a big tent though. There's lots of room for lots of people to do things. If we had enough funding.
Speaker 2 00:57:02 Oh that's the yes, of
Speaker 1 00:57:04 Course, right. To your representatives.
Speaker 2 00:57:06 <laugh> What, what's going on in the lab these days? What's new? What, what, what are you, uh, working on? And what's keeping you from making progress? What's keeping you up at night?
Speaker 1 00:57:15 Yeah. Well, there's three main things. The linear electrode array data collection is, is the big data collection thing. And so I referred to some work in V4 and some work in supplementary eye field. Uh, the V4 work was done by a graduate student named Jake Westerberg, working with Alex Maier, and we've, uh, published some papers and there's more to come. Part of it was understanding how the cells across the layers of V4 contribute to visual search performance, uh, both the target selection, attention allocation, and, and, you know, in association with saccade production, very, very much like what we've done before, but in V4 now. And so there's stories to tell about that and discoveries that, that have been made. The other aspect of this is relating the laminar distribution of local field potentials to the current source density that produces the EEG signal. And so during visual search tasks, there's an EEG event-related potential component called the N2pc, discovered by Steve Luck a long time ago.
Speaker 1 00:58:27 And Geoff Woodman, our, uh, uh, friend and colleague, had worked on it and, and recognized that there was a fruitful path to look at EEG in monkeys and understand where these different components come from. So a paper's being revised for NeuroImage in which we can do forward modeling. We can, we can take the currents and convert them to dipoles, calculate the dipoles those currents are producing, and with a model of the conductivity of the head, the brain, skin, scalp, and bone and everything, calculate what the voltage distribution would be, which has a unique solution. We're able to do that only because we're collaborating with really smart people. Jorge is, is the, is the leader of this lab, and the graduate student is, uh, Beatriz Herrera. So the N2pc comes from V4, but LIP contributes. Frontal eye field, while it does what it wants to do and influences V4 in the circuit, as a biophysical generator of the N2pc it has nothing to say, or very little, because it's, it's, it's too far away and it's pointed the wrong direction.
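[Editor's note: a cartoon of why orientation matters in the forward model. A laminar current source density profile has an equivalent dipole, and the potential that dipole produces at a distant electrode depends on its orientation relative to that electrode. The point-dipole, homogeneous-medium formula below stands in for the layered head model described above; all numbers are invented.]

```python
import numpy as np

def dipole_moment_z(csd_profile, depths_m, area_m2):
    """Equivalent dipole moment (A*m, depth component) of a laminar CSD profile
    at one instant: p_z = area * sum_z [ z * CSD(z) ] * dz."""
    dz = depths_m[1] - depths_m[0]
    return area_m2 * np.sum(depths_m * csd_profile) * dz

def electrode_potential(p_vec, r_vec, sigma=0.33):
    """Potential of a point current dipole in an infinite homogeneous medium,
    V = p.r / (4*pi*sigma*|r|^3). Only a cartoon of the real forward model."""
    r = np.linalg.norm(r_vec)
    return p_vec @ r_vec / (4 * np.pi * sigma * r**3)

# Same dipole magnitude, two orientations, electrode 5 cm away along z:
p = 1e-8                                  # A*m, toy magnitude
electrode = np.array([0.0, 0.0, 0.05])    # meters
print(electrode_potential(np.array([0.0, 0.0, p]), electrode))  # ~1e-6 V, visible
print(electrode_potential(np.array([p, 0.0, 0.0]), electrode))  # 0 V, "wrong direction"
```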
Speaker 2 00:59:38 Oh, okay. Gotcha.
Speaker 1 00:59:40 So this is a really interesting insight, that a given cortical area can be computationally critically involved, but biophysically invisible
Speaker 2 00:59:52 Well through, through EEG anyway,
Speaker 1 00:59:54 Through EEG. Yeah. Right, right, right. So, so that's one line of work. The other line of work is extending the medial frontal, uh, recordings from supplementary eye field down into both banks of the cingulate sulcus in the monkey. So Amir Sajad and Steven Errington have collected a, a, a rich database from two monkeys doing the saccade stop signal task, a task with different reward amounts and a kind of reversal learning component to it. And we'll be describing, uh, how the dorsal and ventral banks of the cingulate sulcus and the fundus are similar to or different from each other and from the overlying supplementary eye field, and how each of them contributes to the error-related negativity and the feedback-related negativity.
Speaker 2 01:00:40 You have so much going on still.
Speaker 1 01:00:42 And there's one more to go. Yeah. There's one more thing to tell you, cause this isn't even, well, the data's collected. Caleb Lowe and Thomas Reppert collected, uh, recordings from frontal eye field of, of monkeys doing a complex visual search task. So it was complex in two dimensions, two interacting dimensions. And we, we, we, we've never talked about this, you won't know about this. Okay. So, uh, it's a color singleton search. So he's looking for the red among green, or the red among not-so-red, or the green among red, or the green among, and, and he doesn't know what color anything's gonna be until the trial, until the array is presented, mm-hmm <affirmative>. So he's got no set for what the target is. Right, so you got kind of easy search and hard search, that's one dimension. And then what he does with his eyes is dictated by the shape of the stimulus. Okay. And so in, in the first period, in the first run of this, let's say, if it was vertical, make a prosaccade, if it's square, make no saccade, and if it's horizontal, make an anti-
Speaker 2 01:01:52 Yeah. Hard to train this, I'm sure
Speaker 1 01:01:54 Hard to train, hard to train, and, and a smart monkey learned to cheat in a really interesting way. Now, uh, we manipulate the difficulty of encoding the cue by making the elongation, making the thing really long or really stubby. Now, in the data that we've collected, we got rid of the antisaccade for, for interesting reasons that you can find in a bioRxiv <laugh> paper with Caleb Lowe, but we now have what we could call, I don't know if this phrase works, I'm still playing with it, but it's two-dimensional decision making. Psychologists would call it multifactor decision making. Okay. But in the, in the, you know, the famous dots task, it's hard on one dimension, you know, motion coherence mm-hmm <affirmative>. So this is hard on two dimensions: the, the identifiability of the target from distractors and the, uh, categorization of the cue. And those two factors are independent of each other. So you get distributions of reaction times that are fast if things are easy in both dimensions and progressively longer if things get progressively harder. So we, we learned of a theoretical framework called systems factorial technology.
Speaker 2 01:03:19 Okay.
Speaker 1 01:03:19 Which sounds like a mouthful. Yeah. And it is. Jim Townsend, uh, at Indiana University conceived of it with his coworkers. But, uh, everybody says signal detection theory and feels quite happy about it, and it's the same mouthful. And it's the same principle. You, you start with mathematical principles out of which you extract from performance key parameters, you know, in signal detection theory, discriminability and bias. Systems factorial technology is a sequence of analyses of the reaction time distributions that, under the appropriate assumptions, reveal the architecture producing the behavior: are the factors being, uh, processed serially or in parallel? Oh,
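As a point of comparison, the two signal detection theory parameters named here, discriminability (d') and bias (the criterion), drop straight out of hit and false alarm rates. A minimal sketch with hypothetical trial counts:

```python
from scipy.stats import norm

# Hypothetical trial counts
hits, misses = 80, 20                       # target-present trials
false_alarms, correct_rejections = 10, 90   # target-absent trials

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + correct_rejections)

d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)              # discriminability
criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))   # response bias
print(f"d' = {d_prime:.2f}, criterion = {criterion:.2f}")
```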
Speaker 2 01:04:09 Okay. That kinda
Speaker 1 01:04:09 Or race, or exhaustive, you know? So
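Systems factorial technology asks the analogous question of reaction time distributions. One of its core tools is the survivor interaction contrast: cross the two difficulty factors, estimate the survivor function of RT in each of the four cells, and the shape of the contrast over time diagnoses serial versus parallel processing and self-terminating (race) versus exhaustive stopping rules. Below is a sketch with simulated RTs standing in for real data; the distributions and numbers are invented for illustration.

```python
import numpy as np

def survivor(rts, t_grid):
    """Empirical survivor function S(t) = P(RT > t)."""
    rts = np.asarray(rts)
    return np.array([(rts > t).mean() for t in t_grid])

def survivor_interaction_contrast(rt_ll, rt_lh, rt_hl, rt_hh, t_grid):
    """SIC(t) = [S_LL - S_LH] - [S_HL - S_HH], with L/H = low/high difficulty
    on factor 1 / factor 2. Under the usual selective-influence assumptions:
      ~0 everywhere           -> serial, self-terminating
      negative then positive  -> serial, exhaustive
      positive everywhere     -> parallel, self-terminating (race)
      negative everywhere     -> parallel, exhaustive
    """
    return (survivor(rt_ll, t_grid) - survivor(rt_lh, t_grid)) \
         - (survivor(rt_hl, t_grid) - survivor(rt_hh, t_grid))

# Simulated RTs (seconds) for the four factorial cells; real data would come
# from the two crossed difficulty manipulations described above.
rng = np.random.default_rng(1)
rt_ll = rng.gamma(4, 0.050, 500) + 0.2   # easy search, easy cue
rt_lh = rng.gamma(4, 0.070, 500) + 0.2   # easy search, hard cue
rt_hl = rng.gamma(4, 0.070, 500) + 0.2   # hard search, easy cue
rt_hh = rng.gamma(4, 0.095, 500) + 0.2   # hard search, hard cue

t_grid = np.linspace(0.2, 1.2, 200)
sic = survivor_interaction_contrast(rt_ll, rt_lh, rt_hl, rt_hh, t_grid)
```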
Speaker 2 01:04:14 The process, the process architecture you mean?
Speaker 1 01:04:17 Yeah. Yeah. So we've recorded neurons in frontal eye field, and we know neurons in frontal eye field, some of 'em select targets, and the time it takes to select the target varies with how discriminable the target is. Mm-hmm <affirmative>. We know other neurons make the saccades, and when they turn on depends on how discriminable things are, and there's RT variability there. So the expectation is, by looking at the time and modulation of different neurons, we'll be able to partition reaction time into these different operations.
Speaker 2 01:04:49 Uhhuh <affirmative>
Speaker 1 01:04:50 They may overlap in time, but we may be able to detect that on the, uh, premise that the different neurons, again, it's sort of a strong inference approach: given that certain neurons instantiate one process and not another, then we can see when those processes begin and terminate relative to one another.
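The partitioning idea can be made concrete with a simple, admittedly simplified, rule: estimate when target-selection activity diverges and when movement-related activity begins its buildup, and use those neural event times to carve the reaction time into stages. The sketch below is hypothetical; the firing-rate profiles, thresholds, and numbers are invented for illustration.

```python
import numpy as np

def first_crossing(rate, times, threshold):
    """Time at which a smoothed firing rate first exceeds threshold."""
    above = np.nonzero(rate > threshold)[0]
    return times[above[0]] if above.size else None

# Hypothetical smoothed firing rates aligned on search-array onset (seconds).
times = np.arange(0.0, 0.4, 0.001)
target_selection = 20 + 60 / (1 + np.exp(-(times - 0.15) / 0.01))  # selection cell
movement_buildup = 10 + 90 / (1 + np.exp(-(times - 0.25) / 0.01))  # movement cell

selection_time = first_crossing(target_selection, times, threshold=50)
buildup_onset = first_crossing(movement_buildup, times, threshold=55)
mean_rt = 0.30  # hypothetical mean RT for this condition

# Partition of RT into stages: selection, then saccade preparation, then
# residual motor time. Overlapping stages would require a finer analysis.
print(f"target selected at {selection_time:.3f} s, "
      f"movement buildup at {buildup_onset:.3f} s, "
      f"residual {mean_rt - buildup_onset:.3f} s")
```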
Speaker 2 01:05:10 That's on bioRxiv, you said?
Speaker 1 01:05:12 Well, the cheating monkey paper is on bioRxiv. It's an N-of-one monkey paper.
Speaker 2 01:05:18 Yeah. Oh, okay.
Speaker 1 01:05:20 Yeah. So we replicated the Bichot effect.
Speaker 2 01:05:23 Mm-hmm <affirmative>
Speaker 1 01:05:25 So Kirk Thompson, when he was in the lab, and Narcisse Bichot discovered that monkeys, uh, partially trained to search for one color among another color had frontal eye fields in which half the cells select the target immediately; they're color selective. Well, one of the monkeys, it was Darwin.
Speaker 2 01:05:49 Okay. Uh, yep. Yep. Memories flooding. Yeah. With memories maybe.
Speaker 1 01:05:54 Yeah. Yeah. So he had so much experience doing stuff that he saw the cheat. And so his frontal eye field cells discriminated the orientation of the cue right off the bat, or many of them did, because he was just getting the cue and he wasn't worrying about where the, where the singleton was till later.
Speaker 2 01:06:17 So this is similar to the, uh, to this recent cheating monkey
Speaker 1 01:06:21 <laugh> yes.
Speaker 2 01:06:22 Yes. That's, that won't make it to a journal, because you have to have an N of two to publish in a, uh, journal, right? So it's gonna be forever bioRxiv bound, I suppose.
Speaker 1 01:06:30 It's out there for people to find if they want; they can evaluate whether the data seem reliable and the conclusions follow.
Speaker 2 01:06:37 All right, Jeff, I told you I had a few guest questions. Um, two of them come from, you've mentioned a lot of people, two of them come from Bram Zandbelt. So I'm gonna play these questions for you, and then if we have time, I'll play a third from another previous lab member. So I should say Bram was a, a postdoc in your lab, I think you'll actually say this. So I'm gonna play this and then you can react to it. Okay.
Speaker 1 01:07:00 All right.
Speaker 3 01:07:00 Hi, Jeff. I worked in your lab between 2011 and 2014. It was a privilege to work with you and with all the talented and smart people in the lab, Paul being no exception. Quite a few Schall lab members from that period don't work in academia anymore. So my question for you is: how is it for you as a supervisor to see the graduate students and postdocs that you trained choosing a career path outside academia?
Speaker 2 01:07:29 This is a, um, I guess this is kind of a hot topic in, uh, academic society. It seems like people are leaving in droves, especially these days, but, so, yeah. Um, do you have a, do you have a reaction to that or an answer?
Speaker 1 01:07:44 Well, I don't blame you.
Speaker 2 01:07:48 Oh, that's right. I'm, I'm a case in point <laugh>. Yeah. You're looking at me.
Speaker 1 01:07:52 <laugh> Well, and, and you as well. So I don't blame you. I don't blame Bram. I don't blame, you know, I mean, the most recent is this Caleb Lowe that, that I mentioned earlier mm-hmm <affirmative>. So he was lined up to do a postdoc, actually, with Stefan Everling.
Speaker 2 01:08:08 Oh, right. Oh, he told me that. Yeah.
Speaker 1 01:08:10 Yeah. And a variety of things happened such that he just decided he wanted to work as a data scientist. Um, it seems that the training you guys get collectively, not just in our lab, all the labs like this, the training you get has market value out in the real world.
Speaker 2 01:08:30 Right.
Speaker 1 01:08:31 And so he seems to be happy doing that. Braden Purcell, uh, I don't know if he still is, but he, after a really successful postdoc with Roozbeh Kiani and a K99 basically in his pocket, chose to work for Squarespace. Oh. And, and that's fine.
Speaker 2 01:08:51 A K99, by the way, is a grant that kind of ushers you into a faculty position, just for context.
Speaker 1 01:08:56 Oh, thank you. Yeah. Yeah. So, no, I mean, were I at that time in life, faced with situations like we have now, you know, limited grant funding, fewer jobs, various kinds of challenges, and, and more opportunities than were certainly available when I was at that stage... When I was coming out of graduate school and postdoc, I don't think I was marketable for anything.
Speaker 2 01:09:24 Did you, did it, was it tempting to, I mean, because you seem, you know, you're a lifelong academic, um, a rigorous scientist, like we've been discussing, but have you ever been tempted to jump ship? On the other hand, you've been extremely productive, and as far as I know you've always been extremely productive, and I, you know, I'm not sure if that's played into it, or, or what.
Speaker 1 01:09:48 Yeah, I never wanted a job. Well, so, so the family business was, uh, selling farm implements. And I guess maybe we can reveal to whoever might listen to this that you are living in the town over Wolf Creek Pass, uh-huh, in Colorado, from where I grew up in a, in a farming community. So the family business was selling International Harvester farm equipment, okay, and Hesston hay equipment, and so on, Schall Ironworks. And my dad came home every night and said, if it weren't for money and people, this wouldn't be a bad job. And so I never wanted a job with money and people. And of course I didn't get that, but I, I have a freedom of exploration as a PI that is unlike what one has in the business world, where whoever's above says what you're supposed to do. So I always was committed to this academic path and just strove to do what needed to be done to be here and stay here.
Speaker 2 01:10:51 Not a lot of people have, go ahead. Sorry. Yeah.
Speaker 1 01:10:53 Well, I just wanna, I wanna reiterate loudly and clearly for everybody that, uh, each of you guys who's left the lab knows that I've supported you in these new positions.
Speaker 2 01:11:04 Yes. You certainly have at least in my personal experience. Yeah.
Speaker 1 01:11:07 Yeah. So, uh, it, it's a, it's a new day. And when you get rich in the real world,
Speaker 2 01:11:14 <laugh>
Speaker 1 01:11:15 Endowments are welcome.
Speaker 2 01:11:17 Ah, okay. Very good. Yeah. But, but isn't it, um, I would imagine it's a little sad, I don't know if sad is the right word, but, um, you know, because you invest so much time and effort in training us, then to have someone, not disregard it, but then move on to a space where they're not gonna be using that training necessarily in the future, um, directly, anyway. Is there some, some part of you that's a little sad when someone, uh, takes a different course?
Speaker 1 01:11:50 Uh, I wouldn't say it's sad, but, but here's a reality: when you leave academics, my Neurotree doesn't grow anymore.
Speaker 2 01:12:01 Right, right. Nor your legacy, right? Because those, those are directly linked.
Speaker 1 01:12:06 Yeah. Yeah. So, and again, I'm, I'm gonna elaborate a little bit just for the benefit of those who might listen: in the United States, the NIH funds training grants, they're known as T32s, to institutions. And through all the years I was the PI of it, the renewals were judged by the number of trainees who went on in academic careers.
Speaker 2 01:12:32 OK.
Speaker 1 01:12:34 And the trainees that, that went into other careers outside academia just didn't count as much, because they're not publishing papers. Mm-hmm <affirmative>, they're not sustaining the legacy of the lab, the individual, and the institution. So that's a frank reality of it.
Speaker 2 01:12:52 Yeah.
Speaker 1 01:12:53 Now I've never said to anybody, no, you have to stay unhappy in academics just so I can have another citation or something like that.
Speaker 2 01:13:04 No, I can vouch for that. You, you, uh, always give a really nice celebration. I remember mine very clearly, um, when I was moving on, and you have always been very supportive, which, uh, you know, not everyone's like that. So that's another thing I appreciate about you, scientifically as well.
Speaker 1 01:13:19 The other thing, well, well, let me, the last thought I wanna make on this is almost everybody who's, who's left has stayed engaged, or we've stayed in touch and we've published papers after the fact. So Rich Heitz,
Speaker 2 01:13:35 Mm-hmm <affirmative>
Speaker 1 01:13:36 who you overlapped briefly with. We're still, there's another paper on supplementary eye field with, with Thomas Reppert leading it, the
Speaker 2 01:13:44 The speed-accuracy trade-off.
Speaker 1 01:13:45 Yeah. That, that manuscript is under review at Cell Reports right now.
Speaker 2 01:13:50 All right. Very good. All right. Here's Bram's, uh, other question.
Speaker 3 01:13:54 You earned your PhD in 1986 and started as an assistant professor at Vanderbilt in 1989. More than 30 years have passed since then. I'm curious, how has academia changed over that period, according to you? What has changed for the better, and what has changed for the worse?
Speaker 1 01:14:13 All right. So, yeah, I've been at Vanderbilt, or was at Vanderbilt, for over 30 years and am now at another institution, so some comparisons are available. Um, so here's one of the things that's changed for the better, and it was, it was vivid at Vanderbilt for a long time: the interdisciplinarity, and the appreciation that you shouldn't stay in your silo. Mm. So the interesting places are places where people can work across departments and faculties and colleges and so on, uh, in a meaningful, collaborative way. And ideally the institution either removes or lowers barriers to that, or even maybe rewards it, by things like getting teaching credit for teaching a course that's for an interdisciplinary major and not in your main department, that kind of thing. Mm-hmm <affirmative>. That's one of the things that I think has improved, or it's certainly changed over the years.
Speaker 1 01:15:23 One of the things, you know, a change for the worse I would identify is the salaries of the, of the presidents and chancellors, and the number of administrative staff, or, um, sort of the, the deans' offices, which have seemed to grow in, uh, number and cost out of proportion to what benefits faculty get and to the salaries of the staff, you know, the janitorial and the kitchen staff and that kind of thing. So the, the factors that have led to the disparity of wealth in businesses, you know, the CEOs making, I don't know, how many hundred times more than the average salary, the same trend has happened, certainly in US schools. And so why is that? Do we know why? I, I don't really know. I think my experience in, in interacting with the chancellors, the various chancellors at Vanderbilt, and provosts, is that, uh, the boards of trust, uh, reward, uh, well, the boards of trust are many CEOs of corporations, where that's the mindset. That's the first thing.
Speaker 1 01:16:34 The second thing is this, this almost illness to be the guy with the most money mm-hmm <affirmative>, and to compare yourself to your peer group in all these various ways. And then, of course, kind of the funny thing that once you're a millionaire, making another thousand isn't much difference, you have to make another hundred thousand to feel a difference. And then when you're a billionaire, it's gotta be a million more, you know. So it's an insidious process. But the, the expansion of administration at universities, and sort of the, the overhead that that's created in terms of just more forms to fill out, more... Yeah. Well, less efficiency.
Speaker 2 01:17:20 Do you think that's gonna continue?
Speaker 1 01:17:23 Well, I don't know how. I don't know who's, who's turning it around.
Speaker 2 01:17:28 How long are you gonna be a scientist? How long are you gonna be, um, in academia? So I know you just started a new job, so this is terrible, we won't play this for your university, perhaps, but, uh, do you ever, do you see yourself ever... it's not on the horizon, right? Retirement or anything like that? No.
Speaker 1 01:17:41 No. I feel very invigorated with, uh, the new kinds of questions we're, we're working on here, and new faculty, so new opportunities for other kinds of collaborations in a new environment. Yeah. Uh, so I'm not tired at all, and I haven't run out of ideas, and I still enjoy revising manuscripts and working with trainees to get the figure just right. So the process is, the process is what I enjoy, and I still enjoy it. I taught, uh, in the winter term, an undergraduate systems and cognitive neuroscience class for a new undergraduate major here, and great students, great conversations. So, uh, I feel very fortunate. Well, maybe here's a thing that's changed for the better, too: in many places there's no mandatory retirement age. So I'll keep going a while longer.
Speaker 2 01:18:33 Before I hit record, I was just marveling at how you don't age, so you don't look anywhere close to retirement. And as long as I've known you, you've seemed invigorated, uh, with these questions. It's quite impressive. Well, Jeff, this has been a lot of fun. I'm glad we finally got you, uh, on the podcast, and I appreciate your time. And, um, here's to the next 30 years. Good luck with the new environment, and I hope it continues to go well.
Speaker 1 01:19:00 Thank you, Paul, and best to you and your, and your lovely family in Durango, Colorado
Speaker 2 01:19:11 Brain inspired is a production of me and you. I don't do advertisements. You can support the show through Patreon for a trifling amount and get access to the full versions of all the episodes. Plus bonus episodes that focus more on the cultural side, but still have science go to brain inspired.co and find the red Patreon button there to get in touch with me,
[email protected]. The music you hear is by The New Year. Find
[email protected]. Thank you for your support. See you next time.
Speaker 0 01:19:45 The, into the snow, the covers up the.