[00:00:03] Speaker A: A cognitive ontology basically says, what are all the parts of the mind? You know, what are all the things that we think minds do?
A lot of the ways that we kind of chop the mind up are almost certainly wrong. Where by wrong I mean they don't reflect the computational organization of the brain.
[00:00:24] Speaker B: What's the secret? How do you. How do you maintain such a healthy balance while still being productive?
[00:00:30] Speaker A: You're assuming that I actually maintain a healthy balance.
[00:00:39] Speaker B: This is brain inspired.
Hey everyone, it's Paul. When I entered graduate school, I wanted to study consciousness, how our brains give rise to awareness. It didn't take long to realize just how little consensus there was about what consciousness is, let alone how to study it. Depending on who you talk to, very little or some appreciable amount of progress has happened in that regard. But even beyond that often contentious question, it may be surprising to realize that even the mental functions we take for granted maybe aren't on as sound a footing as we thought, or as I thought. So if I want to study the neural basis of some mental function, like working memory, for instance, do I even know what working memory is? What other component functions it may involve, or depend on, or overlap with, or relate to?
So a while ago, I had David Poeppel and György Buzsáki on to discuss whether neuroscience or psychology provides a better path forward for understanding our mental lives. Today, Russ Poldrack joins me and we focus more on cognitive ontology, the parts that make up our mental processes, and the relations between those parts. Russell's scientific research over the years has focused on the neural basis of things like decision making, executive functioning, learning, and memory. But over his career, he's turned a big chunk of his focus onto finding solutions to the many metascience problems that his own research field has faced. So a few years ago he co-founded, and runs, the Stanford Center for Reproducible Neuroscience, where they focus on how to ensure that we're essentially doing reliable science that will stand the test of time: things like establishing standards and making tools for sharing data, sharing analysis tools, and so on. He also wrote the book The New Mind Readers: What Neuroimaging Can and Cannot Reveal About Our Thoughts, which is a great overview of the history, present, and future of fMRI, and which also touches on many of the metascience problems and solutions. So we also talk a little bit about those kinds of metascience issues and the field broadly. Show notes at braininspired.co/podcast/92. Support this podcast on Patreon if you value it and can afford a couple bucks a month. Be good, be well, and enjoy. Russell Poldrack.
All right, so we're going to start a little bit in left field here, Russ. So I was thinking about cognitive ontologies, and I happen to be in kind of a back and forth conversation with my friends about musical genres. And you're a punk fan, right?
[00:03:37] Speaker A: Yes.
[00:03:38] Speaker B: What's a punk band that you like from the early days?
[00:03:41] Speaker A: You know, I was in. When I was in high school in the early 80s, I was really into the Dead Kennedys, really into all the, like Southern California, like SST bands like Black Flag, Circle Jerks, that kind of stuff.
[00:03:54] Speaker B: Yeah, I knew you were going to say Black Flag.
How would you. What musical genre, or subgenre, is Black Flag?
[00:04:02] Speaker A: That's a good question.
I would just call it hardcore punk.
[00:04:08] Speaker B: Okay, so you're going to get in trouble with my friends because I wouldn't even go so far as hardcore. I would just probably call it, I might say early punk, but it probably has a technical name. Anyway, I was thinking about cognitive ontologies and thinking about these subgenre musical labels. Pre punk, post punk, postmodern ism, whatever, punk, you know, And I was thinking about how these genres are kind of imposed from the outside, from the critics, while the artists themselves, probably many of them resist being labeled. Right. So I made this terrible analogy between a cognitive ontology, like mental functions that are proposed by psychologists and from anyone who's just thinking about these things, these folk psychological terms and concepts versus the actual cognitive functions that are resistant to. They don't want to be labeled. Right, right.
[00:05:00] Speaker A: But it's interesting, I assume that if you look at the fans of the different artists, they're probably going to cluster around those labels. Right. And in some ways that's where the labels probably come from. It's an interesting question. Yeah. As to where, like where the right place is to decide that the ontology is useful or not and what data should go into it. I agree that it is kind of funny that those are very top down and very much like what psychologists do.
[00:05:31] Speaker B: Yeah, I should say this is audio only, but in the background, Russ has his guitars, at least three of them. I see back there you have a little collection going.
[00:05:41] Speaker A: Yeah, try not to get any more.
[00:05:45] Speaker B: Very good. Well, so maybe we can apply your cognitive ontology approach eventually to the musical genre spectrum as well. But Russ, welcome to the show. Thank you for being on the podcast.
[00:05:57] Speaker A: Yeah, thanks for having me.
[00:05:58] Speaker B: So you've been a major influence in multiple meta science issues that have become big over the past. I don't know, decade or so, things like the reproducibility crisis, just how to do better science in general. You're an advocate for open science. You introduced the problem of reverse inference, which you've talked about at length and will likely come up while we're talking here. I'm wondering though, how much of your thinking and your career is devoted now to these meta science kind of issues relative to earlier.
How much did you predict you'd be working on these sorts of things when you were earlier in your career?
[00:06:41] Speaker A: Yeah, no, I certainly did not predict that I was going to be spending this kind of time on meta-scientific issues. I mean, I'd say at least half, if not two thirds, of my effort these days goes to thinking about, writing about, and talking about these kinds of meta-scientific issues.
Really in the last five years it's kind of exploded. Or I guess six years: I moved to Stanford six years ago and was lucky enough to get funding from the Arnold Foundation to start a new center. This was when Chris Gorgolewski was part of my group as well, before he moved to Google. We started a center together that we called the Center for Reproducible Neuroscience. And in part the decision to buy into spending that much time on meta-scientific issues really came from this growing, gnawing feeling that I just couldn't believe a lot of the work that I was seeing published. Because, you know, I knew all the tricks they were playing, and I didn't want to be part of a field where I couldn't believe, or couldn't know what to believe. Right. I'm sure a lot of it is reproducible, and a good part of it isn't, but when the base rates are as low as I thought they were, it was hard to tell. So that's really what inspired me to move in that direction, with this intuition that if we can't believe the work that is being published, then one, I don't want to be a part of a field where I can't believe the work that's being published. And two, it seems unethical to take a bunch of money from the public and use it to do science when we know that the methods that we're using are broken.
[00:08:25] Speaker B: I mean, the other issue is that science is, well, ideally, a self-correcting system, even if a slow one. And if these problems don't get fixed, a century from now our era is going to be a real laughing stock.
So I guess that's to be avoided.
[00:08:44] Speaker A: Yeah, that's exactly right. And so I really want to be someone who's doing everything I can to try to figure out how to fix it.
[00:08:53] Speaker B: It is the most important.
Well, I don't know how to rank importance, but it is a super important problem. So thank you for working on these things. I'm curious. You run the Center for Reproducible Neuroscience, and you just said that you know all the tricks, how these things happen in papers and whether you should believe them. And I'm wondering about timing, because you started the center when you were an experienced researcher, at the top of your career thus far. Does that make a difference? That is, is it a matter of looking back and saying, I can study these things now that I've gotten to this level, or should people be focusing on this earlier in their careers as well?
[00:09:40] Speaker A: Yeah, that's a really good question.
It almost certainly is the case that doing metascience is going to be very challenging for a junior person trying to get tenure in a psychology or a neuroscience department. It was only because people knew me as a neuroscientist and a psychologist who has done some things that are at least somewhat impactful in those worlds that I could make this move.
And so I certainly counsel all of the trainees in my lab that, fundamentally, you need to be asking scientific questions and doing interesting science, and then you want to do it, obviously, in the best possible way. And if doing some metascience along the way is something you want to do, then you should do that. But ultimately you're going to be judged, for hiring and tenure and promotion, on the scientific impact that you make.
[00:10:39] Speaker B: Yeah. So I guess that's an evolving question on how to proceed at different stages of your career.
You've stated in the past that you think of solving the reproducibility crisis as a design problem.
First of all, are you familiar with the Designing Your Life book that was written by the Stanford design team?
[00:11:02] Speaker A: I'm familiar with it. I haven't read it.
[00:11:03] Speaker B: Okay. It's one of my five or six go-to books in the self-help genre that everyone's really interested in, about how to apply principles from design to your life and career. But I'm wondering, what do you mean when you think of the reproducibility crisis solution as a design problem?
[00:11:25] Speaker A: I think about it in terms of choice architecture. This idea from behavioral economics that whenever we go into a situation, there are features of the situation that were designed either explicitly or implicitly, to drive people towards particular choices.
The default settings in a software package are the most obvious example. People are very likely to use whatever default statistical threshold is built in, if it's there. And so we can use what we know from behavioral economics about how to modify choice architecture. Thaler and Sunstein talk about nudges: the idea that we can push people to do the things that we think are the right things to do without limiting their freedom. You can always choose another threshold, but to the degree that we know that there's something that's a good thing to do, we should make sure that the situation drives people to do that thing.
[00:12:18] Speaker B: Incentives.
[00:12:19] Speaker A: Well, it's incentives plus affordances. Right.
It's not so much that you're incentivized to use the default, except for the fact that it's easy.
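The choice-architecture point about defaults can be made concrete in software terms. The following is a hypothetical sketch, not any real package's API: the function name, parameters, and the FDR default are all illustrative. The idea is simply that the rigorous option is the easy path, while the freedom to choose another threshold remains.

```python
# Hypothetical thresholding helper. The scientifically defensible
# multiple-comparison correction is the default, so the easy path
# and the rigorous path are the same path; users who want something
# else must opt out explicitly.
def significant_voxels(p_values, alpha=0.05, correction="fdr"):
    n = len(p_values)
    if correction == "fdr":
        # Benjamini-Hochberg: compare sorted p-values against the
        # rank-scaled threshold (rank / n) * alpha, then reject
        # everything at or below the largest passing p-value.
        order = sorted(range(n), key=lambda i: p_values[i])
        cutoff = 0.0
        for rank, i in enumerate(order, start=1):
            if p_values[i] <= rank / n * alpha:
                cutoff = p_values[i]
        return [p <= cutoff for p in p_values]
    elif correction == "none":
        # Uncorrected thresholding: still available, but it is a
        # deliberate choice rather than the built-in default.
        return [p < alpha for p in p_values]
    raise ValueError(f"unknown correction: {correction}")
```

The point is not the particular correction but the affordance: the call that takes the least effort is also the one the field would endorse.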
[00:12:29] Speaker B: Gotcha. I want to ask you one more broad question before we get into the nittier, grittier stuff. I've heard you give the advice to learn as many technical skills as possible, as early in your career as you can. When I went into graduate school, and this relates to open science as well, and collaborating, I had to learn MATLAB, and so did my associates. We were all learning our own versions of MATLAB, making our own idiosyncratic mistakes and awful, awful code. Which, as you know, every year you think, now I'm a good coder, and then you look back the next year and think, ah, it's terrible. But now I'm good, now I'm good. But everyone had their own style. And specifically for coding, I'm not even sure whether this has changed for the better, but why is coding not a required class or skill going into graduate school in a science like neuroscience?
[00:13:36] Speaker A: Yeah, I think it should be. And de facto, certainly for my lab, it is. I have a blog post where I wrote about graduate study in my lab and what I expect. I used to accept graduate students into my lab who didn't know how to code, with the idea that they could learn. And I've now realized that they basically end up spending the first two years of their life just learning how to code. And so now I basically say, if you're going to come into my lab, you need to know how to code. But I think that beyond that, just knowing how to code doesn't mean that much in terms of these issues about quality and rigor of the coding.
In the last year, I've become really interested in software engineering and its role in science and thinking about what we can bring to bear to try to improve the quality of scientific software.
We recently had an interesting thing happen in my lab. We had posted a preprint, and with it posted all of the code. It was for analyses of an open data set, the ABCD data set, and our preprint criticized some of the choices that the team that had developed that data set had made in their design. They dug into our code and found an error.
And so what?
[00:14:56] Speaker B: They were probably looking pretty hard for that error.
[00:14:58] Speaker A: Exactly. Yeah, yeah. And it was basically because the person who wrote the code had written it in a really obfuscated way, with a bunch of nested boolean operations. And so we sat down in the lab, and we wrote a blog post about this on our reproducibility blog. We basically tried to dig in and say: why did this happen? What did we do wrong? We patterned it after this thing that happens in the medical world, at academic medical centers, called the morbidity and mortality conference, which is basically a blame-free zone for talking about medical errors or possible problems. So we basically did that. We said, look, everybody makes mistakes. Let's figure out why this happened so that it doesn't happen again.
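To illustrate the kind of trap being described here, a hypothetical example (not the actual code from the ABCD analysis): a dense boolean expression where Python's operator precedence silently changes the logic, next to a refactoring that names each criterion.

```python
# Hypothetical subject-inclusion filter for a study.
# Intended rule: include subjects aged 9-11 with head motion
# below 0.2 whose data are complete.

# Obfuscated version: because `and` binds tighter than `or`,
# this actually computes (9 <= age <= 11) OR (motion < 0.2),
# so a low-motion subject of any age slips through -- exactly
# the kind of bug a code review (or a motivated rival team)
# will find.
def include_buggy(age, motion, complete):
    return (age >= 9 and age <= 11 or motion < 0.2) and complete

# Refactored version: each criterion is named, so the code
# reads like the methods section and the error has nowhere
# to hide.
def include_clear(age, motion, complete):
    age_ok = 9 <= age <= 11
    motion_ok = motion < 0.2
    return age_ok and motion_ok and complete

# A 30-year-old with low motion passes the buggy filter but
# is correctly excluded by the clear one.
print(include_buggy(30, 0.1, True))   # True
print(include_clear(30, 0.1, True))   # False
```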
[00:15:53] Speaker B: Yeah. I mean, even after I learned to code, when I was a postdoc, at the beginning of every semester we'd have a big lab meeting about how we all needed to be able to combine our code and trade code and work seamlessly. And at the end of the lab meeting we all felt like, all right. And then we all went back to our offices and continued just as we were, only to convene again the next year or semester with the same issues. I mean, this is an ongoing problem as well, because there are so many meta-skills to learn: coding, how to do science, how to think about things. It seems like there's hardly any time for science. Do you think that this is a big issue, or is this something that we just need to get better at building into the system?
[00:16:37] Speaker A: No, I think it is a big issue, particularly for people in a field like cognitive neuroscience where, you know, there's so much to have to know. Right. You have to understand statistics, you have to understand image processing, you have to understand how to code, you have to understand how an MRI scanner works, you have to understand neuroscience, you have to understand psychology. Right. There's just, you know, you're supposed to.
[00:16:58] Speaker B: Understand all those things anyway.
[00:16:59] Speaker A: Exactly. Right. Yeah. It's sort of an inhuman amount of stuff to expect anyone to know. So the question is, how do you deal with that? One strategy that people have started proposing, which is great if it can be done, is this idea of having research software engineers. Right now most universities have statistical consultants, often grad students in the statistics department, who you can go to for free and they'll help you solve your stats problems.
The idea would be similar to that: having software engineers who one can go to and get help with software problems, whether to do a code review or whatever the issue is that one has with one's code. Obviously that's something that's going to work well at a well-resourced place like Stanford. A lot of places aren't going to have the resources to hire people like that.
So then the question becomes, how can one try to do a better job at software engineering? So I've been reading a lot of software engineering books lately, and a lot of that literature, and it's really interesting actually. It's not just impacted the way I think about coding; I think it has made me a much better coach in terms of helping people in the lab with code review. But it's also given me some thoughts about meta-scientific issues as well.
For example, in my lab we now regularly, probably once every month or so, do a code review session in our lab meeting, where one of the trainees has a piece of code they want to go through, and we just walk through it and tear it apart and try to rebuild it.
[00:18:38] Speaker B: That's a great exercise for everyone, I think.
[00:18:41] Speaker A: Yep.
[00:18:42] Speaker B: Okay, well, all right. So these are ongoing issues, and I'm glad that people like you are working on them. I've been taking some guest questions, and we're going to start off with a guest question from Kendrick K. I'm just going to play the question and then you can have at it.

[00:19:04] Kendrick K. (recorded): My question to Russ, given that he has a nice broad view of many different types of thinking out there, different fields from psychology to fMRI, of course, and computational work. So the question is, we all have limited resources, so you have to dedicate your resources somewhere, and of course our decisions are reflected in our actions. But I guess from your perspective, Russ, if you had limited time to spend on either, and these are loaded terms, of course: better theory, better modeling, better analysis and software, or better data, where would you put your resources?

Speaker B: Okay, so the key here is, you can't say all of them, of course.
[00:19:50] Speaker A: I think that's a great question. And I can answer it in a personal way. Each individual is probably going to fall in a different place, and in part it depends on thinking about what your strengths are. I would love to think that I'm a strong theorist or modeler, but I'm just not sure that that's where my strengths lie. Historically, my strengths have lain in finding interesting new problems, doing a few experiments on them, and then moving on to some other problem, rather than building compelling theories or models in those areas. So for me, it would be more about spending time on analysis and on data, and that's cashed out in the way that I've been spending my life.
I don't actually think those are the most important things for the field, though. Right now, especially in psychology, and in neuroscience as well, I think that theory is really the thing that we need more of, and that we need to focus more on.
Because in the time of the BRAIN Initiative, there's been this idea that we just need to record more neurons, faster and better, and then we'll just understand the brain, that understanding will just emerge from the data. And it's become pretty clear that that's an incorrect way of thinking about how science progresses: we really need theories to help us understand the data. And so I think that theoretical neuroscience is probably the most important area right now, and theoretical psychology as well, even if that's not the area that I have the greatest strengths in. So that wouldn't be the way I would focus. But if I were telling somebody else where to focus, where I think there's the most fruit to be picked, that would be theory.
[00:21:54] Speaker B: Gosh. Well, two things. One, he guessed that you would say analysis and/or software for yourself, as opposed to theory or modeling or data, although you did say data. But then the question is, how do we change the field? Because we don't just need theory, we need good theory. So how do we influence the scientific community to promote more good theory, and less bad theory, and fewer data-for-data's-sake sorts of approaches?
[00:22:31] Speaker A: That's a really interesting question. I mean, I think in part you have to hope that if you have more theory, period, then ultimately good theory will outweigh bad theory.
That's an article of faith rather than a data driven belief.
[00:22:49] Speaker B: You need a theory ontology, right?
[00:22:52] Speaker A: Exactly.
Yeah. So I think that just kind of getting more people to do theory is the first step.
[00:23:00] Speaker B: Okay, so before we get into cognitive ontology, I just want your broad assessment of where we are and where we're headed in neuroscience and in AI. I'm wondering if you think we're on the verge of a paradigm shift, a la Kuhn, because voices like yours are rising, saying, hey, we may be doing it wrong; we need to reorient and rethink how we're going about even doing it, and reformulate the questions that we're asking, and so on. So I'm just wondering where you think we are at the current time and where we might be in the near future.
[00:23:44] Speaker A: Obviously, it's a really exciting time to be doing neuroscience. A lot more exciting for people working in animal models than in human models, though even in human models, the imaging techniques have become pretty amazing.
As I've already said, I think that we're in a relative dearth of theory. The big hole that I see is one that I think has been characterized well in a couple of recent papers: one by John Krakauer and his colleagues, and then Yael Niv just had a preprint recently on the primacy of behavioral research for understanding the brain.
[00:24:24] Speaker B: Rolling my eyes, but okay.
[00:24:28] Speaker A: So if you're a psychologist or a cognitive neuroscientist and you interact much with people who do cellular and molecular neuroscience, they often don't even hide rolling their eyes when you talk about psychological theory or imaging results. And there are certainly reasonable questions one can ask about psychological theory and imaging results, but I think there's this kind of very deep-seated reductionism in a lot of cellular and molecular neuroscientists: well, once we understand the circuits and the ion channels and all these sorts of things at the cellular and molecular level, we don't need to care about all that goofy psychology stuff.
And here's an example of a place where I think this is a problem. So if you look at the pages of Cell, I think the journal with the highest impact factor, you regularly see papers with titles like, you know, depression involves a disruption in circuit X.
[00:25:29] Speaker B: Right.
[00:25:30] Speaker A: Where that circuit is defined in amazingly precise terms: specific sets of neurons in particular regions with particular types of connections. But what they don't tell you is that when they say depression, it's really a rodent model of depression that has tenuous validity for the human depression phenotype. Right. So we have these amazingly precise biological models built around really imprecise and often invalid models of the psychological phenomenon.
[00:26:02] Speaker B: What would be a better way to word that?
[00:26:03] Speaker A: Well, I think saying anhedonia in a mouse model of depression involves a disruption in circuit X.
[00:26:11] Speaker B: Right.
[00:26:12] Speaker A: But that doesn't sound as fancy, Right.
[00:26:15] Speaker B: A little bit lower impact, I suppose.
[00:26:17] Speaker A: Right.
[00:26:18] Speaker B: Yeah. So, yeah, John Krakauer rails against exactly the same thing that you just described. And I don't know, it's so strange to me.
I just have a hard time believing that that's still the case, even within the cell and molecular neurobiology world. I just think of everybody as being multilevel, thinking about all these things, with all the different levels interacting. But am I just naive in thinking that? Or is it because I came from a monkey neurophysiology lab, where we attempted to tie spiking rates to higher cognitive functions and so on?
[00:26:57] Speaker A: I think it is that you're naive, and it is also because you came from a systems neuroscience lab that I think takes cognition seriously. Certainly people in systems neuroscience, I think, are much more, in general, much more open to taking seriously the psychological side.
But the papers in Cell are not being done by them. Right. They're being done by circuit busters who really care about doing optogenetics on these very particular circuits. I mean, there's a deeper problem. The issue that I have with the way that John and Yael and others have framed this critique is that I think it actually goes deeper than this idea that behavior is the bottleneck. Right. They frame it in terms of, we have to understand behavior. But, and in part this speaks to where I came from, when I was starting graduate school in cognitive psychology, the memory of the cognitive revolution was still fresh in the faculty's minds. And so my training instilled in me this deep sentiment that you can't understand behavior without understanding the mental representations and the processes that underlie it. So obviously understanding behavior, and characterizing behavior well, is really important. And there's a lot of cool work being done, especially in rodent models, building models of the behavioral repertoire of animals. But I think that's not going to get us to where we want to be. And unfortunately, and Yael points out some examples of this in her paper,
Work focused at that cognitive level is increasingly shunned by the funding agencies.
It's as if NOAA, which is focused on understanding weather, were to say: one of our missions is to understand coastal flooding, but since water is made of quarks, we're only going to fund research that uses high-energy physics techniques.
I think there's this naive and sort of implicit eliminative reductionism amongst a lot of neuroscientists, who really think that once we understand the neurons, everything else is just going to fall into place, and who fail to recognize as legitimate these higher levels of emergent organization, like the cognitive level.
[00:29:23] Speaker B: Highlight of the podcast thus far: Russell Poldrack called me naive. And I mean, I really must be naive, because, yeah, maybe I'm just looking at it all through rose-colored glasses. It's just hard for me to believe that people have that notion that it will all just work out if we figure out the structure, what's connected to what, and the implementation-level stuff.
[00:29:46] Speaker A: Yeah. And, you know, the C. elegans example kind of shows that that strategy is not going to get you to where you want to be, even in terms of understanding behavior. So there's certainly good evidence that it doesn't work.
[00:29:59] Speaker B: Yeah, well, we could spend all day talking about just this, but let's move on and get closer to cognitive ontology. So you've been thinking about this kind of stuff for a long time, and there's this old phrenology-like approach that you've highlighted in blog posts and such: that areas of the brain do mental things, that you can map mental functions onto areas of the brain. You talk about that in your book The New Mind Readers, about the history of fMRI, what we can know and what we can't know based on fMRI, and you detail the reverse inference problem in that book, which I really recommend. It's a great overview of fMRI, but it also starts to touch on some of these cognitive ontology issues. I'm wondering how your thoughts about fMRI in general, as a tool for understanding minds, have changed over time. I know that you didn't initially think you were going to be doing fMRI work when you started your career, but you were sucked into it.
[00:31:03] Speaker A: That's right, yeah. In fact, it's funny: I got my PhD in 1995, and fMRI was invented in 1992. So in the early 90s a lot of people were doing PET research, and there was some new fMRI work coming out. It was all pretty low-hanging-fruit kind of work, like, hey, we show people words and this part of the brain lights up, and if we show them scrambled words, it doesn't. So it was easy to ridicule. And so when I went to do my postdoc, I wasn't really interested in doing imaging. I wanted to do patient work looking at the basal ganglia and skill learning. And for various reasons I ended up getting sucked into doing imaging, in part because I'm a geek and I like, you know, messing around with computers and data.
And that's something you do a lot of in fMRI. So, am I more or less enthusiastic about the ability of fMRI to inform our understanding of the mind? I think that I'm fairly optimistic, at least in one particular way. There are ways that people have started to use fMRI in the last decade that I think have a much better ability to actually tell us something interesting theoretically. In the early days you would do some subtraction: let's say I show people high frequency words and low frequency words, look at the difference in brain activity between those, and then try to find the regions that are differentially active and say something about their function. That's not that useful for psychological theory, I think, because if I have a theory about word frequency, it's a psychological theory that probably doesn't say much about where in the brain that lives. Now, you can imagine that if we knew the computations the different brain areas are doing, we could use that to help understand what that activation might tell us, and I think that's a strategy that has been at least a little bit successful. But I think the more useful strategy for telling us about cognitive theories, even though it's not clear to me that it's really been cashed out fully yet, is this pattern similarity idea. When we do pattern similarity analysis in fMRI, instead of asking which regions are more active than others, we ask: across a bunch of stimuli or task conditions or whatever it might be, what's the similarity structure of the patterns of activity across the whole brain, or in particular regions? And then we can use that.
Most psychological theories, even if they don't say anything about where in the brain things live, almost certainly say something about the degree to which different stimuli should be processed in a more similar or different way. Right. So now, using pattern similarity analysis, you have a way to start actually testing predictions of theories as a whole. Or the other thing you can start doing is saying, well... I think often in the history of psychology, we've had this kind of pathological binarization of hypotheses. We've had all these debates. Is it serial or parallel processing? Is it analog or propositional information? And almost every time when we have those debates, people end up realizing that the debate was pretty much off base, and a little bit of both of them were right. The thing you could imagine is that we can now start saying, well... Let's take categorization, for example. Some people think that categorization relies on memory for exemplars. Other people think categorization relies on some kind of prototype. Right. And it may well be that only one of those theories is right. But it may also be that different brain systems implement different ones of those processes. So now you can use pattern similarity to say, well, this one system looks more like an exemplar system, this other system looks more like a prototype system. And I think that starts to let you much more closely tie psychological theory to brain function.
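To make the logic concrete, here is a minimal sketch of the pattern similarity idea, using purely simulated data: the "exemplar" and "prototype" patterns below are placeholder matrices standing in for real model predictions, not actual categorization models. The sketch builds a representational dissimilarity matrix (RDM) for a region and for each candidate model, then asks which model's RDM the region's RDM resembles more.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the activity patterns of every pair of conditions."""
    return 1.0 - np.corrcoef(patterns)

def rdm_similarity(rdm_a, rdm_b):
    """Compare two RDMs by correlating their upper triangles."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

rng = np.random.default_rng(0)

# Hypothetical data: 8 stimuli x 50 voxels in some region of interest.
exemplar_model = rng.normal(size=(8, 50))   # placeholder "exemplar" predictions
prototype_model = rng.normal(size=(8, 50))  # placeholder "prototype" predictions
neural = exemplar_model + 0.3 * rng.normal(size=(8, 50))  # region tracks exemplars

fit_exemplar = rdm_similarity(rdm(neural), rdm(exemplar_model))
fit_prototype = rdm_similarity(rdm(neural), rdm(prototype_model))
print(fit_exemplar > fit_prototype)  # this region looks more exemplar-like
```

The point of the comparison is that the models need not say anything about anatomy: they only need to predict which stimuli should be represented similarly, which is exactly what the RDM captures.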
[00:35:12] Speaker B: It's never simpler than you conceive of it. It's always more complicated than that.
It's never an either-or. Yeah. I mean, has that development in fMRI, and the way that you've thought about it, also altered over time your conception of our minds?
[00:35:30] Speaker A: It's an interesting question.
[00:35:31] Speaker B: I mean, it's impossible to isolate how fMRI specifically has changed it. But maybe even more broadly, how has your conception of the mind changed over time? And if so, can you articulate how?
[00:35:47] Speaker A: I think the main way that it's changed is that it's become more computational. I started out from a tradition... I did my PhD working in a lab that focused on memory and memory disorders. And that's a very box-and-arrow-model type of field, at least it was back in the 1990s. And obviously people have been doing computational modeling of various sorts across psychology for a long time. So in some ways, I'm just kind of catching up with where the field has been going. But one of the things I've been really struggling with lately is how to think about what a cognitive ontology looks like when it's framed in terms of computation rather than in terms of what you might call folk psychological concepts. So I did an analysis, and wrote a blog post about it a few years ago, where I took all the terms from the Cognitive Atlas, which basically just tries to describe all the different parts of the mind that we think we know about as psychologists.
I took all those terms and used the Google Books database to ask what proportion of these terms were in the literature, in the English literature, going all the way back to 1800. And basically what I found was, for the single words that are in the Cognitive Atlas, 80% of them were in the English language as of 1800. And for the phrases, the majority of them were in the English language certainly by the early 1900s. Whereas if you take the Gene Ontology, which is probably the best-known biomedical ontology, and describes the parts of cells, molecular functions, and biological processes, very few of those terms were in the literature until well after 1900. So it says that we're using a very old set of terms. Also, if you look at William James's 1890 Principles of Psychology, the chapter headings there, other than the way they're worded, are things that people are studying today. And so the way that we chop up the mind certainly has not changed in more than 100 years. I think that the computational turn is a really big change in the way that we think about this. So one of the things that I've been trying to think about is, how do we rethink describing the organization of the brain in computational terms? And an intuition pump for this has really been the work over the last few years using deep neural networks to try to understand the visual system. So the work that I know best is the work from Dan Yamins and Jim DiCarlo, where they basically take a deep neural network, or really a class of hierarchical convolutional neural networks, and train them to recognize objects without telling them anything about the brain. And then also, simultaneously or in parallel, record from non-human primates in the visual system, basically recognizing those same objects, across the ventral visual stream.
And then what they see is that you can predict the activity of neurons well from the activity in different layers of that hierarchical neural network. And so the question is, what have we learned when we do that? So let's say that area V4's activity is best predicted by convolutional layer 5 in this particular deep neural network. What have I learned about what V4 does by knowing that? And one answer might be, well, it does the thing that convolutional layer 5 does, and you can't really say anything more than that. And I think that's the way my colleague Dan Yamins kind of views it: trying to put verbal labels, functional verbal labels, on these things doesn't really make sense, because ultimately it's described in terms of the computations, the particular transformations of information that are being performed by those layers in the network. But I, as a psychologist, have this, I think, deep-seated need to give a kind of low-dimensional verbal description to what that particular circuit or system or region or network is doing.
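A hedged sketch of how that prediction step typically works, on simulated data only (real studies regress recorded neural responses on actual network activations): fit a regularized linear map from a layer's features to a neuron's responses, then compare held-out predictive accuracy across layers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: one neuron's responses to 200 images, plus the
# activations of two network layers for those images (features flattened).
n_images = 200
layer3 = rng.normal(size=(n_images, 40))
layer5 = rng.normal(size=(n_images, 40))
true_w = rng.normal(size=40)
neuron = layer5 @ true_w + 0.5 * rng.normal(size=n_images)  # driven by layer 5

def ridge_r2(X, y, alpha=1.0, n_train=150):
    """Fit ridge regression on a training split; return R^2 on held-out images."""
    w = np.linalg.solve(X[:n_train].T @ X[:n_train] + alpha * np.eye(X.shape[1]),
                        X[:n_train].T @ y[:n_train])
    resid = y[n_train:] - X[n_train:] @ w
    return 1.0 - resid.var() / y[n_train:].var()

print(ridge_r2(layer5, neuron) > ridge_r2(layer3, neuron))  # layer 5 wins here
```

"V4 is best predicted by layer 5" is then a statement about which layer maximizes this held-out score, which is exactly why it is hard to translate into a verbal functional label.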
[00:40:16] Speaker B: So do you think then, in that case, and we may come back to this because we'll back up in just a second, but in that case, right? We have this intuition that a given area of the brain needs to do something, right? It needs to have a function. And in this particular case, let's say V4, we need to be able to look at what it's doing and give a name to it. But the low-dimensional description that we give to that kind of processing is that it is a fairly high step in the ventral visual stream on the way to object categorization or object identification. And, you know, is that where it ends? Is that where our ability to put words and phrases to this in a low-dimensional space ends, that we describe the actual trajectory of the layers rather than what each layer is doing at each given step? Because, you know, in V1 it might be a little bit easier, right? Contrast enhancement, line detection, things like that. And it may just get more and more abstract, or maybe we need to invent new terms, or maybe we need to use equations, you know, computational equations. Is that what you're getting at?
[00:41:26] Speaker A: Yeah, yeah. I mean, you know, obviously there's lots of strategies that people have used.
So for example, you can take the neural network and kind of do in silico electrophysiology and ask, what are the stimuli that best activate these particular units in the network? And then kind of look at those and say, that looks like a high-level feature, that's like an object, or it looks like a lower-level feature, and has a smaller receptive field or a bigger receptive field. There's various tricks you can play, but ultimately it's not clear what those buy you, what kind of predictions you can make or what kind of understanding you get about the system that's useful beyond what the network.
[00:42:06] Speaker B: Being a step in a process or something.
[00:42:08] Speaker A: Right, right.
[00:42:09] Speaker B: Yeah. All right, maybe we'll revisit this. But all right, let's talk ontology, and cognitive ontology. So the word ontology in philosophy, as you have pointed out in your papers, has to do with what's real: what things really exist in the universe, no matter what we call them? But a cognitive ontology is slightly different than that. So what is a cognitive ontology?
And then why do we need one?
[00:42:37] Speaker A: Yeah, so a cognitive ontology, or what is often called a biomedical ontology or a formal ontology, rather than being a description of what really exists in a kind of metaphysical way, is really a description of what we think exists. Right.
It's basically a formalization of our conceptualization of the world. Right. So the Gene Ontology says, what are all the parts of a cell? What are all the processes that a biological system does? Right. So a cognitive ontology basically says, what are all the parts of the mind? You know, what are all the things that we think minds do? Right. And so those could be things like memory, those could be things like task-set shifting; they could be high-level, they could be low-level. And then, in a biomedical ontology, they're generally described in terms of a specific set of formal relationships. Things like, you know, X is a kind of Y, or A is a part of B. And so that's the way that we think about a cognitive ontology. Now, why do you need it? Well, let me ask the question a different way. Why did I start spending time working on this?
[00:43:47] Speaker B: Yeah.
[00:43:49] Speaker A: And you know, I started thinking about this more than 10 years ago, back when I was at UCLA, sort of inspired by my colleague Bob Bilder, who was thinking a lot about gene ontologies and how we might kind of bring those sorts of ideas to bear on psychology. And so I was inspired to think about, like, what are the things that we're mapping onto the brain? Right. So one of the terms that's sometimes used to describe the enterprise of cognitive neuroscience is, quote unquote, brain mapping. Right.
So the question is, if you're going to map things, you need to know what the things are that you're mapping onto the places. Right.
And so the question is, what are the things? And in part, you know, I sometimes like to say that there was sort of a subversive agenda around the Cognitive Atlas, which is, I went into this being pretty sure that a lot of the ways that we kind of chop the mind up, these ways that we inherited from William James et al., are almost certainly wrong. Where by wrong, I mean they don't reflect the computational organization of the brain.
But if we want to figure out how we're wrong, we need to figure out exactly what it is that we believe to start with. Right. So in some ways it was like, let's write everything down as precisely as we can so we can figure out how to break it.
[00:45:17] Speaker B: Can you just describe what the Cognitive Atlas is? This is kind of where you store the cognitive ontology, I suppose, is one way to say it.
[00:45:23] Speaker A: Yeah. So the Cognitive Atlas is a website that was sort of inspired by Wikipedia. It's meant to be kind of a community project, so anybody can come on and contribute knowledge. And basically it describes at least two separate sets of things. The main two sets of things are what we call cognitive concepts, or mental concepts. These are the latent things that we can't see but that we think exist in the head. Things like working memory, exactly.
[00:45:54] Speaker B: Yeah. Okay.
[00:45:55] Speaker A: And then we have a separate kind of description of what we call mental tasks. These are the things that we measure the mind with. And one of the really problematic moves that people in psychology and neuroscience often make is basically equating tasks with functions. Right. They'll call something a working memory task.
[00:46:16] Speaker B: Yeah, Right.
[00:46:17] Speaker A: That's a theory of what is involved in performing that task. Right. And the task almost certainly involves lots of other things. You know, coming from the memory world, we always had this saying that no task is what we call process-pure. Right. And this is basically just trying to highlight the fact that tasks and processes are not isomorphic.
And so we try to describe the relationships: what are all the parts of the mind, what are all the relationships between them? Right. So, is working memory a kind of memory? Is visual selective attention a kind of visual attention? And so on. And then we try to describe how those things are measured by particular contrasts or comparisons on particular tasks.
[00:46:59] Speaker B: And then the goal is to define formal relationships between the tasks and the concepts and the activations in the brain related to those tasks and concepts, correct?
[00:47:13] Speaker A: That's right, yeah. I mean, the neuroscience part is kind of the next step. The idea is that we start with these formal relationships between particular tasks, and particular measures on tasks, and particular cognitive functions. And then the idea is, well, now what we can do is get data on those tasks and ask various questions. So, for example, my student John Walters right now is doing a project where he basically takes... There's a data set that Jörn Diedrichsen, Rich Ivry, and Maedbh King collected, that I was involved in helping them analyze. We published it in Nature Neuroscience last year.
It's this multitask data set where people do 40-something different task conditions. And so what John has been doing is basically taking all those tasks, and a group of us sat down and annotated each of the tasks to say, for each particular comparison of task conditions, what psychological functions do we think are tapped by, or required to perform, this task? And that's not always clear, and sometimes we can spend an hour or two talking about one task, but we've done that for that set of tasks. And now he's building models, actually models that are inspired by work that people have been doing in the visual neuroscience literature for a while, most prominently the work of Jack Gallant and colleagues using encoding models. The idea being, you build a model that relates cognitive functions to brain activity, and then you test the model by predicting patterns of brain activity for tasks, specific combinations of cognitive processes, that the model has never seen. And we're actually finding, surprisingly to me, given that I thought our cognitive ontology was pretty bad, that we can do pretty well at predicting brain activity patterns on tasks that we've never seen before, based on an annotation of what cognitive functions we think are involved in those tasks.
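As a toy illustration of that encoding model logic (everything here is simulated; the annotation matrix stands in for the hand annotations he describes, and the "ground truth" maps are an assumption of the toy): represent each task contrast as a binary vector of cognitive functions, learn a map from functions to whole-brain activity patterns, and test on held-out tasks the model has never seen.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: 40 task contrasts annotated with 10 cognitive functions
# (1 = "we judged this function to be engaged by the contrast").
n_tasks, n_functions, n_voxels = 40, 10, 300
annotations = rng.integers(0, 2, size=(n_tasks, n_functions)).astype(float)
annotations[annotations.sum(axis=1) == 0, 0] = 1  # every task engages something

# Simulated ground truth: each function contributes its own activity map.
function_maps = rng.normal(size=(n_functions, n_voxels))
activity = annotations @ function_maps + 0.5 * rng.normal(size=(n_tasks, n_voxels))

# Fit the encoding model on 35 tasks; predict the 5 held-out tasks.
train, test = slice(0, 35), slice(35, 40)
W, *_ = np.linalg.lstsq(annotations[train], activity[train], rcond=None)
predicted = annotations[test] @ W

# Score each held-out task: correlation of predicted vs. observed pattern.
scores = [np.corrcoef(p, o)[0, 1] for p, o in zip(predicted, activity[test])]
print(np.mean(scores))  # well above chance for this toy model
```

In the toy, prediction works to the extent that the annotations really capture the functions generating the activity, which is exactly the sense in which this tests the ontology.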
[00:49:15] Speaker B: I mean, this is a set of...
Well, we were going to get into this later, but anyway, this is a set of multiple tasks, and the whole idea is to do this sort of en masse, and then you can really figure out where the lines are and where the joints are. And that's how the modeling works, I suppose. That's right, yeah.
[00:49:35] Speaker A: And even 40-something tasks is spanning a very small part of the psychological space. Right. And this in part was my motivation, about a decade ago, for getting into the data sharing business. There's obviously a reproducibility angle to the data sharing idea, but there's also the idea that my lab is never going to be able to collect data on all the tasks that we would like to, in order to do this kind of mapping.
[00:50:03] Speaker B: Yeah.
[00:50:03] Speaker A: And so if I can get a lot of other people to share their data, then that allows us to expand the set of things that we can try to model. Now, the problem there has turned out to be this annotation issue. Right. To do this right, each of the task comparisons needs to be annotated, using something like the Cognitive Atlas, to say what functions we think are engaged in this particular comparison. And that's just a time-consuming enterprise. And so doing that on hundreds or thousands of tasks is just really challenging. People don't do this in their papers in general. It's interesting. Back in the 90s you would regularly see papers that would have a little chart showing, for all the subtractions in my analysis, what psychological functions do I think they're tapping into?
And this was particularly common back when people were doing PET imaging. And you just don't see that anymore. I think people don't think as deeply about the implications of subtraction as isolating particular cognitive functions.
[00:51:08] Speaker B: This is, I think, a good point to play the second question here that I have for you, and then I want to go back and talk about what you've been describing so far as kind of a top-down approach; you also take a bottom-up approach to developing a cognitive ontology. But before we get too deep into it: I had David Poeppel and Gyuri Buzsáki on the show talking about whether we should go from the top down, which would give epistemological primacy to psychology in naming these mental functions, which we then confirm with neural data. Or, Gyuri's preferred approach, which I'll ask you about in a little bit, is to look at neural data and try to infer, maybe not mental functions, but at least properties of the neural data that will help us better build a cognitive ontology, for instance, of mental functions. But, all right, so here's David's question.
[00:52:05] Speaker C: Hi, Russ, it's David. I hope all is well and good and that you're having fun talking to Paul. So I'm very sympathetic to the problem you're grappling with here. And I like the framing of defining ontologies. I wonder how you handle the tension between ontologies that come from different ways of approaching the problem. That is to say, the ontologies we derive from psychological investigation or text mining or the cognitive sciences are of one form. And the ontologies we might derive from biology straight up are quite different. And so we end up with ontologies that are not necessarily well aligned or even linkable. And I wonder if you think you have an approach to deal with this problem or if you think one or the other ontological type actually has, let's say, a kind of epistemic priority. In any case, I think it's a really important problem and I'm excited about pursuing it further and I'm glad you're working on it.
[00:53:12] Speaker B: Okay, thank you.
[00:53:13] Speaker A: Great question. Yeah.
So, yeah, I do not believe that any of these individual levels has an epistemic priority, and I'll look forward to discussing that a bit more when we talk about Gyuri's ideas. I guess I take some degree of inspiration from the Gene Ontology, which talks about things that are in principle very different. It talks about parts of cells, endoplasmic reticula and lysosomes and all those sorts of things, which are very different from molecular functions, like phosphorylation, or biological processes, like the citric acid cycle.
You might think that those are very different things, but they're obviously related, in the sense that particular molecular functions are required in order to achieve particular biological processes, and particular parts of cells implement particular biological processes or molecular functions. And so even though they're ontologically very different types of things, you can relate them. And actually the parallel with minds is really interesting, right? Because some of them, obviously parts of brains, are observable things, right? We can see neurons in CA1. We know that they're physically present things. We can't see memories, right? We might be able to see the traces that memories are associated with in physical brains, but we can't see a memory, because a memory is an abstract latent thing, right? Similarly, we can't see the TCA cycle; we can see the evidence of particular molecular functions in particular parts of cells, but the TCA cycle is also kind of an abstract thing. And so I think that we can actually hope to relate these things.
Going back to the comments earlier about this kind of limited reductionism: if we buy the idea that there really are a set of hierarchically organized levels of organization, none of which is primal in a sense... Obviously the higher-level ones depend on the lower-level ones, but it's not as if they can be reduced to the lower-level ones. There's a level of organization that emerges that can't simply be described in terms of the lower level.
[00:55:49] Speaker B: So this is kind of a follow-up, related, wondering whether what you're after are these kind of irreducible primitives, or rather if they're parameters, he says, with respect to a theoretical level of abstraction. Right. So this almost gets back to the metaphysics of it. You know, are they irreducible primitives, like particle physics, right, or whatever the basis is of all matter or something?
[00:56:16] Speaker A: Yeah, I mean, I guess I like the idea that there are parameters in a theoretical framework in part because I don't think we can ever get out from behind our theories, be they implicit or explicit. And this relates to this idea of kind of learning everything from the bottom up. So I agree that a good way to think about this is we have some kind of basic assumptions about how minds should be chopped up.
And then what we're doing here is basically saying, now let's go in and name, given that strategy for chopping things up, let's name the parts that we have chopped up.
[00:57:00] Speaker B: Okay, so before we move on and describe a little bit more the actual approaches that you've taken to this, just broadly, I'm wondering how much progress actually depends on getting the ontology right. And of course, embedded in that is, how do we know how right it is, for instance?
[00:57:20] Speaker A: It's an interesting question.
I think it does, simply because, in part, in other sciences, moving towards a more accurate ontology has been associated with more effective outcomes. And so in this case, if our goal is to come up with mechanistic models of how brains give rise to mental life and to action, it seems that if we're not chopping up mental life or behavior in a way that is truly reflective of the mechanisms that generate it, then we're going to be fundamentally limited in how well we can do. So, I use this thought experiment sometimes: what would have happened if the phrenologists had gotten their hands on fMRI?
The faculty psychologists, the phrenologists, Gall and company. Right. If they had asked, can we now use fMRI to map our quote unquote mental organs, like philoprogenitiveness and suavity, onto the brain? You have to know that it's not like they would have found nothing. Right. They would have found something. And that doesn't mean that their way of chopping up the brain is correct. In fact, they probably would have found something similar to what we find, which is that there's a ton of different supposedly distinct psychological functions that all give rise to very similar patterns of activity in the brain. The anterior cingulate is activated in something like a quarter of all neuroimaging studies.
So clearly we have not learned much, I think, about what that area does in terms of its ultimate function, at least from the larger body of that work.
[00:59:17] Speaker B: Yeah. Okay, that was a trap question. I just wanted to make sure that there was a good reason for us to go ahead with a cognitive ontology. So, like I was mentioning before, you've taken multiple approaches to this. Broadly, you've taken a top-down approach and a bottom-up approach. By top-down I mean you've started with our theories of mental functions and our names for those mental functions, kind of started with psychology, I suppose you could say, and used that to build out the ontology. And sort of in parallel, and maybe you can discuss whether it's in parallel or more recent, there's this bottom-up approach, where you start with observations, in tasks and in surveys, and you correlate your way to generating a new ontology. So I'm just going to ask you, because you already talked a little bit about the top-down approach, to maybe describe a little bit more both of those approaches and what you've found regarding our current ontology, which you've already said surprised you.
[01:00:26] Speaker A: Yeah. So as I mentioned, the work that we've been doing based on the kind of top-down ontology uses models to look at how well we can predict brain activity based on those models.
We're far away from being able to predict all the brain activity, but we certainly do much better than I would have expected, suggesting to me that, at least in part, there's something right about those models. And then the question is going to be, where does it break down? So, yeah, the work on data-driven ontologies is really more recent. It started with a student in my lab several years ago, Ian Eisenberg. I had been walking around with ideas about trying to do kind of data-driven ontology development for a while, but none of my students would ever get very interested in it, and in part I thought it was too dangerous a project to give to a grad student. But Ian jumped on it and really wanted to do this project. And it was in concert with a set of ideas that we were developing with some collaborators, driven in part by interest from NIH in developing an ontology of self-regulation. Quote unquote self-regulation is a term that's used in many different ways in many parts of psychology. People in my part of psychology might talk about response inhibition or delay discounting being an aspect of self-regulation. People in social psychology might think about self-control, or people in health psychology might think about impulsivity. So there's lots of different ways that this gets cashed out. Sometimes in cognitive tasks that measure reaction time and accuracy, sometimes in self-report, where I answer questions like, do I make impulse buys at the grocery store?
And so we obtained some funding from NIH to go after this question, specifically in the context of self-regulation. So what we did was generate a battery. We looked across psychology and said, as broadly as we can, can we pick a bunch of different measures that are all thought to index different aspects of self-regulation from different standpoints? And basically it was like a 10-hour battery. And we got a little over 500 people to complete this 10-hour battery, all online.
And so we had data from them doing lots of different tasks. And then we basically just took this very standard approach from psychology of using multivariate analysis, like factor analysis, to look at the structure in the data. So it's sort of an unsupervised approach. And there's a lot of structure in the data. So, for example, we had several different measures of how much a person discounts future rewards. Those are all highly correlated with one another. We also had several measures of how well you can inhibit a motor response, which are also correlated with one another, and not really correlated at all with the measures of delay discounting.
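A minimal simulation of the kind of structure he is describing (synthetic data; the two latent traits and the noise level are assumptions of the toy, not estimates from the actual battery): several noisy measures of the same latent trait correlate with each other, and not with measures of a different trait.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500  # participants, roughly matching the battery's sample size

# Two simulated latent traits: discounting of future rewards, response inhibition.
discounting = rng.normal(size=n)
inhibition = rng.normal(size=n)

def measure(latent, noise=0.7):
    """One task's score: the latent trait plus measurement noise."""
    return latent + noise * rng.normal(size=n)

# Three measures of each construct, as in a multi-measure battery.
battery = np.column_stack([measure(discounting) for _ in range(3)] +
                          [measure(inhibition) for _ in range(3)])

R = np.corrcoef(battery, rowvar=False)
within = R[0, 1]   # two discounting measures: substantially correlated
between = R[0, 3]  # a discounting vs. an inhibition measure: near zero
print(within, between)
```

Factor analysis applied to a correlation matrix with this block structure recovers two factors, one per trait, which is the unsupervised structure he refers to.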
[01:03:22] Speaker B: But in that analysis, don't you have to tell the algorithm how many clusters to create.
[01:03:30] Speaker A: Yes.
And this turns out to be the trickiest part of this.
[01:03:37] Speaker B: Well, because then you're sort of defining how many mental categories there are, right?
[01:03:42] Speaker A: Yep, yep. And in part, this is why I think I've become a lot less enthusiastic about the idea that we can just use data to infer the joints in the system. You know, in some ways this is very similar to the clustering problem in machine learning. So there's a paper by Ulrike von Luxburg and colleagues from a few years ago called "Clustering: Science or Art?", and they basically outline this idea that there's no way of defining a correct clustering solution for any data set; the decision about which clustering solution is best has to depend on the end goals of the researcher. So, for example, in our work we used the Bayesian information criterion, BIC, to determine the quote unquote optimal number of factors in our factor analysis.
But using that particular criterion makes particular assumptions about how much we want to penalize parameters versus sample size. We could have used AIC, we could have used cross-validation, and those all give us different answers about the optimal number. Now, fortunately, the story that one might tell is not that different. Right. If I get a clustering solution, or a factor analysis, where I say there's 20 factors as opposed to five, if you look at the 20, you can usually see that they all kind of emerge from the five. It's not like they're telling you something completely different. They're giving you a more detailed view of the lower-dimensional picture you would have gotten with fewer. But nonetheless, it tells me that this idea that you can just look at the data and have the structure emerge, without any sort of preexisting theoretical framework, is just untenable.
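To illustrate the point about information criteria with a generic model-selection toy (polynomial fitting, not his actual factor analysis pipeline): BIC and AIC both trade goodness of fit against the number of parameters, but BIC's per-parameter penalty grows with sample size while AIC's is constant, so the two criteria can disagree about the "optimal" model size.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic data: a truly one-parameter signal observed with noise.
n = 500
x = np.linspace(-1, 1, n)
y = 2 * x + rng.normal(scale=0.5, size=n)

def fit_rss(degree):
    """Residual sum of squares for a polynomial fit of the given degree."""
    coeffs = np.polyfit(x, y, degree)
    return np.sum((y - np.polyval(coeffs, x)) ** 2)

def bic(k, rss):
    """BIC for Gaussian errors: penalizes each parameter by log(n)."""
    return n * np.log(rss / n) + k * np.log(n)

def aic(k, rss):
    """AIC: penalizes each parameter by a constant 2, regardless of n."""
    return n * np.log(rss / n) + 2 * k

degrees = range(1, 10)
rss = {d: fit_rss(d) for d in degrees}
best_bic = min(degrees, key=lambda d: bic(d + 1, rss[d]))
print(best_bic)  # BIC favors a low-dimensional model here
```

Swapping `bic` for `aic` or for cross-validated error in the `min` call can change the selected complexity, which is exactly the arbitrariness he is pointing at.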
[01:05:43] Speaker B: Well, it also makes one hesitate, speaking of ontology. And I know this isn't about the metaphysically real things in the universe, but if you can tell the same story with a cluster of five versus a cluster of seven... What you want to be able to say, even moving forward, relating brain to mind: let's say they do equally well, the five-cluster model versus the seven-cluster model. You'd still feel kind of hesitant to believe in the mental functions that you're positing in those clusters, right?
[01:06:17] Speaker A: I don't know. I mean, I think clearly one can... There's an argument that the structure in the data, to some degree, has to come from the mechanisms that are generating the behavior. Right. And the question is, to what degree do you attribute the structure in the data to the kind of fundamental joints versus, for example, the particular choices of tasks? So, for example, one of the big distinctions that we see in our data is that behavior on self-report questionnaires is pretty much unrelated, uncorrelated, with behavior on cognitive tasks measuring reaction time and accuracy.
And that might be because they're reflecting fundamentally different types of psychological functions. It might also be what people in psychometrics call method variance. Right. That it's really something about the way that you're measuring the things that's causing those correlations. So for example, people differ in the degree to which they want to present a positive view of themselves or not. Right. And so that's going to cause all the self-report things to covary with one another, right, and not with the reaction time tasks. There's lots of stories like that one can come up with.
[01:07:38] Speaker B: Yeah, I interrupted you, I think, talking about what you actually found with this bottom up approach.
[01:07:44] Speaker A: Right. So what we found was that we certainly see that there's interesting structure in the cognitive tasks and in the self report. And then we wanted to ask, basically, how well do those relate to the things out in the world that we think are associated with self control? Things like smoking or overeating, or success in life in some sense, like household income and education level and things like that. And basically what we saw was that, using a cross validated out of sample predictive model, we could predict those various measures of real world outcomes pretty well using the self report measures. We could hardly predict anything using the cognitive task measures.
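The comparison described here can be sketched in a few lines. This is a hedged toy version with simulated data; the variables and effect sizes are invented to mirror the pattern of the finding, not the actual study:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 300
# Hypothetical data: self-report scores carry signal about the outcome,
# while task measures are unrelated noise, mirroring the reported pattern.
survey = rng.normal(size=(n, 5))
tasks = rng.normal(size=(n, 5))
outcome = survey @ np.array([1.0, -1.0, 0.5, 0.0, 2.0]) + rng.normal(size=n)

# Out-of-sample R^2 via 5-fold cross validation.
r2_survey = cross_val_score(Ridge(), survey, outcome, cv=5, scoring="r2").mean()
r2_tasks = cross_val_score(Ridge(), tasks, outcome, cv=5, scoring="r2").mean()
print(f"survey R^2={r2_survey:.2f}  task R^2={r2_tasks:.2f}")
```

The key design choice is scoring on held-out folds: in-sample fit would flatter both sets of predictors, while cross validation exposes which one actually generalizes.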
[01:08:38] Speaker B: That's crazy.
[01:08:39] Speaker A: That's probably the most important finding, I think, from that work, and it throws a bit of cold water on what I and many of my colleagues in cognitive neuroscience write in grants: we say, hey, we're going to study response inhibition because it's so important for addiction.
And these data suggest that if it is, it's really weakly important and it's.
[01:09:00] Speaker B: Actually more important to listen to what a person believes about themselves.
[01:09:04] Speaker A: Yes.
[01:09:06] Speaker B: So where does this leave us?
What's the current state and your current thinking about the cognitive ontology?
[01:09:14] Speaker A: I mean, I think that regardless of the bottom up stuff, I think that there's still insights to be gotten from the top down analysis. And I think in part it points to our need to be much more precise in defining exactly what it is that our cognitive tasks are measuring.
So on the one hand, I'm optimistic that we can still make progress there. On the other hand, I think that I've become convinced that ultimately an ontology that's written down in words is probably never going to be a particularly powerful ontology compared to one that's written down in some kind of computational language.
And then the question becomes like, what does that computational language even look like?
[01:10:01] Speaker B: Well, that's a good question. A graph network, of course, immediately popped into my head. But do you have an idea?
[01:10:08] Speaker A: I can't say that it's a problem I've been struggling with a lot in the last year. I gave a talk, I guess a couple years ago now, at this ontology conference, this philosophy conference, where I first started thinking about this particular issue. And I can't say I've made great progress. I mean, clearly the kinds of insights that we're getting from artificial neural networks have provided at least food for thought, if not a fundamental language. But I still feel like there has to be some way to talk about this that accurately describes, in a low dimensional way, going back to that earlier discussion, what the computation is that's being done by a particular circuit or area or network. But I just haven't been able to pin down exactly what that is yet.
[01:11:09] Speaker B: So you have a kind of vision, then, of having a, I'm going to say computational language, even though it's not language, but a computational ontology, let's say a formal computational ontology, such that when it's used, whether with new terms or existing terms defined in reference to that ontology, we'll eventually have that low dimensional description, even though the actual ontology will sort of be beyond our description?
[01:11:41] Speaker A: Yeah, I think that's right.
Well, I guess I don't know if beyond our description is the right way to put it. I think that it'll be at a level that's necessarily imprecise, because it's a generalization. Right. It's a low dimensional approximation of the higher dimensional model, which is a low dimensional approximation of the actual thing. Right. So there's just this layer cake of approximations. But the question is, is there utility in having this very high level approximation? Maybe there is, maybe there isn't.
Sort of like statistical mechanics. Right.
It's a low dimensional description of all kinds of crazy stuff going on, but it's still useful for answering some kinds of questions.
[01:12:29] Speaker B: Yeah, I've started thinking more and more about these types of things as attractor states in dynamical systems theory, even concepts and mental functions. Right. Or let's say mental experience, like pain, for instance. When you ask Jim what his pain is and you ask Sally what her pain is, it's not like they have the same actual thing going on. But we use the term pain; it's almost like pain is this attractor state that can vary in its actual location, but within a realm. And I wonder, going back to the mental functions and the clustering aspect of it, whether it's good enough to just have an attractor surface, whether you divide it into 12 mental functions or three, whether it's sufficient to say that it's within this sort of attractor state. Or is that way off base?
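The attractor intuition here can be sketched with a toy dynamical system. This is a minimal illustration, not a model of anything neural: a double-well potential where different starting points, different "actual locations," settle to the same attractor as long as they share a basin.

```python
import numpy as np

def step(x, dt=0.01):
    # Gradient dynamics on a double-well potential V(x) = (x^2 - 1)^2 / 4,
    # so dx/dt = -V'(x) = -x * (x^2 - 1); stable fixed points at x = +1, -1.
    return x - dt * x * (x**2 - 1)

def settle(x0, n_steps=5000):
    x = x0
    for _ in range(n_steps):
        x = step(x)
    return x

# Distinct initial conditions in the same basin converge to the same attractor.
right = [settle(x0) for x0 in (0.2, 0.9, 1.8)]    # all end up near +1
left = [settle(x0) for x0 in (-0.2, -0.9, -1.8)]  # all end up near -1
print(np.round(right, 3), np.round(left, 3))
```

In this picture, calling two trajectories "the same state" just means they lie in the same basin, which is roughly the move being suggested for labels like pain.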
[01:13:19] Speaker A: No, I think that the whole question of how we bring ideas from dynamical systems into our understanding of cognition is really important.
Historically, there's been this, again, I think, problematic dichotomy between the people who do dynamical systems modeling in psychology, who basically try to say, oh, there are no representations in the brain, we just need this dynamical systems description, and the people who want to build mechanistic models, who say, oh, dynamical systems theory doesn't tell us anything about mechanism.
Actually, there's a grad student in my lab right now, Grace Huckins, who's working between myself and a couple of philosophers, and who's become really interested in this question of whether we can talk about the idea of dynamics as being explanatory. Right.
[01:14:17] Speaker B: Yeah.
[01:14:18] Speaker A: Can we learn anything about a system from these kinds of dynamical systems analyses, beyond just a description? She doesn't have any sort of results to show for that yet, but she's working on it, so over the next few years, I think we'll see something emerge from that. You know, until a few years ago, I never really thought about dynamical systems stuff, and then I had a postdoc in the lab, Mac Shine, who's now faculty in Australia, who started reading and thinking a lot about this sort of stuff and kind of dragged me kicking and screaming into it. And I think that there really is something there. I think one of the real challenges is trying to figure out how we bring together these ideas from dynamical systems, and from network neuroscience more broadly, with ideas from computational neuroscience, in order to come up with a unified framework for thinking about how we describe the function of brains.
[01:15:24] Speaker B: Maybe I'll have Mac on at some point.
I wanted to. There's too much to talk about. So, you know, there's this paper from last year in Nature Neuroscience, "Human cognition involves the dynamic integration of neural activity and neuromodulatory systems," which looks like great work, and which you're the last author on. So maybe I'll have him on soon to talk about that stuff. Okay, so, obligatory question here. You know, we think about mental functions and dividing them up. I got into neuroscience in graduate school with the high aspirations of understanding consciousness, and now I roll my eyes when I say it. Given the background of your work on cognitive ontology, do you see any promise of getting closer to approaching what consciousness is, and how would it even fit into a cognitive ontology?
[01:16:12] Speaker A: I'm sorry, I'm still rolling my eyes. Let me stop that.
I think if somebody could come into the cognitive ontology and write down what the hell consciousness means, then that would be a great first start.
[01:16:25] Speaker B: How many clusters does consciousness have? That's a good question.
[01:16:28] Speaker A: Right.
[01:16:29] Speaker B: Okay, well, so what is your take? So I mentioned Buzsáki's, what he calls his inside out approach. And it's not strictly bottom up or data driven, because as he admits, you're always working under a theory, so there's always that sort of top down influence. But his idea is that neuroscience, unlike many other fields, has not developed its own vocabulary, not even necessarily new words, but its own conceptual framework, like other mature sciences have. And his idea is to take what we have found at, let's say, the implementation level, like oscillations and different patterns of oscillations and so on, and use this inside out approach to develop concepts that maybe influence and change some psychological concepts, to change the ontology, almost. So I'm wondering about your thoughts on that, and I'm not sure I described it well enough for you to even comment on it.
[01:17:35] Speaker A: Yeah, I read parts of his book.
There's a lot to like about the book, I think. Obviously he's done amazing work on understanding the dynamics of neural systems and their role in behavior. And I really like the focus of the book on action, on framing us as being embedded in these action perception cycles. I think many of us still have this idea that stuff comes into the eyes and then goes forward, and action is the thing at the end. There are arguments in the book about the primacy of action that I'm not sure I buy, but certainly the importance of action, and of our embeddedness in the world, I think is a really important point.
[01:18:24] Speaker B: You see the brain as an information processor, not as a behavior producer, primarily?
[01:18:29] Speaker A: I see it as both of those things. I guess I think they're different views on the same thing. Right. I think it processes information in service of generating behavior. And part of that generation of behavior is about generating the appropriate perceptual signals so that we can assess our predictive abilities and so on.
[01:18:52] Speaker B: It's very inclusive of you. Okay, very good. Sorry.
[01:18:55] Speaker A: I grew up Lutheran, so I try to be very ecumenical.
So first I would say that I agree with the commentary by David Poeppel and Adolfi that Buzsáki's philosophy of science is kind of broken. He basically says that we need to free ourselves from prior assumptions, and if we can just look at the data, somehow this new taxonomy of mind will emerge. It's never clear to me from the book exactly how that happens, because all the things he's talking about, certainly memory and spatial navigation, are things that people were talking about well before anyone ever measured a brain.
[01:19:44] Speaker B: Well, he does the same thing in his talks as you did in your blog post and have mentioned in multiple of your talks, using William James's table of contents, for instance, to talk about how old these concepts are. You both have the we-need-to-revisit-these-concepts take on it, but from different philosophical vantage points, I suppose.
[01:20:06] Speaker A: Yeah. And so I agree with him on that. Right. That we need to.
I think his strategy seems to be basically, let's throw it all out and then try to build up from neural data some new functional description, quote unquote free of prior assumptions, which I think is just a broken philosophy of science. You just can't do that. If you think you're working without philosophical assumptions, then you just have implicit philosophical assumptions that you haven't examined.
[01:20:42] Speaker B: In fairness to him, he's not here to defend himself, so I'll defend him.
He was at pains when we talked before, I don't know if backtracked is the word, to explain that that is not his actual position, that he does have theoretical assumptions, that we all work from those, and he does acknowledge them. So it's somewhere in between: being able to acknowledge them and eventually throw them out, but not going just from the ground up, I suppose. But yeah, I won't defend him anymore.
[01:21:08] Speaker A: No, that's fair. I'm glad to hear that. I mean, I think fundamentally the issue I have is that I'd like somebody to show me an example of how this stuff works for something beyond the stuff that rats do. Right. Like behaving in the world on relatively simple types of tasks: spatial navigation, spatial memory, that sort of stuff. I'm interested to see how this could work for understanding, say, self control or economic decision making, or much higher level types of cognitive functions, which I think are going to be very challenging to have emerge. Even if you could study humans with all the tools you can use to study rodents, I don't think they would emerge.
But we can't, and so that makes it even harder. I agree with him that this is a framework that needs to be tested, and that's what we should go about trying to do.
[01:22:12] Speaker B: Yeah, I mean this is the big goal, it's the big dream. And this is what the cognitive ontology is all about. And this is what everyone wants, Right. To bridge the brain and mind.
Okay, well switching gears here.
So a self driving car probably doesn't need to fit into any cognitive ontology.
Or does it? I mean, does a cognitive ontology matter for building AI? And I know this is loaded, because it depends on what AI you want to build, of course, and we've already talked about the deep learning aspect of AI. But how do you see a cognitive ontology and understanding the linkage between our brains and minds: does that matter for AI?
[01:22:59] Speaker A: So I'll speak to artificial general intelligence, kind of the most expansive type of AI. Obviously, just because evolution built our mind in a particular way doesn't mean that that's the best way, or the only way, to build a system to solve the problems that we solve in the world. Right. So I don't know that you necessarily have to know anything about the ontology of the human mind in order to effectively build an AGI system. I think the place where it probably becomes really useful is thinking about what are the cognitive abilities, or the cognitive tasks, that humans can solve. Right. Because if you're going to build a self driving car, you need to basically know what are all of the things one has to be able to do, what are all the functions that a system needs in order to effectively engage in that repertoire of behaviors out in the world.
And so there, I think it's just kind of thinking through what are all the things that one needs. So obviously working memory is going to be important, because you're going to need to keep track of where all the things are around you.
Episodic memory might be important because you need to remember, hey, last time I drove through here, some kids ran out in front of me, so that might happen again. I think one can certainly get clues from a cognitive ontology, but it's more the task ontology than the function ontology that's probably important for the people doing the building.
[01:24:38] Speaker B: Do you think that, building in the task ontology, the mental function ontology would naturally occur, or would that depend on the underlying architecture of the system, for example?
[01:24:55] Speaker A: Yeah, it's an interesting question, in part because we rely on people being able to talk about things to get at some of these underlying functions. So I think one could probably infer them. It sort of gets back to our discussion earlier.
Let's take a really complex system like AlphaGo, or some complex deep reinforcement learning system.
You can almost certainly put labels on parts of that system that are functional labels, right? I don't know the model well enough to know what those would be, but you almost certainly could chop it up and say these are going to be.
This is the retina, if you will.
This is the thing that's computing the prediction error or doing the exploration or whatever those things might be.
And that might or might not be useful for the person building that system. But it is interesting that at least some of the recent work in reinforcement learning has been taking basically episodic memory and building it into these deep reinforcement learning systems. I don't know where those intuitions came from, but I think they in part came from knowledge of how human brains work, or how mammalian brains work.
[01:26:19] Speaker B: So thinking about the ontology, and how different cognitive tasks and the related mental functions overlap: you might have one cognitive task that employs four different mental functions, and one mental function that applies to 12 different cognitive tasks. And then there's that, I'll use the word, emergence: you have these higher functions, whatever the function is, we give it a name, that emerge through the interaction of these different lower, finer grained mental functions. Might it be necessary, then, for an AGI system to be put together in such a way that these lower level, more fine grained functions interact dynamically, so that a higher mental function would be an emergent property of those lower level interacting functions?
[01:27:20] Speaker A: Yeah, I think it's reasonable to think that that could be the case, and that something could be learned from computational neuroscience and psychology that would help build those things. I'm not deep in the deep learning world, so I don't know to what degree those sorts of insights have actually come to pass, but it seems like a reasonable strategy.
[01:27:40] Speaker B: In The New Mind Readers, I'm going to read a quote from you, and you already mentioned this a little bit earlier. You say: "I was in graduate school in the early 1990s and had heard lots of hype about fMRI, but it wasn't available at the University of Illinois where I was a student. When I moved to Stanford as a postdoctoral fellow in 1995, I had not initially planned to do fMRI research, but I got pulled in by the excitement of this new technique." Just as a career type question: what do you take from that, regarding being pulled in by a new exciting technique? And I'm thinking about deep learning in particular, and deep reinforcement learning, and all the hype of the "revolution," and I'm using air quotes, of AI that has recently happened. Would you be pulled in?
[01:28:34] Speaker A: It's an interesting question.
I mean, I think I'm cynical enough in general that if I were to be pulled in, I'd probably end up being one of those internal critics, kind of like I've been in the fMRI world.
So that actually wouldn't surprise me. And I think some of the really cool work that's going on right now, even though I understand it, at best, in kind of a storytelling way, is the theoretical neuroscience work that's trying to understand, from a fundamental theory standpoint, why neural networks work well or don't for particular problems. And so I could see getting pulled into asking those kinds of questions.
[01:29:16] Speaker B: Okay, yeah, I like that. Getting pulled in and just to be a nuisance.
[01:29:21] Speaker A: Exactly.
[01:29:22] Speaker B: To use the techniques and be a nuisance. Russ, have we missed anything about cognitive ontology that you wanted to touch on? Because I have other sort of general and career type questions for you.
[01:29:33] Speaker A: Yeah, no, let's move on.
[01:29:34] Speaker B: Okay. So you're productive. In fact, you've written on your blog multiple times about your productivity stack, and people are impressed that you somehow maintain a career fixing neuroscience and psychology, plus doing neuroscience and psychology. What's the secret? How do you maintain such a healthy balance in your approach while still being so productive?
[01:30:02] Speaker A: You're assuming that I actually maintain a healthy balance.
No, actually, I think I do.
[01:30:08] Speaker B: I know you eat a lot of brisket, and I know that's not healthy.
[01:30:11] Speaker A: Oh, it's very healthy, actually.
So I think the short answer is just that I really love a lot of what I do, and so I don't really consider it work. I wake up in the morning and I'm like, I really want to go do that analysis, or go read that paper. And so I do work a ton, but I think the thing that helps me maintain the balance is having pretty strict rules about not letting my academic pursuits negatively affect the other parts of my life. So, for example, I refuse to let work keep me from sleeping. If there's something that has to be done, and the only way it's going to get done is if I stay up all night, then it's just not going to get done. Right. And I also refuse to let work get in the way of exercise, or time with my wife, or practicing guitar. I feel like these are things that one has to do to remain reasonably balanced in one's life. Occasionally, especially these days where everything's happening on Zoom, I'll end up spending eight hours in a day in my chair, and I just feel trashed afterwards. So I try not to even let myself do that, though obviously sometimes one can't help it.
[01:31:28] Speaker B: Got to go on podcasts and stuff, exactly. How important is it that you have this innate drive, I don't remember the term you used to describe yourself earlier, nerd, or just computationally intrigued person, toward the analytical and computational side of things? Because there's almost an antithesis between that and just pondering the higher questions, right? Like, what is mental function in general? You seem to have this really nice balance. Do you think it's really important to pair this passion for the analytics of things with being able to ask the higher questions?
[01:32:14] Speaker A: I mean, for me, it certainly has been. It's interesting. I minored in philosophy as an undergrad, and in grad school, even though I was there for cognitive psychology, I actually spent a lot of time reading philosophy of mind and philosophy of science sorts of work. And I think it's probably obvious that I've done that, because it's kind of infected the way that I think and write. So for me, there's a lot to be said for having a mixture of those different ways of thinking about things. Obviously there's room for lots of different types of people in science. We need the people who are going to be very hard nosed, focused on a particular question, digging in as deeply as they can, building really detailed theories.
And then I think I just constitutionally couldn't do that.
I think we also need people more like me who really look very broadly, try to bring together ideas from lots of different sources.
It's been successful and fun for me.
[01:33:27] Speaker B: Yeah, whatever you're doing, it seems like it's fun and rewarding indeed. So, you have been impressively productive, and I assume everything has gone just perfectly for you throughout your career. But if it hasn't... oh, okay, good. I'm wondering, I mean, have you ever felt disillusioned, or have you ever had a major failure, for whatever reason? And if so, I'd love to hear about it and how you overcame it.
[01:33:56] Speaker A: Well, you know, yeah, I felt disillusioned at a lot of points.
I mean, in some ways I think I've, you know, because I'm a cynic, I'm kind of continually disillusioned.
[01:34:09] Speaker B: That's your baseline.
[01:34:10] Speaker A: Baseline, exactly right. Or maybe I'm just searching for disillusionment. For example, the last few years I've been really deeply disillusioned about the way that I and everyone else have done fMRI studies since I started doing it back in the 90s, both because of all the methodological issues, the analytic flexibility that basically allows pretty much any study to find a positive result, but more importantly because I think that even if we did the methods right, the strategy we've been using wouldn't be able to actually answer the questions we want to answer.
So what I've done is turn that into trying to figure out how I can actually do some work that addresses the problem. On the analytic flexibility side, I've talked for a while about the fact that there are so many different ways to analyze an fMRI data set. We've known for a long time that these can lead to different results, but we didn't really know to what degree that actually has impact in the real world. So we did a study, which came out earlier this year, that we call the NARPS study, where basically we had 70 different groups analyze a real fMRI data set, test a set of hypotheses, and tell us what they found. And we found a really kind of disconcerting amount of disagreement in what they found.
And we dug in a lot to try to figure that out. But that was really inspired by my disillusionment, and so was the work that I did for that project: I wrote most of the analysis code for the project the summer before this last one.
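The analytic flexibility point, that one data set run through several defensible pipelines can yield diverging conclusions, can be illustrated with a toy example (entirely synthetic and hypothetical, not the NARPS data or pipelines):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# One simulated "dataset": a weak effect buried in noise.
data = rng.normal(loc=0.15, scale=1.0, size=60)

# Pipeline A: plain one-sample t-test against zero.
p_a = stats.ttest_1samp(data, 0).pvalue

# Pipeline B: clip extreme values first (a defensible outlier policy),
# then run the same test.
clipped = np.clip(data, *np.percentile(data, [5, 95]))
p_b = stats.ttest_1samp(clipped, 0).pvalue

print(f"pipeline A p={p_a:.3f}  pipeline B p={p_b:.3f}")
```

Both pipelines are reasonable choices a group might make, yet they give different p-values from identical data; multiply that by dozens of preprocessing and modeling decisions and NARPS-style disagreement is unsurprising.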
[01:35:57] Speaker B: Wow.
[01:35:57] Speaker A: And it was really that experience that kind of spurred me to become much more interested in software engineering practices that I've continued with. So I think I try to overcome disillusionment through action.
[01:36:09] Speaker B: Had you not written code in a while and that's why it was a bad experience?
[01:36:12] Speaker A: No, I mean, in some ways the thing I like most about this job is the fact that I still get to write code pretty regularly. So I started coding when I was in high school. Back when you saved your program to a cassette tape.
[01:36:31] Speaker B: Oh, not even a floppy?
[01:36:33] Speaker A: No, no, no. I had a TI-99/4A and I programmed in BASIC on it. So I've been programming for a long time and really enjoyed it, but I'm totally self taught. I've taken like one CS course in my life, and so there's been a lot of learning to do around that.
What happened was, this is a project on reproducibility, and I was like, well, we kind of need to make this as reproducible as possible. And so I spent a lot of time thinking, and talking with people, trying to figure out the best way to make this thing as reproducible as possible. I still need to write something about exactly what we did; it's there implicitly in the methods section, but I haven't written anything about it.
[01:37:21] Speaker B: Well, what about otherwise? I mean, have you been disillusioned in your career? What you're talking about is almost like a scientific disillusionment. Right. But have you ever thought, oh, I shouldn't go on because the field is so rife with difficulties, or thought you didn't have the chops, or something like that? I doubt that's the case.
[01:37:44] Speaker A: I always feel like I don't have the chops.
[01:37:46] Speaker B: I think the way you approach things, it seems, is the ideal hypothesis testing approach to science: you seek to fail. That's the way you seem to approach science.
[01:37:58] Speaker A: Yeah. It's not exactly the most psychologically healthy way of dealing with life, but I think it's actually pretty effective for science.
[01:38:07] Speaker B: Seems to be. Lastly, Russ, if you were going to begin again, do you have an idea of how you would start over, if you were starting over right now, let's say thinking about going into graduate school, or early in graduate school?
[01:38:23] Speaker A: I think the main thing I would do is wear earplugs at rock shows so that I wouldn't need a hearing aid in my 50s.
[01:38:29] Speaker B: You have a hearing aid?
[01:38:30] Speaker A: Yeah.
[01:38:31] Speaker B: Oh, man.
[01:38:32] Speaker A: Yeah, I have probably pretty substantial hearing loss, I think, both from going to way too many rock shows, playing in bands in high school, and then also I grew up in Texas. We shot a lot of guns as kids, and I don't remember wearing any hearing protection when we were shooting guns. So I think all those things have kind of blown my hearing.
[01:38:51] Speaker B: I'm from Texas. Where did you grow up in Texas?
[01:38:54] Speaker A: Outside of Houston. Rosenberg.
[01:38:56] Speaker B: Okay. Yeah. I'm from the Dallas area.
[01:38:58] Speaker A: Okay.
[01:38:59] Speaker B: Wonderful areas.
[01:39:02] Speaker A: Okay, so back to your question.
[01:39:03] Speaker B: Besides wearing the ear protection, which I think is a great idea, I have a friend with tinnitus because of the same reasons.
[01:39:09] Speaker A: Yeah.
So I guess the question is, am I starting again now, or am I starting again back in the day?
[01:39:15] Speaker B: Young Russ.
[01:39:16] Speaker A: Young Russ.
I'm not sure what I would do differently, other than maybe being more disciplined about learning, particularly computational skills, because I've been really haphazard in how I've learned them.
I think so much of success is just luck, and capitalizing on luck. I was really lucky to end up at Mass General Hospital in the late 90s, really lucky to end up at Stanford at a time when fMRI was taking off, and really lucky to end up around various colleagues at the various places I've been. And I don't know that I'd want to change any of that.
[01:39:56] Speaker B: I went to undergraduate at the University of Texas at Austin. What do you miss about Austin? Anything.
[01:40:03] Speaker A: Barbecue.
[01:40:06] Speaker B: Are you making brisket these days?
[01:40:09] Speaker A: Occasionally. I don't smoke brisket very often just because it's such an ordeal. Usually you would do it low and slow, and then you're smoking for 16 hours.
But I've tried the hot and fast method and it's actually pretty good and you can get it done in a day, so that's not too bad. But I'll usually smoke either beef ribs or pork ribs or pork shoulder or things like that.
[01:40:32] Speaker B: See, I'm not sure I'm a true Texan, because I think brisket is overrated. I have had really good brisket, but most brisket I've had is not good brisket. You can have bad ribs and they're still pretty good; if you have bad brisket, it's not good.
[01:40:45] Speaker A: I would agree with that.
[01:40:47] Speaker B: So Russ, this has been very fun, very enlightening. I really appreciate it and continue the great work that you're doing.
[01:40:54] Speaker A: Thanks very much. It's been great to chat with you.
[01:41:11] Speaker B: Brain Inspired is a production of me and you. I don't do advertisements. You can support the show through Patreon for a trifling amount and get access to the full versions of all the episodes, plus bonus episodes that focus more on the cultural side but still have science. Go to braininspired.co and find the red Patreon button there. To get in touch with me, email paul@braininspired.co. The music you hear is by The New Year. Find [email protected]. Thank you for your support. See you next time.