Episode Transcript
[00:00:03] Speaker A: Initially it was regarded as just a byproduct, a psychological byproduct of explanation, in the sense of a feeling, a good feeling, or indeed the popular image of the scientist who suddenly has this light bulb above his head. Oh yeah, eureka, now I understand. And we try to develop this benchmark and look at the abilities of the system to, for instance, answer counterfactual questions, answer all kinds of questions that you can see as a measure of understanding. But I'm still hesitant to say that it's real understanding.
Metaphors are used in popular science communication, but also in scientific publications themselves, and they function somewhat differently. But metaphors are everywhere.
[00:01:03] Speaker B: This is Brain Inspired, powered by The Transmitter. Henk de Regt is a professor of philosophy of science and the director of the Institute for Science in Society at Radboud University.
Henk wrote the book on understanding, literally. He wrote what has become a classic in philosophy of science; I believe it's recognized as a classic at this point. The book is called Understanding Scientific Understanding, and it's from 2017.
Henk's account of understanding goes roughly like this.
To claim you understand something in science requires that you can produce a theory based explanation of whatever you claim to understand.
And it depends on you having the right scientific skills to be able to work productively with that theory.
For example, making qualitative predictions about it without performing calculations is one of the criteria that's often pointed to.
So understanding in this sense is contextual, and it depends on the skills of the understander. Okay, there's much more nuance to it than that, so like I said, you should read the book. But this account of understanding distinguishes it from the philosophy of explanation itself, or what counts as a good explanation. And it distinguishes it from other accounts of understanding, which take understanding to be either in your mind, something personal and subjective, that sense of, oh, it clicked in my mind, eureka, I have found it, the light bulb over your head, as you'll hear Henk say, or simply the addition of more facts. Some accounts of understanding suggest that the more facts you have about something, the more you understand it. So Henk's account of understanding is quite different from those alternative accounts. In this conversation, we revisit his work on understanding and go on to talk about how it touches many other topics, like realism, the use of metaphors, how public understanding differs, whether it differs, from expert understanding, idealization and abstraction in science, and so on.
And because Henk's kind of understanding doesn't depend on subjective awareness or on things being true, he and his cohorts have begun work on whether there could be a benchmark for degrees of understanding, to possibly assess understanding in machines, in artificial intelligence, and to use as a common benchmark for humans and machines.
[00:03:50] Speaker A: Okay.
[00:03:50] Speaker B: On a personal note, I love Henk's book and his work. It has affected my own understanding of understanding and explanation, and of philosophy of science in general. So again, I hope that you check out Henk's work, whether it's his book or some of his papers, which I link to in the show notes at braininspired.co/podcast/225. Thank you to The Transmitter for their ongoing support. Thank you to my Patreon supporters. If you want to learn more about how to support Brain Inspired and get more bells and whistles, like access to the complexity discussion groups, the full archive of episodes, and the full length of every episode I release, check out my Patreon. All right, enjoy our conversation. Here's Henk.
So you published the book in 2017, your latest book, Understanding Scientific Understanding, after having published a book before that, and after working on the philosophy of understanding for many years. And my sense is that since that time, you've basically been applying your conception of understanding broadly, to all sorts of problems and phenomena. Is that the right sense? I mean, you touch on a lot of things, but it's all sort of funneled through your perspective, your account of philosophical understanding. Do I have that right?
[00:05:17] Speaker A: That's correct. Indeed. Yeah.
Yeah, that came about, I mean, when I look back on it: after I finished the book and the book was published, for a while I thought, okay, I'm fed up with understanding. You know, I've been working on it so long.
But that didn't last long. First of all, of course, I was invited to give talks about it. And then I received the Lakatos Award for the book, which gave me even more invitations and publicity.
And so I remained interested in it, of course, and was involved in follow-up discussions, because it grew as a topic in philosophy of science. There were more books published around that time, and there were new discussions, especially, for instance, about this question of whether understanding is factive, about the relation between understanding and realism: can you only understand nature by means of theories which are true and which describe reality?
That became a big discussion after I published the book. There's only a small section in the book about it.
[00:06:40] Speaker B: You had to have expected that, right? Because once you put a wedge in, especially in philosophy, everything just falls down around it, right? And it seems to touch every potential subject. This is my one hang-up with philosophy: at some point it becomes semantics, and you can just argue till the end of time. Right. Which is one thing I like about your account of understanding: its pragmatism, which we'll get to.
But. Yeah. So. Okay, well, did you ever.
[00:07:11] Speaker A: You're right. No.
So it touches on really a lot of issues, and it's a really fundamental question that relates to the traditional questions in philosophy of science, but can also be applied more broadly to other issues related to science and to science and society, even.
And that was also the approach I wanted to take after the book.
That's what you asked about in the first place: to apply my views to new issues. And one thing I was interested in, and still am interested in, is this notion of public understanding of science. So the whole book is about how expert scientists understand nature and the world and how they communicate. But you also want to transfer understanding to the broader public.
So this idea of public understanding of science, it has the term understanding in it.
What is it and how can it be obtained? How does it work?
And, yeah, that was something I wanted to do after the publication of my book. And it was a bit of a coincidence, but in 2018, so that's already seven years ago: before that, I was in Amsterdam for a long time, at the philosophy department. But then I saw a job vacancy, an advertisement for a professorship, also in philosophy of natural science, but in the Faculty of Science here in Nijmegen, where I am now.
And that was in an institute called the Institute for Science in Society. It's part of the Faculty of Science, and it hosts philosophers of science and also sociologists and social scientists.
And so it was more geared towards studying the interaction between science and society. And that already had my interest. And, well, I applied for the job and they liked my approach and my application.
And then, yeah, I entered a different environment, where I could pay more attention to those issues as well. Yeah.
[00:09:35] Speaker B: Well, we'll talk about your work.
You actually did a study on the use of metaphors in public understanding versus expert understanding. So we'll definitely get to that, and we'll get to a lot of things.
Maybe what we should start with is just a broad overview of your account of understanding and how it fits into the philosophy of science.
Maybe broadly right now, and then we can get into details when we talk about the specific examples of your work that have come out of it.
So for a long time, the philosophy of explanation was really the main focus in that realm, right? I mean, I don't have a full history of this. You know, we all stand on the shoulders of giants. Right.
But from my perspective, it seems like you were sort of on an island, studying understanding, which goes beyond explanation. Back in the day, there were maybe very few doing that.
Were you alone?
How did that come about?
[00:10:40] Speaker A: Yeah, it's really interesting to see.
I mean, I remember I started working on that around, yeah, the turn of the millennium. So around the year 2000. That's maybe not a coincidence. No, I don't know, that's a joke. But for me personally, it was inspired by my PhD work, which I did in the 1990s, and which was not on understanding but on scientific discovery, and on what impact the philosophical views of practicing scientists had on their work. And I studied especially physicists. I'm a physicist.
I did a master's in physics and I studied like, yeah, famous physicists like Maxwell and Boltzmann and Schrodinger and Niels Bohr and so on.
And Schrodinger especially turned out to be a really exciting figure with really fascinating views: very broadly educated, interested in philosophy and all kinds of things. You should read his biography. It's really interesting.
[00:11:52] Speaker B: His biography. Okay. I just recorded an episode with Dan Nicholson; he's just released this short book revisiting Schrodinger's What Is Life?
He has a processual perspective on science, and his whole thing is that he brings it all back to Schrodinger: the reason why modern biologists and neuroscientists and all of us have this definitive mechanistic view of explanation, of what counts as a good explanation. And he blames Schrodinger, essentially. And it's great. I recommend the book to you. If you're interested in Schrodinger, you should go read Dan's book. I'll send it to you after this.
[00:12:31] Speaker A: Please do.
[00:12:31] Speaker B: So you can check it out and see. But that was all about What Is Life?, and you're probably more interested in the quantum physics.
[00:12:36] Speaker A: No, that was later, of course; he wrote that in the 1940s. I studied his work in the 1920s, when he was dissatisfied with quantum theory and came up with his alternative, wave mechanics. And he really emphasized understanding and the intelligibility of theories. He thought quantum theory is not intelligible. And he believed that a theory should be visualizable, that only if you can have a picture in mind, a space-time picture of what an atom looks like, what its structure is, can you really have understanding. And that it is the purpose of science, and of physics, to provide such understanding. And that was difficult, of course.
And I read about his discussions with Pauli and Heisenberg and Bohr, and I used it for my PhD thesis, just to see what the different philosophical assumptions and ideas were behind that. But then, after I finished my thesis, it was actually one of my supervisors who said, well, maybe for a follow-up project, a postdoctoral project, you should focus on Schrodinger and on this issue of intelligibility. And that's when it started. So it was really something that came about by coincidence, that grew out of my own work there. And then, indeed, in the years after that, I started trying to develop this more general account of understanding.
In the beginning, yeah, I did feel like I was the only one. It was really hard to convince other people, for instance reviewers of papers, that this was an interesting subject. So it seemed that philosophy of science was not ready for it yet.
[00:14:32] Speaker B: Well, at the time there were all sorts of arguments and debates in terms of the philosophy of explanation.
[00:14:40] Speaker A: Right.
[00:14:40] Speaker B: And I'm not sure if uninteresting is the right word, which is what you're saying, that people found it uninteresting. But traditionally, right, you can kind of divide accounts of understanding into three camps. One is the subjective sense, the feeling of understanding, which I would imagine is sort of the layperson's view of what understanding is. Then there's understanding as factual knowledge: you just add more knowledge, and that means you have more understanding. And then there's yours, which we should explain relative to those. But what I meant was, do you think people were just hung up on the idea that you can't do this with understanding because it's just a sense, the feeling that we have.
[00:15:28] Speaker A: Right.
[00:15:29] Speaker B: The eureka moment. Ah, I understand, you know, or was was it that kind of pushback or.
[00:15:34] Speaker A: Exactly, yes, totally. Right. And actually the division you make is also the way I now sometimes present it in lectures or when I teach about it, that you have these two ideas. Either it's just a feeling or it's just knowledge.
But what I'm trying to get across, and try to argue for, is that it's more than that. And I developed that in terms of, for instance, skills. But initially, by the traditional philosophers of science who were working on explanation, it was regarded as just a byproduct, a psychological byproduct of explanation, in the sense of a feeling, a good feeling, or indeed that popular image of the scientist who suddenly has a light bulb above his head. Oh, yeah, eureka, now I understand. Yes, right.
[00:16:27] Speaker B: These people like Hempel.
[00:16:29] Speaker A: Exactly. Yeah.
[00:16:30] Speaker B: Okay.
[00:16:31] Speaker A: So Hempel was, of course, the one who started the debate on explanation, and he had this influential covering-law account, which gives a logical analysis of the structure of explanation. And that was, yeah, dominant, or even universally accepted, in the 1950s and 60s. And then some criticism of it was offered, and then you had alternatives, causal accounts of explanation particularly.
But all these philosophers still followed Hempel in saying, okay, these explanations maybe give us understanding, but that is more the subjective feeling. It's the outcome, and it's not relevant to them; not relevant for the philosophical analysis, and also not for the justification of it.
And so.
[00:17:24] Speaker B: Oh, right. Because it's an outcome.
Who are these gatekeepers? You gotta name names. Like, who was preventing you from.
[00:17:33] Speaker A: Ah, me. Well, I don't know. Anonymous reviewers.
[00:17:37] Speaker B: Yeah.
[00:17:38] Speaker A: Yeah. Okay. What do you mean?
[00:17:40] Speaker B: Yeah, that's what I mean. Like, you know, I mean, when I mentioned Hempel, I imagined Hempel writing to the editor.
This is unpublishable, you know, there's not enough merit here.
[00:17:51] Speaker A: Hempel died in 1997, I think. So maybe it was indeed after that. No. Yeah, I don't know.
I remember, in the early phase, when I was maybe not well prepared, or maybe my views were not so well developed, that I also sometimes got very aggressive reactions when giving talks. Especially people who were really focused on the logical approach found this kind of thing too vague, not something that you could analyze in a purely logical way.
[00:18:34] Speaker B: I'm sorry to interrupt. Yeah, I was going to say: and yet, pragmatism. I mean, there's a rich tradition of pragmatic approaches in philosophy. So I assume that maybe you weren't rooting it in that tradition early on, maybe because you were still green or something. But pragmatism is coming back, I feel like, thanks to people like you, actually, I would imagine.
[00:18:58] Speaker A: Yeah, right.
That's maybe too much honor. But yeah, in this discussion, yeah, I mean, it's coming back, and it's coming back on many fronts. And there are, of course, people like Philip Kitcher, who has done a lot in that respect, but also, more closely to my work, Hasok Chang in Cambridge.
So it's all over the place now, perhaps. Yeah. But what I also don't want to forget to mention is that, with hindsight: 25 years ago, I felt indeed like I was exploring something that was by many people, or by most people, regarded as a dead end, or not interesting. But with hindsight, it turns out that I was not the only one. There were more people; it was in the air or something like that, in philosophy of science, but also especially in epistemology.
I wasn't so aware of what happened in epistemology at the time, but later the fields came closer together, and you had people like Catherine Elgin from Harvard, who tried to bridge the divide between epistemology and philosophy of science. And it turns out that around that time, epistemologists were also discussing understanding, or at least paying attention to it.
[00:20:30] Speaker B: So then what is your account of understanding? I know that you're probably sick and tired of describing it, but I set you up a little bit by talking about the historical perspectives: the sense that you understand something, the subjective feeling. It's not that. In a sense, your account of understanding is borderline behaviorism, and we'll get to that too when we talk about how artificial agents can understand within your account. So maybe, based on what I just said, describe your account. Is it behaviorism all the way? And maybe answer what understanding is, and scientific understanding, I guess in this case, with respect to that.
[00:21:20] Speaker A: Yeah, so it's indeed, it's restricted. I restrict myself to scientific understanding.
That's important also.
[00:21:29] Speaker B: Why is that important?
[00:21:31] Speaker A: Well, because you have all kinds of understanding.
I mean, because in my account, for instance, it's based on the idea that you have phenomena that you want to explain, and you need theories to explain them. And I'm not going to claim that this is how understanding in general works.
It's good that I already mentioned that quantum theory debate in the 1920s, where I got my inspiration from, because that's also where I started to develop this idea.
In fact, I was even inspired by a kind of definition of understanding given by Heisenberg in his paper on the uncertainty relations. And that later became my criterion for the intelligibility of theories, which is kind of a test to see how, or whether, a theory is intelligible to scientists.
And so the basic idea is that if you want to understand a phenomenon, as all scientists want to, you construct a model that accounts for the phenomenon. The model is something in between the theory and the phenomenon: it gives an idealized picture of the phenomenon in terms of the theory, or one to which you can apply the theory. This is, by the way, an idea that I borrowed from philosophers like Nancy Cartwright and Mary Morgan. So it's about modeling, not about understanding as such. But creating such a model provides understanding. And my claim, my thesis, is that in order to create such a model, so to obtain or produce understanding of the phenomenon, you have to have an intelligible theory. So there's always a theory in the background; a theory has to be there to be applied, via the model, to the phenomenon. And that theory has to be intelligible to the scientists. That sounds like kind of a trivial thing, but in this debate between Schrodinger and Heisenberg, this was already the bone of contention: is quantum mechanics, as a theory, intelligible or not? So Schrodinger says it can't be, because.
[00:23:45] Speaker B: It's not visualizable, because it's all matrices. Is that correct?
[00:23:49] Speaker A: Because it's all matrices, exactly, yes. So the original theory by Heisenberg and others was indeed purely mathematical, very abstract, all matrices, et cetera. And Schrodinger had this philosophical idea that we necessarily have to visualize in order to understand. Even Immanuel Kant could have said something like that: visualization is our form of understanding.
And I turned this into a more pragmatic idea: visualization is a tool for understanding.
If a theory is visualizable, then for many people that makes it easier to use.
Visualization is a tool to apply the theory to models, to use it, and to develop new ideas. And so this idea of intelligibility as something that has to do with the use of a theory became the core of my account. That's also why it's a pragmatic account, and immediately, I think, a contextual account, because it depends on the context.
Not for every scientist, for every human being, do the same tools work well. So it depends on what skills you have, what background knowledge, et cetera.
So these are the core ideas of my account of understanding.
And I developed it also with a historical approach. I arrived at it by looking at the history of science, especially physics, and at the changes over its course in which theories were regarded as intelligible and which ones as unintelligible.
My PhD thesis was also in history and philosophy of science, so this historical approach has always attracted me. And I think you can learn a lot about science by looking at its history, especially because you see these changes, this contextual variation, and that is what I hoped to capture in my criteria, in my general account. And there were tensions almost immediately, because on the one hand I hoped to formulate a very general, universal view; on the other hand, I wanted to acknowledge that there is this variation, and that it differs across disciplines and across historical periods.
[00:26:43] Speaker B: That variation is part of the universal view.
[00:26:45] Speaker A: Exactly. Yeah. So that's possible. Right.
[00:26:49] Speaker B: So when you're going back into the historical accounts, you have to infer what was intelligible. I guess my question is: you have to read between the lines, about how people were writing about theories and phenomena and explanations, to decide. How did you decide what counted, what people thought was intelligible and what was not intelligible, and to whom? Based on the skills of a scientist writing about a certain theory, it may or may not seem that they found it intelligible. How do you figure that out?
[00:27:23] Speaker A: When looking at actual historical cases, you mean?
Yeah, no, that's not easy, of course. The first problem is, well, you start by looking at discussions like the one between Schrodinger and Heisenberg. And luckily, in the past, scientists were more inclined to include philosophical reflections, or even personal remarks, in their papers. For instance, there's this footnote in Schrodinger's paper where he really complains, or even accuses Heisenberg of coming up with a very unintelligible theory.
That's nice.
For instance, I have a chapter in my book on Newton and Huygens and gravitational theory, and the debates about that. There you also have letters and correspondence, and there's more you can see there.
[00:28:25] Speaker B: So it's literally people saying, like, I can't make heads or tails of what you're saying.
[00:28:29] Speaker A: Exactly. Okay, yes, that happens, of course. But you have to be careful from the start.
For instance, Huygens wrote in French and Schrodinger in German.
And when they use certain terms, you have to be careful: what do they mean by them? But the second, maybe more important, problem, and maybe you refer to that because you mentioned skills: you can read what they say about their own or other people's theories, but claims about whether they were skilled.
What were Newton's skills? That's a bit more difficult. And I must confess, yeah, I think that's really hard to assess, and it's sometimes impossible to prove that someone did or did not have skills.
[00:29:27] Speaker B: But if you're Newton, you have to have a certain level of skill to.
Well, at least mathematical skill, you know, to write the Principia. Right.
I mean, well, yeah.
[00:29:39] Speaker A: Anyway, now that's an interesting case, Newton versus Huygens. Newton, of course, had all the skills to develop his theory, and he published the Principia in 1687. Huygens was a bit older; he was unhappy with it, and he criticized Newton's theory of gravitation. But of course he also had the skills: he could understand, he understood Newton's work mathematically. So his objections were of a different kind.
Huygens made remarks about the unintelligibility of Newton's theory. I don't know exactly what terms he used in French, but it was something like: it's impossible to comprehend.
And it's clear that Huygens had something else in mind. It's not that he couldn't use the theory; he just couldn't accept that nature was like that. It was more a metaphysical unintelligibility. And in that chapter, actually, this case inspired me to make a distinction between metaphysical intelligibility and scientific intelligibility.
But I argue that the two can interact, and there can be overlap between them. Huygens was very metaphysically conservative. He was a Cartesian and couldn't give up the Cartesian world picture of the mechanical universe, and gravitational forces didn't fit into that picture.
So you could say that was pure dogmatism on Huygens's part. But his metaphysical convictions also had a productive side: he used Cartesian metaphysics in his own work as a tool for understanding. He used it productively, coming up with new theories, for instance the wave theory of light, really important achievements. So there's a kind of interaction between metaphysics and science in that way.
[00:31:57] Speaker B: Well, so it's often people's metaphysical hang-ups, their positions, that lead to their criticism of other people's theories. And while you were talking, I thought: criticism is way easier than creation. You'll agree it's way easier. But often the person being criticized has one defense they can use to come to their rescue: well, the critic doesn't understand my theory, so they're not even criticizing it on the right grounds.
So what is the distinction? What role does criticism play? What is my question? I'm trying to think of how criticism fits into understanding, right? Because in science you have these peer reviews, right? And so you're at the mercy of peer reviewers. And if they don't understand your theory, if it's not intelligible to them, or if they have a metaphysical hang-up, you know: well, this can't be, because I have a mechanistic understanding and they're not even talking about mechanisms, so therefore I object on ontological, metaphysical grounds. Right? So that would be a metaphysical intelligibility problem, from your perspective. But they might not understand the details of the theory as well.
So anyway, the thought occurred to me, man, criticism is easy and generation is hard. This thought recurs to me all the time.
[00:33:26] Speaker A: Yeah, but I think your point about criticism is important, and it's interesting to relate it to this.
I don't think I have explicitly discussed that somewhere, although I'm reminded of one paper, and I think you have seen it, because one of your questions was about that paper on perspectivism and the different perspectives in neuroscience. And that's actually not my own work, because I had a PhD student, Linda Holland, who did that work. But we wrote those papers together, and also with the neuroscientist Benjamin Drukarch.
And there, in this case, there is the dominant electrical bioelectric paradigm with the Hodgkin Huxley potential, or.
[00:34:20] Speaker B: Yeah, this is the Hodgkin Huxley account of action potential.
[00:34:23] Speaker A: Action potential, exactly. Yeah. And then there are these phenomena, like thermal expansion, so thermal and mechanical phenomena, that are unexplainable. And then, more recently, there's an alternative thermodynamic paradigm, or perspective.
And here the point of Linda's paper is that if you look at these different perspectives through the lens of understanding, not through the lens of truth and realism, like what is the truth, but as different perspectives that can advance our explanatory understanding of the whole system, of this domain, then you see that the people who adhere to the thermodynamic perspective were able to criticize certain assumptions they found in the traditional approach.
It had something to do with, I'm not into the details, I.
[00:35:29] Speaker B: Can tell you, because I'm fresh off reading it. The assumption that Hodgkin and Huxley have to make, in their account of the ionic conductances being responsible for the action potential, is that the membrane has a constant capacitance.
[00:35:41] Speaker A: Exactly.
[00:35:42] Speaker B: And the people who come from the thermodynamic statistical mechanics perspective say, well, that assumption. Most people don't even think about it, but it's still an assumption. And if you look at the thermodyn properties of the membrane, especially for nerve signals propagating along the membrane, you can account for it. The capacitance. We don't know that the capacitance is constant. And there are properties of the membrane that it swells and stuff with ionic flow that it's likely that the capacitance does change. And so that's like a direct criticism of the Hodgkin Huxley account. I can't believe I just rattled that off. I mean, it's. I'm driving right at once. But that was, like, interesting to me because, like, oh, my God, like Hodgkin and Huxley is like the basis. You know, it's like the first thing you learn in neuroscience is, oh, look, they figured it out. This is the example, the quintessential example of good neuroscience. And then I. Along comes your paper, and I didn't know about the thermodynamic account of nerve, you know, signal propagation. And I'm like, oh, my God, everything is up for grabs still. Nothing is safe. So anyway, nothing is safe.
[00:36:52] Speaker A: Well, that's science.
Science, yeah. So that's always about criticism.
[00:36:59] Speaker B: Yeah, yeah.
[00:36:59] Speaker A: And another thought I had when you brought up that idea of criticism is that, on the one hand, criticizing something purely for being at odds with your accepted metaphysics is, in a way, dogmatic. Right? You just don't want to.
Say you are a determinist, and quantum physics.
[00:37:26] Speaker B: You don't even want to.
[00:37:27] Speaker A: I don't even want to, yeah. And in the case of gravitation, that's what Huygens did, unfortunately. Well, he was already somewhat older, so maybe we can forgive him, but that's not a productive way of criticizing. On the other hand, these debates about understanding and intelligibility from different perspectives can give rise to productive criticisms, like in that case of the action potential, and more generally in other cases.
I am now thinking of a passage by Ernst Mach. I think I also discuss it somewhere in my book. Ernst Mach was the 19th-century positivist philosopher of science.
I think it was also in relation to that history of gravitation episode.
And he argued, because he was an extreme positivist, he just believed in the observable facts and the phenomena, and the rest was just theories, just instruments. And we don't have to worry about whether they are true or not true, et cetera. It's just about measuring and predicting, not even about explaining. Science is not about explaining things, but only about predicting them and about measuring the facts.
And he has a book where he discusses this history of Newton and the debates about intelligibility. And then he says something like: it's all nonsense and we should just get rid of it. Because if something is unintelligible at some time, like action-at-a-distance theories of gravitation were in Newton's time, people will get used to it, and at some point they will accept it, and then they will regard it as intelligible. It's just a matter of getting used to it, and there's nothing more to it. So we should not use intelligibility as an argument for or against a theory.
And on the one hand, I agree with that, in the sense that there is this change in history. We see indeed that action-at-a-distance theories of gravitation were not accepted in the 17th century, and then after a while they became accepted.
But there's also a more productive, positive aspect of intelligibility, namely that by discussing it, you can criticize each other's theories and you can also be creative.
So the metaphysics of Huygens had a negative part but also a positive part. It can also inspire you.
So views on intelligibility are not just a passive reaction, like the feeling of understanding that you get afterwards. It's also something positive that guides scientific research.
[00:40:45] Speaker B: Yeah, it's almost the familiarity account. Like, well, you'll get used to it, and then it will feel familiar, and then you kind of come around.
[00:40:54] Speaker A: That's.
[00:40:57] Speaker B: Yeah, that's kind of. It's almost like a negative way to look at it.
[00:41:00] Speaker A: Sure, yeah.
[00:41:01] Speaker B: On the other hand, I was going to ask you about this later and I'll ask you now. I always have this in the back of my mind: there are new concepts that come up that you have to learn.
[00:41:13] Speaker A: Right.
[00:41:14] Speaker B: And they feel very unfamiliar. You're very uncomfortable with them, they feel very unintuitive.
And then over time, if I read the same term in a paper, let's say, I'm trying to think of an example in neuroscience, we could use manifold or something, right?
Like a term. And the first time you read it, you think, I don't get that. And sheerly by exposure to the term over and over, you come to feel more familiar with it.
But in a sense, you may be tricking yourself into thinking you understand it, when in actuality you're just used to reading it. And you kind of develop this picture over time, but there's no way to know.
Eventually you question, do I really understand that concept? And then you realize, oh no, maybe I don't understand it the way I should. So there's that phenomenon of familiarity, which bothers me.
[00:42:10] Speaker A: Yeah, that's true.
But I would say understanding has to do with using.
So understanding a concept has to do with using the concept. And if it's indeed the case that you just read text and you encounter the term all the time, and maybe it was explained in the beginning of the book, and then at some point you think, oh yes, I have seen that many times, I understand, I can visualize it, but if you can't do anything with it, then you still don't understand.
But usually, if it's a good book or a good lecture, or however you are exposed to the term over and over again, you also develop something that you can do with it in your mind. Right. To argue with it.
And it's sometimes very difficult to make explicit what that is, exactly.
I'm now reminded, when you mention this example, I'm thinking of myself as being educated as a physicist. It's a long time since I really read real physics or history of physics, but I still have this feeling, when I hear a lecture or hear something, that I have a basic idea of what it's about and about the basic concepts, however difficult modern physics is nowadays. But when I hear about biology or neuroscience, it's a different way of thinking, with concepts that I maybe wasn't exposed to over and over again.
I remember pathway. There was a time when I was interacting with people or reading things, and biologists were always talking about pathways, but I never really got the idea of what a pathway was. Or manifolds. Manifold is also something in mathematics, of course.
[00:44:07] Speaker B: But it's probably related, from dynamical systems theory. But, you know, there's in philosophy the term representation, right?
I mean, that's a big one. Let's not, because we could spend apparently a thousand years talking about representations. But, you know, people understand representations differently. But as long as they can use the term within what they're doing, within their research program, to help them get a foothold on what they're doing, that's a sense of understanding, even if it's from a different perspective. Is that one way to think of it? Yeah, thinking about that phenomenon, the familiarity. I mean, in a sense, there probably is, reading the same term over and over, you are gaining some bit of understanding. Which reminds me that in your work you discuss how understanding is not an all-or-none phenomenon, but is actually on a gradient. Right. You can have partial levels of understanding. And I want to make sure that we discuss your work on developing a test for assessing understanding in artificial agents.
So I want to make sure that we talk about that before we move on to other topics perhaps.
So one of the reasons. So earlier I used the term behaviorist, right. In some sense, your account of understanding is behaviorist in the sense that there's no mental representation necessary for understanding.
Your account of understanding doesn't care about the implementation or the mechanics internal to whatever is doing the understanding. So in that sense, there are no qualms with declaring that some artificial agent understands something, as long as it has that pragmatic aspect of demonstrating the skills necessary to use the information it is being assessed on in some way.
[00:46:19] Speaker A: Right.
[00:46:20] Speaker B: And one of the things that you've been working on recently is developing kind of an operational test for assessing understanding in artificial agents. But this is also to sort of bring it in line with understanding in humans. Right. So this same sort of test should be able to be used in humans and in AI, partially also just to develop benchmarks of AI understanding.
So tell me a little bit more about that. Obviously, AI has exploded over the course of your career. So is that what led you to this work? And then tell me a little bit about what you're doing here.
[00:47:04] Speaker A: Thanks. That's indeed an important part of what I'm doing in my research right now.
I don't have as much time for research as I used to have, because of my administrative responsibilities, teaching, and so on. But I'm doing this, and I must immediately say that I do it in collaboration with others. A couple of years ago, here in the faculty of science, they wanted to stimulate interdisciplinary collaboration, interdisciplinary research.
And that's when I started collaboration with a physicist and a computer scientist.
And we got some kind of seed money for starting a project on understanding in machine learning and AI systems. And that's where this started. We hired a postdoctoral researcher, Christian Barmo. He's the first author of most of our, I think all of our, publications. We have three publications right now.
And so he did a lot of work in this.
[00:48:09] Speaker B: Is he a philosopher or scientist or.
[00:48:11] Speaker A: He is a philosopher. He's now moved to Ghent. He's in the philosophy department as a philosopher of science.
But he wrote a thesis on mechanistic explanation, actually.
[00:48:25] Speaker B: How many theses have been written about mechanistic explanation? Yeah, yeah.
[00:48:31] Speaker A: But he was also knowledgeable about AI and interested in it. And so we hired him for this project and that's now taking up a big part of his research. And he's still collaborating with us.
And the interesting part, also the good part, but sometimes also the difficult part, of doing this collaborative research is, of course, that there are different perspectives and maybe also different views. Especially in philosophy, you can disagree easily.
And so it was, yeah, it was a challenge, but we had some nice results.
Before I started this project, because you asked me how I started this:
I was already sometimes asked, when giving a talk or a lecture on my book or my work on understanding, it was before the AI hype, of course, but sometimes people in the audience asked, yeah, but what about computers? Scientists use computers, and what about.
Suppose that the computer generates an explanation, does it give us understanding?
That was a question. It was comparable to the question of, hey, you have these computer-generated proofs in mathematics that we as humans cannot oversee if they are so complicated, and some kind of conjecture is proved, but no individual mathematician can ever oversee the whole proof. So is it still giving us understanding of why this conjecture is true or false?
[00:50:17] Speaker B: And that's different from the Turing test, which was supposed to settle not whether computers can think, but whether computers can do things that satisfy some criteria to which we ascribe thinking. Right. But, yeah, so in a sense, that "what about machines?" question has been in the air since Turing. So of course you were getting questions before.
[00:50:37] Speaker A: Yeah, of course.
Well, indeed.
So John Searle, he died two months ago or so.
The philosopher who invented the Chinese room thought experiment.
It was a coincidence, but I encountered a small book of lectures by him just a couple of weeks ago, from 1984, so 40 years old. And he discussed there also minds and machines, and can machines think, and explained the Chinese room argument. I read those lectures for the first time.
So it was more than 40 years ago, and we're still wrestling with this same question. And he already said, yes, there will surely be many advances in computer science and AI. The term artificial intelligence was of course invented already in the 50s.
[00:51:43] Speaker B: He was talking about what was, like, symbolic AI. That's what he was referring to.
[00:51:46] Speaker A: Of course, that's different. Yeah. But he said that there will be great advances, probably, but this bridge between the syntactic and the semantic and the understanding will never be there, because we can never bridge that gap, you know.
[00:52:04] Speaker B: Oh, because you have to have meaning to have understanding.
[00:52:07] Speaker A: Yeah, yeah, that's the point.
And that is actually still an issue, I think, also in the current debates about large language models. Do they really understand, or are they just parroting and just predicting the next word? And I must say, I myself am hesitant about it. I'm not totally sure.
So it's true that in these papers and in this project with the benchmark paper, we take this behaviorist approach and we try to develop this benchmark and look at just the abilities of the system to, for instance, answer counterfactual questions, answer all kinds of questions that you can see as a measure of understanding.
But I'm still hesitant to say that it's real understanding. And in that sense, yeah. And you just called my approach behaviorist.
[00:53:20] Speaker B: Well, yeah, it can be construed as behaviorist. It has similarities to behaviorism.
[00:53:25] Speaker A: Exactly. That's true. But I do, of course, also include this idea that the scientist or the agent needs to have an intelligible theory for their understanding.
And the intelligible theory: when I wrote the book and when I did the research, I wasn't thinking of AI at all. I was just thinking of human scientists, and especially looking at the history, seeing scientists not as some kind of rational systems, but as historical actors in a historical context, human beings with all their cognitive limitations and so on.
And in the background of the whole project was also, in a tradition, of course, of historians and maybe sociologists of science, showing that science is a human enterprise, and that in some sense it's only human.
This would then maybe almost by definition preclude artificial understanding. Because you would say, okay, it's just humans, who have brains, who have minds, who maybe have mental representations; theories can be construed as mental representations, of course, not necessarily, perhaps; only they can have real understanding. Yeah, I don't know, maybe I was naive, or I didn't think of behaviorism or of being in one or another camp with respect to philosophy of mind or cognitive science. So when we started thinking about artificial understanding, about the question whether AI systems can also have understanding, we tried out this approach of just measuring understanding, by means of these different levels corresponding to the different types of questions that the system can answer. We called it explicitly behavioral in the paper, and just said, okay, if this is how you can measure understanding, then why distinguish between a computer and a human being who answers questions?
Well, John Searle would not have approved of it, I guess. But that's how we did it. I'm part of the team. But personally, I'm not sure it's totally in line with my own account.
We give up something that is quite crucial in my own account, namely this idea that you need theories.
And that's also something that you can, of course, do further research on, and that philosophers of AI are doing research on. I'm actually also collaborating with a German philosopher of science, Florian Boger, who's doing a lot of work on AI. He has a big project on that, with PhDs and postdocs, and he has written about whether AI systems, whether deep neural networks, actually represent, whether you can see whether they have concepts. For instance, he says, well, maybe they don't have concepts, but they do have something.
You can ascribe something to them which he calls functional concept proxies: something that is a proxy for a concept and functions as a concept.
[00:57:23] Speaker B: And you think that. So the question is whether that could also be true for theories, for having an intelligible theory.
[00:57:32] Speaker A: Yeah, so I mentioned this because that would be, of course, further research on whether indeed we can locate or we can argue that AI systems also have theories in some way.
[00:57:49] Speaker B: So you might not have to give that up.
[00:57:51] Speaker A: Maybe not in the end. But what do you think? I mean, you're the cognitive neuroscientist. This analogy between the brain and these networks.
Well, some people in my group, like Christian, for example, I think are more outspoken on this, and they simply compare the brain to these networks.
Of course, the whole idea of neural networks and deep neural networks started out with a comparison with the brain and the idea that the brain also functions in this way. So if human brains can understand and can represent, can have theories, then maybe the deep neural networks in AI can have them too. We don't know where these theories are located in AI systems.
And we don't know where these theories are located in human brains either.
Now, I'm an amateur here. I mean, I don't know a lot about that.
[00:58:46] Speaker B: Well, I mean, this is a philosophical problem, right? So to say whether I, an organic being, have a theory that's intelligible to me. I mean, then you have to examine what that even means. Right? So it's odd to say a system has a theory that is intelligible.
And this is where, in some sense, your account of understanding is somewhat deflationary, if I'm using that term correctly.
So I think even I am still hung up on the idea that there is some subjective aspect to understanding, in the sense that, well, if I'm going to assess whether I, quote unquote, have a theory that's intelligible to me, well, that's like a judgment call on my part. And in that sense, it's kind of subjective.
[00:59:44] Speaker A: Right.
[00:59:46] Speaker B: And so I think there's that mental aspect of it that I can't somehow let go of, that I then don't impute into machines or whatever.
So it's kind of a loftier notion of understanding if it's an organic being. But if you go with the deflationary account of it, I'm fine with ascribing understanding to a machine. It's just less special. It means a different thing than my concept of understanding, if that makes sense.
[01:00:21] Speaker A: Yeah, no, I can see that that's a way to go.
[01:00:28] Speaker B: But, you know, so, for example, David Chalmers' zombies: they could have understanding, because it doesn't depend on consciousness, doesn't depend on subjectivity. Right.
And yet I think that if we did a survey: would a David Chalmers zombie, after we explained what that was, do they actually understand? Well, if they're like a real zombie coming to eat you, do they understand that they need the nutrition or whatever? Or are they just acting like a machine?
[01:01:03] Speaker A: Right, yeah.
[01:01:05] Speaker B: And the fact that we can even say that, are they acting like a machine, operating without understanding, betrays our common-sense linguistic use of what understanding is.
But I'm surprised that you mentioned you were hesitant to go down that road and ascribe understanding to machines. Because in your account it's possible.
It just means that understanding takes on a different sheen, I think.
[01:01:33] Speaker A: Yeah, yeah. So I think the hesitancy also comes from what I explained earlier.
When working on understanding, on scientific understanding, writing the book and the papers, doing the research, I really had this idea in mind that it's human beings who practice science. And when you want to understand what science is and what scientific understanding is, you have to look at the practice.
You shouldn't stay in your armchair and try to rationalize and construct some kind of system.
Maybe that is formally correct, but it has nothing to do with reality. And real science is about human beings being in labs, doing experiments, being behind the desk, doing calculations and thinking and being creative, et cetera. And that is some kind of.
[01:02:38] Speaker B: Yeah, it's active.
[01:02:40] Speaker A: Yeah, it's active. And in that sense, maybe I still am hesitant indeed, and have a bit of a difficulty with saying, okay, it could also be replaced by computers.
[01:02:55] Speaker B: Well, what are some other accounts of machine understanding then? How does yours differ? So I'll say just explicitly: there are three kinds of questions that are used to assess the understanding of machines. One is called what-questions, and these assess whether the machine has access to the right information that would even ground an explanation.
Right.
The second is why-questions: can the machine, the artificial agent, build explanations of the phenomenon of interest? And then the third is the counterfactual, à la Woodward, right, in causal explanations: what if it were different?
Can it qualitatively explain how things would be different, what would happen with regard to this phenomenon if things were different in this particular way? So those are the three kinds of questions used to assess understanding in machines. And of course, then you have to judge the answers to those questions, which is probably a whole other issue, right?
Because they're not mathematical. The answer is not 42 to any of those questions.
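(Aside: the three question types described here could be sketched as a tiny data structure. Everything in this sketch, the class names, fields, and averaging rule, is an illustrative assumption for the sake of the example, not the group's actual benchmark format.)

```python
# Illustrative sketch only: hypothetical names, not a published API.
from dataclasses import dataclass, field
from enum import Enum

class QuestionType(Enum):
    WHAT = "what"        # does the agent have the relevant information?
    WHY = "why"          # can it construct an explanation of the phenomenon?
    WHAT_IF = "what_if"  # counterfactual, a la Woodward: what if things were different?

@dataclass
class BenchmarkItem:
    phenomenon: str
    question: str
    qtype: QuestionType
    # Open-ended answers have no single right answer, so human experts
    # assign judgments (here assumed to be scores in [0, 1]).
    expert_scores: list = field(default_factory=list)

    def score(self) -> float:
        """Mean of expert judgments; 0.0 if no judgments yet."""
        if not self.expert_scores:
            return 0.0
        return sum(self.expert_scores) / len(self.expert_scores)

item = BenchmarkItem(
    phenomenon="nerve signal propagation",
    question="How would the signal change if membrane capacitance were not constant?",
    qtype=QuestionType.WHAT_IF,
    expert_scores=[0.8, 0.6],
)
print(round(item.score(), 2))  # prints 0.7
```

The point the sketch makes is the one raised in the conversation: unlike a multiple-choice accuracy benchmark, the score here is only as good as the expert judgments feeding it.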
[01:04:14] Speaker A: Yeah, so this is just in the data framework. Yeah, yeah.
[01:04:17] Speaker B: And so, are you going up against others? Because people are interested in the capacities and abilities of machines, and have been especially since the AI boom or whatever.
So when you're writing about these things, I know you're referencing other accounts of understanding. I mean, you even referenced Turing a little bit.
But then are there other accounts that you disagree with, or have thoughts about, relative to the way that you're approaching this?
[01:04:50] Speaker A: First of all, when it comes to the basic starting point and philosophical assumptions, I think there are many philosophers who would just disagree, because they argue that you cannot ascribe understanding to a machine. Right.
[01:05:10] Speaker B: So the metaphysical unintelligibility disagreement.
[01:05:13] Speaker A: So in that sense, we already take, I think, a somewhat eccentric position.
And the idea of developing benchmarks: there's a whole, well, not a tradition, but there are already many benchmarks for linguistic understanding of machines and all kinds of understanding. So this is just to add to it, to see whether we can also develop a benchmark specifically for scientific understanding, and then in particular for physics, because that is just the area that we as a group are familiar with and want to focus on.
And in kind of a follow-up paper, which is not yet published but is on the arXiv, which we wrote with a couple of master's students working on this,
we build upon the idea in the paper that was published in Minds and Machines, and we go into more detail in articulating and classifying the kinds of questions that could be asked. And we have a website, and we invite.
So the idea is that experts judge the answers to the questions. Right. So we invite people to submit questions and to come up with answers, and to build a kind of collection of question-answer pairs that then constitutes the benchmark.
And some questions are indeed the easy case. As you say, a calculation that just gives you a number, or a multiple-choice question with just one right answer, that's easy to use, of course. But it should also allow for more open questions, or other tasks that the system should be able to answer.
And yeah, that's more difficult.
And so we're in the process of building that actually.
[01:07:32] Speaker B: Yeah, I mean, classically in machine learning, benchmarks rule the land. But they're always percentage accuracy, right? It's always an accuracy measure where there is a clean, crisp right-or-wrong answer, like a multiple choice or whatever.
And then there's the problem of Goodhart's law, where as soon as you create a target, the benchmark becomes obsolete. It becomes a bad metric for the thing you're assessing, because you make a target and then all efforts are focused on passing the test, and so it becomes a bad metric of what you're actually assessing.
So you have to think about those issues as well. But so, you're trying to operationalize this sort of understanding, which you probably never thought about before, like, how do I quantify understanding in this way? Because, at least in your book, you don't really talk about quantifying understanding, I don't believe. It's really just the conceptual bedrock of the concept of understanding.
[01:08:37] Speaker A: Right, yeah. So that's going back to where it started, to my book.
You mentioned earlier that I do acknowledge that understanding comes in degrees, and that in that sense you can measure it.
But initially, when I started working on it, and I think this is reflected in the book, I wasn't really elaborating on that, and I actually talked about understanding in a way where either you have it or you don't.
In the process, it dawned upon me, it became obvious to me, that of course there are degrees of understanding, also if you look at the history of science. For instance, and this is quite provocative, it's not in my book, but in a later paper I made a study of the chemical revolution, the transition from phlogiston theory to Lavoisier's oxygen theory. Modern chemistry started there. Right. And this old phlogiston theory of the 18th century is now regarded as, well, chemists haven't even heard of it anymore. I sometimes ask chemistry students; they never know what it is. Yeah.
And philosophers say, okay, that's a prime example of something that doesn't exist. So phlogiston theory cannot give you understanding, because phlogiston doesn't exist.
[01:10:13] Speaker B: And you say, yes, it can.
[01:10:15] Speaker A: Okay, exactly. Yes, indeed. At least in the context of the 18th century. And maybe it can even give us some understanding now. That's maybe the provocative part.
But still, of course, I immediately admit that Lavoisier's oxygen theory gives you more understanding than phlogiston theory. So you have to be able to compare them, and there is a degree, and that means that you have to have a measure of understanding.
But I don't like the more formal approaches to philosophy, or maybe it's not my skill, I'm not into that. So I never tried to develop that on my own. But there are others who have done so, so there are papers about degrees of understanding.
[01:11:14] Speaker B: Oh, so were you kind of reluctantly roped into this agential understanding track?
[01:11:20] Speaker A: By the AI thing you mean?
[01:11:22] Speaker B: Yeah, yeah.
[01:11:23] Speaker A: No, not at all. No. Because I wanted to think about it and I also wanted to explore it. But after a while I discovered, or I felt, that there was a tension. Still, I didn't want to reject the idea that AI systems or machines can have understanding. And at some point I thought, okay, I still don't know.
But maybe what is also important, and that's at least my feeling: the first question we wanted to address in this project on machine understanding was, what can machines understand? Can machines think? Can they understand?
But maybe that's not such an interesting question after all, because the question is more: how can we human scientists interact with machines, and what can machines, AI systems, do for science?
And of course you can treat them just as tools, or you can treat them as collaborators, as agents whom we interact with and communicate with.
Then it's maybe not that interesting whether they really possess or have understanding, because as you said before, what does that even mean?
Do they have it?
Then the behavioral dimension becomes more important.
What is the output of the machine, and is it intelligible to us as humans? And how can we together advance scientific understanding? Something like that.
[01:13:26] Speaker B: But I guess it's in the back of everyone's minds, right? We're used to agents being other humans in our scientific pursuits. So in a journal club meeting or something, I'm interacting with other humans, and they bounce ideas off of me, they criticize my thoughts, and it goes back and forth. And in the back of everyone's minds while they're using these agents as tools is like, wow, I didn't think of that. What must that mean about what's going on with the machine? And then all of a sudden you have people ascribing consciousness to machines. Right. Whereas understanding, because it's sort of deflationary and can be construed as behaviorist.
[01:14:08] Speaker A: Ish.
[01:14:09] Speaker B: In your approach, it's less.
Something could understand, and you wouldn't ascribe to it, like, pain.
[01:14:19] Speaker A: Right.
[01:14:19] Speaker B: So you could still turn it off and feel okay, but if it had, you know, consciousness, you would feel bad about turning it off or something.
[01:14:26] Speaker A: Right.
[01:14:27] Speaker B: So in that sense, it's like, it's okay that understanding is not only a human endeavor, and you treat it differently.
[01:14:36] Speaker A: Differently from consciousness and other things, then?
[01:14:39] Speaker B: You do. Is that what you.
[01:14:40] Speaker A: No. You do. That's a question.
[01:14:41] Speaker B: No, I. I don't. No.
[01:14:43] Speaker A: Oh, yeah. Because, I mean, now I'm maybe turning in the direction of the more optimistic, or however you want to call it, the less reluctant. Indeed, now we have ChatGPT, and everybody is apparently using it already. I'm using it sometimes, too.
And people are interacting with it as if it were just a human being.
ChatGPT is always kind to you, and so you are kind. You say, please tell me this, or thank you, et cetera. And some people can say, well, that's nonsense and that's crazy. But if you establish that interaction, at some point people will indeed just ascribe feelings or consciousness or whatever to the system they're interacting with.
And then you might ask, well, what's the difference? I'm now talking to you, and you're there on the screen. I believe, of course, that you're somewhere in a room far away, that you really exist, that you're a human being and that you have a brain and thoughts and consciousness like me. That's also the other minds problem. Right.
The skeptic can even question that.
So what's ultimately the difference between being skeptical of your consciousness and understanding, and of the machine's?
[01:16:12] Speaker B: I mean, for some reason, it's important to us that, you know, life versus non life.
[01:16:16] Speaker A: Right? Yeah.
[01:16:17] Speaker B: Like, I don't want to feel bad about turning a machine off.
[01:16:20] Speaker A: No. Yeah, yeah. But I can turn you off, and you will still be there. That's what I trust, that you will still be there if I just pull the plug out here.
[01:16:31] Speaker B: Yeah. Right. But see, you don't have that problem with me. You're turning me on, Hank. You're turning me on.
I wanted to go back to this idea, because I don't think that we talked about this earlier, but you just mentioned it briefly, I don't remember exactly what you mentioned, but that things can be wrong and still contribute to understanding. Right. You can get facts wrong, and it still contributes to understanding. And I mean, this goes back to the George E. P. Box quote: all models are wrong, but some are useful.
So in your account also, even if you're wrong about something, even if it's factually incorrect, it can still contribute to understanding. However, most large language models, for example, if we focus on them on the artificial side, are wrong in trivial ways, in ways that are not useful, that are not pragmatic to advancing understanding. Maybe because they don't have an intelligible theory from which they are delivering their messages, maybe because they're next-word predictors based on this gigantic corpus of statistics, you know, of words, of tokens, from which they statistically just produce things. So that doesn't require any sort of intelligible theory. And therefore the kinds of errors that they produce aren't errors such that, like, well, I get where you're coming from because I understand the theory and here's my answer. It's more like a trivial error, you know, saying red instead of blue or whatever.
[01:18:14] Speaker A: Right? Yeah, yeah.
[01:18:16] Speaker B: So I guess the question is, and I'm not sure how much assessment you've done so far, but if they're wrong in trivial ways and therefore not contributing to understanding, would you then want to see them be wrong in useful ways? Would that be the difference?
[01:18:36] Speaker A: Yeah, yeah, yeah.
I really like that question.
Yeah, it's an idea that I haven't thought about yet.
I haven't related this debate, about whether models can be wrong but useful, whether understanding is factive, to machine understanding. So thanks, I will let you know.
[01:19:03] Speaker B: Oh please do. Let's collaborate.
[01:19:05] Speaker A: Maybe we can write a paper together. Indeed, that'd be great.
[01:19:08] Speaker B: That'd be great.
[01:19:10] Speaker A: Because indeed this debate about, factive I called it, is understanding factive? So the question: do your theories have to be true, or approximately true, in order to give understanding? In that debate I'm in the non-factivist camp, so I think it's no problem if your theory is wrong. Phlogiston theory was clearly wrong. All theories are wrong; Newton's theory is clearly wrong. We can still use it to understand. So it's not about the truth or the accuracy of your representations and your models. Right.
So strongly idealized models can still be very useful and give you understanding.
And I haven't seen a debate like that in relation to machine understanding.
So that's why I think your suggestion is very original and also Productive, hopefully.
But the mistakes that the LLMs make, they're not about mistaken representations, because maybe they don't even have representations. It's just these weird mistakes they make in predictions. Like I'm thinking of examples where they cannot even do simple arithmetic calculations, or they just give different answers; they are not consistent.
I remember one example.
Yeah, I think I read it somewhere recently. It was asked something like, how many letters does the word nineteen contain? And then first it said the right number.
I don't know, eight or so.
And then the question was, are you sure? And then it corrected itself and gave a wrong answer. Things like that.
[01:21:13] Speaker B: Humans do that too.
[01:21:15] Speaker A: No, that's maybe true.
[01:21:17] Speaker B: Yeah.
[01:21:21] Speaker A: But these are mistakes in the output, in the predictions. So this is a mistake in the prediction, right?
In the output of the system.
And that, I would say, corresponds to a mistake in scientific understanding as practiced by humans: a mistake in the output, in the prediction that the scientist makes on the basis of a particular model or theory. For instance, with Newton's theory you could predict all the planets' orbits and lots of phenomena. But then in the 19th century we had this anomaly in.
In the orbit of Mercury. Right.
[01:22:10] Speaker B: The precession of Mercury, exactly, where the.
[01:22:15] Speaker A: Einstein's theory gave the right explanation.
And you can say okay, that's a mistake, a mistake in the output. But it was indeed, of course it gave rise to a correction of the theory.
It was kind of a consequence of the theory that there was something wrong with the theory. And with the machines, well, sometimes you can't find explanations for why these mistakes are made. Right.
[01:22:43] Speaker B: Yeah, I'm not that familiar with the literature. I mean, this is a cottage industry of people trying to figure that out.
[01:22:49] Speaker A: Yeah. And you have of course with pattern recognition and the backgrounds and so on.
But it would be really interesting to compare that. Indeed.
Yeah. Well, we will come back to the question, I think whether the networks or the machines have a representation.
[01:23:15] Speaker B: From.
Well.
[01:23:20] Speaker A: You have to analyze the cause of these mistakes.
The claim was that these are different kinds of mistakes. That was the claim you made.
[01:23:29] Speaker B: Yeah, yeah, yeah.
Well, I'm thinking, okay, kind of taking it a step further, because I wanted to ask about, since we brought up, you know, having a model, and all models are wrong because models are idealizations. Models necessarily abstract away details. And I kind of want to relate this to that. So there's this gradient of abstraction with models.
Is there an analogous gradient of abstraction of understanding? How does abstraction relate to understanding? Because, you understand, I don't need to understand all the atoms in my computer to describe how a computer works, and I can describe it at different levels of abstraction.
Is there a relation between how abstract something is and the nature of that, that understanding? Is that something that you've thought or written about?
I'm sorry if I've missed it in your writings.
[01:24:29] Speaker A: No, I haven't, but I think I have written about idealization, of course, and idealization and abstractions are related, but it's not the same.
And with respect to idealization, you can say, okay, an idealization is also an abstraction, because you abstract from particular details. You leave out details.
So in that sense, idealization is a kind of abstraction.
On my account, idealizations foster understanding, help us to understand, because they increase intelligibility. If you have an extremely detailed account of a system.
[01:25:23] Speaker B: What's an example of an idealization that increases understanding? Does one readily come to mind?
[01:25:28] Speaker A: No.
Well, in my book I have a historical chapter on kinetic theory of gases on models in the 19th century. And there I discuss. Well, you can think of atomic models.
Well, to start with, representing molecules or atoms as just point particles, or as simple spheres that collide, and looking only at their masses and their motion, so position and velocity and so on. That's an abstraction, and it turns out that these molecules are more complicated. But also, if you represent the solar system, if you make a model of it, even a material model, where you have all the planets as spheres that maybe differ a bit in size, but they are perfect spheres and they are moving around the sun in ellipses, then it's an abstraction, because the real planets are much more complicated. But it's, of course, a very good abstraction, because the solar system is unique in the sense that it's so isolated from the rest of the universe, and it behaves almost perfectly according to the laws of Newton, without any disturbances and so on.
So that abstraction and the details of the planets that we are walking around on Earth, et cetera, and that there are mountains and there's water and seas and oceans, that's all irrelevant to the behavior of these planets on the scale of the solar system. But it's still an abstraction because you leave out all those details.
But you were asking about a gradient or a scale.
I don't.
You have this famous quote by Einstein who says, well, scientists. I don't know it exactly by heart. It's somewhere here on the wall, actually, in the building in German.
Scientists have to make things as simple as possible. But not simpler. You can also maybe idealize too much.
That is to say, you can also leave out too many things. But on my account, it's the pragmatic value of the idealization: for us it's easier to use these simple models to predict the behavior of the system.
[01:28:19] Speaker B: So can you map that directly onto the pragmatic value of understanding?
I mean, or are they.
[01:28:25] Speaker A: Well, of the pragmatic nature of understanding, I would say.
Right.
[01:28:29] Speaker B: But I'm thinking like there's levels of abstraction. Are there levels of understanding? Because it's on a gradient.
Gradient is like more and less.
But in terms of abstraction, you can think of emergent properties at different abstract levels. So they're same kind of like emergent properties in the phenomenon of understanding is I guess the question.
[01:28:52] Speaker A: So levels of understanding. Yeah, yeah. Okay. So that you have a higher-level understanding at which something different.
[01:28:59] Speaker B: I don't.
[01:28:59] Speaker A: Emerges. Yeah, yeah.
[01:29:01] Speaker B: Some sort of. It doesn't have to be like an emergent property of understanding, just something similar in terms of, you know, you get more and more abstract. You can condense your explanation, you know, and not. Not track all the molecules in a gas. Right. And you can say something about the statistical properties of the molecules. That's useful.
Or the solar system. Right. Something about the mass of the earth without talking about all the mountains, you know, the actual shape of it, and just treat it as a sphere. And those are sort of emergent properties if they're useful.
[01:29:34] Speaker A: Right.
[01:29:34] Speaker B: And you can use short sentences to describe it in a useful manner.
But in terms of understanding, you know, you have to deal with the intelligibility of a theory, and theories themselves have different levels of abstraction and idealization. So I'm just wondering if there's a clean fitting of understanding onto the different levels of abstraction. I don't know that my question is intelligible, unfortunately.
[01:30:00] Speaker A: Well, yeah, it's clear. Very clearly.
It's a clear question.
But I don't know, I'm not sure how to answer it. And I'm not. I still.
I'm not yet sure whether there's a difference between the degree of understanding, in the sense of having more or less of it, like the phlogiston theory gives less than the oxygen theory, etc.
And these levels you talk about.
So it suggests that there are different dimensions of understanding. Right?
[01:30:39] Speaker B: Ah, yes, that's a good word. Yeah, dimensions would be good.
[01:30:43] Speaker A: So.
[01:30:43] Speaker B: Well, at least that's something I wanted to ask you about. So I'm glad You mentioned it.
[01:30:47] Speaker A: Yeah, and I.
So far I stick to this idea that there are degrees, but not that you really move from one kind of understanding to another kind. For instance, I have thought about this, and also written about it, in the context of this debate about public understanding of science. Well, in the beginning we discussed it briefly and you also.
[01:31:27] Speaker B: Yeah, I was about to bring us back to that anyway, so this is a great way to segue into it, because I want to make sure we talk about that. So.
[01:31:32] Speaker A: Yeah, yeah, so that's because you can ask the question, you can say, okay, expert scientists, for instance, in physics, they have understanding maybe more or less of a phenomenon, but they have it and it's scientific understanding. And of course it's related to their skills of maybe mathematical skills or all kinds of skills.
When we are looking at public understanding of science. So in the sense of what can the more general public understand, the public, people who don't have the expertise and the skills that expert scientists have, then you might be inclined to say, okay, that's a different kind of understanding.
Because they can never be. It's not on a scale or a gradient from lay understanding to expert understanding. It's really different.
Or you might still argue that, well, it's very low-grade understanding, but there's not an essential difference between the two. So far, I'm inclined to the latter view. So there are similarities, in the sense that skills are important, different skills, of course, but there is a kind of, yeah, a similarity between public understanding and expert understanding.
And that has to do with the fact that lay understanding, or public understanding, is not just knowing a lot of facts.
That's how it was measured traditionally. Right. They did these surveys and they asked, well, how old is the earth? All kinds of facts, a kind of Trivial Pursuit thing: how much do you know?
But it's more than that. It's also a kind of reasoning ability, and some kind of skill to grasp the whole picture and not just the individual facts.
And I think that in that sense it's not a totally different kind of understanding. And then, coming to the metaphor paper: this is a paper, by the way, that I did with a master student of mine, Marta Smedeka. It was a project that she did. Our conclusion, from a small literature study and an empirical study, was that, well, metaphors are used in popular science communication, in popularized scientific publications, but also in scientific publications themselves, and they function somewhat differently. But metaphors are everywhere, and at the same rate. Right.
[01:34:51] Speaker B: Don't they appear with the same frequency?
[01:34:53] Speaker A: Exactly. Yeah, but in a different way. They are sometimes used. They can be closed or open.
And for instance, Schrödinger started a kind of communication metaphor, which gave rise to concepts like messenger RNA and translation and so on, which are still metaphors, but of course regarded just as technical terms.
But we concluded that the idea is, well, okay, for the public we use metaphors because then we can
tell the story in a kind of metaphorical way that they understand, because it relates to their daily lives and so on. That's, of course, a tool for public understanding.
[01:35:47] Speaker B: Well, and the idea being that when you describe something in terms of a metaphor to the layperson, they can actually sort of visualize it. Because they don't have a concept of the term you're trying to describe, they have to relate it to something else, and that gives them some grasp of what you're actually trying to describe. Whereas you were about to describe, I think, the way that metaphors are used and received by experts.
[01:36:16] Speaker A: Right, yeah.
And in that sense it is different.
But it's important to see that, especially in the process of discovery, that metaphors can also, for expert scientists, function as tools for understanding. They can make theories intelligible.
And visualization, for instance, is a very useful tool for public understanding.
And mathematics is less so.
There was this famous statement that with every equation in a popular science book, you lose half of your audience.
And that's probably true.
But my studies of scientific understanding have shown that for most scientists, visualization is also an important tool.
Like Feynman, who was of course one of the most brilliant physicists, with deep insight into mathematics, also had a visual mind, and his visual imagination was extremely productive.
[01:37:30] Speaker B: But I should say, like, visualization is not necessary because there are people. I forget the term, but that people don't think. Not everyone thinks in visual imagery. Right.
[01:37:40] Speaker A: So that's the whole point. It's not necessary. That's indeed the point. That's exactly. Yeah, I agree. And Schrodinger thought it was necessary. I don't think it's necessary.
But. But we can at the same time see that many scientists, just like many ordinary people or many other people, use visualization and like visualization.
But there's a difference in the debates about mechanisms in neuroscience and in the life sciences. Mechanistic explanation is also coupled to visualization. Right.
So isn't it the idea that if you have a diagram of a mechanism, you have an overview of the system, it gives you more.
[01:38:35] Speaker B: Yeah, that's true.
I think of it more as, like, the story of how causality works.
But, yeah, you do. There's all sorts of examples of really big wiring diagrams of nodes being connected by lines. And that's your account, that's a mechanistic account. Right. But I was going to say, for the paper on metaphor usage: when you use metaphors in the scientific domain amongst experts, the usage becomes what you call a closed metaphor, in that the metaphor is no longer used to relate the phenomenon you're talking about to the thing you're using as the metaphor. It closes off, because the metaphor is now the technical term; it's the thing that you understand as the phenomenon. So there's this slippage of meaning which, as Alfred North Whitehead said, can give rise to the fallacy of misplaced concreteness.
If you take it literally and stop recognizing that it is a metaphor. Which I think is a problem, or at least a phenomenon, I don't know if calling it a problem is a judgment call, but a lot of scientists forget they're actually using a metaphor to understand this thing, because to them it is the thing, it is the phenomenon, when in actuality it's always a metaphor.
But anyway, that's what the difference sort of was: it becomes a closed metaphor in scientific uses and is valuable for understanding, as opposed to the open metaphor when you're communicating to a broad audience, to lay people, where there's more back and forth between the concept you use in the metaphor and the concept you're trying to convey. Whereas in the scientific domain, the metaphor is the concept you're trying to convey, because it's become closed. Is that an accurate description?
[01:40:28] Speaker A: Yeah. Yeah. Excellent. Yeah. And I also want to mention, by the way, that this idea of closed versus open metaphors, we didn't invent that; we built upon a paper by Susan Knudsen, who introduced that distinction. And in our research we also found it, and we connected it to understanding.
But you gave an excellent summary of that. And I.
It makes me also rethink perhaps.
So you might still see a fundamental distinction between the expert use of metaphors and the public use of metaphors. In the sense we were talking about, of dimensions or kinds or levels of understanding, you might also use it to say that there is this.
Yeah.
Difference. Right.
[01:41:34] Speaker B: All right, great. We have a lot of work to do.
That's the way it always goes. Well, okay, Hank, thank you for spending the time with me. And as you can see, I have followed a lot of your work, and I'll continue to do so, because this sort of thing is right in the nexus of what I'm interested in, as I was explaining to you off camera. So thanks for joining me, and keep up the good work.
[01:42:01] Speaker A: Well, thank you and thanks for inviting me. It was a great pleasure to do this and I hope we can stay in touch. And I was really. I appreciate your interest in my work and I'm really impressed by your study of it because you have really read all.
[01:42:19] Speaker B: It's fear of failure. It's fear of failure that drives me to.
Anyway.
[01:42:24] Speaker A: Yeah.
[01:42:24] Speaker B: All right, well, thank you so much.
[01:42:26] Speaker A: Thank you.
[01:42:34] Speaker B: Brain Inspired is powered by the Transmitter, an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives written by journalists and scientists.
If you value Brain Inspired, support it through Patreon. To access full-length episodes, join our Discord community and even influence who I invite to the podcast. Go to braininspired.co to learn more. The music you hear is a little slow, jazzy blues performed by my friend Kyle Donovan. Thank you for your support. See you next time.